Planning in real-time strategy (RTS) games is challenging due to their very large state and action spaces. Action abstractions have been shown to be a promising approach for dealing with this challenge. Previous works induce action abstractions from a small set of expert-designed strategies; search algorithms then consider only the actions returned by these strategies. The main drawback of this approach is that it limits the agent's behaviour to the knowledge encoded in the strategies. In this research, we focus on learning novel and effective strategies for RTS games, which we use to induce action abstractions. In addition to being effective, we are interested in learning strategies that can be easily interpreted by humans, allowing a better understanding of the workings of the resulting agent.