Hand-coded finite-state machines and behavior trees are the go-to techniques for artificial intelligence (AI) developers who want full control over their characters' behavior. However, manually crafting behaviors for computer-controlled agents is a tedious and parameter-dependent task. From a high-level view, designing agent AI by hand usually starts with determining a suitable set of action sequences. Once the AI developer has identified these sequences, they merge them into a complete behavior by specifying appropriate transitions between them. Automated techniques, such as learning, tree search, and planning, sit at the other end of the AI toolset's spectrum. They do not require the manual definition of action sequences and adapt to parameter changes automatically. Yet AI developers are reluctant to incorporate them into games because of their performance footprint and lack of immediate designer control. We propose a method that, given the symbolic definition of a problem domain, automatically extracts a transparent behavior model from Goal-Oriented Action Planning (GOAP). The method first observes the behavior exhibited by GOAP in a Monte-Carlo simulation and then evolves a matching behavior tree with a genetic algorithm. The generated behavior trees are comprehensible, refinable, and as performant as hand-crafted ones.
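The two-step pipeline described above can be illustrated with a deliberately minimal sketch. Everything here is an assumption for illustration: the toy action names, the hard-coded planner traces (standing in for the Monte-Carlo observation phase), the flat sequence-node genome (a real system would evolve full trees with selector and decorator nodes), and the trace-matching fitness function.

```python
import random

# Illustrative action set standing in for a symbolic GOAP domain.
ACTIONS = ["goto_cover", "reload", "aim", "shoot"]

# Action sequences a GOAP planner might emit during Monte-Carlo
# simulation; hard-coded here instead of running a real planner.
OBSERVED_TRACES = [
    ["goto_cover", "reload", "aim", "shoot"],
    ["reload", "aim", "shoot"],
    ["goto_cover", "aim", "shoot"],
]

def fitness(genome):
    """Score a candidate sequence-node of action leaves by position-wise
    agreement with the observed traces, penalizing length mismatch."""
    score = 0
    for trace in OBSERVED_TRACES:
        score += sum(1 for a, b in zip(genome, trace) if a == b)
        score -= abs(len(genome) - len(trace))
    return score

def mutate(genome):
    """Delete, replace, or insert a single action leaf."""
    g = list(genome)
    op = random.random()
    if op < 0.4 and len(g) > 1:
        g.pop(random.randrange(len(g)))
    elif op < 0.8:
        g[random.randrange(len(g))] = random.choice(ACTIONS)
    else:
        g.insert(random.randrange(len(g) + 1), random.choice(ACTIONS))
    return g

def crossover(a, b):
    """One-point crossover on the leaf sequences of two parents."""
    return a[:random.randrange(1, len(a) + 1)] + b[random.randrange(1, len(b) + 1):]

def evolve(pop_size=40, generations=120, seed=0):
    """Genetic algorithm: elitist selection plus crossover and mutation."""
    random.seed(seed)
    pop = [[random.choice(ACTIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            children.append(mutate(crossover(p1, p2)))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Running `evolve()` returns a leaf sequence that reproduces the common structure of the observed traces; the point of the sketch is only the division of labor, observation first, evolutionary fitting second.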