Genetic programming (GP) systems have traditionally used a fixed training population to evolve best-of-run programs according to problem-specific fitness criteria. An ideal training population would represent every potentially difficult situation encountered during subsequent program use, so that the resulting best-of-run programs handle each test situation well. In practice, limits on the size of the training population reduce the fraction of situations it explicitly anticipates, and best-of-run programs may therefore fall short of optimal performance during subsequent testing. This paper summarizes an investigation into the effects of generating a new random training population prior to the fitness evaluation of each generation of programs. Test results suggest that this alternative approach to training can bolster the generalization of evolved solutions, improving mean program performance while significantly reducing the variance in the fitness of best-of-run programs.
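
The scheme described above can be sketched as a minimal evolutionary loop in which the training cases are regenerated every generation rather than fixed once at the start. This is an illustrative toy only, not the paper's implementation: the problem (fitting coefficients of a small polynomial), the population size, and all helper names (`make_random_program`, `mutate`, `evolve`, etc.) are assumptions chosen to keep the example short.

```python
import random

def target(x):
    # Hypothetical ground-truth behavior the evolved programs should match.
    return x * x + x

def make_random_program():
    # A "program" here is just a coefficient pair (a, b) for a*x^2 + b*x,
    # standing in for a real GP expression tree.
    return (random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0))

def run_program(prog, x):
    a, b = prog
    return a * x * x + b * x

def fitness(prog, cases):
    # Lower is better: total absolute error over the current training cases.
    return sum(abs(run_program(prog, x) - target(x)) for x in cases)

def mutate(prog):
    a, b = prog
    return (a + random.gauss(0.0, 0.1), b + random.gauss(0.0, 0.1))

def evolve(generations=50, pop_size=20, n_cases=10, seed=0):
    random.seed(seed)
    population = [make_random_program() for _ in range(pop_size)]
    for _ in range(generations):
        # Key idea from the abstract: draw a fresh random training
        # population before each generation's fitness evaluation,
        # instead of reusing one fixed training set.
        cases = [random.uniform(-1.0, 1.0) for _ in range(n_cases)]
        ranked = sorted(population, key=lambda p: fitness(p, cases))
        survivors = ranked[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    # Report the best program on a fixed held-out grid of test cases.
    held_out = [i / 10 for i in range(-10, 11)]
    return min(population, key=lambda p: fitness(p, held_out))

best = evolve()
```

Because each generation sees different cases, a program survives only if it performs well across many independently drawn situations, which is the intuition behind the improved generalization and reduced fitness variance reported above.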