A key challenge in creating human-level game AI is building agents capable of operating in imperfect-information environments. In real-time strategy games, the technological progress of an opponent and the locations of enemy units are only partially observable. To overcome this limitation, we explore a particle-based approach for estimating the locations of enemy units that have previously been encountered. We represent state estimation as an optimization problem and automatically learn parameters for the particle model by mining a corpus of expert StarCraft replays. The particle model tracks opponent units and provides conditions for activating tactical behaviors in our StarCraft bot. Our results show that incorporating the learned particle model improves the performance of EISBot by 10% over baseline approaches.
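To make the idea concrete, the following is a minimal sketch of one way such a particle model could work; it is not the paper's implementation, and the class names, the linear movement model, and the exponential weight decay are all illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class Particle:
    """One hypothesis about a previously seen enemy unit's position."""
    x: float
    y: float
    dx: float       # assumed movement vector per time step
    dy: float
    weight: float   # confidence in this hypothesis

def step(particles, dt=1.0, decay=0.1, threshold=0.01):
    """Advance each particle along its movement vector and decay its
    weight over time; drop hypotheses that have become too unlikely.
    The decay rate and movement model stand in for the parameters the
    paper learns from expert replays."""
    survivors = []
    for p in particles:
        w = p.weight * math.exp(-decay * dt)
        if w > threshold:
            survivors.append(Particle(p.x + p.dx * dt,
                                      p.y + p.dy * dt,
                                      p.dx, p.dy, w))
    return survivors

def estimate(particles):
    """Weighted mean of surviving hypotheses as a location estimate."""
    total = sum(p.weight for p in particles)
    ex = sum(p.x * p.weight for p in particles) / total
    ey = sum(p.y * p.weight for p in particles) / total
    return ex, ey
```

In this sketch, a particle is spawned when an enemy unit leaves the agent's view, and the estimate degrades gracefully as confidence decays, which is what allows a bot to condition tactical behaviors on stale but still informative observations.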