Abstract:
One of the main drawbacks of the LRTA* real-time heuristic search algorithm is its slow convergence. Backtracking, as introduced by SLA*, is one way of speeding up convergence, although at the cost of sacrificing first-trial performance. The backtracking mechanism of SLA* consists of back-propagating updated heuristic values to previously visited states while the algorithm retracts its steps. In this paper we separate these hitherto intertwined aspects and investigate the benefits of each independently. We present back-propagating search variants that perform value back-propagation without retracting their steps. Our empirical evaluation shows that in some domains value back-propagation is the key to improved efficiency, while in others the retraction of steps is the main contributor. Furthermore, we evaluate the learning performance of selected search variants during intermediate trial runs and quantify the importance of loop elimination for such a comparison. For example, our results indicate that the first-trial performance of LRTA* in pathfinding domains is much better than previously perceived in the literature.
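To make the distinction concrete, the following is a minimal sketch (not the paper's actual algorithms) of a single LRTA* trial with an optional value back-propagation step: after the usual one-step-lookahead update of h at the current state, the increase is re-relaxed backwards along the trail of visited states, but the agent never physically retracts a step. All names and parameters (successors, h0, back_propagate, max_steps) are illustrative assumptions.

```python
from typing import Callable, Dict, Hashable, List, Tuple

State = Hashable

def lrta_star_trial(
    start: State,
    is_goal: Callable[[State], bool],
    successors: Callable[[State], List[Tuple[State, float]]],  # (next state, edge cost)
    h: Dict[State, float],           # heuristic table, updated in place across trials
    h0: Callable[[State], float],    # initial heuristic for states not yet in the table
    back_propagate: bool = True,     # propagate h-updates along the trail, no retraction
    max_steps: int = 100_000,
) -> List[State]:
    """One trial of LRTA*; optionally back-propagates heuristic updates."""
    trail: List[State] = []
    s = start
    steps = 0
    while not is_goal(s) and steps < max_steps:
        # One-step lookahead: f(s') = cost(s, s') + h(s'), pick the best successor.
        best_next, best_f = None, float("inf")
        for s2, cost in successors(s):
            f = cost + h.setdefault(s2, h0(s2))
            if f < best_f:
                best_next, best_f = s2, f
        # Standard LRTA* update: raise h(s) to the best lookahead value.
        if best_f > h.setdefault(s, h0(s)):
            h[s] = best_f
            if back_propagate:
                # Value back-propagation without retracting: re-relax states on
                # the trail in reverse order, stopping once an h-value no longer
                # increases (i.e., it is already consistent with its successors).
                for p in reversed(trail):
                    new_h = min(c + h.setdefault(q, h0(q)) for q, c in successors(p))
                    if new_h <= h[p]:
                        break
                    h[p] = new_h
        trail.append(s)
        s = best_next  # always move forward; an SLA*-style variant would retract instead
        steps += 1
    trail.append(s)
    return trail
```

Under this reading, SLA*-style backtracking would replace the forward move with a step back to the previous trail state whenever h increases; the sketch above isolates only the value-propagation half of that mechanism, which is the separation the paper investigates.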