The Abstract Markov Policy (AMP) is a model for representing the execution of an abstract plan in noisy and uncertain domains. Methods for recognising an abstract policy from a sequence of noisy observations can thus be used for online plan recognition under uncertainty. In this paper, we extend previous work on policy recognition to a general class of abstract policies, including those with non-deterministic termination conditions and factored representations of the state space. We analyse the structure of the stochastic model representing the execution of a general AMP and provide an efficient hybrid Rao-Blackwellised sampling method for policy recognition that scales well with the number of levels in the plan hierarchy. This illustrates that, while stochastic models of plan execution can be complex, they exhibit special structure which, if exploited, leads to efficient plan recognition algorithms.
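To illustrate the idea of Rao-Blackwellised sampling in a hierarchical model, the following is a minimal sketch, not the paper's algorithm. It uses a hypothetical toy model invented for this example: a two-level hierarchy in which a top-level policy variable `p` is sampled by particles, while the belief over the low-level state `s` is, conditional on each sampled `p`, filtered exactly in closed form. All transition and observation matrices below are illustrative assumptions.

```python
import random

# Hypothetical toy model (illustration only, not the paper's AMP):
# a top-level policy p in {0, 1} evolves as a Markov chain; given p,
# a low-level state s in {0, 1, 2} evolves with a known transition
# matrix, and we receive a noisy observation y of s.
P_TRANS = [[0.95, 0.05], [0.10, 0.90]]          # P(p_t+1 | p_t)
S_TRANS = {0: [[0.8, 0.2, 0.0],                  # s dynamics under policy 0
               [0.0, 0.8, 0.2],
               [0.2, 0.0, 0.8]],
           1: [[0.8, 0.0, 0.2],                  # s dynamics under policy 1
               [0.2, 0.8, 0.0],
               [0.0, 0.2, 0.8]]}
OBS = [[0.85, 0.10, 0.05],                        # OBS[y][s] = P(y | s)
       [0.10, 0.85, 0.05],
       [0.05, 0.05, 0.90]]

def rbpf_step(particles, y):
    """One Rao-Blackwellised update: sample p, filter s analytically."""
    new, weights = [], []
    for p, belief in particles:
        # 1. Sample the top-level (intractable) variable.
        p_new = 0 if random.random() < P_TRANS[p][0] else 1
        # 2. Exact one-step prediction of the belief over s
        #    under the sampled policy.
        pred = [sum(belief[i] * S_TRANS[p_new][i][j] for i in range(3))
                for j in range(3)]
        # 3. Exact Bayes update with the observation; the normaliser
        #    is the particle's importance weight.
        post = [pred[j] * OBS[y][j] for j in range(3)]
        w = sum(post)
        post = [x / w for x in post]
        new.append((p_new, post))
        weights.append(w)
    # 4. Resample particles in proportion to their weights.
    total = sum(weights)
    idx = random.choices(range(len(new)),
                         weights=[w / total for w in weights], k=len(new))
    return [new[i] for i in idx]

random.seed(0)
particles = [(0, [1 / 3, 1 / 3, 1 / 3]) for _ in range(200)]
for y in [0, 0, 1, 2, 2, 2]:
    particles = rbpf_step(particles, y)

# The posterior over the top-level policy is just the particle
# frequency of p; the s-belief never had to be sampled.
p1_prob = sum(p for p, _ in particles) / len(particles)
print(f"P(policy = 1 | observations) ~ {p1_prob:.2f}")
```

The design choice this sketch demonstrates is the one the abstract alludes to: only the variables that cannot be marginalised efficiently are sampled, and the rest of the belief state is carried in closed form, which reduces the variance of the estimate relative to sampling the full joint state.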