Observation decoding aims to discover the underlying state trajectory of an acting agent from a sequence of observations. This task lies at the core of various recognition activities that exploit planning as the resolution method, but there is a general lack of formal approaches that reason about the partial information received by the observer or that leverage the distribution of the observations emitted by the sensors. In this paper, we formalize the observation decoding task using a probabilistic sensor model to build more accurate hypotheses about the behaviour of the acting agent. Our proposal extends the expressiveness of former recognition approaches by accepting observation sequences in which a single observation can represent the reading of more than one variable, thus enabling observations over actions and partially observable states simultaneously. We formulate the probability distribution of the observations perceived when the agent performs an action or visits a state as a classical cost planning task that is solved with an optimal planner. Experiments show that exploiting a sensor model increases the accuracy of predicting the agent's behaviour in four different contexts.