The notion of rationality, defined as behavior that maximizes the agent's expected utility, is usually taken to pertain to the courses of physical action available to the agent. While there is no reason why it should not also be useful in providing normative theories of activities that are not purely physical, such as planning, learning, inference, and information gathering, formalizing the latter types of activities in decision-theoretic terms is not straightforward. The basic difficulty is that these activities are not directly aimed at changing the physical environment, in terms of which the utility gain could be conveniently expressed; rather, they change the state of the agent itself. Their purpose is thus the more indirect effect of putting the agent in a better position for subsequent physical interaction with its environment, and it is the general and robust formalization of this effect that remains a challenge. In our work we have found it very difficult to even begin to address these issues on a fundamental level without postulating a particular, although possibly quite abstract, architecture for the agent in question. In the framework we have investigated, an agent is represented as the following tuple: (IS, K, P, BA, I, R). In this paper, we briefly describe the components of this representation and the relations among them.
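The decision-theoretic notion of rationality invoked above can be sketched concretely. The following is a minimal illustration of expected-utility maximization over physical actions; the action names, state space, and probability and utility tables are purely illustrative assumptions, not part of the framework described in this paper.

```python
def expected_utility(action, states, prob, utility):
    """Expected utility of an action: sum of P(state) * U(action, state)."""
    return sum(prob[s] * utility[(action, s)] for s in states)

def rational_choice(actions, states, prob, utility):
    """A rational agent selects the action with maximal expected utility."""
    return max(actions, key=lambda a: expected_utility(a, states, prob, utility))

# Illustrative (hypothetical) example: two actions, two equally likely states.
states = ["rain", "sun"]
prob = {"rain": 0.5, "sun": 0.5}
utility = {
    ("take_umbrella", "rain"): 1.0, ("take_umbrella", "sun"): 0.6,
    ("leave_umbrella", "rain"): 0.0, ("leave_umbrella", "sun"): 1.0,
}
print(rational_choice(["take_umbrella", "leave_umbrella"], states, prob, utility))
# → take_umbrella
```

Here "take_umbrella" wins with expected utility 0.8 versus 0.5. The difficulty raised in this paper is precisely that activities such as planning or learning have no such direct utility table over physical outcomes: their payoff accrues only through the agent's subsequent actions.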