An important component in the vision of ubiquitous computing is universal interaction: the ability to use arbitrary interactive devices, such as cell phones and palmtops, to interact with arbitrary appliances such as TVs, printers, and lights. We believe that these interactive devices can and should enable personalized agents to learn about the real-world behavior of their users by observing the appliance operations they invoke. The agents can then apply this knowledge to support useful and interesting features, including: (1) predicting appliance-related tasks and automatically performing them on behalf of users, and (2) presenting appliance interfaces that reflect the situational preferences of users as inferred from their past interactions. In this paper, we motivate and present an architecture for integrating personalized agents into our universal interaction infrastructure. Specifically, we present the following: (1) reasons for supporting universal interaction and personalized agents in this domain, (2) a general architecture for universal interaction, (3) a framework for supporting personalized agents on top of this general architecture, (4) the current state of our implementation of this framework, and (5) open research issues we are currently exploring.