Most real-world data is stored in relational form. In contrast, most statistical learning methods, e.g., Bayesian network learning, work only with "flat" data representations, forcing us to convert our data into a form that loses much of the relational structure. The recently introduced framework of probabilistic relational models (PRMs) allows us to represent much richer dependency structures, involving multiple entities and the relations between them; it allows the properties of an entity to depend probabilistically on properties of related entities. Friedman et al. showed how to learn PRMs that model attribute uncertainty in relational data, and presented techniques for learning both the parameters and the probabilistic dependency structure for the attributes in a relational model. In this work, we propose methods for handling structural uncertainty in PRMs: uncertainty over which entities are related in our domain. We propose two mechanisms for modeling structural uncertainty: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We conclude with some preliminary experimental results comparing and contrasting the use of these mechanisms for learning PRMs in domains with structural uncertainty.