The notion of distributed knowledge expresses what a group of agents would know if they were to combine their information. This paper considers the application of this notion to systems in which there are constraints on how one agent's actions may cause changes to another agent's observations. Intuitively, in such a setting, anything an agent knows about other agents should be distributed knowledge among the agents that can causally affect it. In prior work, we have argued that the definition of intransitive noninterference (a notion of causality used in the computer security literature) is flawed because it fails to satisfy this property, and we have proposed alternative definitions of causality that we have shown to be better behaved with respect to the theory of intransitive noninterference. In this paper we refine this understanding, and show that in order for the converse of the property to hold, one also needs a novel notion of distributed knowledge, as well as a new notion of what it means for a proposition to be ``about'' other agents.