Learning a Bayesian network from data is a model-specific task and therefore requires careful attention to contextual information, in particular context-specific independencies. In this paper, we study the role of hidden variables in learning causal models from data and show how statistical methods can help discover them. We argue that hidden variables are often wrongly ignored during inference precisely because they are context-specific, and that taking context into account lets us learn more about the true causal relationships hidden in the data. We present a method for correcting models by identifying hidden contextual variables, as well as a means of refining the current, incomplete model.