The selective transfer of task knowledge is studied within the context of multiple task learning (MTL) neural networks. Given a consolidated MTL network of previously learned tasks and a new primary task, T0, a measure of task relatedness is derived. The representation of the existing consolidated MTL network is held fixed, and an output node for task T0 is connected to the hidden nodes of the network and trained. The cosine similarity between the hidden-to-output weight vector for T0 and the weight vectors for each of the previously learned tasks is used as a measure of task relatedness. The most related tasks are then used to learn T0 within a new MTL network using the task rehearsal method. Results of an empirical study on two synthetic domains of invariant concept tasks demonstrate the method's ability to selectively transfer knowledge from the most related tasks so as to develop hypotheses with superior generalization.
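The relatedness measure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, task labels, and weight values are hypothetical, and the vectors stand in for the hidden-to-output weights obtained after training the T0 output node against the frozen consolidated network.

```python
import numpy as np

def task_relatedness(w_t0, task_weights):
    """Cosine similarity between the new task's hidden-to-output weight
    vector and each previously learned task's weight vector.

    w_t0: weight vector for the new primary task T0.
    task_weights: dict mapping task name -> weight vector.
    Returns a dict mapping task name -> cosine similarity in [-1, 1].
    """
    w0 = w_t0 / np.linalg.norm(w_t0)
    return {name: float(np.dot(w0, w / np.linalg.norm(w)))
            for name, w in task_weights.items()}

# Hypothetical weight vectors (values chosen only for illustration).
w_t0 = np.array([0.9, -0.4, 0.2])
prior = {"T1": np.array([0.8, -0.5, 0.1]),
         "T2": np.array([-0.3, 0.9, 0.6])}

sims = task_relatedness(w_t0, prior)
most_related = max(sims, key=sims.get)  # task selected for transfer
```

Tasks whose similarity exceeds a chosen threshold (or the top-k tasks) would then be rehearsed alongside T0 in the new MTL network.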