This paper describes a self-organizing neural model capable of autonomously learning to control robots with redundant degrees of freedom. The self-organized learning process is inspired by a principle fundamental to biological systems: action-perception cycles, wherein self-generated movement commands activate correlated visual, spatial, and motor information. These self-generated movements, or motor babbling, are used to learn an internal invariant coordinate transformation between the vision and motor systems. The model achieves this invariance by exploiting the redundancy in the degrees of freedom available to the robotic system, learning a sensory-motor transform that is robust to a wide range of perturbations and failures in both sensory and motor parameters. To demonstrate the generality of the neural model, the learning process was tested on three different redundant robot systems with three different functional goals. The first was a computer model of a three-degree-of-freedom robot arm whose goal was to learn to reach for targets in 2-D space using the self-organizing neural model. Here, motor babbling enabled the learning of a transform from changes in the joint angles of the arm to the resulting changes in the perceived direction of movement of the end-effector. The second was a computer model of a head-neck-eye robotic stereo platform whose goal was to learn to saccade to 3-D targets. In this case, motor babbling enabled the learning of a transform from changes in the joint angles of the head, neck, and eyes to the resulting changes in the perceived direction of movement of a 3-D target in the stereo camera images. The third was a real hexapod robotic platform with eighteen degrees of freedom whose goal was to learn to move toward 3-D targets in the real world.
In this case, the robot generated its own movements using a central pattern generator (CPG) and learned the transform from changes in the joint angles of the limbs in contact with the ground to the corresponding changes in the location of targets in the stereo camera images. Computer simulations (for the first two robot platforms) and real-world experiments on the hexapod show that the learned controller successfully performs its respective function while remaining highly fault-tolerant and robust to previously unseen disturbances, much like biological systems. Examples of robust performance for the simulated robots include reaching in 2-D with a pointer, and saccading to 3-D targets despite loss of degrees of freedom in head, neck, or eye movements and changes in the focal length of the stereo camera. The hexapod robot continued to move toward 3-D targets despite a wide variety of disturbances, including reduced degrees of freedom (such as an inability to turn or push off the ground due to joint locks), changes in stereo camera separation, and changes in camera focal lengths. None of these disturbances were encountered during the learning phase for either the simulated or the real robot systems. These results point to the general nature of the learned transform and its ability to control autonomous robots with redundant degrees of freedom in a robust, fault-tolerant fashion. Such robustness is a hallmark of biological systems, and the results of our simulations and experiments suggest that learning the invariant sensory-motor transform relating changes in sensory parameters to changes in motor parameters is necessary for robust functional performance in dynamically changing environments with unforeseen situations and conditions.
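The core idea above, learning a map from joint-angle changes to perceived movement changes via motor babbling, and then inverting it to act, can be illustrated with a minimal numerical sketch. The following is not the paper's neural model: it substitutes plain least squares for the self-organizing network, uses an assumed three-link planar arm with made-up link lengths, and resolves the redundancy (three joints, 2-D task) with the minimum-norm pseudoinverse solution. Each control step babbles small random joint perturbations around the current posture, refits the local transform from the observed end-effector displacements, and steps toward the target, mimicking an action-perception cycle.

```python
import numpy as np

# Hypothetical 3-DOF planar arm; link lengths are illustrative assumptions.
L = np.array([1.0, 0.8, 0.6])

def fkine(theta):
    """Forward kinematics: end-effector (x, y) of the planar arm."""
    angles = np.cumsum(theta)  # absolute angle of each link
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

rng = np.random.default_rng(0)
theta = np.array([0.3, 0.4, 0.5])   # starting posture
target = np.array([1.5, 1.0])       # reachable 2-D target

for _ in range(30):
    # Motor babbling: small random joint-angle changes and the
    # end-effector displacements they produce.
    dthetas = 0.01 * rng.standard_normal((50, 3))
    dxs = np.array([fkine(theta + d) - fkine(theta) for d in dthetas])

    # Learn the local transform J (dx ~= J @ dtheta) by least squares.
    Jt, *_ = np.linalg.lstsq(dthetas, dxs, rcond=None)
    J = Jt.T  # 2x3 map from joint-angle changes to movement changes

    # Act: minimum-norm joint update toward the target; the pseudoinverse
    # exploits the redundant degree of freedom.
    err = target - fkine(theta)
    theta += 0.5 * np.linalg.pinv(J) @ err

print(np.linalg.norm(target - fkine(theta)))  # residual reach error
```

Because the transform is re-estimated from fresh babbles at each posture rather than stored once, the same loop keeps working if the arm's geometry is perturbed between runs, a toy analogue of the fault tolerance reported for the three robot systems.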