A significant component of any intelligent environment is the human-machine interface. It is highly likely that in the future such an interface will, for the majority of applications, closely model human-to-human communication; indeed, we may expect the human-machine interface to increasingly mimic the behavior and appearance of humans. Two years ago BT set up the Maya project. The aim of this project was to research the spoken language and kinesic aspects of such an interface and to provide an effective computational research framework. Due to the scale of the problem, Maya collaborates closely with a number of other groups within BT, including those working on speech synthesis (Page and Breen 1996; Edgington, Lowry, Jackson, Breen and Minnis 1996), speech recognition (Pawlewski 1996), language understanding (Wyard 1996) and virtual humans (Breen 1996; Breen, Bowers and Welsh 1996). This paper provides an insight into the underlying principles governing developments within the Maya project. It begins with an introduction to a number of the issues affecting natural human-machine discourse, then briefly describes the computational framework being developed within the Maya project, and uses the example of speech synthesis to argue that advanced research into spoken language and kinesics is best achieved within such an integrated framework.