Robots can recognize pitch, perceived loudness, or simple categories such as positive or negative attitude in real time from speech. Existing research has demonstrated specialized systems capable of each of these tasks. Additionally, off-line classification studies have shown that affect can be classified at rates better than chance, although still worse than human performance. Taken together, this prior work indicates that real-time classification of a variety of affective states is feasible. We are exploring this problem with the eventual goal of designing a field-programmable gate array (FPGA) based system that rapidly processes relevant features in parallel.
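As a concrete illustration of the kind of frame-level features involved, the following is a minimal sketch (not the system described here) of extracting two of the features mentioned above, signal energy and pitch, from a single audio frame. The function name `frame_features`, the 50-400 Hz pitch search range, and the use of a pure tone as a stand-in for voiced speech are all illustrative assumptions.

```python
import numpy as np

def frame_features(frame, sr):
    """Compute two simple per-frame speech features:
    RMS energy (a proxy for perceived loudness) and an
    autocorrelation-based pitch estimate. Illustrative only."""
    rms = np.sqrt(np.mean(frame ** 2))
    # Autocorrelation: find the lag with the strongest self-similarity
    # within a plausible speech pitch range (assumed 50-400 Hz).
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / 400), int(sr / 50)
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch = sr / lag
    return rms, pitch

sr = 16000
t = np.arange(sr // 10) / sr                # one 100 ms frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)   # 200 Hz tone stands in for voiced speech
rms, pitch = frame_features(frame, sr)      # pitch close to 200 Hz
```

Each feature computation here is independent of the others, which is what makes the feature set a natural fit for the parallel evaluation an FPGA provides.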