We describe a "direct modeling" approach to using prosody in a variety of speech technology tasks. The approach involves no hand-labeling or modeling of prosodic events such as pitch accents or boundary tones. Instead, prosodic features are extracted directly from the speech signal and from the output of an automatic speech recognizer. Machine learning techniques then determine a prosodic model, which is integrated with lexical and other information to predict the target classes of interest. We discuss task-specific modeling and results for a line of research covering four general application areas: (1) structural tagging (finding sentence boundaries and disfluencies), (2) pragmatic and paralinguistic tagging (classifying dialog acts, emotion, and "hot spots"), (3) speaker recognition, and (4) word recognition itself. To give an idea of performance on real-world data, we focus on spontaneous (rather than read or acted) speech from a variety of contexts, including human-human telephone conversations, game playing, human-computer dialog, and multi-party meetings.
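To make the "direct modeling" idea concrete, the following is a minimal sketch of one plausible feature-extraction step: for each word in an ASR time alignment, compute prosodic features (duration, preceding pause, and simple f0 statistics from a frame-level pitch track) without any hand-labeled prosodic events. All function names, feature choices, and the toy data are illustrative assumptions, not the specific features or code used in the work described here.

```python
def f0_stats(f0_frames, start, end, frame_rate=100):
    """Mean and least-squares slope (Hz/s) of voiced f0 frames in [start, end).

    f0_frames: frame-level pitch track; unvoiced frames are <= 0.
    """
    i0, i1 = int(start * frame_rate), int(end * frame_rate)
    voiced = [(i / frame_rate, f)
              for i, f in enumerate(f0_frames[i0:i1], i0) if f > 0]
    if not voiced:
        return 0.0, 0.0
    n = len(voiced)
    mean = sum(f for _, f in voiced) / n
    tbar = sum(t for t, _ in voiced) / n
    num = sum((t - tbar) * (f - mean) for t, f in voiced)
    den = sum((t - tbar) ** 2 for t, _ in voiced) or 1.0  # guard single frame
    return mean, num / den

def word_features(alignment, f0_frames):
    """Extract per-word prosodic features from an ASR alignment.

    alignment: list of (word, start_time, end_time) tuples from the recognizer.
    Returns one feature dict per word; these would then be combined with
    lexical features and fed to a machine-learned classifier.
    """
    feats = []
    prev_end = 0.0
    for word, start, end in alignment:
        mean_f0, slope = f0_stats(f0_frames, start, end)
        feats.append({
            "word": word,
            "duration": end - start,          # word duration in seconds
            "pause_before": start - prev_end, # pause length, a strong boundary cue
            "mean_f0": mean_f0,
            "f0_slope": slope,
        })
        prev_end = end
    return feats

# Toy example: two words with a 0.2 s pause, flat 120 Hz pitch at 100 frames/s.
alignment = [("hello", 0.0, 0.5), ("world", 0.7, 1.2)]
f0_frames = [120.0] * 120
features = word_features(alignment, f0_frames)
```

In a task such as sentence-boundary detection, each word's feature vector would be paired with a boundary/no-boundary label at the following word junction, and a standard classifier (e.g., a decision tree) trained on those pairs, with no prosodic event labeling anywhere in the pipeline.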