Prosodic modeling is essential for improving naturalness in text-to-speech synthesis systems. This paper describes an automatic, data-driven methodology for prosodic modeling that can be incorporated into a text-to-speech system. The methodology models both fundamental frequency and suprasegmental duration from a single-speaker recorded corpus. The proposed automatic methodology has the advantage that it can be adapted to a specific corpus or a particular speaker. Its results are compared with those of our previous manual methodology on the same prosodic data. The main result is a greater variability in the prosodic contours.