Trust between conversational partners is critical for effective communication and collaboration. While numerous studies have examined spoken cues to deception in order to understand how untrustworthy speech is produced and perceived, little work has studied the characteristics of trusting speech, i.e., cues that indicate whether a speaker trusts their conversational partner. Such cues are crucial for monitoring a speaker's perception of their interlocutor, which in turn has implications for conversational outcomes. In this work, we examine trusting speech in both human-human and human-machine dialogues. We study this phenomenon across native speakers of three languages (American English, Mandarin Chinese, and Argentine Spanish) to examine how a speaker's native language affects the production of trusting and mistrusting speech. We identify several acoustic-prosodic signals of trusting speech that are stable across speakers of different native languages, as well as some notable cross-language differences. We build predictive models of trusting speech using acoustic-prosodic features in both within- and cross-cultural settings, study the interpretability of these models, and use those insights to improve classification performance. This work sheds light on the nature of trusting speech across cultures, languages, and domains.
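To make the modeling setup concrete, the sketch below shows one plausible way to realize such a pipeline: summarizing each utterance with simple pitch and intensity statistics and training a linear classifier on the resulting features. This is a minimal illustration, not the authors' implementation; the specific feature set, the `librosa`/`scikit-learn` tooling, and the logistic-regression choice are all assumptions, and `wav_paths`/`labels` are hypothetical inputs.

```python
# Minimal sketch (assumed, not the paper's actual pipeline): utterance-level
# acoustic-prosodic features feeding a trusting-vs-mistrusting classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def prosodic_features(wav_path: str) -> np.ndarray:
    """Summarize one utterance with basic pitch and energy statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Frame-level F0 via probabilistic YIN; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    rms = librosa.feature.rms(y=y)[0]  # frame-level intensity proxy
    return np.array([
        np.nanmean(f0), np.nanstd(f0),   # pitch level and variability
        np.nanmax(f0) - np.nanmin(f0),   # pitch range
        rms.mean(), rms.std(),           # loudness level and variability
        len(y) / sr,                     # utterance duration in seconds
    ])

# wav_paths and labels (1 = trusting, 0 = mistrusting) are assumed to exist.
# X = np.stack([prosodic_features(p) for p in wav_paths])
# clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# print(cross_val_score(clf, X, labels, cv=5).mean())  # within-culture setting
```

Under this sketch, the cross-cultural setting would correspond to fitting the classifier on utterances from speakers of one native language and evaluating on another, and the interpretability analysis to inspecting the fitted coefficients of the (standardized) prosodic features.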