With advances in machine learning and speech technologies, conversational agents are becoming increasingly capable of engaging in human-like conversation. However, trust is crucial for effective communication and collaboration, and understanding the signals of trustworthy speech is essential for successful interactions. While researchers across disciplines have sought to identify these signals, most prior work has focused on human speech; in this paper, we explore human perception of trustworthy synthesized speech. We present the results of a large-scale crowdsourced perception study designed to investigate the acoustic-prosodic properties of trustworthy synthesized speech. We manipulate highly controlled parameters to test the effects of acoustic-prosodic features, including pitch, intensity, and speaking rate, on perceived trustworthiness. To evaluate trust perception in a context that requires vulnerability and trust, we use a real-world application: emotional support dialogues. The findings of this work offer valuable insights for improving the perceived trustworthiness of conversational agents.
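Although the abstract does not specify the synthesis pipeline, controlled manipulations of pitch, intensity, and speaking rate are commonly expressed through SSML prosody settings when rendering stimuli. The sketch below is a minimal, hypothetical illustration of how a 3 x 3 x 3 factorial set of such variants might be specified; the build_ssml helper and the particular offset values are assumptions for illustration, not details from the study.

```python
# Illustrative sketch (not the authors' pipeline): generating SSML prosody
# variants for a factorial manipulation of pitch, intensity, and speaking rate.
# The specific offsets below are hypothetical example values.
from itertools import product

PITCH_SHIFTS = ["-10%", "+0%", "+10%"]      # relative pitch manipulation
VOLUME_SHIFTS = ["-3dB", "+0dB", "+3dB"]    # intensity manipulation
RATE_SCALES = ["85%", "100%", "115%"]       # speaking-rate manipulation

def build_ssml(text: str, pitch: str, volume: str, rate: str) -> str:
    """Wrap an utterance in an SSML <prosody> tag with the given settings."""
    return (
        "<speak>"
        f'<prosody pitch="{pitch}" volume="{volume}" rate="{rate}">{text}</prosody>'
        "</speak>"
    )

if __name__ == "__main__":
    utterance = "I understand how hard this has been for you."
    # One stimulus specification per cell of the 3 x 3 x 3 design.
    for pitch, volume, rate in product(PITCH_SHIFTS, VOLUME_SHIFTS, RATE_SCALES):
        print(build_ssml(utterance, pitch, volume, rate))
```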