Early, accurate detection of cognitive load can help reduce the risk of accidents and injuries and inform intervention and rehabilitation during recovery. Simple, noninvasive biomarkers are therefore desirable for determining cognitive load during cognitively complex tasks. In this study, a novel set of vocal biomarkers is introduced for detecting different cognitive load conditions. Our vocal biomarkers use phoneme- and pseudosyllable-based measures, along with articulatory and source coordination derived from the cross-correlation and temporal coherence of formant and creakiness measures. A ~2-hour protocol was designed to induce cognitive load by stressing auditory working memory: subjects were repeatedly required to recall a sentence while holding a number of digits in memory. We demonstrate the power of our speech features to discriminate between high- and low-load conditions. Using a database of audio from 13 subjects, we apply classification models of cognitive load and show a ~7% detection equal-error rate from features derived from 40 sentence utterances (~4 minutes of audio).
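As a minimal sketch of the cross-correlation component of such coordination features (not the paper's implementation: the function name `xcorr_coordination`, the toy sinusoidal tracks, and the lag range are all illustrative assumptions), one could correlate two feature tracks across a range of time lags and use the resulting vector as a coordination descriptor:

```python
import numpy as np

def xcorr_coordination(track_a, track_b, max_lag=10):
    """Normalized cross-correlation between two feature tracks
    (e.g., two formant contours) over lags -max_lag..max_lag.
    Returns a vector of 2*max_lag + 1 correlation values that
    could serve as a coordination feature."""
    a = (track_a - track_a.mean()) / (track_a.std() + 1e-12)
    b = (track_b - track_b.mean()) / (track_b.std() + 1e-12)
    n = len(a)
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            c = np.dot(a[:lag], b[-lag:]) / n   # b shifted earlier
        elif lag > 0:
            c = np.dot(a[lag:], b[:-lag]) / n   # b shifted later
        else:
            c = np.dot(a, b) / n                # zero lag
        corrs.append(c)
    return np.array(corrs)

# Toy example: two sinusoids with a small, known phase offset
# standing in for a pair of formant tracks.
t = np.linspace(0, 1, 200)
f1 = np.sin(2 * np.pi * 5 * t)
f2 = np.sin(2 * np.pi * 5 * (t - 0.01))  # f2 slightly delayed
feat = xcorr_coordination(f1, f2, max_lag=5)
```

In a real pipeline, such lagged-correlation vectors (and analogous coherence measures) would be computed across many channel pairs and pooled over utterances before classification.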