Motivational Interviewing (MI) is a goal-oriented style of psychotherapy, employed in cases such as addiction, that helps clients explore and resolve their ambivalence about the problem at hand in a dialog setting. MI session quality is typically assessed with behavioral coding, a time-consuming and labor-intensive manual annotation process. This paper examines a computational approach to modeling and assessing the quality of MI sessions. Specifically, we pose the utterance-level behavioral coding task as a sequence tagging problem and use linear-chain CRF models, trained on coded session transcripts and the Switchboard DAMSL dataset, to predict utterance-level behavioral codes as well as dialog acts. We then use those utterance-level predictions to predict session-level behavioral codes of clinical interest that characterize the quality and efficacy of psychotherapy. We experiment with different feature parameterizations and reduced code sets, and present an analysis of how standard dialog acts relate to behavioral codes.
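To make the sequence-tagging formulation concrete, the sketch below trains a linear-chain CRF over per-utterance feature dictionaries using the sklearn-crfsuite library. The feature set, the toy behavioral code labels, and the mini training set here are illustrative assumptions for exposition only, not the paper's actual features or code inventory.

```python
# Minimal sketch: utterance-level behavioral coding as sequence tagging
# with a linear-chain CRF (via sklearn-crfsuite). Features, labels, and
# data below are placeholders, not the paper's actual parameterization.
import sklearn_crfsuite

def utterance_features(session, i):
    """Hypothetical feature map for the i-th utterance in a session."""
    utt = session[i]
    words = utt["text"].split()
    feats = {
        "speaker": utt["speaker"],          # therapist (T) vs. client (C)
        "num_words": len(words),            # utterance length
        "first_word": words[0].lower() if words else "",
    }
    if i > 0:                               # context from the previous turn
        feats["prev_speaker"] = session[i - 1]["speaker"]
    return feats

def session_to_features(session):
    return [utterance_features(session, i) for i in range(len(session))]

# Toy coded transcript: each utterance carries an illustrative behavioral code.
train_sessions = [
    [
        {"speaker": "T", "text": "how do you feel about your drinking", "code": "Question"},
        {"speaker": "C", "text": "i know i should cut back", "code": "ChangeTalk"},
        {"speaker": "T", "text": "you want things to be different", "code": "Reflection"},
    ],
]

X_train = [session_to_features(s) for s in train_sessions]
y_train = [[u["code"] for u in s] for s in train_sessions]

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",
    c1=0.1, c2=0.1,          # L1/L2 regularization (illustrative values)
    max_iterations=100,
    all_possible_transitions=True,
)
crf.fit(X_train, y_train)

# Tag an unseen session: one predicted code per utterance.
test_session = [
    {"speaker": "T", "text": "what brings you in today"},
    {"speaker": "C", "text": "i want to stop smoking"},
]
pred_codes = crf.predict([session_to_features(test_session)])[0]
print(pred_codes)
```

The per-utterance predictions could then be aggregated (e.g., as code counts or proportions over a session) into features for predicting the session-level behavioral codes, mirroring the two-stage setup described above.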