Conversational Artificial Intelligence has evolved to facilitate more efficient communication of user preferences through dialogue. This paper investigates Argumentative Conversational AI systems and, more specifically, defines a methodology for selecting and using plausible arguments to support recommendations. We propose a cross-disciplinary model grounded in cognitive pragmatics to enhance recommendation quality. We first evaluate this linguistically motivated strategy in isolation using simulated dialogues, collecting human judgements to verify that the expected interaction is believable. We then test the full interaction model with human users to evaluate its usability. Results indicate high scores for naturalness and argument selection, validating the system's plausibility and effectiveness. Regarding usability, the system is perceived as attractive and reliable, although technical issues concerning its responsiveness remain.