
AI Chatbots vs Physicians: Comparing Emotional Tone and Readability in Patient Care
The rapid rise of digital health tools has led many patients in India and globally to consult large language models before visiting a clinic. Consequently, understanding the emotional content of AI chatbot responses is becoming essential for modern healthcare providers. A recent cross-sectional study compared 100 physician-answered questions with responses generated by OpenAI’s ChatGPT and Google’s Gemini to evaluate how closely these models mimic the human touch.
Comparing AI Chatbot Emotional Content with Physician Empathy
Researchers found that the primary emotional tone of almost all responses was neutral. However, significant differences emerged when examining secondary and tertiary emotional layers. Specifically, Gemini showed higher odds of expressing fear and compassion compared to human doctors. In contrast, the odds of ChatGPT conveying hope were over 80% lower than those of physicians. These results suggest that while AI can simulate various tones, it often lacks the balanced delivery of hope that characterizes a doctor's reassurance. Therefore, physicians still maintain a unique advantage in managing patient expectations through nuanced emotional support.
Readability and the Complexity of AI Responses
Beyond the nuances of emotional content, the study highlighted major disparities in length and accessibility. For instance, physicians provided the most concise answers, averaging only 193 words. Meanwhile, Gemini produced the most verbose responses, often exceeding 880 words. Notably, the readability scores followed a concerning pattern for patient education: while physicians wrote at approximately a 9th-grade level, Gemini’s content reached an 11th-grade level. This higher complexity might hinder understanding for patients with limited health literacy. Furthermore, Gemini was significantly more likely than its counterparts to include medical disclaimers. Consequently, while AI offers detailed information, it may unintentionally create barriers through overly complex language.
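The study reports readability as school-grade levels but does not specify the exact formula in this summary. As an illustration only, a minimal sketch of the widely used Flesch-Kincaid Grade Level (0.39 × words-per-sentence + 11.8 × syllables-per-word − 15.59), with a rough vowel-group syllable heuristic, shows how such grade scores can be estimated from text:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: each contiguous vowel group counts as one syllable,
    with a minimum of one syllable per word."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Shorter sentences with fewer syllables per word score at a lower grade level,
# which is why verbose, polysyllabic chatbot answers tend to score higher.
simple = "The cat sat on the mat."
complex_text = "Comprehensive pharmacological interventions necessitate multidisciplinary evaluation."
```

This heuristic syllable counter is an approximation; published tools use dictionary-based counting, so exact scores will differ, but the relative ordering (plain physician-style prose scoring lower than dense chatbot prose) holds.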
Clinical Implications for Modern Practice
The integration of AI into the patient journey is inevitable. However, clinicians must remain aware that AI-generated text often tends to be wordier and less readable than professional medical advice. Moreover, the emotional profile of these models varies significantly. While Gemini may sound more compassionate, it also introduces higher levels of fear-based language. Physicians should use these insights to guide patients on the limitations of AI-driven advice. Ultimately, the goal is to leverage AI for efficiency while preserving the simplicity and hope that define the physician-patient relationship.
Frequently Asked Questions
Is AI more empathetic than a doctor in text responses?
AI models like Gemini often use more words associated with compassion. However, they are significantly less likely to convey hope than human physicians. Doctors provide more concise and balanced emotional support.
Which AI chatbot is easier for patients to understand?
The study found that ChatGPT’s responses are generally easier to read than Gemini’s. Nevertheless, both AI models write at a higher grade level than human physicians, who typically provide the most accessible information.
Why do AI chatbots write longer responses than doctors?
AI models are trained to be comprehensive and often include multiple explanations and disclaimers. Consequently, Gemini responses were nearly four times longer than those from human physicians.
Disclaimer: This content is for informational and educational purposes only. It does not constitute professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified healthcare provider with any questions you may have regarding a medical condition. Refer to the latest local and national guidelines for clinical practice.
References
Burns DT et al. Comparison of Emotional Content in Text Responses From Physicians and AI Chatbots to Patient Health Queries: Cross-Sectional Study. J Med Internet Res. 2026 Mar 06. doi: 10.2196/85516. PMID: 41791109.
Ayub A et al. AI chatbots versus human healthcare professionals: a systematic review and meta-analysis of empathy in patient care. PMC. 2025; 15(4): 202-215.
Kutbi D et al. Evaluating the Accuracy of AI-Model Generated Medical Information by ChatGPT and Gemini in Alignment with International Clinical Guidelines. Frontiers in Digital Health. 2026; 8:102345.
