Jindal School Research Shows Patients Trust Fellow Patients More Than AI Accuracy

by - May 27th, 2025 - Faculty/Research


A study conducted by a researcher from the Naveen Jindal School of Management finds that when patients are presented with both algorithm accuracy and peer acceptance information in a medical artificial intelligence system, their decisions are more strongly influenced by what fellow patients have accepted — even when the accuracy of the AI algorithm is high.

Shujing Sun

The study — When Peers Matter More: The Dominance of Social Influence Over Algorithm Accuracy in Patient Decision Making — received the Best Paper Award at the Conference on Health IT and Analytics (CHITA). It was co-authored by Dr. Shujing Sun, assistant professor in the Jindal School’s Information Systems Area; Dr. Lauren Xiaoyuan Lu of the Tuck School of Business at Dartmouth College; Dr. Susan F. Lu of the Rotman School of Management at the University of Toronto; and Dr. Wei Gu of the University of Science and Technology Beijing.

“We show that both technical information (algorithm accuracy) and social information (peer acceptance) significantly affect patients’ acceptance of AI recommendations,” Sun said. “However, the effect is not balanced: when both types of information are presented, social validation from peers dominates patients’ decision-making. Specifically, we found that patients are reluctant to trust AI recommendations when peer acceptance is low, even if the algorithm’s accuracy is high. Conversely, high peer acceptance can lead patients to accept AI recommendations despite suboptimal algorithm accuracy.”

The study showed that when patients see that peers have accepted an AI recommendation, it may enhance their trust in the technology.

“The finding is particularly relevant in the healthcare setting, which is characterized by the high stakes of healthcare decisions, a lack of medical literacy and limited understanding of healthcare AI applications,” Sun said. “Social proof acts as a kind of cognitive bridge, helping patients feel safer making decisions involving unfamiliar technology. So in this sense, social validation can compensate for a lack of understanding or confidence in AI.”

Although the study was conducted in a large Chinese hospital, Sun said its findings can translate to countries with different healthcare systems or cultural attitudes toward technology.

“While the study was based in China, the underlying mechanism that patients rely on social validation — or the herding behavior — in uncertain and high-risk healthcare decisions, is likely universal,” she said. “Nonetheless, the magnitude of the effect may differ. In collectivistic cultures like China, social validation is critical; patients feel more comfortable accepting AI advice if others in their reference groups do so. Conversely, in individualistic cultures like the U.S., where autonomy and independent thinking are prioritized, social validation may play a smaller role.”

The study presented a key ethical challenge: when peer acceptance is low, patients are less likely to accept even highly accurate AI advice. Sun said that in this study, peer acceptance was based on actual patient feedback collected during the trial phase of the AI gatekeeping app.

“Therefore, the peer acceptance measure is somewhat ‘exogenous’ and may not perfectly align with the algorithm performance,” she said. “While this is ideal for the experiment design, for broader applications, we would recommend that designers of AI tools infer social information from diverse and representative populations. Equally important is the transparent communication of how such metrics are generated and presented. The goal is to provide accurate information to support informed decision-making, not manipulate users.”

When patients are confronted with advanced technologies like AI, another potential challenge is the risk of “over-herding”: a tendency to rely too heavily on decisions made by fellow patients, even when doing so can result in poor or even harmful outcomes.

“To mitigate this, it’s essential that healthcare decision-makers and AI system designers ensure transparency in how information is presented to patients,” Sun said. “Additionally, incorporating human gatekeepers like primary-care physicians in the loop can help safeguard patient choices against blind conformity.”

As for the practical takeaways for hospitals or app developers designing patient-facing AI tools, Sun said that a one-size-fits-all approach is probably not ideal in a healthcare setting with very diverse patient populations.

“While strategic information disclosure is a cost-effective way to improve AI acceptance, we would recommend tailoring disclosure to user needs,” she said. “Revealing algorithm accuracy or peer acceptance independently can improve trust, but the combined disclosure — particularly when peer acceptance is high — yields higher acceptance. Additionally, user segmentation is essential. Patients with limited prior experience or those less confident in describing symptoms respond more positively to supportive disclosures. Aligning AI strategies with diverse user needs enhances acceptance rates and maximizes ROI relating to AI.”

Sun said the team is currently working on follow-up studies to see how sharing different types of information affects key parts of the healthcare system — like how efficiently patients are referred to specialists, how much workload primary care doctors take on and how system-level resources are used. The team is also exploring other promising directions, such as using personalized social proof to help patients make more confident decisions and developing explainable AI tools that make the technology more transparent and easier to understand.

Sun, who was previously recognized by CHITA in 2023 with a Best Paper Award and in 2018 with the Young Researcher Best Paper Runner-up award, said receiving this year’s Best Paper Award is a tremendous honor — particularly because of what the conference represents in her field.

“CHITA has been more than just an academic venue for me,” she said. “Personally, it has been a cornerstone of my healthcare research journey since my PhD days.”

She added that the recognition is especially meaningful because the team’s work bridges the gap between research and real-world applications.

“Professionally, it’s incredibly rewarding to see our work resonate with both academic scholars and industry practitioners,” she said.

The study, Sun said, provided timely insights into human-AI interaction and highlighted a way forward for hospital administrators to increase the adoption of AI. She stressed that technical performance alone isn’t enough to ensure success.

“One of the key takeaways is that technical excellence alone is not enough to ensure successful AI implementation,” she said. “Human aversion toward AI remains a significant barrier, especially in healthcare.”

That problem is not insurmountable, Sun said.

“To alleviate such aversion, hospital administrators can augment AI recommendations with peer validation,” she said. “Similar strategies are already successful in e-commerce platforms like Amazon and Yelp, where peer reviews have been shown to drive consumer behavior.”
