Preliminary Evidence from Jindal School Study: AI May Help Close Citation Gaps

by - August 5th, 2025 - Faculty/Research, Featured

Although still early in the review process, a new working paper by researchers from the Naveen Jindal School of Management and the University of Rochester offers initial insights into how generative AI tools like ChatGPT may improve academic writing and narrow citation disparities. As these technologies gain ground in research settings, the study’s preliminary findings add useful context for a rapidly evolving and increasingly important discussion about how scholarly research is produced and evaluated.

The paper — “Writing Matters: Generative AI as an Academic Impact Equalizer” — was written by Drs. Ron Kaniel and Huaxia Rui, both from the University of Rochester; Dr. Pingle Wang, an assistant professor in the Jindal School’s Finance and Managerial Economics Area; and Dr. Shujing Sun, an assistant professor in the Jindal School’s Information Systems Area.

The team analyzed more than 120,000 papers across the disciplines of financial economics, management and political science. The study finds that writing quality is a strong predictor of future citations.

“Improvements post-ChatGPT are especially pronounced among authors from non-English-speaking backgrounds or with fewer prior citations,” Wang said. “This suggests that AI can act as an equalizer in scholarly impact.”

The team validated ChatGPT-based readability scores to ensure they reflect human-like evaluations of writing quality. Wang said recent studies have shown that ChatGPT's evaluations are on par with human evaluations in various dimensions. One example is a 2023 study published in Nature's journal Scientific Reports, which finds that ChatGPT generates higher-quality essays than human-written ones.

The working paper also took traditional linguistic measures into account and found them consistent with the ChatGPT-based measures: the two sets of measures are positively correlated, and in a horse-race comparison the ChatGPT-based measures show stronger explanatory power.
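As a rough illustration of the kind of comparison described, and not the authors' actual code or data, the sketch below assumes a hypothetical dataset with a traditional readability score, a ChatGPT-assigned writing-quality score and a citation count, then checks the correlation between the two measures and runs a simple "horse-race" regression that includes both.

```python
# Illustrative sketch only: the file and column names (flesch_score, gpt_quality,
# citations) are hypothetical assumptions, not the paper's data.
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")  # assumed columns: flesch_score, gpt_quality, citations

# Check that the traditional and ChatGPT-based measures move together.
print(papers[["flesch_score", "gpt_quality"]].corr())

# "Horse race": include both writing-quality measures in one citation model and
# compare their coefficients and explanatory power.
horse_race = smf.ols("citations ~ flesch_score + gpt_quality", data=papers).fit()
print(horse_race.summary())
```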

Given that ChatGPT itself was used to evaluate writing quality, the researchers were aware of the potential bias or circularity in their analysis. To account for this, they used traditional linguistic measures as a robustness check and found consistent results.

“These traditional measures, while more limited in scope, yielded results consistent with the ChatGPT-based evaluations,” Sun said. “While traditional linguistic measures can often capture specific dimensions of writing quality (e.g., readability, lexical diversity, cohesion), ChatGPT-based measures offer a more holistic evaluation of the writing quality. More importantly, we apply the same measurement framework across all author groups, and the focus is on the relative impact of AI on writing quality and its subsequent influence on academic impact (i.e., citations). Hence, any potential bias related to the writing quality measure itself is likely minimal.”

To isolate the causal impact of ChatGPT adoption on citation outcomes, especially given the lack of a traditional control group, the study compared groups of authors based on their English proficiency.

“We followed the idea of difference-in-differences identification,” Sun said. “In particular, we looked at English institution affiliations and English last names. The idea is that the treatment effect is likely to differ by the authors’ English skills. The findings reveal that the availability of ChatGPT increases the writing quality of all authors, but the effect is stronger among authors with weaker English backgrounds. This is why we conclude that AI is leveling the playing field.”
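As a minimal sketch of the difference-in-differences idea Sun describes, assuming hypothetical variable names rather than the paper's actual specification, one could interact a post-ChatGPT indicator with an indicator for authors from non-English-speaking backgrounds.

```python
# Illustrative difference-in-differences sketch; the variable names (writing_quality,
# post_chatgpt, non_english) are hypothetical assumptions, not the paper's model.
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")  # assumed columns: writing_quality, post_chatgpt, non_english

# The interaction term estimates how much more writing quality improved after
# ChatGPT's release for authors from non-English-speaking backgrounds relative
# to other authors.
did = smf.ols("writing_quality ~ post_chatgpt * non_english", data=papers).fit()
print(did.summary())
```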

The results of the study, while still preliminary, suggest ChatGPT helps close citation gaps for non-native English speakers. As to whether this trend will persist as AI tools become ubiquitous across all author groups, Wang was cautious in his answer.

“This is a tough and interesting question for future research,” Wang said. “With the quick evolution of ChatGPT, it becomes more capable in many dimensions. For example, recent reasoning models can help with mathematical proofs. ChatGPT Agent mode allows ChatGPT to autonomously perform tasks on a user’s behalf. In our study, the sample was taken from the model’s early years when ChatGPT first became available and was mostly used for polishing writing.”

When asked whether the observed reduction in citation inequalities might be short-lived, with advantaged groups eventually regaining a lead through better AI use or resource access, Wang said it will be an interesting empirical question for future research.

“There will be real productivity shocks to all groups going forward,” Wang said. “But whether it will benefit advantaged groups more down the road is unclear and, as of right now, could go either way.”

Wang said peer reviewers’ awareness of AI-written content could bias their evaluations, either positively or negatively.

“There are different ways to use AI to polish the writing,” Wang said. “One extreme is to fully rely on AI-written content without editing. This is likely bad practice and can backfire if reviewers recognize it. On the other hand, using AI to polish the content and internalize the writing can positively affect the publishing and citation outcome. In fact, journals do not forbid AI-assisted writing, and many journals have asked authors to disclose whether they have used AI for editing.”

As for what comes next, Wang said there are many directions this research topic could take.

“In our paper’s conclusion, we stated that AI can have a significant impact on other dimensions of academic research, such as idea generation and hypothesis development. New inequalities could emerge because of that.”
