‘Vague, confusing, and did nothing to improve my work’: how AI can undermine peer review

The Conversation


Earlier this year I received comments on an academic manuscript of mine as part of the usual peer review process, and noticed something strange.

My research focuses on ensuring trustworthy evidence is used to inform policy, practice and decision making. I often collaborate with groups like the World Health Organization to conduct systematic reviews that inform clinical and public health guidelines or policy. The paper I had submitted for peer review was about systematic review conduct.

What I noticed raised my concerns about the growing role artificial intelligence (AI) is playing in the scientific process.

A SERVICE TO THE COMMUNITY

Peer review is fundamental to academic publishing, ensuring research is rigorously critiqued prior to publication and dissemination. In this process, researchers submit their work to a journal, where editors invite expert peers to provide feedback.

This benefits all involved. For peer reviewers, it is favourably considered when applying for funding or promotion, as it is seen as a service to the community. For researchers, it challenges them to refine their methodologies, clarify their arguments and address weaknesses to prove their work is worthy of publication. For the public, peer review ensures that the findings of research are trustworthy.

Even at first glance, the comments I received on my manuscript in January this year seemed odd. First, the tone was far too uniform and generic, with an unexpected lack of nuance, depth or personality. The reviewer had also provided no page or line numbers, and no specific examples of what needed to be improved to guide my revisions. For example, they suggested I “remove redundant explanations”, but didn’t indicate which explanations were redundant, or even where they occurred in the manuscript.

They also suggested I order my reference list in a bizarre manner that disregarded the journal’s requirements and followed no format I have seen in any scientific journal. They provided comments on subheadings that didn’t exist. And although the journal required no “discussion” section, the reviewer offered the following suggestion to improve my non-existent discussion:

> Addressing future directions for further refinement of [the content of the paper] would enhance the paper’s forward-looking perspective.

TESTING MY SUSPICIONS

To test my suspicion that the review was, at least in part, written by AI, I uploaded my own manuscript to three AI models: ChatGPT-4o, Gemini 1.5 Pro and DeepSeek-V3. I then compared the comments from the peer review with the models’ output.
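A crude version of this comparison can be scripted. The sketch below uses Python’s standard-library difflib to score the string similarity between a reviewer comment and a model’s output; the strings are illustrative placeholders rather than the actual review text, and string similarity alone cannot prove AI authorship.

```python
# Rough similarity check between a reviewer comment and an AI model's
# output, using only the Python standard library. The two strings are
# illustrative placeholders, not the actual review text.
from difflib import SequenceMatcher

reviewer_comment = (
    "Briefly address the broader implications of the tool "
    "for systematic review outcomes to emphasise its importance."
)
model_output = (
    "Conclude with a sentence summarising the broader implications or "
    "potential impact of the tool on systematic reviews or "
    "evidence-based practice."
)

# ratio() returns a float in [0, 1]; 1.0 means the strings are identical.
similarity = SequenceMatcher(
    None, reviewer_comment.lower(), model_output.lower()
).ratio()
print(f"similarity: {similarity:.2f}")
```

A high score only flags overlap in wording; judging whether feedback is genuinely AI-generated still requires the kind of side-by-side human reading described here.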


For example, the comment from the peer reviewer regarding the abstract read:

> Briefly address the broader implications of [main output of paper] for systematic review outcomes to emphasise its importance.

The output from ChatGPT-4o regarding the abstract read:

> Conclude with a sentence summarising the broader implications or potential impact of [main output of paper] on systematic reviews or evidence-based practice.

The comment from the peer reviewer regarding the methods read:

> Methodological transparency is commendable, with detailed documentation of the [process we undertook] and the rationale behind changes. Alignment with [gold standard] reporting requirements is a strong point, ensuring compatibility with current best practices.

The output from ChatGPT-4o regarding the methods read:

> Clearly describes the process of [process we undertook], ensuring transparency in methodology. Emphasises the alignment of the tool with [gold standard] guidelines, reinforcing methodological rigour.

But the biggest red flag was the difference between the peer reviewer’s feedback and the feedback of the associate editor of the journal to which I had submitted my manuscript. Where the associate editor’s feedback was clear, instructive and helpful, the peer reviewer’s feedback was vague, confusing, and did nothing to improve my work.

I expressed my concerns directly to the editor-in-chief. To their credit, I was met with immediate thanks for flagging the issues and for documenting my investigation, which, they said, was “concerning and revealing”.

CAREFUL OVERSIGHT IS NEEDED

I do not have definitive proof the peer review of my manuscript was AI-generated. But the similarities between the comments left by the peer reviewer and the output from the AI models were striking.

AI models can make research faster, easier and more accessible. However, their use as a tool to assist in peer review requires careful oversight: current guidance on AI use in peer review is mixed, and its effectiveness unclear.

If AI models are to be used in peer review, authors have the right to be informed and given the option to opt out. Reviewers also need to disclose the use of AI in their review.


However, enforcement remains an issue, and it falls to journals and editors to ensure that peer reviewers who use AI models inappropriately are flagged.

I submitted my research for “expert” review by my peers in the field, yet received AI-generated feedback that ultimately failed to improve my work. Had I accepted these comments without question, and if the associate editor had not provided such exemplary feedback, there is every chance this could have gone unnoticed. My work may have been accepted for publication without being properly scrutinised, then disseminated to the public as “fact” corroborated by my peers, despite my peers not actually having reviewed the work themselves.

