Flood of 'junk': How AI is changing scientific publishing

Scientific sleuth Elisabeth Bik fears that a flood of AI-generated images and text in academic papers could weaken trust in science. Photo: Amy Osborne / AFP/File
Source: AFP

An infographic of a rat with a preposterously large penis. Another showing human legs with way too many bones. An introduction that starts: "Certainly, here is a possible introduction for your topic".

These are a few of the most egregious examples of artificial intelligence that have recently made their way into scientific journals, shining a light on the wave of AI-generated text and images washing over the academic publishing industry.

Several experts who track down problems in studies told AFP that the rise of AI has turbocharged the existing problems in the multi-billion-dollar sector.

All the experts emphasised that AI programmes such as ChatGPT can be a helpful tool for writing or translating papers -- if thoroughly checked and disclosed.

But that did not happen in several recent instances that somehow slipped past peer review.

Earlier this year, a clearly AI-generated graphic of a rat with impossibly huge genitals was shared widely on social media.

It was published in a journal of academic giant Frontiers, which later retracted the study.

Another study was retracted last month for an AI graphic showing legs with odd multi-jointed bones that resembled hands.

While these examples were images, it is thought to be ChatGPT, a chatbot launched in November 2022, that has most changed how the world's researchers present their findings.

A study published by Elsevier went viral in March for its introduction, which was clearly a ChatGPT prompt that read: "Certainly, here is a possible introduction for your topic".

Such embarrassing examples are rare and would be unlikely to make it through the peer review process at the most prestigious journals, several experts told AFP.

Tilting at paper mills

It is not always so easy to spot the use of AI. But one clue is that ChatGPT tends to favour certain words.

Andrew Gray, a librarian at University College London, trawled through millions of papers searching for the overuse of words such as "meticulous", "intricate" or "commendable".

He determined that at least 60,000 papers involved the use of AI in 2023 -- over one percent of the annual total.

"For 2024 we are going to see very significantly increased numbers," Gray told AFP.

Meanwhile, more than 13,000 papers were retracted last year, by far the most in history, according to the US-based group Retraction Watch.

AI has allowed the bad actors in scientific publishing and academia to "industrialise the overflow" of "junk" papers, Retraction Watch co-founder Ivan Oransky told AFP.

Such bad actors include what are known as paper mills.

These "scammers" sell authorship to researchers, pumping out vast amounts of very poor quality, plagiarised or fake papers, said Elisabeth Bik, a Dutch researcher who detects scientific image manipulation.

Two percent of all studies are thought to be published by paper mills, but the rate is "exploding" as AI opens the floodgates, Bik told AFP.

This problem was highlighted when academic publishing giant Wiley purchased troubled publisher Hindawi in 2021.

Since then, the US firm has retracted more than 11,300 papers related to special issues of Hindawi, a Wiley spokesperson told AFP.

Wiley has now introduced a "paper mill detection service" to detect AI misuse -- which itself is powered by AI.

'Vicious cycle'

Oransky emphasised that the problem was not just paper mills, but a broader academic culture which pushes researchers to "publish or perish".

"Publishers have created 30 to 40 percent profit margins and billions of dollars in profit by creating these systems that demand volume," he said.

The insatiable demand for ever-more papers piles pressure on academics who are ranked by their output, creating a "vicious cycle," he said.

Many have turned to ChatGPT to save time -- which is not necessarily a bad thing.

Because nearly all papers are published in English, Bik said that AI translation tools can be invaluable to researchers -- including herself -- for whom English is not their first language.

But there are also fears that the errors, inventions and unwitting plagiarism by AI could increasingly erode society's trust in science.

Another example of AI misuse came last week, when a researcher discovered that what appeared to be a ChatGPT-rewritten version of one of his own studies had been published in an academic journal.

Samuel Payne, a bioinformatics professor at Brigham Young University in the United States, told AFP that he had been asked to peer review the study in March.

After realising it was "100 percent plagiarism" of his own study -- but with the text seemingly rephrased by an AI programme -- he rejected the paper.

Payne said he was "shocked" to find the plagiarised work had simply been published elsewhere, in a new Wiley journal called Proteomics.

It has not been retracted.
