Additionally, the 217-year-old publisher announced the closure of 19 journals due to large-scale research fraud. The fake papers often contained nonsensical phrases generated by AI to avoid plagiarism detection. Examples include “breast cancer” rendered as “bosom peril” and “fluid dynamics” written as “gooey stream.” In one paper, “artificial intelligence” was called “counterfeit consciousness.”
These systemic frauds have damaged the legitimacy of scientific research and eroded the integrity of scientific journals. The academic publishing industry, valued at nearly $30 billion, now faces a credibility crisis.
Scientists are cutting corners by using AI in a fraudulent manner
Scientists worldwide face immense pressure to publish, as their careers are often defined by the prestige of their peer-reviewed publications. Competing for funds, researchers cut corners by padding papers with irrelevant references and leaning on generative AI. A scientific paper is supposed to include citations that acknowledge the original research informing the current work. However, some papers feature lists of irrelevant references added purely to make the work look legitimate. In many cases, researchers rely on AI to generate these citations, but many of them either don’t exist or don’t apply to the paper at hand. In one cluster of retracted studies, nearly identical contact emails were registered to a university in China, although few, if any, of the authors were based there.
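As an illustration, fabricated references can sometimes be flagged automatically by checking whether each cited DOI actually resolves in a bibliographic database. The Python sketch below queries the public Crossref REST API; it is a minimal sketch under the assumption that citations carry DOIs, and the function name and sample DOIs are illustrative rather than drawn from any tool mentioned in this article.

```python
# Minimal sketch: flag references whose DOIs do not resolve in Crossref.
import urllib.error
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False on a 404."""
    try:
        with urllib.request.urlopen(CROSSREF_API + doi, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Crossref has no record of this DOI
            return False
        raise  # other failures (e.g. rate limiting) are not evidence of fraud

# Illustrative reference list: one real DOI, one obviously fabricated one.
for doi in ["10.1038/s41586-020-2649-2", "10.9999/fake.doi.2024"]:
    verdict = "found" if doi_exists(doi) else "NOT FOUND - possible fabrication"
    print(f"{doi}: {verdict}")
```

A check like this only catches references that do not exist; citations that are real but irrelevant to the paper still require human judgment.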
Generative AI is also used to disguise plagiarism throughout new scientific papers. Many of these fraudulent papers contain technical-sounding passages that were generated by AI and buried midway through the paper, where peer reviewers are less likely to notice them. These tortured AI phrases replace the real terms of the original research so that plagiarism-screening tools do not flag the copied text.
Guillaume Cabanac, a computer science researcher at Université Toulouse III-Paul Sabatier in France, developed a tool called the Problematic Paper Screener to identify such issues. It scans a vast body of published literature, about 130 million papers, for various red flags, including “tortured phrases.” Cabanac and his colleagues discovered that researchers who attempt to evade plagiarism detectors often replace key scientific terms with synonyms produced by automatic text generators.
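To make the screening idea concrete, here is a minimal Python sketch of tortured-phrase detection. Its phrase list contains only the substitutions quoted earlier in this article; the actual Problematic Paper Screener relies on a far larger curated list maintained by Cabanac and his colleagues.

```python
# Minimal sketch: scan text for known "tortured phrases" and report the
# standard scientific terms they likely replaced. The phrase list below
# uses only the examples quoted in this article.
TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "gooey stream": "fluid dynamics",
    "counterfeit consciousness": "artificial intelligence",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(phrase, original)
            for phrase, original in TORTURED_PHRASES.items()
            if phrase in lowered]

abstract = ("We apply counterfeit consciousness to simulate the gooey "
            "stream around a turbine blade.")
for phrase, original in find_tortured_phrases(abstract):
    print(f"Flagged '{phrase}' (likely substituted for '{original}')")
```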
"Generative AI has just handed them a winning lottery ticket,” said Eggleton of IOP Publishing. “They can produce these papers cheaply and at scale, while detection methods have not yet caught up. I can only see this challenge growing."
Approximately 1% of all published science papers are generated by computers
In fact, a researcher at University College London recently found that approximately one percent of all scientific articles published last year, some 60,000 papers, were written by a computer. In some fields, that figure rises to as many as one in five papers.
For example, a recent paper published in Surfaces and Interfaces, an Elsevier journal, contained the line: “certainly, here is a possible introduction for your topic.” Researchers are using AI chatbots and large language models (LLMs) without even reviewing the output before submitting it for publication. A quick edit would have caught that obviously computer-written phrase. If researchers, peer reviewers and publishers are missing giveaways this basic, what else in these papers is fabricated, plagiarized or made up by a computer?
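A complementary check looks for verbatim chatbot leftovers rather than synonym swaps. The short Python sketch below scans a manuscript for marker strings; the first marker is the line quoted above from Surfaces and Interfaces, while the others are assumed examples of common chatbot phrasing, not an exhaustive or authoritative list.

```python
# Minimal sketch: flag verbatim chatbot boilerplate left in a manuscript.
# Only the first marker comes from this article; the rest are assumptions.
CHATBOT_MARKERS = [
    "certainly, here is a possible introduction",
    "as an ai language model",
    "regenerate response",
]

def flag_llm_leftovers(manuscript: str) -> list[str]:
    """Return every marker phrase that appears in the manuscript text."""
    lowered = manuscript.lower()
    return [marker for marker in CHATBOT_MARKERS if marker in lowered]

print(flag_llm_leftovers(
    "Certainly, here is a possible introduction for your topic: ..."))
# -> ['certainly, here is a possible introduction']
```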
Scientific integrity consultant Elisabeth Bik said LLMs are designed to generate text, not to produce factually accurate ideas. “The problem is that these tools are not good enough yet to trust,” Bik says, pointing to a phenomenon known as “hallucination,” in which the models “make stuff up.”
Blind trust in AI is damaging the integrity of science papers. It takes strong reasoning and discernment to navigate the untrustworthy, distortion-prone and bloviating output of large language models.
Sources include:
Joannenova.com
ScientificAmerican.com
ARXIV.org