AI-generated research poses a threat, both to society’s knowledge and to public trust in science. That’s the conclusion reached by researchers at the Swedish School of Library and Information Science at the University of Borås, Sweden, who recently identified more than a hundred suspected articles generated by AI in the Google Scholar search engine.
Their research reports that fake, AI-generated scientific articles have flooded the Google Scholar search engine. The study's findings mean that AI-made junk science can now be produced and widely disseminated at a much lower cost to bad actors. The work is published in the Harvard Kennedy School Misinformation Review.
This constitutes a danger both for society and for the research community, according to Jutta Haider and Björn Ekström of the Swedish School of Library and Information Science, who conducted the study together with Kristofer Söderström of Lund University and Malte Rödl of the Swedish University of Agricultural Sciences.
Increased risk of evidence hacking
One of the main concerns with AI-generated research is the increased risk of evidence hacking, that is, the use of fake research for strategic manipulation.
“The risk of what we call ‘evidence hacking’ increases significantly when AI-generated research is served up in search engines. This can have tangible consequences, as incorrect results can seep further into society and perhaps also into more and more domains,” said Ekström, who holds a doctorate in library and information science.
The researchers behind the study have already found that these problematic articles have spread to other parts of the web’s search infrastructure, such as archives and social networks. Propagation is rapid, and Google Scholar makes the problematic articles visible. Even if the articles are removed, there is a risk that they have already had time to spread and will continue to do so.
Additionally, AI-generated research poses challenges to the already strained peer review system.
Imposes higher demands on information literacy
The fact that AI-generated research is spreading through search engine databases places higher demands on everyone who searches for information online.
“If we can’t be sure that the research we read is authentic, we risk making decisions based on incorrect information. But as much as this is a question of scientific misconduct, it is also a question of media and information literacy,” said Haider, professor of library and information science.
She emphasizes that Google Scholar is not an academic database. The search engine is easy to use and fast but lacks quality assurance procedures. This is already a problem with regular Google results, but it’s even more problematic when it comes to making science accessible.
“People’s ability to determine which journals and publishers, for the most part, publish quality-assured research is crucial for finding and identifying reliable research, and it is of great importance for decision-making and opinion formation,” Haider concluded.
More information:
Jutta Haider et al, GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation, Harvard Kennedy School Misinformation Review (2024). DOI: 10.37016/mr-2020-156
Provided by the University of Borås
Citation: AI-fabricated ‘junk science’ is flooding Google Scholar, researchers warn (January 13, 2025) retrieved January 14, 2025 from https://phys.org/news/2025-01-ai-fabricated-junk-science-google.html
This document is subject to copyright. Except for fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.