AI Findings Being Polluted by Bogus Research Studies


Written by Jackie Bilodeau

I am the Communications Director for CGNET, having returned to CGNET in 2018 after a 10-year stint in the 1990s. I enjoy hiking, music, dance, photography, writing and travel. Read more about my work at CGNET here.

August 21, 2025

In the age of artificial intelligence, the internet is not just a repository of knowledge—it’s the training ground for machines that help us make decisions, write reports, and even conduct research. But what happens when the data feeding these systems is tainted?

A growing body of evidence suggests that fraudulent research studies are being uploaded to the internet at an alarming rate, and AI systems are unknowingly ingesting them. This phenomenon is threatening the integrity of scientific discourse and the reliability of AI-assisted research.

The Rise of Fake Science

Fake science is nothing new; case in point is RFK Jr.'s (and others') reliance on Andrew Wakefield's 1998 Lancet paper, which falsely linked the MMR (measles, mumps, rubella) vaccine to autism. That paper was eventually retracted for scientific fraud and data manipulation, and Wakefield lost his medical license as a result. Despite these facts, the anti-vaccine movement has persisted in spreading myths and misinformation tying vaccines to numerous health conditions, including not only autism but, in some fringe corners, other developmental disorders.

However, recent investigations have uncovered networks of bad actors infiltrating academic publishing to promote fake science. A study published in PNAS (Proceedings of the National Academy of Sciences) analyzed over 5 million articles across 70,000 journals and found widespread evidence of fraudulent publications. These include fabricated data, plagiarized content, and manipulated images—often escaping traditional peer review through editor collusion and "paper mills."

One particularly disturbing case involved the Global International Journal of Innovative Research, which published AI-generated articles falsely attributed to real researchers. These papers featured formulaic writing, missing empirical data, and misleading citations. Turnitin’s AI detection tool flagged many of them as 100% AI-generated.

Google Scholar: A Gateway for Misinformation

Google Scholar, a widely used academic search engine, has become a hotspot for bogus research studies. Unlike traditional databases, Google Scholar lacks rigorous quality control and allows anyone to upload papers. This has led to a flood of AI-generated content appearing alongside legitimate research, especially in controversial fields like health, environment, and computing.

Researchers found that many of these papers contained telltale signs of AI authorship—phrases like “as of my last knowledge update” or “I don’t have access to real-time data”—and yet failed to disclose any use of AI tools. Worse, these papers are often duplicated across multiple repositories and social media platforms, making retraction nearly impossible.
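Screening for these telltale phrases is simple enough to automate. Below is a minimal, illustrative sketch of the idea: scan a paper's text for boilerplate strings that chatbots commonly emit. The phrase list and function name are assumptions for demonstration, not the actual tooling researchers used, and a real detector would need far more than string matching.

```python
# Minimal sketch: flag text containing boilerplate phrases that often
# betray undisclosed AI authorship. The phrase list below is an
# illustrative assumption, not a validated detection rule set.

AI_TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "i don't have access to real-time data",
    "as an ai language model",
]

def flag_ai_telltales(text: str) -> list[str]:
    """Return any telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLTALE_PHRASES if phrase in lowered]

sample = "As of my last knowledge update, no such dataset exists."
print(flag_ai_telltales(sample))  # ['as of my last knowledge update']
```

A scan this crude only catches the sloppiest cases; papers where the boilerplate has been edited out would sail through, which is why the article's broader point about disclosure policies and stronger detection tools matters.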

Why This Matters for AI and Society

AI systems trained on publicly available data—including academic papers—are vulnerable to absorbing and amplifying misinformation. This poses serious risks:

  • Evidence Hacking: AI-generated studies can be used to manipulate public opinion or policy by flooding the evidence base with misleading information.
  • Erosion of Trust: When fake studies are indistinguishable from real ones, public trust in science and AI tools diminishes.
  • Compromised Research: Meta-analyses and systematic reviews may unknowingly include fraudulent data, skewing results and delaying progress.

What Can Be Done?

Experts suggest a multi-pronged approach to combat this issue:

  • Stricter Verification: Journals must enhance identity checks for authors and enforce AI disclosure policies.
  • Improved Detection Tools: AI itself can be used to detect AI-generated content, but these tools must be transparent and widely adopted.
  • Reforming Incentives: The “publish or perish” culture in academia incentivizes quantity over quality. Changing how research is evaluated could reduce the pressure to produce at any cost.

Final Thoughts

AI is not the villain—it’s a mirror reflecting the data we feed it. If we allow the internet to be flooded with bogus research, we risk creating a feedback loop of misinformation that undermines both science and technology. The solution lies in vigilance, transparency, and a renewed commitment to research integrity.


For over forty years, CGNET has provided state-of-the-art IT services to organizations of all sizes, across the globe. We’ve done it all, from IT and cybersecurity assessments to cloud services management to generative AI user training. Want to learn more about who we are and how we might be able to help you? If so, check out our website or send us a message!
