Defaming Artificial Intelligence: Why Is The Guardian Scared of Sound, Objective Science?

July 26, 2025

Tony Cox's line-by-line comments on The Guardian article
Louis Anthony Cox, Jr., Ph.D., tcoxdenver@aol.com
July 25, 2025

Introduction

This document provides a detailed, line-by-line response to the June 27, 2025 article published by The Guardian, currently titled “How a New AI Tool Could Amplify Doubt in Pollution Science.” The Guardian has issued multiple post-publication corrections to this article—and further corrections may still be warranted. As it stands, the article still contains numerous false, misleading, and defamatory statements about my work, character, and research.

The Guardian article misrepresents:

  • The purpose and function of the AI technology and AI-based manuscript screening tool I developed;
  • My scientific record, including my methods, funding, presentations, and peer-reviewed publications;
  • My professional integrity and competence, by misquoting or selectively framing my public statements – and by attributing to me wholly fictitious remarks that contradict my published work;
  • The technology itself, which the article inaccurately describes and mischaracterizes.

This document serves three purposes:

  1. To help correct the record, using verifiable facts and primary sources;
  2. To clarify what the AI tool is—and is not;
  3. To defend the integrity of rigorous, transparent, and empirically grounded science against irresponsible framing.

The first section provides a summary of key points generated using ChatGPT based on this rebuttal. The second section offers detailed, point-by-point comments and analysis addressing each misstatement or omission.

Why This Response Is Necessary

The Guardian article promotes a narrative grounded more in ideological framing and insinuation than in factual reporting or informed critique. It relies heavily on unvetted commentary from selected “experts” who:

  • Display no evident familiarity with the AI tool (mischaracterizing it as a “bot” or LLM);
  • Make false or unfounded accusations, which the article presents uncritically;
  • Appear to share a strong ideological bias, leading them to misrepresent my motives and to attack my calls for the use of sound, objective, transparent scientific methods in regulatory risk assessment.

Any of the tool’s hundreds of actual users could have provided a far more accurate and informed description of its purpose and operation. Instead, the article elevates uninformed speculation over firsthand knowledge or verifiable evidence.

This response restores that missing context—by providing direct citations, primary sources, and accurate descriptions of both the technology and the science at issue—so that readers can evaluate the facts for themselves.

For the full document, please click here