Thursday, 29 May 2025

LIBBY EMMONS: AI generates lies, media outlets run them without checking


AI, at its best, simulates thinking, but does not actually think. 


A syndicated summer supplement called Heat Index that landed in the Chicago Sun-Times and the Philadelphia Inquirer was full of AI lies. The author tasked with compiling its list of must-read summer lit used AI to generate the titles and blurbs, didn't check the "facts" the chatbot spat out, and submitted it to editors who didn't check it either, and the piece was published and inserted into newspapers across the country.

The advertorial of "303 Must-Dos, Must-Tastes, and Must-Tries" cited at least 10 fake books, and they all leaned woke. The fake books were attributed to real authors. Isabel Allende never wrote the "must read" Tidewater Dreams, blurbed as a "multigenerational saga set in a coastal town where magical realism meets environmental activism." The blurb continued: "Allende's first climate fiction novel" (which apparently is a genre) "explores how one family confronts rising sea levels while uncovering long-buried secrets."

Another fake book was Nightshade Market by Min Jin Lee, described as "a riveting tale set in Seoul's underground economy. Following three women whose paths intersect in an illegal night market, the novel examines class, gender, and the shadow economies beneath prosperous societies."

Boiling Point, which was not written by Rebecca Makkai, "centers on a climate scientist forced to reckon with her own family's environmental impact when her teenage daughter becomes an eco-activist targeting her mother's wealthy clients."

The Last Algorithm, falsely attributed to Andy Weir, oddly takes aim at generative AI itself. If it actually existed, it would be about "a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years." An AI fabricated a fake book about a dominant AI. It's like it knows something we're all too stupid to realize.

These kinds of AI-generated lies are called AI hallucinations: the AI extrapolates from the information it is given and produces things that are just not true. In one instance, an AI support bot for the computer programming tool Cursor announced a false policy, causing customers to cancel their accounts. "The A.I. bot had announced a policy change that did not exist," The New York Times reported.

It is easy to imagine that AI is "thinking for itself," but we can only imagine that if we downplay what it is to think, which diminishes mankind. AI is made in man's image. We, mankind, do not have the ability to make the generative spark of creation; we can only simulate it. AI, at its best, simulates thinking, but does not actually think. Mankind still rules this machine.

Social media users were the ones who sounded the alarm on the fakery, pointing out that the advertorial, originally produced for Hearst's King Features, cited books that don't exist. The Atlantic spoke to the author of many of the sections in the supplement, Marco Buscaglia, who admitted using ChatGPT to concoct the piece.

He also told them that he used to work as an editor at the Park Ridge Times Herald, which was rolled up into Pioneer Press, part of the Tribune Publishing Company; Buscaglia said he "loved that job." Now he has a day job editing materials for AT&T and works on this other stuff at night.

Buscaglia said he had used AI to come up with recommendations for the book section, saying he was just looking "for information." Apparently, he uses the tool in his work all the time. But in this instance, he didn't even check the veracity of the plausible-sounding nonsense that the AI spat out at him. He submitted it to King Features, which barely made any changes, clearly didn't fact-check it, and cleared it for syndication.

The Atlantic dug deeper and found additional fictions masquerading as facts, such as a citation of a man called Mark Ellison, whom the AI described as the resource management coordinator for Great Smoky Mountains National Park. The only problem is that the real Ellison never held that job. "I have never worked for the park service," he said. "I have never communicated with this person."

Buscaglia was surprised. Clearly he has gotten sloppy with his use of AI and trusts it so much that he doesn't even check the assertions it spits out. The editors up the food chain don't check it either. It's like no one cares at all what the words say or whether they're true. The big newspapers didn't check it either; they just slapped their logos on it, stuffed it into their papers, and peddled it to readers.

"There was some majorly missed stuff by me," Buscaglia told The Atlantic. "I don’t know. I usually check the source. I thought I sourced it: He said this in this magazine or this website. But hearing that, it’s like, obviously he didn’t." The systems used don't know the difference between truth and lies.

Buscaglia did not tell The Atlantic what prompt he used to generate the material, and that is likely a large part of the problem. AI chatbots respond to prompts, and if Buscaglia entered something vague, such as "give me a list of great summer reads," without specifying further parameters, that could have produced a faulty list. AI is garbage in, garbage out; like any other tool, it requires a level of sophistication and an understanding of how to use it.
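To make the garbage-in, garbage-out point concrete, here is a hedged sketch of the difference between a vague prompt and a constrained one. The article does not say which chatbot or model Buscaglia actually used, so the OpenAI Python client, the model name, and the prompt wording below are all assumptions for illustration:

```python
# Sketch of prompt discipline, not a cure for hallucination: even a constrained prompt
# can yield invented titles, so the output still needs the kind of catalog check above.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

VAGUE_PROMPT = "Give me a list of great summer reads."  # invites invention; shown for contrast

CONSTRAINED_PROMPT = (
    "List 15 real, published books suitable for a summer reading list. "
    "For each, give the exact title, author, publisher, and year of publication. "
    "Only include books you are certain actually exist; if you are unsure about "
    "one, leave it out rather than guess."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model would do
    messages=[{"role": "user", "content": CONSTRAINED_PROMPT}],
)
print(response.choices[0].message.content)
```

Tighter wording only narrows the room for invention; it does not remove it, which is why the verification step still matters.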

What's broken here is not just the will and morale of an underpaid writer penning advertorials for syndication, but the entire ecosystem that allows this practice to thrive. No editors fact-checked the work, no humans took a second look at it, and no care was put into it, so the AI lies were allowed to stand. Of course, the outlets that ran it issued statements about journalistic integrity, etc., but are we really supposed to believe they didn't run those statements through AI?

Teachers use AI to grade papers that students wrote using ChatGPT. Marketing people use AI to generate reports and presentations and pass them off as their own work, while those for whom the presentations and reports are generated try to look interested, secretly imagining themselves anywhere but in the room listening to the slop and drivel pouring out for their edification.

No one wants to write it, no one wants to read it, but the content still gets generated so ads can fire. Ars Technica blames declining newsroom budgets and shrinking staff. The Atlantic says it's the "result of a local-media industry that’s been hollowed out by the internet, plummeting advertising, private-equity firms, and a lack of investment and interest in regional newspapers."

AI is touted across industries as the greatest thing ever, a tool that will revitalize everything from smartphones to cars to software to gaming to news writing to art and movies and anything else that is AI-powered or threatening to be.

But it's actually a regurgitation machine that is only increasing in speed. AI consumes what has already been written in order to generate new material that contains seeds of what is real. It does not distinguish between truth and lies, and soon, under its influence, neither will we. We won't know how. AI is a tool that can serve people who know how to use it, but, as with any tool, that skill must be learned.
