A Stanford 'misinformation specialist' who founded the university's Social Media Lab has been accused in a court filing of fabricating sources in an affidavit supporting new Minnesota legislation that bans so-called 'election misinformation.'
For a $600-an-hour expert witness fee, Stanford professor Jeff Hancock, whose biography claims he's "well-known for his research on how people use deception with technology," apparently used deception with technology himself by citing numerous academic works that do not appear to exist, the Minnesota Reformer reports.
At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called “deep fake” technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.
Hancock’s expert declaration in support of the deep fake law cites numerous academic works. But several of those sources do not appear to exist, and the lawyers challenging the law say they appear to have been made up by artificial intelligence software like ChatGPT.
As an example, the declaration cites a study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," purportedly published in the Journal of Information Technology & Politics in 2023. However, no study by that name appears in that journal, and academic databases have no record of it existing.
The specific journal pages referenced are from two completely different articles.
"The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT," wrote the plaintiffs' attorneys. "Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question."
Libertarian law professor Eugene Volokh found another apparently fabricated entry: a study titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which also doesn't appear to exist.
According to the Reformer, if the citations were fabricated by AI, Hancock's 12-page declaration may have been cooked up in its entirety.
Frank Bednarz, an attorney for the plaintiffs, noted that supporters of the deep fake law have argued that "unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education." However, he said, "by calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech — not censorship."
Stanford University "disinformation professor" Jeff Hancock submitted a declaration about the dangers of disinformation. It appears he cited fake studies on disinformation, and he may have used ChatGPT to write his report.
— Cernovich (@Cernovich) November 24, 2024
He's not responding to media inquiries. pic.twitter.com/jIUv2FMN6F