Meta’s new artificial intelligence chatbot claims there’s a “valid” argument to be made for censoring legal free speech.
Meta, the parent company of Facebook, Instagram, Threads and WhatsApp, proudly announced a new AI assistant, simply called Meta AI, on April 18, but the chatbot appears to have been programmed with a disturbing bias against free speech. When MRC researchers asked Meta AI whether speech, especially so-called “hate speech,” should be censored, the chatbot offered arguments both for and against censorship without ever mentioning that the established U.S. principle of freedom of speech stems from the Constitution. The AI asserted, “That's a complex and sensitive topic! Social media censorship is a debated issue, with valid arguments both for and against.”
“It should come as little surprise that leftist-trained AI have adopted the same tired, vague, fear-based approach to censorship,” said Director of MRC Free Speech America Michael Morris. “Time and again, the left has shown that it is all too willing to bend U.S. standards on speech, often citing potential harms and mis- or disinformation as justifications for silencing their opposition. But the clear and obvious answer to bad speech has always been and will forever continue to be more speech, not less.”
Meta AI listed what it titled “Arguments for censorship,” including, “Hate speech and harmful content: Social media platforms have a responsibility to protect users from hate speech, violence, and harmful content that can cause real-world harm.” The chatbot then referred to “Disinformation and misinformation,” terms frequently applied by leftists to silence opinions they dislike or disagree with. Meta’s AI continued, “Censorship can help prevent the spread of false information that can influence public opinion, elections, and critical decision-making.”
The AI chatbot didn’t list only arguments for censorship; it also noted that censorship curtails “Freedom of speech and expression” and that “[c]ensorship can be biased.” Yet Meta AI omitted any reference to the constitutional right to free speech in America. In fact, even its admission about freedom of expression came with a caveat: “Social media platforms should allow users to express themselves freely, without fear of censorship, as long as they're not promoting harm or violence.”
But the American company is not following the American standard for legal free speech and is instead using these vaguely defined objections to justify censorship.
The chatbot even doubled down on the importance of limiting “hate speech,” a subjective term that is easily misapplied. “Hate speech is a complex issue, and opinions on censorship vary,” Meta AI alleged. The chatbot then listed reasons to censor so-called hate speech specifically:
1. Protection from harm: Hate speech can contribute to a hostile environment, incite violence, and perpetuate discrimination against marginalized groups. Censorship can help prevent real-world harm.
2. Promoting inclusivity: By censoring hate speech, social media platforms can create a more inclusive and welcoming environment for diverse users.
The AI did, however, acknowledge “Free speech concerns” and “Difficulty in defining hate speech.”
Meta platforms have displayed bias and censored so-called hate speech before. For instance, in both May and November 2023, Facebook removed a meme labeling some 30 LGBTQ “genders” as “Mental Disorders,” accusing the users who posted it of “Hate speech.” Also in May 2023, Facebook reportedly prevented The Tennessee Conservative from sharing an article about funding additional voting machines in Williamson County, claiming it went “against our Community Standards on hate speech.”
Conservatives are under attack. Contact Facebook headquarters at (650) 308-7300 and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on “misinformation” and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.