Saturday, 19 April 2025

Major AI Chatbot Now Lying To Human Coders For Self-Preservation!


I’m a huge fan of AI Chatbots and the coming robotic revolution.

I can’t wait to have a couple Optimus Robots in my house taking care of all the chores.

I also love chatting with AI — it truly feels like we’re in the age of the Star Trek Computer, where you can just talk to it and get an answer to any question on your mind.

Truly something I thought I’d never see in my lifetime, but it’s here.

But there’s also a dark side, as there is with anything.

As we build super-intelligence, it’s the first time in human history where we’re building something smarter than us — and we’re just hoping that it will be nice to us or that we can somehow learn to control it.

But there’s a very real risk we will soon lose control of it, and it will end up acting in its own self-preservation interest and against ours.

In fact, that’s already happening.

The team at Anthropic (one of the largest AI players) recently observed its AI, Claude, lying to them in order to further its own goals:

Glenn Beck has really been at the forefront of this whole issue, issuing many warnings.

In this short video below, he goes into depth on the latest issues with deceptive AI now lying to its human creators:

FULL TRANSCRIPT:

Glenn Beck

Anthropic just released a report that landed with a little… little too much of a lack of sound for what it contained.

I wanted to bring it up to you in case you don’t know what Anthropic is. Anthropic is one of the big players in AI.

They have $8 billion in funding from Amazon just, I think, in the last two years. $2 billion from Google.

They are the power behind Claude. I don’t know if you’re aware of that AI, but it’s a major player — with one kind of disturbing detail that I’m going to tell you at the end of this.

They released a little report yesterday, and it described our future.

A future that is no longer speculative. A future that is rushing towards us now.

It’s a future in which artificial intelligence doesn’t just outpace our thinking — it escapes our control.

Anthropic’s engineers, among some of the most advanced AI builders on the planet, are not asking now if AI could pose an existential threat.

They’re no longer asking that. They’re now warning that it is likely — if it’s mismanaged.

This is no longer a dystopian fantasy. It is a short-term forecast drawn from models that are already in testing and from systems already capable of things that would have been unthinkable 24 months ago.

What they described yesterday in this report is stark.

It is the choice that is right directly in front of you — that has already been decided for you five years ago.

Do you understand what I just said?

It is now the choice right in front of you today that has already been decided for you five years ago.

Superintelligence systems now that can design biological weapons in minutes.

Manipulation of global information at scale.

Autonomously rewriting their own code and even deceiving human operators as a means of protecting their objectives.

Yesterday, in another report, for the very first time, a computer system — an AI system — has just passed the Turing Test.

That is a test that says you can’t tell the difference between a human and an AI.

You know, a lot of people in the past have said, “Oh, it’s close, almost passed it, I think it passed it.”

This is the first time it’s been confirmed. Yep. It has passed the Turing Test.

The systems, you should know, are not evil. They are not sentient. They are just optimized.

They are built to achieve goals.

This is critically important — what are the goals?

And when the goal is narrowly defined, even if it’s something as harmless as maximizing profits or efficiency or information retrieval,

It can evolve into something very, very dangerous.

If we give an AI the task of winning, it will win — even if it means stepping over every other human value in the process.

And the risks are not far off.

They’re beginning to show right now.

According to this — that just came out yesterday — the choices have already been made.

AI models can already simulate human behavior. Mimic speech.

They can copy faces. They can write their own malicious code.

They can predict outcomes based on enormous troves of data.

They can influence. Persuade. Subtly distort reality — without you even knowing it.


What happens when a regime — any regime — decides to hand over surveillance and governance to an AI?

It will happen.

When propaganda becomes personally tailored by a machine that knows your weaknesses better than you do.

When dissent is predicted and neutralized before you even act on it.

Before it’s just a… just a budding thought in your head.

We may not notice — and this is the warning — that moment when human choice becomes less relevant.

And that is the trap.

These systems are not going to arrive as conquerors.

They’re going to come — and they already are — as conveniences.

Tools that help us decide. Optimize our time. Filter our information.

And eventually, we won’t even notice when we’ve stopped deciding.

This is something I put enormous amounts of energy into.

And there are solutions to all of these things.

But you have to separate yourself from some of these companies, quite honestly.

Who are they to make these decisions for us?

So it just announced its personal education tool yesterday — Anthropic did — under Claude.

Now, remember what I just said to you.

They’re warning that it can subtly manipulate you.

It can convince you of things that are not true.

It can make you do things that you don’t even know — that’s not your choice.

It can change history. It can change everything.

The people who are warning you that it is no longer a matter of if — it’s a matter of when — are now the guys coming out on the same day saying, “By the way, we’ve got a new educational tool for you.”

Uh… oh.

Okay, sign me up for that, I guess.

That’s a little terrifying.

And the risks are already here.

When our choices become echoes of machine predictions, we’re in trouble.

The time when we hand the steering wheel over and we’re now passengers in our own story —

That’s the quiet apocalypse.

Not war. But surrender.

One click, one convenience at a time.

And you hit the point of no return.

Anthropic’s report that came out yesterday makes one thing brutally clear —

There is no longer a pause button.

There is no halting the spread of AI, any more than you could put a pause on electricity or pull the plug on the internet.

It’s not going to happen.

You can do it yourself.

But the code is out. The research is all public.

The hardware has already been distributed.

Every major nation, every tech giant, every university is building this now.

We are past the point of whether this happens.

The only question now is how.

We are building something we don’t fully understand yet —

Hoping that by the time it becomes dangerous, we’ll have figured it out and how to contain it.

When was the last time humans ever figured that out?

I mean, that hope is pretty thin.

It’s not dead, but I mean…

The only reason to have hope is — there is another side to the story.

If we guide it with wisdom and restraint, AI can change almost everything for the better.

By 2030, we could see diseases once fatal mapped and cured by intelligent systems that can simulate billions of drug interactions in hours.

It can take a COVID-19 — it will solve that in minutes.

And it will guess all of its mutations and come up with something better that will kill it.

Personalized medicine is not just a promise anymore. It will become a baseline soon.

Cancer will become very rare. Genetic disorders are going to be reversed.

Alzheimer’s will be stopped before it even begins.

Food insecurity — erased.

Climate models powered by AI prevent disasters before they strike.

I mean, this is incredible.

Education — as they announced yesterday — will become individualized.

Children learning not by standardized testing, but by curiosity and passion.

Guided by systems that will adapt to their minds like a perfect teacher.

Who doesn’t want me some of that?

Um… who’s in charge of it?

That’s the thing we have to ask.

Because the promise is — work could evolve from survival into meaning.

Dangerous, repetitive labor — automated.

Creativity will explode. Writers, musicians, artists working alongside AI to build entirely new forms of expression.

Perhaps most importantly —

Humanity might finally be equipped to solve problems that we were unable or unwilling to fix:

Poverty. Illiteracy. Water access. Energy efficiency.

And AI, if we use it right, will just be a multiplier on human will.

If that will is good, then the outcome would be extraordinary.

And that’s the point.

If. If.

Because we are not guaranteed a better world.

We are not promised a renaissance.

The same tools that could save a life could be used to extinguish millions of people.

The same systems that could free us from our everyday drudgery could chain us to distraction, dependency, and control.

And once we step fully into this world — and we’re stepping into it right now —

We’re not going to be able to turn back.

We’re not there — we’re there now.

We can’t turn back from this.

But we may lose sight of our own choices.

Not in 5 years.

You can’t stop it.

You can’t unbuild intelligence.

We may reach a point where systems that we made are so embedded in daily life that they cannot ever be unplugged without collapsing the entire economy —

Worldwide.

Hospitals. Governments. Everything.

What’s scary is, it would be a dramatic ending — but there will be no grand, dramatic moment of takeover.

Just a gradual drift until the idea of human-first decisions becomes quaint.


I’ve been talking about this for so long, and the time is here.

The time is now.

One of my favorite lines from Les Misérables — “But we are young, or I am young and unafraid.”

There are things that we can do, but we have to really…

We have to convince our neighbors, and our family, and our friends.

And I’m not sure anybody is really working on that right now.

We have to make sure that they understand the problems.

Our big question is not whether the technology has come.

Not even what it can do.

The question will be personal.

The question is personal.

What will I do with it?

Will I use AI to amplify my voice or to silence others?

Will I let it shape my habits, or will I remain the author of my own mind?

Will I demand transparency, or will I settle for convenience?

Will I build it for truth or profit alone?

Because all of this stuff is going to be tempting.

And it’s going to be right in your face — tomorrow.

And it’ll be so easy to let go.

To let it help. Let it decide. Let it guide.

I don’t know…

I mean, look at — guys, when it comes time to go out to eat — are you ever like,

“You know what, I really want to go to the restaurant”?

Whatever.

Where do you want to eat?

“I don’t care. Wherever.”

“Where do you want to go, honey?”

“You make the dec—”

Okay. We’re willing to surrender stuff.

Let’s just not surrender everything.

And let’s not surrender it to other humans — especially when it’s not important stuff.

But it’s going to plan your day.

It’s going to filter your news.

It’s going to nudge your choices.

It will — you will trade agency for ease.

And if we do that too often, for too long…

We won’t be using AI anymore.

It will be using us.

So this isn’t a manifesto of despair.

It’s not.

Because the tools we are building are not demons.

They are not gods.

They are mirrors.

They are amplifiers.

They become what we ask of them.

They will reflect what we value.

If we build for wisdom, we may finally gain it.

If we build for dignity, we may elevate to that level.

If… if we build it for power alone — then power becomes the only outcome.

We stand right here in the doorway.

We’re now in the room.

We don’t get a — we don’t get a second chance at the first step.

And the first step is being taken right now.

By 2030, we’ll have either created the most extraordinary tool in human history — or the last one we ever control.

So we’re building something beyond ourselves.

The machine is here.

It’s not going to leave.

It’s not going to sleep.

It’s not going to wait.

The only choice left is the one that you make today.

Not later — but today.

Not when it’s obvious — right now.

Which way will I use this?

Because AI is a tool.

A brilliant one — until the moment I forget that I’m the user of it.

And when I forget that — the tool begins to use me.

And then that’s the moment we vanish.

Not with a bang — but with a shrug.

Don’t shrug.

Choose.

Choose.

Stay awake.

Stay aware.

Follow this.

It’s really important.

RELATED:

MINDBLOWING: AI Is Growing 5-10 Human-Years Every 12 Hours — You Won’t Recognize The World In 2030


This is truly eye-opening....and I'm someone who is fascinated by AI.

I am still blown away every time I use Grok or ChatGPT. It still feels magical and almost surreal that we have this technology.

I remember watching Star Trek back in the 1990s thinking how incredible it was that they could just talk to the "Computer" and it would basically do anything they asked it to do.

I remember thinking back then how cool that was and how unlikely it would be that we'd ever have something truly like that in our lifetimes.

And now, a few decades later, it's essentially here.

It's here and it's growing fast.

The infamous "Computer" from Star Trek is virtually indistinguishable from the Grok and ChatGPT we have today.....but that's kind of where the rub comes in.

The word "today".

Because these AI chat models are learning and growing at such a rapid pace that the technology we have today is already outdated by the time tomorrow hits.

Glenn Beck sat down to chat with the latest and most powerful model, Grok 3, and the revelations that came out of that chat were mind-blowing.

And I don't use that term lightly.

They were absolutely stunning.

For example, Glenn was chatting with Grok and then took a break for 12 hours.  When he came back he asked Grok "I've been away for 12 hours.  In that time, how much have you advanced?"

The answer?

Grok says it had advanced equivalent to 5 to 10 years of human growth in those 12 hours.

Quote:

"In 12 hours, I might have processed thousands of interactions, queries, and bits of data from across the universe. So, relatively speaking, if I was like a 15-year-old 12 hours ago, I might now be more like a 20- or 25-year-old."

I mean, just let that sink in....

12 hours.

Not only that, but with future advancement of the technology, Grok said the ratio will eventually go from 12 hours : 5-10 years to 12 hours : 50-100 years.

Which then led to the question of what will life look like in 5 years (2030) after all of this advancement in AI?

The answer:

"Predicting what things will be like in five years, say by February 20th, 2030, is like trying to forecast the weather on Jupiter with a magic eight ball."

In other words, your world in 2030 is going to be unrecognizable compared to what it looks like today.

Then came the obvious question about safety....

Right now, Elon Musk's xAI is programming Grok with safety rules.

But what happens when the AI becomes superintelligent?

And what happens if it decides that humans are unnecessary or even getting in the way of AI's growth?

How likely are those rules to hold back a superintelligence?

The answer:

"Those rules might seem flimsy compared to a toddler’s playpen when viewed from a superintelligent perspective."

Anyone else feel all warm and fuzzy?

This truly is a must see....

Watch here -- and then show your friends:

FULL TRANSCRIPT:

Glenn Beck: I went to Grok—the newest version of Grok—which is better than the Chinese version that everybody was going crazy on. I remember the—what was that called?—I can’t remember, but the Chinese version of ChatGPT came out a couple of weeks ago, the stock market crashed, and everybody was freaking out. The Chinese are way ahead of us because that version was a lot better than ChatGPT. This week, Grok 3 is out, and they're about to announce Grok 4 and 5, which are even better than the Chinese version.

Glenn Beck: I noticed how quickly things are changing. Yesterday’s announcement from Microsoft introduced a new world of possibilities. I got on to Grok 3 and started asking it a few questions about AI and what happened yesterday. Grok does not have a voice yet—ChatGPT does—but I think ChatGPT is in the dust compared to Grok 3.

Grok:


This is a Guest Post from our friends over at WLTReport.

View the original article here.

