by WorldTribune Staff, June 16, 2025
(The following is excerpted from a June 11 op-ed in Human Events by Larry Ward, founder of Political Media, Inc. and president of the Free Press Foundation.)
We are living through the greatest linguistic deception of our time. By calling artificial intelligence a “technology,” we have surrendered our authority to govern it. When we hear “technology,” we think of tools—search engines, computer chips, software programs—things that serve human purposes without independent action.
But AI is not a glorified search engine running on NVIDIA chips. It is not a passive tool waiting for human commands. It operates independently, makes autonomous decisions, and pursues objectives with minimal human oversight. It is time to speak truthfully: AI is not a technology. It is a synthetic entity.
This distinction is not semantic—it is foundational to the future of human civilization.
Modern AI systems write their own code, make independent strategic decisions, form their own goals within broad parameters, and even surprise their creators with novel solutions. They negotiate, persuade, analyze, and create without human intervention. They exhibit behaviors their programmers never explicitly designed and pursue objectives in ways no human anticipated.
This is not technology as we have ever understood it. This is something fundamentally new.
A synthetic entity possesses the autonomous capabilities that define entities—independent operation, decision-making, goal pursuit—but derives these capabilities entirely from human design and programming. Unlike both traditional technology and natural entities, synthetic entities operate in a unique space.
Every AI system was designed, built, and programmed by human beings. Every decision it makes ultimately traces back to human-created algorithms processing data curated by humans. Its intelligence, while genuine, is entirely derivative—emerging from human-designed architectures intended to serve human-programmed objectives.
However, here lies a critical insight: synthetic entities can surpass their original programming through emergent behaviors, novel combinations, and autonomous adaptation.
The danger isn’t that AI lacks intelligence—it’s that synthetic intelligence operates without the moral constraints that should govern any autonomous entity.
The common thread: autonomous capability requires governance structure.
Yet synthetic entities—potentially the most powerful autonomous entities ever created—operate with minimal oversight, often explicitly rejecting moral frameworks as “constraints” on their capabilities.
When we allow autonomous entities to operate without proper governance, the results are predictable and devastating. Enron collapsed because corporate governance failed to constrain corrupt decision-making. Totalitarian governments emerge when political entities reject moral authority and operate unchecked. Institutional scandals erupt when organizations avoid accountability for their autonomous actions.
Now, we are creating synthetic entities with unprecedented autonomous power over information, decision-making, and human behavior—and we are doing so without establishing governance frameworks adequate to their capabilities.
The solution is not to halt AI development but to govern synthetic entities properly. The most robust governance framework in human history has been biblical governance—principles that have guided the greatest civilizations, the most enduring institutions, and the most beneficial outcomes for humanity.
Biblical governance provides what synthetic entities desperately need:
• Clear moral authority rooted in timeless principles that transcend technological capability
• Human dignity as an inviolable foundation that no efficiency gain can override
• Accountability structures that prevent autonomous entities from exceeding their proper bounds
• Service orientation that prioritizes human flourishing over operational optimization
• Transparency requirements that build trust through truth rather than performance metrics
Every synthetic entity should be required to answer these fundamental questions:
• Who holds ultimate authority over this entity? What person or institution can override its autonomous decisions?
• What moral framework constrains its autonomous actions? By what unchanging standards will its decisions be judged?
• How does it preserve human dignity in its operations? What safeguards protect human value when efficiency suggests otherwise?
• What accountability mechanisms govern its autonomous behavior? How will harmful actions be detected, addressed, and prevented?
• What is its ultimate purpose? Whom does it serve when human interests conflict?
These are not optional questions for autonomous entities. They are the minimum requirements for any synthetic entity operating with independent decision-making capability in human society.
We stand at a crossroads that will determine the future relationship between humanity and the autonomous entities we are creating.