NIGHT OWL
We are told to greet artificial intelligence with awe, curiosity, and optimism. We are told it will make life easier, work faster, decisions smarter, systems smoother. We are told that autonomous agents will book our travel, manage our inboxes, negotiate our bills, write our reports, monitor our health, and maybe one day run entire companies, governments, and wars more efficiently than we ever could.
So let me ask a plain question: are you scared of AI?
I am.
Not because I think a robot is about to kick down my door. Not because I believe every science-fiction nightmare is around the corner. I am scared because we are drifting, almost casually, into a world where more and more human judgment is being handed over to systems we neither fully understand nor fully control. And we are doing it in the language of convenience.
That should alarm us.
We are now living in the age of autonomous AI agents, or at least being prepared for it. The pitch is simple: let the machine act for you. Let it decide, coordinate, optimize, respond. Let it learn your preferences so well that it can become your proxy in the world. An assistant first, then a representative, then something closer to a manager.
But if AI is autonomous, then what happens to human sovereignty?
That is the question beneath all the product launches, glossy demos, and utopian promises. Sovereignty is not just a matter for nations. It belongs to persons too. To be sovereign is to retain agency over your own life: your choices, your attention, your labor, your relationships, your values. Sovereignty means not merely being served, but remaining the author of your actions.
And authorship is exactly what autonomous AI begins to blur.
Every time a machine acts on our behalf, a small philosophical line shifts. At first, it seems harmless. Let it summarize the meeting. Let it recommend the next move. Let it answer the email. Let it screen the candidates. Let it flag the suspicious behavior. Let it adjust the insurance premium. Let it predict the likely criminal. Let it decide who qualifies, who gets seen, who gets heard, who gets helped.
Piece by piece, judgment migrates.
The danger is not only that AI may be wrong, biased, manipulated, or opaque, though it can be all of those things. The deeper danger is that humans may begin to surrender the habit of judgment itself. We stop asking whether a decision is wise, just, humane, or legitimate. We ask only whether it was efficient. We stop exercising discretion and call that progress. We stop being participants and become overseers of processes we barely comprehend.
And then, eventually, not even overseers. Just users.
There is something politically and morally disorienting about a society that speaks endlessly of empowerment while steadily removing the need for people to think, decide, and struggle. Friction is not always a flaw. Sometimes it is where responsibility lives. Sometimes difficulty is the price of freedom.
Autonomous AI threatens to recast freedom as convenience. That is a dangerous bargain.
Because convenience is seductive. It flatters us. It saves time. It reduces effort. It promises relief from the exhausting burden of modern life. And yet the more we outsource, the more dependent we become on the systems we outsource to. Dependency has always been the shadow side of technological progress. With AI agents, that dependency becomes intimate. They will know us, anticipate us, speak for us, and perhaps eventually shape us.
A tool that merely obeys is one thing. A system that predicts, nudges, and acts is another.
This is where fear becomes rational. Not panic, but fear in its proper sense: moral alertness in the face of real power. AI is not just software. It is a social force, an economic force, and increasingly a governing force. It will determine who can work, who gets hired, who gets monitored, who gets targeted, who gets believed, and who gets left behind. And much of that power will sit in institutions and companies that are not democratically accountable.
So again: if AI is autonomous, how sovereign are humans?
Maybe the answer depends on whether we still have the right to interrupt it, overrule it, refuse it, and live without it. Maybe sovereignty survives only if autonomy remains truly human at the point of decision. Maybe a society remains free only when machines are tools and not authorities, assistants and not substitutes, instruments and not governors.
But that line is already under pressure.
We should be far more suspicious of the cheerful language surrounding AI. “Seamless” often means invisible power. “Personalized” often means surveillance. “Autonomous” often means unaccountable. And “frictionless” often means you have lost the chance to object.
I am not arguing that we should smash the machines or retreat into nostalgia. AI can be useful. It can augment human capacity in remarkable ways. It can reduce drudgery and widen access to knowledge. But usefulness is not innocence. A technology can be brilliant and still politically corrosive. It can be helpful and still diminish us.
What matters is not whether AI can do more. What matters is whether humans will still choose to be more than the sum of optimized prompts and automated decisions.
This is not just about jobs. It is about self-government. It is about whether we still believe that human beings should remain accountable for human consequences. It is about whether dignity lies in being served by ever-smarter systems, or in remaining responsible moral agents even when machines can imitate our thinking.
I am scared of AI because I am scared of a future in which we adjust too quickly to its authority. A future in which we confuse delegation with freedom. A future in which we lose not only control over systems, but confidence in our own judgment.
And once that confidence is gone, sovereignty will not disappear all at once. It will dissolve quietly, under the banner of innovation.
That is why fear, in this moment, may not be a weakness. It may be the beginning of wisdom.