Artificial Intelligence is being deployed at scale in warfare. We saw this in the use of Anthropic's Claude in the January 2026 capture of Venezuelan dictator Nicolás Maduro, and in the planning of the March 2026 strikes that brought down Iran's authoritarian regime.
All of this raises questions: Is there any limit to the use of AI in armed conflict? Is it being used to accelerate military operational planning? Do governments want to automate the decision-making of war? Is AI the next Manhattan Project? What happens when surveillance turns domestic?
I do not oppose the use of Artificial Intelligence in military environments — on the contrary, I support it. It is a technology that allows nations to maintain technological superiority over adversarial powers. But I believe we must acknowledge the limits of this technology and, therefore, understand where to use it and where not to, and never abandon the Human in the Loop. What happens if AI misreads a situation and ends up causing unnecessary collateral damage?
When the United States conducted operations in the Middle East during the war on terror of the 2000s and 2010s, it used Palantir software that collected massive amounts of data and, through an artificial intelligence algorithm, identified possible terrorist targets. That system once wrongly flagged a farmer in a village where American soldiers were stationed, and the strike on the farmer and his family was stopped only because a field agent removed the false positive.
This was already an early warning of what can happen when these systems get it wrong. Moving into 2025–2026, AI labs have been developing language models such as GPT-5 and Claude Opus 4.5, rapidly increasing their capabilities. Given these advances, LLMs are beginning to be deployed for military operational planning, letting operations benefit from the speed and analytical power these models provide. Successful examples include the capture of Venezuelan dictator Nicolás Maduro and the strikes that dismantled Iran's authoritarian regime.
The technology is not yet advanced enough to build autonomous weapons or to automate the Kill Chain: the set of decisions and conditions that must be met to identify a target and execute an operation against it. From the Palantir example to today's language models, these systems remain tools that assist humans, not replacements for them. The problem is the dangerous gap between what these systems can actually do and what governments want them to do. That ambition to delegate too much, too fast, is exactly where the mistakes that cost lives are made.
And all these advances bring me to the following idea: the atomic era is ending. An era of deterrence built on the atom is coming to a close, and a new one, built on AI, is about to begin. An era in which whoever has access to the most intelligent model will gain an unprecedented advantage over their adversaries; a race between private enterprises from the two principal world powers, the United States and China, competing to build and improve their algorithms.
Oppenheimer uttered the words "Now I am become Death, the destroyer of worlds" after witnessing the first test of the atomic bomb, Trinity, in 1945. What will happen when one of these companies wins the race?
In July 2025, Anthropic signed a contract with the United States Department of War, making Claude the first artificial intelligence model approved to operate on classified government networks. Months later, the Department of War demanded something Anthropic was not willing to give: unrestricted use of its models for the development of autonomous weapons and for domestic mass surveillance.
Anthropic refused, and the government responded by designating it a national supply chain risk — making it the first American company to receive a designation historically reserved for foreign adversaries.
They didn't punish them for being dangerous. They punished them for having limits.
Why is the demand for unrestricted domestic mass surveillance so dangerous? Because the government is not an abstract entity; it is made up of people, and those people have plans, interests, and political agendas of their own.
What happens when you become a threat to those interests, and the government knows your medical history, who you spend time with, who you vote for, what you believe in, and where you live?
This sounds less like a democracy and more like an authoritarian regime with access to the holy grail of control, like China. It is hypocritical to criticize China as a dictatorship and then seek to imitate the worst of it.
What happens when the line between democracy and authoritarianism blurs? What happens when a government gains access to the most powerful mass surveillance automation technology ever created?
Orwell warned us about all of this in 1984, but he never imagined that Big Brother would manifest as an AI that designates as enemies of the State the teenager who decides to go against the status quo, or the construction worker who wakes up every morning to go to the site and chooses to criticize the president on social media.
They will tell you it is for national security, but no promise of security should be built on dismantling citizens' privacy and individual freedoms.
I am not afraid of artificial intelligence; it is just a tool. What I fear is the direction humanity might take if it sets aside its morals and ethics when using these tools. We must not grow cynical about everything that is happening; we must act, and hold onto the hope that we will always find the good in these tools. Humanity has always managed to get back on the right track, and I trust in a future of abundance, one built with full awareness of these concerns and with safeguards in place to avoid the worst possible outcome.
I do not want to end this manifesto on a catastrophic note. I want to end it on a hopeful one: we can always build a better future.
Oppenheimer carried the weight of what he built. We still have time to decide what we build and how.