The Era of Extermination: Algorithms and the Politics of Agentic Violence
Recently, a terrifying video emerged from Gaza: a small Palestinian girl, carrying a gallon of water, was crossing the ruins of what used to be her home when an Israeli missile struck her. The explosion—sudden, precise, and algorithmically calculated—ended her life in an instant.
The trigger, most likely, was pulled by data—by an autonomous AI agent system trained to decide what constitutes a "target."
Scenes like this have become tragically common in Gazan daily life, showing how technology and AI now operate as autonomous killing machines.
Such war-crime attacks illustrate more than the horrors of modern warfare; they reveal how machine-learning models now play an active role in an ongoing genocide.
Understanding this shift requires examining how these AI models function. Many of these systems are inaccurate, biased, or trained on false, incomplete, and ethically flawed datasets.
So how do AI agents actually make decisions? In simple terms, these systems observe, analyze, and act.
First, the AI observes its environment—the field, the yard, or the target area—through cameras, drones, sensors, or satellite feeds. It then detects and classifies objects, trying to distinguish between, say, a person, a vehicle, or a structure. Next, it analyzes those objects, attempting to determine their behavior or "intention."
Based on this analysis, the AI estimates the level of "risk" each object poses and matches it against pre-programmed rules or patterns.
Finally, it executes a predefined action, which, in a military context, might mean marking a target for strike or even launching an attack autonomously.
This chain of automated logic—pure pattern recognition—operates on probabilities. And when those probabilities are built on biased or inaccurate data, the outcome is a cycle of algorithmic violence.
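To make this chain concrete, the sketch below shows, in deliberately simplified Python, what such a decision loop can look like. Every name and number in it is hypothetical: the Detection fields, the THREAT_THRESHOLD value, and the decide function are invented for illustration and describe the general pattern of confidence scores and fixed rules, not any actual targeting system.

```python
from dataclasses import dataclass

# A hypothetical sketch of the observe-analyze-act loop described above.
# None of these names correspond to any real military system; the point is only
# to show how probabilities and fixed thresholds end up standing in for judgment.

@dataclass
class Detection:
    label: str           # what the classifier thinks it sees ("person", "vehicle", ...)
    confidence: float    # how sure the model is about that label (0.0 to 1.0)
    threat_score: float  # a learned "risk" estimate, itself just another probability

THREAT_THRESHOLD = 0.7   # chosen by the system's designers, not by the model

def decide(detection: Detection) -> str:
    """Map a detection to an action using pre-programmed rules."""
    if detection.confidence < 0.5:
        return "flag_for_review"      # the model is unsure what it is looking at
    if detection.threat_score >= THREAT_THRESHOLD:
        return "mark_as_target"       # a pattern match, not an understanding
    return "ignore"

# Example: a noisy frame from a degraded feed produces uncertain estimates.
print(decide(Detection(label="person", confidence=0.62, threat_score=0.74)))
# -> "mark_as_target", purely because two numbers crossed two thresholds
```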
As mentioned earlier, these systems observe, analyze, and act. But how do they analyze? The short answer: they learn from data.
You can think of a machine-learning model as a pattern matcher trained by feeding it tons of examples. Those examples—the dataset—come from many sources.
If we are talking about military models, the data often comes from drone and CCTV footage, satellite images, sensors, intercepted communications, and intelligence files, including social media content.
If we are talking about everyday AI systems, like ChatGPT, Gemini, or others, their data comes from books, online articles, websites, forums, and billions of human-written texts found across the internet.
So if a model is trained on biased, manipulated, or dehumanizing data, its "understanding" of the world will reflect that bias. If you gave an AI a dataset that described all people living in a certain area as "subhuman" or "animals," the model would absorb that definition as truth. So later, when asked to describe or act upon those people, it would respond exactly according to what it learned.
In other words, AI systems learn entirely from human-created data—data that is written, reviewed, and labeled by people.
If the individuals involved in that process hold biases, or if the data itself contains prejudice, misinformation, or even stereotypes, the resulting model will mirror those distortions. So, when a system is repeatedly trained on content portraying a group as "uncivilized" or "savage," its outputs will tend to reproduce and reinforce those same ideas.
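The mechanics of that mirroring can be shown with a deliberately crude toy, sketched below under invented assumptions: a word-counting classifier, a four-example dataset, and hypothetical group labels that exist only for this illustration. No real system is this simple, but the way prejudiced labels become the model's "truth" is the same dynamic at a much larger scale.

```python
from collections import Counter

# A toy illustration (not any real pipeline) of how a model mirrors its labels.
# The "training data" below is deliberately prejudiced: every example mentioning
# the hypothetical group_b is labeled "threat" by the annotators.

training_data = [
    ("group_a person walking with water container", "civilian"),
    ("group_a family near damaged building", "civilian"),
    ("group_b person walking with water container", "threat"),
    ("group_b family near damaged building", "threat"),
]

# "Training": count how often each word co-occurs with each label.
word_label_counts = {"civilian": Counter(), "threat": Counter()}
for text, label in training_data:
    word_label_counts[label].update(text.split())

def classify(text: str) -> str:
    """Score a new example by how strongly its words are associated with each label."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_label_counts.items()
    }
    return max(scores, key=scores.get)

# Identical behavior, different group membership, different verdict:
print(classify("group_a person walking with water container"))  # -> "civilian"
print(classify("group_b person walking with water container"))  # -> "threat"
```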
Now, consider what happens when such a model doesn’t just analyze data—it’s also authorized to act on it. What if this same system tolerates mistakes, accepts margins of error, and has the authority to launch missiles? If the data feeding it comes from drone footage or satellite images of a completely destroyed area, how would it interpret what it sees?
Even with today’s most advanced imaging technology, AI models still misclassify objects. In war zones or destroyed areas, these systems are often fed chaotic, incomplete, and low-quality data that they were never designed to interpret accurately.
Under such conditions, the model’s ability to correctly distinguish objects drops sharply. It starts to see uncertainty everywhere, and when uncertainty increases, the model’s behavior depends entirely on the threshold configured by its designers. If that threshold for "threat" is set too low, the system could mark every unidentified or moving object as hostile, treating the entire area as a target.
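That dependence on the designer's threshold can be illustrated with a rough, invented simulation: noisy threat scores assigned to objects that are in fact harmless, evaluated against a few different thresholds. The numbers are made up for this sketch; the point is only that under heavy uncertainty, the threshold, not the scene itself, determines how much of an area gets marked hostile.

```python
import random

random.seed(0)

# A rough sketch of how the decision threshold, rather than the data, determines
# how many objects get marked hostile when the model operates under heavy
# uncertainty. All numbers here are invented for illustration only.

# Simulate 1,000 detections from a chaotic scene: none of these objects is hostile,
# but degraded imagery makes the model's threat scores noisy and poorly calibrated.
noisy_threat_scores = [random.uniform(0.0, 0.6) for _ in range(1000)]

for threshold in (0.9, 0.5, 0.2):
    marked = sum(score >= threshold for score in noisy_threat_scores)
    print(f"threshold={threshold:.1f} -> {marked} of 1000 harmless objects marked hostile")

# With a low threshold, almost every unidentified or moving object crosses the line,
# and the entire area effectively becomes a target set.
```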
In many ways, the way AI learns and reacts is a reflection of how humans themselves learn.
After all, both systems are shaped by the data they’re exposed to. The difference is that while humans can consciously question and reinterpret their experiences, machines can only be retrained or corrected when humans intervene to adjust their data.
How does distorted information shape our understanding of the world? Humans are shaped by the data and experiences we absorb from the moment we’re born. Every input—every word, image, or reaction—becomes part of how we understand the world.
As children, we begin without prior knowledge or prejudice, learning through what we see and what others tell us. Over time, we start to classify, question, and sometimes unlearn these early inputs, yet much of our worldview remains rooted in those first lessons.
For example, a child might play with a dangerous animal, unaware of any threat. But when a parent reacts in fear—shouting, running, or killing the animal while warning that it’s dangerous—the child internalizes that response. From then on, the child panics at the sight of the same animal, not because of direct harm, but because of learned association.
Now imagine giving that same child a tool powerful enough to kill the animal instantly, without ever asking why.
Different families, cultures, and communities would respond to that same situation in different ways. Some might protect the animal, others might avoid it, and some might kill it. These differences show how humans interpret the world through the lens of their upbringing and experiences.
Similarly, machines are shaped by the data we feed them. Like children, they learn patterns and values from their environment. The less diverse and more one-sided that data is, the narrower and more isolated the model’s understanding becomes—seeing only a single version of reality, the one we choose to provide.
But then, who gets to choose that version of reality?
Most large-scale AI models are built and trained in a handful of resource-rich countries. Developing these systems requires massive computational power and energy—resources that only a few nations or corporations can afford.
Data centers consume enormous amounts of electricity, often supported by dedicated power plants. As a result, the datasets that shape these models are collected, filtered, and interpreted through the lens of those who have access to such infrastructure.
So even if we aim for "unbiased" AI, the reality is that the perspective embedded in these systems remains one-sided. When AI models are built by only a few, they inevitably reflect those few.
Today, we are witnessing a form of global technological orientalism, where AI models and datasets reproduce the worldview of those in power, projecting their values and assumptions as if they were universal truths.
Territorial conquest and the control of maritime routes are still very much alive, now mirrored and reinforced by digital infrastructures. Domination through logistics and supply chains has not disappeared; it has simply evolved and intertwined with data and computation, merging the material and digital into a single system of global control. Together, they reinforce one another, forming a continuous structure of power that shapes how the world is governed, connected, and understood.
The same power that conquers and controls land, exploits labor, and erases entire histories now operates through digital means. It continues its work by shaping new systems of knowledge built on its own biases and perspectives, while silencing, erasing, or deliberately ignoring all other ways of knowing. Colonial control is now digital, but its logic remains unchanged.
Yet the violence has not ended; it has only evolved and become automated. Human lives are still collateral, only now represented as formulas and equations.
Accountability? Is it even something we can still talk about?
We are living in a time where war criminals openly admit their crimes and are still welcomed on the global stage. So does it make any sense to discuss AI accountability, when human accountability itself is already collapsing?
Yet this question matters precisely because the lines of responsibility are now blurred. In traditional warfare, someone gives an order—usually within a clear chain of command—someone executes it, and both can be held accountable.
But with AI-driven systems, that chain fragments into layers of humans and non-humans: the engineers who design the model, the analysts who label the data, the commanders who approve its deployment, the algorithm that executes its logic, the data centers that provide the infrastructure, and the governments that empower and protect such systems.
When a missile kills a child, who is responsible?
The agent that made the prediction? The programmer who wrote the code? The commander who trusted the output? The state that sanctioned it? Or the corporations that built the infrastructure enabling it?
Each actor can point to another link in the chain. Responsibility diffuses until it disappears. And that’s exactly the point.
AI systems are designed not only to automate decisions but also to automate the absence of accountability.
In the end, these systems are not self-born. They are built, trained, and deployed by humans. Every algorithm is written by someone. Every dataset is collected, labeled, and reviewed by teams of people. Every strike is approved by a human command. No matter how small a role one plays in that chain, responsibility remains shared.
Machines may execute the violence, but the data, intention, and permission all come from us—humans.
The systems killing people today are not abstract forces of nature. They are the products of political choices, economic incentives, and moral compromises. Someone funds them, someone builds them, and someone looks away when they are used.
Collective accountability begins with refusing detachment, recognizing that each dataset, each model, each line of code, and each policy decision carries moral weight.
If we still believe in global justice, we must question the choices we’ve made. It is still up to us—not the machine.