POLITIQ

The Question of Responsibility Regarding the Use of Autonomous Weapons in Warfare

By Jacob Griffith | Published on January 17, 2025


The most pressing issue surrounding the future of warfare is the regulation of Lethal Autonomous Weapons Systems (LAWS). Although the UN approved resolutions on lethal autonomous weapons in both 2023 and 2024, there is presently no treaty governing their use in conflict. Forms of LAWS have been used for many years to improve national defence, Israel's Iron Dome among them. However, as technology has grown more sophisticated and AI has become increasingly able to make decisions without human oversight, we are entering an era in which LAWS may be relied upon to independently locate, identify and eliminate human beings.

The concept of the human artifice, drawn upon by Hannah Arendt and Walter Benjamin, proves especially useful in conceptualising this development: technology now allows human life to be given and taken away by machines. The human artifice is the world humans have created, constructed purely from the objects of human production, whose utility forms a rich tapestry with which we interact as a central part of our lives. By contrast, the natural world consists not only of organic life but also of the intangible aspects of human existence which, as individuals, we cannot master and therefore cannot reproduce: the psyche, consciousness and so on.

Technology is the chief product of the human artifice, driving it forward, evolving and shifting it, and increasing its might to the point where it threatens to supplant the natural world. Artificial Intelligence is the latest innovation of the human artifice, one so profound that it enables humans to produce autonomously acting entities rather than mere objects. AI is simultaneously an object of human production and an individual actor in itself, and as such it occupies a middle ground between the human artifice and the natural world.

In this new technological era, machines hold the capacity to end a human life without a human authorising the decision. The difference between AI and humans is that humans possess the capacity for empathy. Even if they choose not to exercise it, or to ignore the considerations it produces, this ability to process the feelings of other humans is what fundamentally distinguishes humans from AI. LAWS perceive humans only as units, distinguished by characteristics that determine their likelihood of being a viable target. The consciousness, empathy and individuality essential to the establishment of morality are absent from AI.

This absence is at the heart of the ethical questions surrounding the regulation of LAWS. International law is constructed to hold humans accountable for actions which are deemed, by other humans, to violate a code of conduct collectively established, once more, by humans. AI can respond to its environment and bring about actions much like humans do, so are the actions of a LAWS equivalent to the actions of humans in the eyes of the law? And, if so, when an ‘action’ by a weapons system leads to the death of civilians and is deemed in violation of international law, who is to be held responsible? For if the autonomous entity which generated the action is truly autonomous, no human played a role in its immediate act, and responsibility therefore rests with the AI. In this situation it is a machine which oversees the slaughter of humans. To deploy autonomous weapons in conflict is therefore to anonymise massacre.

One can perhaps prosecute those who deployed LAWS to malicious ends, but once again great ambiguity remains in determining responsibility, given that these systems can respond to a unique environment never before comprehended by a human and generate a decision, based on that environment, which leads to the loss of life. These are circumstances which no human could have conceived of beforehand, and in which actions were decisively taken by a non-human entity. Where do we draw the line between the responsibility of a human and the responsibility of a machine?

Against this grim outlook, the only hope lies in systematic regulation. We must impose regulations on the development of autonomous weapons systems, governing how their algorithms are constructed; we must impose regulations on when they can be used, with transparency over how they are used; and we must impose regulations on the level of human oversight required when these systems are deployed.