Analysis: Lethal Autonomous Weapons are a growing part of the current landscape of international armed conflict
Advances in technology and artificial intelligence are changing all areas of modern life, from smartphones and smart kitchens to self-driving cars. The weapons used in armed conflict are not immune to such advances, and lethal autonomous weapons are now being developed and introduced into the arsenals of militaries worldwide.
The concept of weapons with autonomous functions is not new. Such weapons have been part of many military arsenals for the past eight decades. "Fido", the first documented weapon with autonomous functions used in armed conflict, made its combat debut in the arsenal of the US military in May 1943. Since then, a variety of weapons with autonomous functions has been introduced, including anti-vehicle and anti-personnel mines which, once activated, detect and engage targets based on trigger mechanisms.
As with other areas of modern artificial intelligence, the autonomous functionality of weapons has increased in line with technological advances and wider access to technology through mass production. These advances have led to the development of Lethal Autonomous Weapons (LAWs). While there is no universally agreed definition, many organisations, including the UN and the International Committee of the Red Cross, describe them as "weapon systems which once deployed select and engage targets without any meaningful human input".
From RTÉ Prime Time, here's what worries experts about the development of AI
These weapon systems span many types of weapons, including loitering munitions, drones, missiles, land-based arsenals and subaquatic machinery. The crucial feature of lethal autonomous weapons is their ability to select and engage targets without any meaningful human input.
Once deployed, LAWs operate on a "fire and forget" basis: after the weapon is deployed, there is no further intervention from a human decision-maker in selecting and engaging targets. Instead, the weapon system relies solely on sensor-suite data and pre-programmed algorithms to make decisions that were previously reserved for military personnel.
The introduction of such weaponry has generated much controversy. Advances in the autonomy of weapons used in war pose many ethical and legal questions about the delegation of life-and-death decisions to pre-programmed algorithms.
From DW, how AI is driving a future of autonomous warfare
In 2013, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions called on UN member states to implement a prohibition on the development and deployment of Lethal Autonomous Weapons until an agreed international framework could be developed. This caution has been echoed by the international community, with 97 states calling for a complete prohibition on the development, proliferation and use of LAWs.
The foremost ethical and legal problem arising from the use of Lethal Autonomous Weapons is upholding the Laws of War. These are primarily outlined in the Geneva and Hague Conventions, as well as in customary international law. The Laws of War seek to limit the effects of armed conflict by regulating the means and methods of warfare, and are based on four key principles: proportionality, distinction, military necessity and humanity.
These laws, and the underlying principles of International Humanitarian Law, primarily govern human conduct in war and prohibit indiscriminate and disproportionate attacks in the course of armed conflict. They also protect civilians, including through a prohibition on killing civilians except in strict circumstances where the principle of proportionality justifies the risk of incidental deaths to prevent greater loss.
From International Committee of the Red Cross, what are the dangers of autonomous weapons?
One of the most significant obstacles to the acceptance of LAWs is the delegation of life-and-death decisions, and of the application of the principles of the Laws of War, to pre-programmed algorithms. These worries may be warranted: according to a UN report, the first attack by a Lethal Autonomous Weapon was recorded in 2020 during the Libyan civil war, where an autonomous drone system, the Kargu-2, was reported to have been used.
As well as governing human conduct, Additional Protocol I to the Geneva Conventions requires High Contracting Parties to review new weapon systems before they are used in international armed conflict. This is to ensure they comply with international law, including that the new weapons are not likely to cause superfluous injury or unnecessary suffering.
Despite this, there is currently no international regulatory framework specifically focused on LAWs. The existing framework, which includes the Convention on Certain Conventional Weapons, has proved insufficient to regulate Lethal Autonomous Weapons because they operate outside human control. Suggestions from the international community for regulating their use include an additional protocol to the Convention on Certain Conventional Weapons, with draft articles produced in March 2023.
From BBC Click, autonomous weaponry is the biggest leap in military technology since the advent of nuclear weapons, but should they be banned?
Despite international resistance to their use, Lethal Autonomous Weapons are currently being used in international armed conflicts, including the ongoing conflict in Ukraine and the recent conflicts in Azerbaijan and Libya. While they have been met with caution by the international community, they also have the potential to change the way wars are fought.
One way they could do so is by increasing compliance with three of the four key principles underlying International Humanitarian Law: distinction, proportionality and military necessity. Because AI acts through pre-programmed algorithms, it can potentially assist in more accurate targeting of military objectives, in turn reducing the civilian casualties of war through increased precision and efficiency in selecting targets.
Read more: AI 2023: risks, regulation and an 'existential threat to humanity'
While the world has embraced technological and "smart" advances throughout modern life, there is an air of caution and scepticism around the use of Lethal Autonomous Weapons. Although not welcomed by all states, they are part of the current landscape of international armed conflict and offer an opportunity to modernise the way wars are fought and to limit the human cost of war. An international framework regulating the use of such weapons is urgently needed and would have to address accountability for acts by Lethal Autonomous Weapons that contravene the Laws of War.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ