The use of AI on the battlefield has become clearly visible in the war in Ukraine. The possible proliferation of autonomous weapons makes it necessary for states to explain which military functions involve AI and to agree on its regulation.
Advanced fighter jets flown by algorithms instead of humans? It is not science fiction, but recent history. In February 2023, Lockheed Martin Aeronautics announced that artificial intelligence (AI) software had flown a modified F-16 fighter for 17 hours in December 2022. There were no humans on board during this flight, the first in which AI had been used in a tactical aircraft.
The test reflects the interest of several countries in developing sixth-generation fighters in which AI algorithms are in control. Combined with the recent breakneck releases of AI-enabled technologies, these advances have raised alarm bells for those on the world stage who demand that human control over weapons systems be maintained and regulated. Without early action, the opportunity to regulate AI weapons may soon fade.
The role of AI
AI is at the center of the increasing autonomy of certain weapon systems. Key functions, such as target selection and engagement, could soon be shaped by the addition of AI. The use of AI on the battlefield is clearly visible in the war in Ukraine, where applications range from targeting assistance to advanced loitering munitions. However, Ukraine is not the first example of such a battlefield deployment. The Turkish-made Kargu-2 loitering munition, equipped with facial recognition and other AI-enabled capabilities, has drawn attention since its use in the war in Libya.
Now that AI mastery is considered critical to the competition among the world’s great military powers, the United States and China in particular, investment in military AI is certain to increase. In 2021, the US was known to have approximately 685 ongoing AI projects, including some related to major weapons. The fiscal year 2024 budget released by President Joe Biden in March 2023 included $1.8 billion in spending on AI development. China’s investments in military AI are not publicly available, but experts believe that Beijing has committed tens of billions of dollars.
The debate on lethal autonomous weapons under the United Nations Convention on Certain Conventional Weapons (CCW), now a decade old, has made slow progress. But it has served as an incubator for an international process that seeks to create and enforce regulation of such weapons. At the beginning of the discussions, some states, the United Kingdom for example, argued that autonomous weapons would never come into existence, since states would not develop them. These claims can no longer be maintained.
Beyond the CCW
The policy community beyond the CCW is aware of the new developments. Some analysts have raised concerns about the ways in which AI could further erode human control. Paul Scharre, vice president and director of studies at the Center for a New American Security, notes that “militaries are adding to their combat functions an information-processing system that can think in ways quite foreign to human intelligence.” Systems that lack clear human control, or whose actions military commanders cannot sufficiently understand, require specific prohibitions. Scharre warns that the risk is that, over time, humans may lose control over the battlefield.
This concern is also shared by states. Parallel to the CCW debates are the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, published by the United States, and the Belén Communiqué on autonomous weapons, led by Costa Rica. Both were released in February 2023. The US declaration emphasizes voluntary measures and best practices, an approach long favored by Washington and its allies. The Costa Rica-led effort calls for a legally binding instrument containing both prohibitions and regulations on autonomous weapons. This two-tiered approach recommends specific prohibitions on weapons that function without significant human control, and regulation of other systems that, while retaining some degree of human control, pose a high risk to civilians or civilian infrastructure.
The two-tier approach figured prominently in the discussions at the first 2023 meeting of the CCW Group of Governmental Experts, held March 6-10, 2023, in Geneva. However, while the majority of states seemed to support this approach – only a handful of the roughly 80 states that usually participate are strongly opposed – it also became clear that there are divergent views on what such an approach would mean.
For example, France proposed that the ban focus on what it understands to be fully autonomous weapons systems. From the French point of view, fully autonomous weapons operate without human control and also outside the military chain of command. As civil society organizations such as Article 36 and Reaching Critical Will have pointed out, this definition is unhelpful and unrealistic: no state would develop systems outside of its chain of command. Setting the ban threshold at that level essentially means no bans.
A presentation from a US-led group that included Australia, Canada, Japan, South Korea and the UK outlined a ban on weapons that by their nature cannot be used in compliance with international humanitarian law (IHL). Even so, that group did not see the need for a legally binding instrument and focused instead on existing IHL principles.
Russia, Israel and India were the most vocal opponents of any legally binding instrument emerging from the CCW process.
The CCW looking to the future
Until now, CCW meetings have tended to be insulated from actual technological developments. In general, the most militarily powerful states have tried either to paralyze the debates, in the case of Russia, or to propose voluntary measures to guide the development of autonomous weapons, in the case of the United States. Few of these states with more advanced militaries have seriously engaged with or explained their own use of new technologies during the years of CCW meetings. Washington has been willing to discuss decades-old systems, but not newer technologies.
The latest CCW meeting is taking place in Geneva around this time, May 15-19, 2023. Given the many recent media reports on AI (such as ChatGPT), the flow of expert commentary and the tools that diplomats themselves may be using, such as AI-assisted writing software, this meeting will provide an opportunity to discuss concrete examples of military use of AI. No doubt proponents of regulation are looking forward to this debate. The US, for example, could explain in more detail how its air force used AI to identify targets in a live operational kill chain. How, exactly, were targeting and engagement aided by the AI?
China, which has played a balancing role in the CCW, ostensibly advocating regulation while wishing to keep its options open, could say specifically what limitations it would see placed on AI-powered weapons. Chinese diplomats have long advocated banning offensive uses of autonomous weapons while allowing defensive uses. This is a meaningless distinction in a world where the line between defensive and offensive weapons is impossible to draw.
What, ultimately, is the message that those attending this meeting need to express and amplify? The world urgently needs a new regulatory framework that places restrictions on the development of any weapon that further diminishes human agency over the use of force. And we cannot wait much longer. Left unchecked, the union of AI with the world’s most sophisticated weapons could be catastrophic.
Article originally published in English on the website of the Center for International Governance Innovation (CIGI).