The great military potential of AI demonstrates the need for the EU to regulate these technologies, since current legislation, namely the proposed Artificial Intelligence Act, does not address the most serious implications of military AI. It is urgent to establish an effective legal framework and to promote responsible and ethical AI.
Generative artificial intelligence (AI), including ChatGPT-4, is perceived (and even feared) by some as “a threat to humanity”. Whether or not that is true, the rapid development of AI already poses many risks to fundamental rights, security and human autonomy, with repercussions across all sectors of the economy and society.
Take defence AI as an example: whether it is the Palantir AI platform (akin to ChatGPT but used for military decision-making), Clearview’s facial recognition systems used to identify enemies, or autonomous drones deliberately used as lethal weapons systems, the entire military sector is increasingly dependent on AI. And when AI is deployed in defence, the risks are magnified, and the EU must go to great lengths to regulate its implications.
The large investments in military AI (around $6 billion in 2021), set against growing global defence spending ($2 trillion in 2020), reflect the defence industry’s burgeoning love affair with this technology. Beyond weaponry, AI is critical to a variety of Intelligence, Surveillance and Reconnaissance (ISR) tasks at the strategic, operational and tactical levels, as well as to automated reasoning, logistics, training and other functions.
Taken together, AI enables what experts call “information superiority”—in short, gaining a strategic advantage over other nations’ defenses through data and intelligence—with far-reaching geopolitical implications.
In fact, AI-based technology supplied by European industry is one of Ukraine’s main assets against Russia. Unmanned aerial vehicles (supplied by the United States, Norway, Luxembourg and the United Kingdom) and autonomous underwater drones (supplied by the Netherlands) are tasked with preventing Russian attacks. Unmanned ground vehicles (courtesy of Germany) and mobile autonomous intelligence centers enhance geospatial intelligence as well as data processing on the ground. AI-powered acoustic surveillance solutions can also detect incoming missiles.
These examples demonstrate that AI can make a difference in conflicts through intelligence and deterrence. The use of AI in the field of defence thus marks a turning point for geopolitics and the conduct of war. However, while the European military AI industry thrives, Europe’s political leaders have chosen to turn a blind eye to its uses and associated risks.
The devil is in the details (of the AI Act)
The proposed Regulation on Artificial Intelligence (the AI Act), which will soon enter trilogue negotiations, promotes uses of AI that are ethical and respect fundamental rights, but discreetly mentions in a footnote that military uses of AI do not fall within its scope.
This leaves Member States wide leeway to regulate the use of AI in warfare. That could be worrying: through the European Defence Fund, the Union’s investment in AI and other advanced technologies will reach almost €8 billion between 2021 and 2027, and the EU does not prohibit the use of autonomous weapons, despite the resolutions approved by the European Parliament in 2014, 2018 and 2021.
Although military AI is excluded, the AI Act will still have a significant impact on European defence. Many AI systems are not developed or used exclusively for defence but are dual-use in nature, meaning they can serve both civilian and military purposes (for example, a pattern recognition algorithm can be developed to detect cancerous cells or to identify and select targets in a military operation).
In these dual-use cases, the AI Act would apply, requiring such systems to comply with its provisions for high-risk AI. However, enforcing regulatory requirements can often be unfeasible for systems operating autonomously or in a classified environment. Additionally, most defence organisations do not closely follow developments in civilian digital policy, so they may be unprepared for the AI Act once it enters into force.
At the political level, governments are becoming increasingly involved in the regulatory issues around military AI. The Dutch and South Korean governments co-hosted the Summit on Responsible AI in the Military Domain (REAIM) in February 2023, bringing together more than 50 government representatives to endorse a joint call to action, with the goal of situating “the responsible use of AI higher on the political agenda”. The Canadian, Australian, US and UK defence ministries have already established guidelines for the responsible use of AI. NATO adopted its own AI Strategy in 2021, along with a Data and Artificial Intelligence Review Board (DARB) dedicated to ensuring legal and responsible AI development through a certification standard.
However, the NATO AI Strategy may face implementation hurdles. Apart from France’s public defence AI strategy, there is no EU-wide legal and ethical framework for military uses of AI. Consequently, Member States may take divergent approaches, leading to gaps in regulation and oversight.
It’s time for the EU to step up
The EU should therefore step up and develop a framework for both dual-use and military applications of AI, under a Europe-wide approach to promoting the responsible use of AI in defence, building on the risk-level classification of the AI Act. This would guide defence institutions and industry in developing, acquiring and using AI responsibly, based on shared values.
Although defence is not an EU competence under the Treaties, the EU has found ways to act in this domain, as its response to the Russian invasion of Ukraine shows. A general EU framework would offer the first meaningful approach to governing AI risks across institutions. Ultimately, establishing a unified framework for responsible AI in defence would signal the EU’s global leadership ambitions in shaping the future of values-based AI governance, mitigating the most serious risks in both military and civilian contexts.
In short, Europe cannot afford to overlook the important implications of AI in defence. Current EU legislation either covers defence AI applications only partly (in the case of dual-use AI) or not at all (military AI is excluded from the AI Act). This leaves political responsibility and risk management in the hands of the Member States or, in the worst case, of the defence industry alone.
The EU’s much-lauded risk-based approach to AI will pay off only if it also effectively regulates military systems, perhaps the most critical sector for these technologies. Otherwise, the real risks will remain unaddressed and the full potential benefits of responsible AI will not be realized.
Article translated from English from the website of the Centre for European Policy Studies (CEPS).