Western and Chinese experts agree that Artificial Intelligence, if left uncontrolled, will pose existential risks in both its civil and military versions; controlling it would require a global regime.
While the EU boasted about its new regulation of Artificial Intelligence (AI), with global aspirations, a group of leading Western and Chinese experts met in Beijing to identify civil and military "red lines" against the existential risks posed by the uncontrolled development of this technology, or technological field. It is clear that it can only be limited and controlled at a global level, with concrete and verifiable agreements, not with the mere general statements seen until now, for example at the G20. And it is not proven today that this can be done. While some call for progress on regulation, the Pentagon's innovation arm, DARPA, has requested more money, more than double last year's figure, to achieve a symbiosis between humans and machines: an AI that reasons, and highly autonomous AIs, that is, systems able to decide without human intervention, in accordance with the ethical principles of the Department of Defense. The Chinese and others are immersed in similar programs.
The Beijing meeting, the second convening of the International Dialogue on AI Safety but the first held in China, ended with a statement reflecting the need for a joint approach to AI safety to stop "the catastrophic or even existential risks to humanity" that we may encounter "within our lifetimes." This, in fact, is what the global summit convened in London last November sought, without much success. Despite the tensions between the West (especially the US) and China as each seeks crucial primacy in AI, behind the scenes various scientists and technologists are talking to each other.
On the one hand, there is the development of AI as such, especially if it reaches what is called Artificial General Intelligence, which would surpass humans in practically all fields. Secondly, there is the problem, or challenge, of applying AI to the military field, whether to improve weapons or to make them autonomous, the popularly called "killer robots" that some movements are trying to ban. Controlling it, in civil-existential as well as military matters, would require a regime of AI control agreements like those that, during the Cold War and the détente between the US and the Soviet Union, produced a whole panoply of treaties (most of them since renounced by one side or the other) to avert nuclear war. This, in fact, is what Henry Kissinger (who died at the age of 100 last November and was one of the architects of that arms control), Eric Schmidt and Daniel Huttenlocher proposed in their book The Age of AI and Our Human Future (Spanish edition, 2023).
But realities do not wait for regulators. Generative AI, one more step that others will follow, has moved ahead. In the military field there are already drones of all types guided by AI. Although some movements are trying to stop it, the next generation of unmanned combat vehicles was exhibited at a recent demonstration in California for senior US military commanders: alongside aerial drones and various augmented-reality devices, a robotic dog called Ghost, and autonomous vehicles with automatic weapons intended for urban combat, the most dangerous kind for human soldiers.
Will the race, or the advances, in this field be faster than its control? The new EU regulation, the AI Act (which will come into force in 2026), has many interesting aspects, but it has lately had to make way for the generative AI that is flooding the world through ChatGPT and other programs. It will not apply, however, to systems used exclusively for military or defense purposes. It will apply to the increasingly numerous cases of "dual use," civil and military, which are playing a growing role in the war in Ukraine, especially with drones. Precisely for these purposes, DARPA (Defense Advanced Research Projects Agency) has requested more funds, specifically for Rapid Experimental Missionized Autonomy (REMA), which aims to upgrade existing commercial and military drones with a subsystem that allows them to operate autonomously. It is also requesting $22 million, up from $5 million last year, to test autonomous weapons software in complex scenarios involving ethical decisions, and $41 million for "AI pilots," among other programs.
Despite the EU's claims, China was ahead in AI regulation. According to an interesting study by Matt Sheehan for the Carnegie Endowment, in 2021 and 2022 China became the first country to implement detailed, binding rules on some of the most common applications of AI. As one would expect, though, Chinese AI rules have a greater social-control component: facial recognition on street cameras is not prohibited (whereas the EU does prohibit it, with an exception for security reasons). These Chinese regulations came before the media explosion of generative AI, which they consider covered by what they call "recommendation algorithms" and "deep synthesis," although new regulations on this front are being adopted. Chinese regulation places more emphasis on the effectiveness and efficiency of AI than on transparency, and even on what it can contribute to "harmony," a Confucian concept much promoted by Xi Jinping. Concern for individual privacy is less than in the West, although privacy was hollowed out long ago everywhere: witness the advances in AI and, more generally, in the surveillance of communications, as Edward Snowden's leaks made plain, and the use some governments make of it to control citizens, if they can still be called that rather than mere users.
This Chinese regulation also says nothing about the application of AI in the military field. AI, as Jacob Stokes of CNAS recently recalled, is an essential part of the empowerment and modernization program of the People's Liberation Army. Xi Jinping wants the armed forces to keep advancing simultaneously in mechanization, computerization and what he calls "intelligentization." In 2022 he already urged China to "accelerate the development of intelligent and unmanned combat capabilities." Furthermore, according to Stokes, the Chinese Military-Civil Fusion program (what in the West is understood as "dual use") aims to appropriate certain civilian technological advances, including some developed in cooperation with international research partners, to increase military capabilities. Chinese military experts talk of reaching a "command brain."
The scientists meeting in Beijing called for no system to "substantially increase the ability of actors to design weapons of mass destruction, violate the biological or chemical weapons convention" or be capable of "autonomously executing cyberattacks that cause serious economic losses or equivalent damage." As with arms control before, this will require a whole new scaffolding of verifiable agreements on AI, a challenge not only for the two superpowers but for humanity as a whole.
It is not just a matter for governments, but also for companies and technologists. The statement from the Beijing meeting contrasts with other positions, such as the Techno-Optimist Manifesto of Marc Andreessen, a successful technologist and entrepreneur, who proclaims "effective accelerationism": accelerating technology, under some control, as a solution to almost all of humanity's ills. He seems to forget that, in solving problems, humans often generate new ones.
The Beijing meeting had implicit official support, which also indicates concern in the world's second AI power. For the experts, controlling these systems is essential not so much because they act autonomously as because they develop autonomously: "No artificial intelligence system should be able to copy or improve itself without the explicit approval and help of a human being," or take "measures to unduly increase its power and influence." For former British Deputy Prime Minister Nick Clegg, now a senior executive at Meta (Facebook), it is like trying to build an airplane in mid-flight: not only difficult, but risky.
So-called AI is in its infancy, even in the military field, where the war in Ukraine is driving it forward. It will go much further, hand in hand with the companies and institutions (including the armed forces) that invest in it, and who knows if one day on its own. Is it too late to control it?
Activity subsidized by the Secretary of State for Foreign and Global Affairs.