For artificial intelligence with human values

The stakes are too high to leave AI development in the hands of researchers, let alone the CEOs of tech companies. While overly rigid regulation is not the answer, the current regulatory vacuum must be filled, and that process requires broad global commitment.

“This may be the year that artificial intelligence transforms everyday life.” So declared Brad Smith, Microsoft’s vice chair and president, at an event on artificial intelligence (AI) organized by the Vatican in early January. But Smith’s statement was more of a call to action than a prediction: The event, attended by industry leaders and representatives of all three Abrahamic faiths, was intended to promote an ethical and anthropocentric approach to AI development.

There is no doubt that AI poses a number of daunting operational, ethical and regulatory challenges. And addressing them will not be easy. Even though AI development dates back to the 1950s, there is still no agreed definition of the concept, and its potential impact remains unclear.

Of course, recent advances, from the almost chillingly human texts produced by OpenAI’s ChatGPT to applications that could shave years off the drug-discovery process, illuminate some dimensions of AI’s immense potential. But it remains impossible to predict all the ways in which AI will transform human life and civilization.

This uncertainty is not new. Even after recognizing the transformative potential of a technology, the form that transformation takes often surprises us. Social networks, for example, were initially presented as an innovation that would strengthen democracy, but they have done far more to destabilize it, serving as a key tool for spreading disinformation. One can assume that AI will be exploited in similar ways.

We don’t even fully understand how AI works. Consider the so-called black-box problem: With most AI-based tools, we know what goes in and what comes out, but not what happens in between. If AI is making (sometimes irrevocable) decisions, this opacity poses a serious risk, compounded by issues such as the transmission of implicit bias through machine learning.

The misuse of personal data and the destruction of jobs are two other risks. And according to former US Secretary of State Henry A. Kissinger, AI technology could undermine human creativity and vision, as information comes to “overwhelm” wisdom. Some fear that AI will lead to human extinction.

With so much at stake, the future of the technology cannot be left in the hands of AI researchers, much less the CEOs of tech companies. Although rigid regulation is not the answer, the current regulatory vacuum must be filled. That process requires the kind of broad global engagement that, after many false starts, is now taking shape to combat climate change.

In fact, climate change offers a more useful analogy for AI than the oft-made comparison to nuclear weapons. Nuclear weapons may affect people indirectly, through geopolitical developments, but the technology behind them is not a fixture of our personal and professional lives, nor is it shared globally. Climate change, like AI, affects everyone, and acting to address it can put individual countries at a competitive disadvantage.

The race to master AI is already a key dimension of the rivalry between the United States and China. If either country imposes limits on its AI industry, it risks letting the other get ahead. Therefore, as with reducing emissions, a cooperative approach is vital. Governments, along with other public actors, must work together to design and impose limits on private-sector innovation.

Of course, this is easier said than done. The scant consensus on how to approach AI has led to a hodgepodge of regulations. And efforts to craft a common approach in international fora have been hampered by power struggles between major players and a lack of enforcement authority.

But there is promising news. The European Union is drawing up an ambitious instrument to establish harmonized rules on AI. The AI Act, which is due to be finalized this year, aims to facilitate the “development and adoption” of AI in the EU, while ensuring that the technology “works for people and is a positive force for society”. The legislative proposal covers everything from adapting the rules on civil liability to revising the EU’s product-safety framework. It takes the kind of comprehensive approach to AI regulation that has so far been missing.

Not surprisingly, the EU has been at the forefront of AI regulation. The bloc has a history of leadership in developing regulatory frameworks in critical areas. Arguably, the EU’s data-protection legislation inspired similar measures elsewhere, from the California Consumer Privacy Act to China’s Personal Information Protection Law.

But it will be impossible to advance the global regulation of AI without the US. And despite its shared commitment with the EU to developing and deploying “trustworthy AI,” the US is committed to AI supremacy above all else. To that end, it seeks not only to strengthen its own leading industries, partly by minimizing the red tape that could slow their development, but also to impede progress in China.

As the National Security Commission on Artificial Intelligence pointed out in a 2021 report, the US should focus on “chokepoints that impose significant strategic costs on competitors, and minimal economic costs on US industry.” The export controls that the US imposed in October, targeting China’s advanced computing and semiconductor industries, exemplify this approach. For its part, China is unlikely to abandon its goal of achieving technological self-sufficiency (and, eventually, supremacy).

Beyond opening the way for AI risks to manifest, this technological rivalry has obvious geopolitical implications. For example, Taiwan’s leading role in the global semiconductor industry gives it an advantage, but could also paint a target on its back.

It took more than three decades for awareness of climate change to translate into real action, and yet we are still not doing enough. Given the pace of technological innovation, we cannot afford to go down a similar path with AI. Unless we act now to ensure that technology development is guided by anthropocentric principles, we will almost certainly regret it. And, as in the case of climate change, we will most likely regret our inaction much sooner than we think.

© Project Syndicate, 2023. www.project-syndicate.org
