Science and Tech

"Superintelligent AI will be uncontrollable": the expert’s warning after ChatGPT

Ilya Sutskever, former chief scientist at OpenAI

The development of artificial intelligence is advancing by leaps and bounds, but with it concerns are also growing.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has issued a strong warning about the dangers of the next generation of AI: "Superintelligent artificial intelligence will be uncontrollable, autonomous and unpredictable."

According to the expert, we are about to cross a threshold beyond which these technologies will be not only more advanced, but also more autonomous and unpredictable, posing unprecedented ethical and safety challenges.

A new era for artificial intelligence

Ilya Sutskever was a key figure in the development of ChatGPT, and after leaving OpenAI he founded Safe Superintelligence Inc. (SSI), an organization dedicated to controlled, safe AI development.

He argues that the next generation of AI will not just be more powerful, but radically different. In recent statements at the NeurIPS conference, the expert highlighted three key characteristics that will mark this evolution.


Future artificial intelligence could act independently, without constant human supervision. In addition, the increasing complexity of these systems will make their actions ever harder to anticipate. And although the idea is controversial, Sutskever does not rule out that these AIs will develop a certain degree of self-awareness.

These advances, although promising, open a debate about the possible consequences of letting such advanced systems operate freely.

The current AI landscape is marked by a race between companies to lead the sector, but not all experts agree on the approach. Sutskever left OpenAI in May 2024 after tensions with Sam Altman, the company’s CEO.

While Altman bet on rapid growth to stay competitive, Sutskever argued for prioritizing safety, especially in the development of superintelligent systems.

This dispute reflects a broader division in the industry. On the one hand, those who seek to maximize the commercial impact of AI, and on the other, those who warn about the risks of advancing without sufficient control.

The proposal from Sutskever's new company, SSI, focuses on ensuring that future artificial intelligences are aligned with human interests. In the expert's words, from a statement by his company: "Our singular focus is safe superintelligence. We will advance capabilities as fast as possible, while making sure our safety always remains ahead."

The company has already attracted the attention of investors, securing initial funding of $1 billion.

Sutskever's warning points not only to technical challenges but also to ethical ones. The possibility of creating systems that exceed human capabilities raises fundamental questions about control, coexistence and the very future of humanity.

These questions have no easy answers, but they are necessary at a time when technology is advancing faster than regulation and public debate.

The development of AI is full of promise, but also of risks. As companies like OpenAI and SSI pull in opposite directions, the world watches closely. Will we be able to control the systems we are creating, or will they end up escaping our influence?

Sutskever's reflection reminds us that artificial intelligence, however advanced, is still a tool created by humans. The challenge is to ensure that its evolution is not only technological, but also ethical and responsible.


Tags: Artificial intelligence


About the author

Redaction TLN
