
Artificial intelligence: the real risks it poses to humanity

Alarm bells are ringing everywhere: artificial intelligence (AI) poses an existential risk to humanity, the warning goes, and it must be controlled before it is too late.

But what are these doomsday scenarios, and how are machines supposed to wipe out humanity?


Most scenarios start from the same point: one day machines will surpass human capabilities, escape all control and refuse to be turned off.

“Once we have machines that aim for self-preservation, we’ll have quite a problem,” said AI scholar Yoshua Bengio at a conference this month.

But since these machines do not yet exist, imagining how they could doom humanity is often the task of philosophy and science fiction.

Swedish philosopher Nick Bostrom has described an ‘intelligence explosion’ that will occur when super-intelligent machines begin to design machines themselves.

Bostrom’s ideas have been dismissed by many as science fiction, not least because he once argued that humanity is a computer simulation and has supported theories close to eugenics.

He also recently apologized after a racist message he had sent in the 1990s came to light.

Even so, his thoughts on AI have been highly influential, inspiring both Elon Musk and Professor Stephen Hawking.


Terminator


If super-intelligent machines are going to destroy humanity, they will surely need a physical form.

The red-eyed cyborg played by Arnold Schwarzenegger in ‘Terminator’ has proven to be a powerful image.

But experts have dismissed the idea. “This science fiction concept is unlikely to become a reality in the next few decades, if ever,” wrote Stop Killer Robots, an activist group campaigning against war robots, in a 2021 report.

However, the group has warned that giving machines the power to make decisions about life and death is an existential risk.

Robotics expert Kerstin Dautenhahn, of the University of Waterloo in Canada, played down those fears.

It is unlikely, she told AFP, that AI would give machines greater reasoning abilities or instill in them a desire to kill all humans.

“Robots are not bad,” she said, though she conceded that programmers could make them do harmful things.


Deadliest chemicals


A less fanciful scenario is that of ‘villains’ using AI to create new viruses and spread them.

Powerful language models like GPT-3, the technology behind ChatGPT, turn out to be extremely good at inventing horrific new chemical agents.

A group of scientists who use AI to help discover new drugs ran an experiment in which they tweaked their system to invent harmful molecules instead.

They managed to generate 40,000 potentially poisonous agents in less than six hours, according to the journal Nature Machine Intelligence.

Artificial intelligence expert Joanna Bryson, of the Hertie School in Berlin, said it is perfectly possible that someone could find a way to spread a poison such as anthrax more quickly.

“But it’s not an existential threat,” she said. “It’s just a horrible, horrible weapon.”


Hollywood convention dictates that disasters must be sudden, huge and dramatic, but what if the end of humanity were slow, silent and not definitive?

“Our species could come to an end without having a successor,” says philosopher Huw Price in a promotional video for the Centre for the Study of Existential Risk at the University of Cambridge.

But in his view there are “less bleak possibilities”, in which humans enhanced with advanced technology could survive.

A picture of the apocalypse is often framed in evolutionary terms.

The well-known theoretical physicist Stephen Hawking argued in 2014 that our species will ultimately be unable to compete with AI machines, telling the BBC that this could “spell the end of the human race”.

Geoffrey Hinton, who spent his career building machines that resemble the human brain, speaks in similar terms of ‘superintelligences’ that will simply surpass humans.

He recently told US broadcaster PBS that it was possible that “humanity is only a passing phase in the evolution of intelligence”.


AFP
