Stephen Hawking was a scientist who stood out not only for his work in theoretical physics, but also for his opinions and predictions about the future of humanity. One of the issues that concerned him most was the development of artificial intelligence (AI).
Stephen Hawking was one of the most important theoretical physicists of recent decades, world-renowned for his work on quantum gravity and black holes.
He was also known for his book “A Brief History of Time”, which became a best-seller. He likewise popularized science through lectures accessible to a non-specialist audience.
However, at the age of 21 he was diagnosed with motor neuron disease, a neurodegenerative condition known as amyotrophic lateral sclerosis (ALS), which left him physically disabled and reliant on a wheelchair.
Such was his dedication that his condition did not prevent him from continuing to work in his field and becoming one of the most influential scientists of all time. Hawking passed away in March 2018 at the age of 76, but his work remains a source of inspiration for many people.
This is what Stephen Hawking thought of artificial intelligence
Before his passing in 2018, Stephen Hawking left some striking predictions for the future of humanity. Many of them are chilling, as they speak of the possible extinction of our species.
Most surprising of all, the British scientist had been warning about the dangers of artificial intelligence long before tools like ChatGPT became widespread.
During an interview in 2014, he stated that the development of full AI could spell the end of humanity if it surpassed human intelligence. He explained that the primitive forms of AI developed so far had already proved useful, but that a fully developed AI could redesign itself and outpace humans.
Everything seems to indicate that this prediction is coming true little by little, since the development of technologies such as GPT-5, expected in the coming years, could eliminate many jobs as we know them today.
In his book “Brief Answers to the Big Questions”, Hawking discussed the real possibility of AI surpassing human intelligence. For this reason, he argued that humanity would need to find a new home in space in order to preserve the species.
In the book, he also noted that AI could have both a positive and a negative impact on the future, warning that if it is not approached with caution and responsibility, the result could be catastrophic.
The famous theoretical physicist not only spoke about the possibility of humanity’s extinction, but also proposed ways to control the development and advancement of this technology.
Stephen Hawking emphasized the importance of establishing ethical and legal frameworks to regulate the development and use of advanced tools such as ChatGPT in society.
AI regulation is a complex issue, but responsibility and accountability must be promoted among the companies and organizations that use AI. Elon Musk is one of the people concerned about the uncontrolled development of this technology.
He and others have even called for a six-month pause in AI development because of its potential dangers, though many think that is a bad idea. In any case, Hawking said something worth keeping in mind before he died: “Artificial intelligence could be the worst thing that has happened to humanity.”