According to a report published in March by Goldman Sachs economists, some 300 million full-time jobs around the world could be replaced by artificial intelligence. That is 18% of the workforce. Workers in advanced economies would be more affected than those in emerging ones.
Patricia Ventura, who holds a PhD in Media, Communication and Culture from the Autonomous University of Barcelona, is an expert in ethics and artificial intelligence. In 2021, she produced a report for the Spanish government that provided a framework for algorithmic technologies in the communication sphere. Miguel Ángel Antoñanzas spoke with her.
One of the most frightening prospects is that we could feel replaced by artificial intelligence. Is that possible?
We cannot feel inferior to a machine because it is more efficient or productive. Human beings are much more than efficiency and productivity. This discourse of productivity, of comparing the human brain with the machine, does not help us, and we cannot give in to these stories. They are self-interested narratives pushed by those who stand to benefit from this upheaval.
We have to value ourselves, above all, for our moral judgment, which should govern this technology. Machines do not have that morality and cannot have it. There is nothing to show that they ever could.
So which professions are going to be harmed by this artificial intelligence?
I have seen a great deal of concern among dubbing professionals and among screenwriters. In fact, the United States has seen the biggest worldwide mobilization of screenwriters, who want the studios to keep relying on them. They do not want to be handed scripts already written by machines just to review them, and they do not want their scripts used to train the machines; and if they are, they want to be paid for it. So these are professions that are certainly at risk and can be harmed.
In the legal world, too: one of these tools can instantly track down case law on a specific topic. Among journalists, certain tasks will no longer be necessary. Some even say that programmers will be affected, since machines now write code as well.
But it is also true that when you look back, every technological change seemed like it would make everything disappear, like the world was going to end. In the end, things were reorganized, which is why we have to learn from history. Fear is never good. Obviously, we must be understanding with the professions, or the people, who cannot reinvent themselves now or who hold certain positions, but that is how technological evolution works. If a profession can be done by a machine, perhaps it was not so creative or decisive after all. I think we still do not know the real effects and we have to look at this calmly, but we have to accept that certain tasks (I do not know about whole professions, but certain tasks) will end up being taken over by this technology.
And what professions are going to be favoured?
The big technology companies, along with related sectors and anyone who can take advantage of the technology in some way. In the case of communication and culture, perhaps the most specialized areas: those who want to misinform or poison the debate can often exploit it, but so can people who want to build worthwhile projects, even journalistic projects that could not be done before. Above all, though, the big tech companies have the upper hand.
What is not well understood about artificial intelligence?
What I think is still not understood, though it eventually will be, is that this is technology: technology that, like any other, can be used in one way or another, for positive things or for things that are not so positive.
It is a tool that can be used to achieve many things that until now were out of reach. It can give us all more opportunities. It can help us improve the environment. But it can also be used to generate misinformation and polarization, to poison public debate.
Some experts, even some of those responsible for developing this artificial intelligence, are publicly calling for regulation. Other experts have warned that uncontrolled artificial intelligence poses a danger to all of humanity.
Artificial intelligence itself is not bad. It has no intentions, it has no motivations, and there is no evidence that a machine can have a motivation. That is a very important nuance, because it puts the responsibility on whoever creates it. Artificial intelligence is not going to dominate us; it is not out of control. Rather, the people who are creating it and putting it on the market, perhaps without enough security measures and without enough auditing, are indeed a danger. The focus should be on people. I believe the worst scenario is deregulation, because regulation will force everyone to comply with rules before they can put devices on the market that could harm others.
Creators are responsible for their creations, and that responsibility obliges them to control them. So the idea that it cannot be controlled seems to me an interesting story that certain creators have an interest in disseminating or promoting, but I do not think that is the case: if they are responsible, they can control it.
Although we have seen public calls for regulation, you have said you believe there is doublespeak in those statements.
It seems to me that the language here is confusing and self-interested. I do not think they are interested in regulation; what interests them is participating in shaping the regulation.
In Europe we have the well-known General Data Protection Regulation. As an adviser to the UN and the European Union explained very well when Italy banned ChatGPT by applying that regulation very strictly, it in no way legitimizes what OpenAI and similar companies are doing with data on the internet, among other reasons because much of that data is protected by copyright. Meanwhile, a California law firm has filed a lawsuit against OpenAI in the United States. What is curious is that Italy backed down, and Europe has not enforced the General Data Protection Regulation. To me, that reveals the enormous power these companies hold, which seems to sit above that of the states.
Indeed, as you mentioned, OpenAI has been hit with a class-action lawsuit in California accusing the company of stealing and appropriating people's information to train its artificial intelligence tools.
The problem is that these tools have perhaps been launched on the market drawing on, let us say, data from people, from creators protected by copyright. Here lies the contradiction, and it is sad, because these are tools that can greatly serve and encourage creativity, yet at the same time creators themselves may feel uncomfortable, since the creations of others are being used. That is why regulation is important.