Let it be said upfront that right now we have no absolute certainty about what is to come, at least not those of us who live relatively unaware of what the big technology companies are cooking up in their laboratories. Thus, it is as respectable to follow the path of optimism as that of the most absolute pessimism regarding the emergence of artificial intelligence and its possible consequences.
We are all aware that it is probably a transformative technology unlike anything we have seen before in the history of humanity, and it is legitimate to wonder whether it will change our lives for the better or whether, on the contrary, there are real reasons to worry and to prepare for something over which we could very well lose control.
“The problem is not artificial intelligence, it is superintelligence”
One of the voices critical of the current system of artificial intelligence development is that of Roman Yampolskiy, a computer scientist with a doctorate in computer science and engineering, who coined the term AI safety and who currently focuses his efforts on preventing a catastrophe from happening.
“I hope to make sure that the artificial superintelligence we are currently creating does not kill all humans in the future,” says the Latvian-born expert, currently a professor at the University of Louisville (USA), on The Diary of a CEO podcast. “In my first years of work I was convinced that we could create safe AI, but the more I analyzed it, the more I realized that it is not something we can really achieve, because one problem leads to 10 new ones, and those ten to another 100. And we are not talking about problems that are difficult to solve, but impossible. So far we have only applied patches,” adds the expert.
And this expert asserts that while progress in AI capabilities is exponential, or even hyper-exponential, progress in AI safety is “linear or constant.” “The gap between the capacity of systems and our ability to predict what they are going to do and to control them is getting bigger,” he concludes.
It is a problem that will be especially aggravated with the arrival of superintelligence, the point at which artificial intelligence surpasses human capabilities. “In the last decade we figured out how to make artificial intelligence better: it turns out that if you add more computing power and more data, it just gets smarter. And now the smartest people in the world, with investments of billions, are focused on creating the best possible artificial intelligence.”
“Unfortunately, while we know how to make those systems much smarter, we don’t know how to make them safe. How do we make sure they don’t do something we regret? When we look at predictions, some estimates claim that in just two or three years we will reach advanced AI. And at the same time we don’t know how to align those systems with our preferences,” he adds.
“So,” he emphasizes, “we are creating a kind of alien intelligence. If aliens came to Earth and you had three years to prepare, you’d be panicking right now, but most people don’t even realize this is happening.”
And the problem is that, according to the computer scientist, no one seems willing to pull the handbrake. Yampolskiy asserts that we have not even scratched the surface of the potential of narrow artificial intelligence (the kind we have at our disposal today), among other reasons because there is no interest in doing so, and that potential would be enough to satisfy society’s needs at many levels. But, as the Spanish saying goes, Sir Money is a powerful gentleman: in the end, money rules.
“The reality of large companies is that the only obligation they have is to make money for their investors. They have no moral or ethical obligation. Furthermore, by their own admission, they don’t know how to do it. Their answer to a possible problem is that they will solve it when we get to that point, or that artificial intelligence will help us control more advanced artificial intelligence. And that’s crazy,” he points out.
“No one can tell you with certainty what is going to happen, but of course if you don’t control it you are not going to get the results you want. The space of possibilities is almost infinite; the space of the results we would like is tiny,” he shares.
So, if we reach the point where we glimpse that control could be lost, some experts, such as Eric Schmidt, former CEO of Google, say that we could simply turn the superintelligence off, although everything will depend on the moment. “It’s absurd to think that, because they are distributed systems. You can’t turn them off, and they’re also smarter than you. They would make multiple backups. They will turn you off before you can turn them off. We will only have control at the levels prior to superintelligence. I’m concerned about higher intelligence, not the human who can misuse current AI,” he explains.
In short, Yampolskiy is convinced, in line with what other experts such as Yoshua Bengio think, that we still have time to decide what we want artificial intelligence to be, so that it does not escape our control and leave us at its mercy. “It’s not over until it’s over. We can decide today not to build superintelligence,” he concludes.
