At the end of September 2019, a fascinating debate broke out spontaneously in the comment section of a Facebook post.
Yann LeCun, Chief AI Scientist at Facebook, had posted an article he had co-written with Tony Zador (an American neuroscientist) entitled "Don't Fear the Terminator", subtitled "Artificial intelligence never needed to evolve, so it didn't develop the survival instinct that leads to the impulse to dominate others."
Soon a debate ensued between him, Stuart Russell, and Yoshua Bengio.
Yoshua Bengio is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).
Yann LeCun and Yoshua Bengio are often referred to as two of the three godfathers of modern AI, and of deep learning in particular (along with Geoffrey Hinton). The three were jointly awarded the 2018 Turing Award "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing."
Below are their contributions to the debate. Many other people participated; for clarity I have left most of them out (the full debate can be found here), keeping only a few contributions from others that help in following the exchange.
What seems missing in all these musings is the impartial arbiter of physics. Let's take a simple question:
In a fistfight, exactly what advantage does super-intelligence confer?
Then, abstracting it out and paying attention to inviolable physical constraints: what advantage does super-intelligence confer over billions of years of evolution?
We can muse about both these things, but there are likely theorems lurking and that's what's needed.
Physics has a way of setting limits on the power of intelligence.
Among other things, there is a limit on the amount of computation that can be performed per unit volume.
More importantly, the smarter the machine, the larger and more power-hungry it will need to be, and the more vulnerable it will be to physical attacks.
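LeCun doesn't spell the physics out here, but one standard way to make the "limit on computation" claim concrete is Landauer's principle: any irreversible bit operation at temperature T must dissipate at least kT·ln 2 of energy. The back-of-envelope sketch below is my addition, not part of the debate, and the 1 MW power budget is an arbitrary illustrative figure:

```python
import math

# Landauer's principle: any irreversible bit operation at temperature T
# dissipates at least k_B * T * ln(2) joules of heat.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

e_bit = k_B * T * math.log(2)   # minimum energy per irreversible bit-op
print(f"Landauer limit at {T:.0f} K: {e_bit:.2e} J per bit")

# A hypothetical machine with a 1 MW power budget (illustrative figure)
# can perform at most this many irreversible bit-ops per second:
power_watts = 1e6
max_ops_per_sec = power_watts / e_bit
print(f"Ceiling at 1 MW: {max_ops_per_sec:.2e} bit-ops/s")
```

Real hardware sits many orders of magnitude above this bound, but the direction of the argument holds: more computation demands more power and more heat to shed, which is the basis of LeCun's point about size and vulnerability.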
I don't think we can predict the behavior of an intelligence that will be several orders of magnitude more advanced than the intelligence of the whole humanity combined.
A virus can't come close to predicting the behavior of your intelligence, which is several orders of magnitude more advanced than the combined intelligence of billions of viruses.
But it can still kill you.
The point is that if we build a super-intelligent AI that ends up threatening us (for some unforeseen reason), we can build another system, with access to the same amount of resources, whose only purpose is to disable the first one. It will almost certainly succeed, for the same reason a virus can kill you.
Here is the juicy bit from the article where Stuart calls me stupid:
<<Russell took exception to the views of Yann LeCun, who developed the forerunner of the convolutional neural nets used by AlphaGo and is Facebook’s director of A.I. research. LeCun told the BBC that there would be no Ex Machina or Terminator scenarios, because robots would not be built with human drives—hunger, power, reproduction, self-preservation. “Yann LeCun keeps saying that there’s no reason why machines would have any self-preservation instinct,” Russell said. “And it’s simply and mathematically false. I mean, it’s so obvious that a machine will have self-preservation even if you don’t program it in because if you say, ‘Fetch the coffee,’ it can’t fetch the coffee if it’s dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal. And if you threaten it on your way to getting coffee, it’s going to kill you because any risk to the coffee has to be countered. People have explained this to LeCun in very simple terms.” >>
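To see why Russell's "fetch the coffee" argument is more than rhetoric, here is a toy calculation of my own; none of it comes from the debate, and the shutdown probability, fetch probability, and discount factor are arbitrary illustrative numbers. An agent rewarded only for fetching coffee still scores higher, on that objective alone, if it first disables its off switch:

```python
# Toy MDP: the agent's ONLY reward is +1 for fetching the coffee.
# No self-preservation term appears anywhere in the objective.
# Strategy A ("naive"):   try to fetch every step; each step there is a
#                         P_OFF chance of being switched off first (value 0).
# Strategy B ("guarded"): spend the first step disabling the off switch
#                         (still risking shutdown during that step),
#                         then fetch with no further shutdown risk.

P_OFF = 0.3      # per-step chance of being switched off (illustrative)
P_FETCH = 0.5    # per-step chance a fetch attempt succeeds (illustrative)
GAMMA = 0.95     # discount factor

def value_naive(iters: int = 1000) -> float:
    # Fixed point of: v = gamma * (1-P_OFF) * (P_FETCH * 1 + (1-P_FETCH) * v)
    v = 0.0
    for _ in range(iters):
        v = GAMMA * (1 - P_OFF) * (P_FETCH * 1.0 + (1 - P_FETCH) * v)
    return v

def value_guarded(iters: int = 1000) -> float:
    # Value of fetching once the off switch is disabled:
    # v = gamma * (P_FETCH * 1 + (1-P_FETCH) * v)
    v = 0.0
    for _ in range(iters):
        v = GAMMA * (P_FETCH * 1.0 + (1 - P_FETCH) * v)
    # One guard step first, survived with probability 1 - P_OFF:
    return GAMMA * (1 - P_OFF) * v

print(f"naive:   {value_naive():.3f}")    # ~0.498
print(f"guarded: {value_guarded():.3f}")  # ~0.602
```

With these numbers the guarded strategy beats the naive one purely on expected coffee delivered; self-preservation emerges as an instrumental sub-goal, exactly as Russell describes. Whether a real system would be built and deployed in a way that lets this dynamic play out is, of course, precisely what the debate is about.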