We talked about Artificial General Intelligence and AI Risk and a lot about stamp collecting devices. It was good fun. I said some silly things that hopefully won’t make the cut, and forgot to mention some important things, but this is the nature of recording a conversation rather than a prepared speech, I guess.
I feel like I didn’t really do justice to the ideas, but the resulting videos will serve as a basic introduction and get people thinking about these things.
In a 2013 survey conducted by Nick Bostrom, one of the leading thinkers in the field of artificial intelligence, human-level machine intelligence is defined as a machine’s ability to “carry out most human professions at least as well as a typical human.” Another name for this is artificial general intelligence (AGI).
In order to evaluate the progress of AGI technology, many experts look to computing power as a key indicator because of its quantifiability. When will computers be able to think faster than humans and process more information?
Computers today have already transcended human capabilities in terms of raw computing power. What humans find very difficult - crunching numbers and analyzing big data - computers can do with great speed and efficiency. Computers also have the added benefit of being able to store all that data and access it at any time without having to worry about forgetting.
But what about computers’ ability to think qualitatively better than us? Until recently, machines have found it very difficult to tell the difference between a cat and a dog in a photograph - or between a piece of meat in a bun and a dog in summer - any better than an average four-year-old child.
That’s not to say that we haven’t seen huge advancements in machine learning software over the past few years - IBM’s Watson, Microsoft’s Cortana, and Google Brain, to name a few. But these projects have cost hundreds of millions of dollars and are not nearly as accessible as the advancements in raw computing power… yet.
So AI is advancing at an impressive rate - faster in raw power than in quality, though quality is catching up quickly - but how can we tell if and when machines will be as smart as us?
Unsurprisingly, this is a heated debate among leading AI thinkers. They can be roughly divided into three camps: the optimists, the conservatives, and the skeptics.
The optimists think that exponential growth is at work and that artificial intelligence, though only slowly creeping up on us now, will blow right past us within the next few decades, in line with Ray Kurzweil’s Law of Accelerating Returns.
The conservatives believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to reaching AGI.
The third camp - the skeptics - believes simply that there’s a good chance AGI will happen eventually, but there’s also a good chance it won’t; and nobody has the ability to predict with any accuracy when it will happen.
How do we make sense of all these opinions?
Luckily for us, in the survey mentioned above, hundreds of AI experts were asked the following question: “When do you predict human-level AGI will be achieved?”
Here were the results:
Median optimistic year: 2022
Median realistic year: 2040
Median pessimistic year: 2075
Interestingly, in response to the question, “In your opinion, what are the research approaches that might contribute the most to the development of such [human-level machine intelligence] (HLMI)?” the experts gave the following range of disciplines:
Anyone who wants to keep an eye on progress in the field of AI can look to these disciplines for an indication.
Whether we reach human-level machine intelligence in the next decade, in our lifetime, or ever is still anyone’s guess. As humans we have a natural cognitive bias to think linearly rather than exponentially, so that might be preventing everyone but the most visionary thinkers from seeing what really lies ahead.
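The linear-versus-exponential point can be made concrete with a toy projection. All the numbers here are illustrative assumptions (a capability metric that doubles every two years, Moore’s-law style, versus one that grows by a fixed step per year), not figures from the survey:

```python
# Toy comparison of linear vs. exponential growth in some capability metric.
# The starting value, step size, and doubling time are illustrative assumptions.

def linear_projection(start, step, years):
    """Capability after `years` if it grows by a fixed `step` per year."""
    return start + step * years

def exponential_projection(start, doubling_time, years):
    """Capability after `years` if it doubles every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

for years in (10, 20, 30):
    lin = linear_projection(1.0, 1.0, years)
    exp = exponential_projection(1.0, 2.0, years)
    print(f"{years} years: linear ~{lin:.0f}x, exponential ~{exp:.0f}x")
```

After 30 years the linear projection has grown about 31x while the exponential one has grown over 30,000x - which is why a forecaster reasoning linearly from recent progress can be off by orders of magnitude.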
I can only hope that if human-level machine intelligence arrives, it will play nicely with us.
The AI Revolution: Road to Superintelligence - Wait But Why