Hi, I'm Tim Tyler, and today I'll be addressing the question of:
"How long before superintelligence?"
What is superintelligence?
Firstly, to explain the terminology: a superintelligence is an
agent which is vastly smarter than a human in most domains.
Are companies superintelligences?
Perhaps the nearest thing we have to superintelligence today are
companies. A company can possess intelligence which exceeds that of an
individual human in many domains. However, companies don't
really qualify as superintelligent agents, because they're not
smart enough across a wide enough range of different problems.
Companies may attain super-human performance in some areas -
and yet exhibit relatively poor performance in other ones.
Often, a company is only as smart as its smartest employee. For
example, consider skill at the game of go. A company
may be able to play go better than its best employee - but it
will probably not be a superintelligent go player - simply
because there is no known effective algorithm for parallelizing the
problem of playing go and distributing it across multiple employees.
This issue can be illustrated by a diagram:
A company usually consists of a network of individual human brains.
There are communication bottlenecks between the brains and algorithms
must split a problem into modular chunks in order to run efficiently
over such a network. A genuine superintelligence would probably not
have that architecture. It would dispense with the communications
bottlenecks between the nodes - and thus be able to handle larger
problems without requiring that they first be divided up.
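One way to make the bottleneck argument concrete - my own formalisation,
not part of the original talk - is Amdahl's law: if only a fraction p of
a task can be split into independent chunks, adding workers gives rapidly
diminishing returns. A minimal sketch:

```python
# Amdahl's law: speedup from n workers when only a fraction p of
# the work can be divided into independent chunks.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A highly decomposable task keeps improving as employees are added...
print(speedup(p=0.95, n=100))   # ~16.8x
# ...but a poorly decomposable task (like playing go) barely does.
print(speedup(p=0.10, n=100))   # ~1.1x
```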
When will genuine superintelligence arrive?
If we do not have superintelligence today, when will it arrive? I'll
argue that superintelligence will come soon after synthetic
intelligence reaches human performance levels.
Once human-level synthetic intelligence is attained, machines will be
able to contribute extensively to the research and development needed
to produce the next generation of intelligent machines - thus
accelerating their development. Also, by the time we have human-level
machine intelligence, progress is likely to be taking place rapidly -
simply on the grounds that technological development is constantly
accelerating.
When will we get broadly human-level performance from intelligent machines?
So, the next question is: when will we get broadly human-level
performance from intelligent machines?
One way in which this question has traditionally been addressed
is by looking at the hardware requirements for something with
broadly-equivalent functionality to the human brain.
Hans Moravec produced one of the first estimates of when this
might happen - way back in the 1980s. He looked at the computational
properties of cells in the retina performing edge detection and motion
detection, and compared these to some electronic signal processing
equipment which he considered to perform a similar function. He came up
with some MIPS-per-neuron figures, and then extrapolated from these to
produce an estimate representing the computational power of the
entire human brain.
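As a rough sketch of that style of calculation - using approximate
constants from Moravec's later published estimates, which may differ
from the exact figures behind this talk:

```python
# Hedged reconstruction of a Moravec-style extrapolation.
# Constants are approximate figures from Moravec's late-1990s
# writings; treat them as illustrative, not authoritative.
retina_mips = 1_000             # signal-processing equivalent of the retina
brain_to_retina_ratio = 75_000  # brain mass / mass of the retina's processing layer

brain_mips = retina_mips * brain_to_retina_ratio
# ~7.5e7 MIPS, i.e. on the order of 10^14 instructions per second
print(f"Whole-brain estimate: {brain_mips:.2e} MIPS")
```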
Other teams have subsequently analysed other mental subsystems -
including parts of the auditory cortex and the cerebellum - and come
up with broadly comparable figures.
Plotting these on a graph alongside the computational power of existing
computers suggests that human-level computing hardware will become
available in supercomputers around 2010 - and in desktops
around 2020.
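The extrapolation itself is simple compound growth. A sketch, assuming
an 18-month doubling time and starting points that I have back-solved
purely to reproduce the dates above:

```python
import math

target_mips = 1e8       # Moravec-style whole-brain figure
doubling_years = 1.5    # assumed Moore's-law doubling time

def crossing_year(start_year, start_mips):
    """Year at which a platform reaches target_mips, given steady doubling."""
    doublings = math.log2(target_mips / start_mips)
    return start_year + doublings * doubling_years

# Illustrative assumed starting points (year 2000), not the talk's data:
print(crossing_year(2000, 1e6))   # supercomputer: ~2010
print(crossing_year(2000, 1e4))   # desktop: ~2020
```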
However - according to most researchers in the field - hardware is
not the limiting factor. We don't know how to utilise the
hardware we currently have, let alone the hardware that will be
available in the future.
It is common for computer software to lag behind the capabilities of
its associated hardware. The effect will be familiar to anyone who has
owned a computer games console near the time of its launch. Software
contains the complex parts of the system, and it is those which are
difficult to create and maintain.
So: hardware estimates provide a lower bound for when we will have
superintelligence - but are not the whole story. Are there estimates
of how difficult it would be to create the required software?
The brain and the genome
Ray Kurzweil has attempted such an estimate - arguing that the brain's
design is of a manageable size - since it is stored in the human
genome. Here he is at Stanford in 2006, making his case:
[Ray Kurzweil footage]
However, this argument was criticised by Douglas Hofstadter - at the same event.
[Douglas Hofstadter footage]
So, who is right? Does the brain's design fit into the genome? - or not?
The detailed form of proteins arises from a combination of the
nucleotide sequence that specifies them, the cytoplasmic environment
in which gene expression takes place, and the laws of physics.
We can safely ignore the contribution of cytoplasmic inheritance -
however, the contribution of the laws of physics is harder to
discount. At first sight, it may seem simply absurd to argue
that the laws of physics contain design information relating to
the construction of the human brain. However there is a
well-established mechanism by which physical law may do just that - an
idea known as the anthropic principle. This argues that the universe
we observe must necessarily permit the emergence of intelligent
agents. If that involves encoding the design of the brains of
intelligent agents into the laws of physics, then so be it. There
are plenty of apparently-arbitrary constants in physics where such
information could conceivably be encoded: the fine structure constant,
the cosmological constant, Planck's constant - and so on.
At the moment, it is not even possible to bound the quantity of
brain-design information so encoded. When we get machine intelligence,
we will have an independent estimate of the complexity of the design
required to produce an intelligent agent. Alternatively, when we know
what the laws of physics are, we may be able to bound the quantity of
information encoded by them. However, today neither option is
available to us.
Anyway, even if Kurzweil were right - and the design
of the human brain fits onto a CD-ROM - that would still represent an
utterly enormous search space - one which would take far longer than
the history of the universe to search exhaustively. So
this whole approach doesn't really permit us to say anything useful
about how difficult the overall problem is.
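A back-of-envelope calculation (my own, with deliberately generous
assumptions) shows why a CD-ROM-sized design space is still hopeless
to search blindly:

```python
import math

# Why "fits on a CD-ROM" does not mean "findable by search".
cd_rom_bits = 700 * 8 * 10**6                    # ~700 MB design, in bits
log10_candidates = cd_rom_bits * math.log10(2)   # digits in the design count

evals_per_second = 1e18        # generously assumed future search rate
universe_age_seconds = 4e17    # ~13.8 billion years
log10_searchable = math.log10(evals_per_second * universe_age_seconds)

print(log10_candidates)   # ~1.7e9: the space has ~10^(1.7 billion) members
print(log10_searchable)   # ~35.6: only ~10^36 designs could ever be tried
```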
Argument from evolution
What about the idea that the human brain evolved over the last 600
million years - in the time since the Cambrian explosion? Doesn't that
give us a clue about how hard the problem is? If we can apply a fudge
factor to account for the advantage of engineering design over blind
mutations - and another fudge factor to account for the fact that we
can crib from nature's solution to the problem, doesn't that help us
estimate the level of difficulty? Unfortunately, no: the anthropic
principle again blocks this kind of approach. We have no idea whether
our brain evolved via a series of lucky flukes - we don't know if our
own evolution was typical - or not. For example, if we imagine a
constraint that says that a big meteorite destroys all multi-cellular
life every 700 million years, the evolution of intelligent life had
better take place within that timescale - no matter how many
apparently-improbable events that involves.
Reverse-engineering the human brain
How about reverse-engineering the human brain? If we can estimate how
hard that problem is, won't that at least give us an
upper bound on how difficult the overall problem is? Yes, but
reverse-engineering the human brain looks pretty tricky - and the task
might take us well into the second half of this century. So while such
an approach does - in principle - allow us to bound the difficulty of
the problem, the resulting bound is not very tight. We will
probably have machine intelligence long before such projects
get very far off the ground.
So, what does all that leave us with? It mostly leaves us
with intelligence testing. We can test the intelligence of humans -
and of machines. We can plot the increase of machine intelligence over
time, and see when it reaches human level.
I think this is the most promising approach - other than looking at
sheer hardware capability - to estimating when we will first have
access to superintelligent machines.
So, what intelligence tests exist - where we already have access to
historical data about machine intelligence? One such area is computer
chess. Before Deep Blue beat Kasparov it was
possible to look at the history of chess computer ratings, and plot a
graph that showed when computers would be able to beat the best
humans. That illustrates the potential of the technique - but it also
highlights one of its limitations - any individual test may "crap out"
before broad human-level intelligence is reached, and fall to a
machine intelligence that specialises in solving the test.
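To make the method concrete, here is a minimal sketch of that kind of
extrapolation - fitting a line to ratings over time and solving for the
crossing point. The (year, rating) points below are invented
placeholders, not real historical chess-program data:

```python
# Sketch of the ratings-extrapolation method. The (year, Elo) points
# are placeholders, NOT real chess-program ratings.
years = [1978, 1983, 1988, 1993]
elos  = [1800, 2100, 2400, 2600]
human_best = 2800               # roughly the peak human rating

# Least-squares line fit, no external libraries needed.
n = len(years)
mx, my = sum(years) / n, sum(elos) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, elos)) / \
        sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx

# Solve slope * year + intercept = human_best for the crossing year.
print((human_best - intercept) / slope)
```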
The most obvious solution is to use multiple tests, from a variety
of domains: strategy games, robot control, compression, speech
recognition, language translation, IQ tests - and so on.
One of the best tests we currently have is the game of go - a
classical oriental board game. Playing go exercises pattern
recognition circuitry in the human visual cortex - which represents a
substantial proportion of the human brain by volume. It is a taxing
game for both humans and machines. Also, there is a rating scale whose
uniformity and linearity is backed up by a handicap system, and there
is a long history of computer players and tournaments.
Currently the best computer go programs rate at around 1 or 2 kyu - on
a scale that goes from beginners at 30-kyu down to 1-kyu, and then
from 1-dan up to 9-dan - which represents world champion level.
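Because kyu ranks count down while dan ranks count up, it is convenient
to map both onto a single linear scale before plotting machine progress
over time. A small helper along these lines - the numeric convention is
my own, roughly one handicap stone per unit:

```python
# Map go ranks onto one linear scale: 30-kyu -> 0 ... 1-kyu -> 29,
# then 1-dan -> 30 ... 9-dan -> 38. One unit ~ one handicap stone.
def rank_to_scale(n, kind):
    if kind == "kyu":
        return 30 - n    # kyu counts DOWN toward stronger play
    if kind == "dan":
        return 29 + n    # dan counts UP from just above 1-kyu
    raise ValueError(kind)

print(rank_to_scale(2, "kyu"))   # 28: about the best programs at the time
print(rank_to_scale(9, "dan"))   # 38: world-champion level
```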
As an intelligence test, Go may "crap out" before broad
human-level intelligence is reached - but to me, it looks as though it
may well take us to within spitting distance of the target.
Go is probably the best single measure of intelligence for which we
have a good history of machine performance in which to ground
predictions.
A 1997 survey of go programmers produced a wide range of answers
to the question of when a program would be world champion.
Mei-Kou Tei 9-dan [P]
Darren Cook [P]
Mick Reiss [P]
Martin Mueller [P]
Chen Zhixing [P]
Ken Chen [P]
Shinichi Sei [P]
Tristan Cazenave [P]
David Fotland [P]
Yung Jye Huang [P]
Even with go - where we have some of the best available evidence -
the situation is not yet clear-cut.
To finish this talk, I'd like to present a slide which illustrates
my estimate of when we will obtain superintelligent machines:
Tim Tyler's estimate
The curve is a probability density function, illustrating the
probability of superintelligence first arising on the specified date.
It is a roughly bell-shaped curve, peaking around 2025 - with a
fairly substantial spread, indicating my level of
uncertainty about the issue.
I've listed estimates from those interested in the issue enough to
produce probability density functions.
Michael Vassar [source]
This graph has an unusual shape - and the y-axis seems to be miscalibrated.
Less Wrong 2011 Survey [source]
I discarded data before 2010 and after 2150 on the grounds that these
were outliers. The question was: "By what year do you think the
Singularity will occur? Answer such that you think there is an even
chance of the Singularity falling before or after that year. If you
don't think a Singularity will ever happen, leave blank." Note: each
point (rather misleadingly) represents data for the next 10 years.