Hi! I'm Tim Tyler - and this is a video about the number of insights
required in order to produce superintelligent machines.
This is of interest since it affects the time required to produce
superintelligent machines, our level of uncertainty about that
time, the rate of change when such machines are developed, and
the best strategy to employ in producing them.
To illustrate what I mean with some examples:
If a sudden theoretical breakthrough allows superintelligent machines
to be produced, then this might happen at almost any time - and it
increases the likelihood of rapid change when that happens.
On the other hand, if the development of superintelligent machines
takes a long and complex engineering effort, then the duration of that
effort will be easier to predict - based on observations of the
progress so far. Also, the impact of the onset of superintelligent
machines will probably be spread out over a longer period of time.
So: will the development of superintelligent machines involve a
small number of large leaps - or many small incremental steps?
Insight into this question can be gained by considering the insights
needed in other projects - both completed projects and unfinished ones.
Also, it may be possible to get some insight by considering the
structure of the remaining part of the problem.
One issue with comparing against other problems is that it is difficult
to know which problems to select for comparison.
Some obvious examples include powered flight, the atom bomb, space
flight, and the development of printing, telephony, and computers.
Consider powered flight, for example. Flight is a classical instance
of a technological problem which has already been mastered. There we
see a few basic principles, some insights about how to apply them, and
then a complex engineering project based on them.
One might say that the basic principle of flight involves creating a
low-pressure area above a wing, and the key insight is that this can
be done by giving the wing a suitably curved cross-section held at an
angle to the airflow. However, this then leads to a relatively complex
engineering project, which implements the idea - and which itself has
many associated insights.
If machine intelligence is like flight, then probably the main
insights are that intelligence is an adaptive learning process, and
that such a learning process can be instantiated by a self-organising
system consisting of large numbers of relatively simple computing
elements.
This formulation places the key insights in the past - with the
implication that we are currently in the "extended engineering
project" phase of the project to construct machine intelligence.
However, flight might not be a useful problem to compare against. It
certainly turned out to be a much easier problem than machine
intelligence - and we solved it long ago. Maybe the problems which we
find harder to solve are characterised by complexity and large numbers
of interacting parts.
One issue is that there are many types of problem with differing levels
of serial insight required.
Another issue is that the problems we have solved may be systematically
different from the problems that remain before us - for example they
might be simpler and easier.
Machine intelligence so far
Looking at the progress of specific machine intelligence applications
usually leads to the idea that progress is a long and painful process.
Speech recognition is an obvious example of this. People have been
trying to write speech recognition programs for a very long time, and
progress has been slow, in a manner not entirely explicable by
hardware limitations.
Progress in machine intelligence in general leads to a similar
conclusion. People have been trying for a long time - and progress is
being made - but it is slow and gradual.
What is the case for the claim that a sudden breakthrough will be made? This
is sometimes known as the idea that there is a "magic bullet".
In my view, the best case for a magic bullet is based on the idea
that much of the brain is composed of large numbers of relatively
simple cells that form a neural network. Each cell is not that
complicated - and the brain's properties come from linking billions
of these cells together.
If we can understand roughly how the brain does what it does, then we
can duplicate it - and thus produce machine intelligence.
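The "many simple cells" picture above can be illustrated with a toy sketch - purely illustrative, not a model of any real neuroscience. Each hypothetical "cell" just sums weighted inputs and fires past a threshold; any interesting behaviour comes from the wiring between cells, not from any individual cell:

```python
def cell(inputs, weights, threshold=0.5):
    """One simple 'cell': a weighted sum of inputs plus a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def layer(inputs, weight_rows):
    """A layer of cells, each with its own row of connection weights."""
    return [cell(inputs, row) for row in weight_rows]

# Two tiny layers wired together; the complexity lives in the
# connections, not in the cells themselves.
hidden = layer([1, 0, 1], [[0.4, 0.1, 0.3], [0.9, 0.2, 0.1]])
output = layer(hidden, [[0.6, 0.6]])
print(output)  # [1]
```

The weights here are arbitrary; the point is only that linking many such trivial units produces behaviour that no single unit has on its own.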
There are three main sub-problems to this approach:
Figuring out how the various types of synapses work;
Figuring out how axon growth and atrophy operate;
Figuring out how the various types of neurons work.
These problems are not trivial, but it seems likely that we will
eventually solve them - and then we will probably be able to create
synthetic "grey matter" - functionally similar to the human variety.
This would indeed be an important step forwards - but it would not
solve the entire problem. There are other important architectural
features of the brain. In particular, there is attention, there is a
hierarchical memory structure - with short- and long-term memory, there
are emotions, there is the preprocessing circuitry associated with
each of the senses, and the post-processing circuitry associated with
the motor outputs.
However, maybe understanding all this material is not necessary.
Perhaps there is some kind of short-cut?
My own view is that it isn't possible to completely rule out these
"magic bullet" scenarios - but the history of attempts to build
machine intelligence does not seem to offer them much support.
Perhaps some day, future historians will look back, and wonder why we
didn't find the secret of intelligence earlier. However, it
doesn't currently look as though we are going to make machine
intelligence by discovering a few secrets - even if those secrets are
in some sense "out there". Rather, it looks as though it will be a
long hard slog.
The structure of the remaining problem
Lastly, can we gain anything from an examination of the structure of
the remaining problem?
Again, looking from the direction of the brain lets us see at least a
little of the problem's structure.
Of the remaining mysteries there, one big one is attention. Attention
represents a systematic shutting down of perceptual promotion into
consciousness in most areas of the brain. We can see why it's useful,
we can crudely fake the effect, but we don't yet know how best to
implement it.
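The "crudely fake the effect" option above can be sketched as a hard selection gate - a toy stand-in, not a claim about how brains actually do it. The most salient signal is promoted and everything else is shut down:

```python
def attend(signals):
    """Toy hard 'attention' gate: promote the single most salient
    signal for further processing and suppress all the others."""
    winner = signals.index(max(signals))
    return [s if i == winner else 0.0 for i, s in enumerate(signals)]

# Three competing perceptual signals; only the strongest survives.
print(attend([0.2, 1.5, 0.4]))  # [0.0, 1.5, 0.0]
```

Real attention presumably involves much softer, context-dependent gating than a bare winner-takes-all rule - which is part of why it remains a mystery.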
Another mystery is memory architecture. Humans have multiple stages of
short-, medium- and long-term memory, and things are shuffled into
long-term memory while we are asleep. We don't yet really understand
how best to implement such a storage medium.
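As a purely hypothetical sketch of the multi-stage architecture described above: a small short-term buffer whose contents are consolidated into long-term storage during "sleep", with unconsolidated items simply forgotten:

```python
class ToyMemory:
    """Hypothetical two-stage memory: a bounded short-term buffer
    consolidated into long-term storage during 'sleep'."""

    def __init__(self, short_term_capacity=3):
        self.capacity = short_term_capacity
        self.short_term = []
        self.long_term = []

    def observe(self, item):
        self.short_term.append(item)
        # The oldest unconsolidated item is simply forgotten.
        if len(self.short_term) > self.capacity:
            self.short_term.pop(0)

    def sleep(self):
        # Shuffle whatever is still in short-term memory to long-term.
        self.long_term.extend(self.short_term)
        self.short_term = []

m = ToyMemory()
for event in ["a", "b", "c", "d"]:
    m.observe(event)
m.sleep()
print(m.long_term)  # ['b', 'c', 'd'] - "a" was forgotten before sleep
```

The hard open questions - what to consolidate, when, and how to index it for later retrieval - are exactly the parts this sketch leaves out.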
Also, there are the emotions. Presumably we will not want synthetic
intelligent agents feeling anger, lust and jealousy. Indeed, most
emotions look like crude biochemical hacks that we could probably
improve on substantially. Some emotional states beyond pleasure and
pain may well be needed - but which ones?
Another possible issue is the wirehead problem. It seems possible
that - once intelligent agents learn how to modify themselves - they
will either get paranoid about self-modification and avoid it, or
else be over-enthusiastic, and mess themselves up. It seems likely
that they will eventually find the line between these extremes, but
it may not be easy to get them to this point initially.
Of these, only the last looks like it might be resistant to research
and development - and we probably don't need to crack that problem in
order to develop pretty sophisticated machine intelligences.
If the analysis here is correct, some of the implications would seem
to be as follows:
Machine intelligence will not suddenly enjoy a massive competitive
advantage over humans when it arrives. Instead, each generation of
machine intelligence will find itself competing with the products of
the previous generation. Machines will compete against last year's
machines - and humans will incrementally augment their own
intelligence - by preprocessing their senses with machines, and
post-processing their motor outputs with machines.
Machine intelligence is not terribly close at hand -
there still seems to be quite a way to go;
The time of arrival of machine intelligence should be
relatively predictable - based on what we already know about
progress so far;
Machine intelligence is likely to be produced by a large
organisation with considerable resources to devote to research and
development. Researchers working on the problem in their basements and
garages should probably look into getting hired if they actually want
to contribute.