Science fiction is often criticized by experts for unrealistically featuring far-future humans - a choice usually made to provide entertainment or to simplify casting.
For example, Shane Legg has said:
Eventually, I think human extinction will probably occur, and technology will likely play a part in this.
The extinction of humanity is often held up as a possible negative outcome of a badly-executed attempt to construct a superintelligent machine. The argument goes that unless the superintelligent machine is constructed so as to favour the tiny region of state space containing modern humans, it will optimise the universe for something else entirely - and consequently all humans will rapidly vanish.
Here, I will argue that preserving humanity is likely to be an instrumental value of a wide range of human-created intelligent machines. Consequently, humanity is not very likely to go extinct at the hands of our mind children.
The claim made here is that humans are fairly likely to be preserved - either as flesh-and-blood or in historical simulations. Humans are part of a major transition in evolution, so there's likely to be plenty of interest in us from our mind children.
The preservation of humanity is likely to be the result of the general instrumental need to preserve some of the past in order to build world models and make predictions. Historical data is needed to test the validity of world models used to make future predictions. You need to understand the past to predict the future - and you need to predict the future to understand the consequences of your actions.
Another important reason for preserving humans in particular is that
we are the immediate predecessor to superintelligence. We are currently witnessing the major evolutionary transition with the most immediate impact on the form that superintelligence might take. A superintelligence might well be very interested in the ways in which the current evolutionary transition could have gone - because this is the best clue it might have to suggest what aliens it might meet in the future.
Understanding the form of potential alien competitors could be very important - because aliens really do have the capacity to destroy everything we value - and a war with aliens could cause a lot of destruction.
Remembering the details of the transition to superintelligence could thus be of critical importance. It is plausible that this will strongly motivate our descendants to not forget the relevant history, and to recreate it, and many variations on it to understand the possible outcomes of the transition - and so gain insight into the threats they might face in the future.
- Large-scale destruction in a future war is a possibility. Maybe the struggle for existence will turn into a nasty scrap that has the side effect of destroying all humans. While theoretically possible, such a destructive war does not seem all that likely.
- Keeping humans in simulations will be costly; the future will be dominated by agents who don't pay such costs. Human sims would use only a tiny fraction of the available resources. However, even small costs can be selected against by evolution.
Running simulations involving hypothetical aliens necessarily means sacrificing some short-term benefits in pursuit of long-term goals. However, with sufficient foresight, our descendants should be able to manage that budget intelligently. Going into battle with aliens unprepared would be foolish. Factions that advocate not paying the associated R & D costs could be defeated while they are still small minorities, in much the same way that we cut fast-growing cancer cells out of living bodies today.
- A superintelligence might have a short planning horizon. While this is theoretically possible, a
superintelligence with a long planning horizon seems likely.
A short planning horizon is like a bug that causes agents to
take actions that benefit them in the short term, but harm
them in the long term. It is likely that efforts will be made
to ensure that superintelligent agents do not have such glaring defects.
- Aliens would also preserve whatever civilizations they encountered, on the same instrumental grounds. These would be assimilated - rather than obliterated - and their essential information would be preserved. So: neither we nor a superintelligence need fear assimilation by an alien race. This seems like a more dubious conclusion. Assimilation could still be very bad. Even if it might not result in complete extinction, it is still something that looks worth avoiding.
- Evidence may be discovered indicating that there are no advanced aliens out there. It's hard to imagine what such evidence would look like, so this also seems implausible.
- If intelligent aliens are common, interest in us would rapidly diminish. That is possible if the aliens were successfully assimilated by our descendants. However, the
evidence we have suggests that intelligent aliens are not common in our galaxy. If intelligent aliens do turn out to be common, we are probably in big trouble for other reasons.
Here we have argued that human extinction at the hands of
intelligent machines is not likely - since they have good reasons
to keep us around, even when we appear to be functionally redundant
and have no obvious role to play in the economy.
It doesn't follow that we don't have to bother with getting the motivation of intelligent machines right - since humans will presumably fare far better under machines that actively value them than under ones that merely preserve a record of them.
Nor does the argument given here offer any certainty on the issue. Humans could still go extinct at the hands of intelligent machines. The argument here just suggests that there are more reasons for keeping the humans around than may be initially obvious.
Tim Tyler