Hi. I'm Tim Tyler, and this is a video about the possibility of humans merging with machines.
It seems to me that many of today's concerns about the possibility of a machine takeover
are dominated by a kind of "us-and-them" thinking. The alignment problem, the control problem, the containment problem, and the "species dominance" problem - these framings all picture humans and
machines as distinct tribes and then propose that we find ways to reconcile their differences.
I am concerned that these framings all shut down other possibilities without
giving them due consideration. I am also concerned that associated solutions
will not be effective. Humans attempting to control or contain superintelligent
machines for an extended period of time seems like a risky and unstable strategy
that could easily fail or backfire. I think it is important to note that those
are not the only options. We could merge with the machines instead.
It is true that - today - humans and machines are largely distinct. However,
that need not always be true. Over the years, various thinkers have proposed
that humans merge with machines:
Ray Kurzweil is a proponent of merging. He wrote: "Earth’s technology-creating species - us - will merge with our own technology."
Elon Musk has also proposed merging. Specifically, Musk said that "humans must merge
with machines to overcome the 'existential threat' of artificial intelligence".
Yuval Noah Harari has forecast a possible merger between humans and machines. He has said:
"Humans and machines might merge so completely that humans will not be able to
survive at all if they are disconnected from the network."
Sam Altman is another merge proponent. He has said: "My guess is that we can either
be the biological bootloader for digital intelligence and then fade into an
evolutionary tree branch, or we can figure out what a successful merge looks like."
Symbiology suggests that our goal should be a merger. The world is full
of cases where symbiotic relationships have come unstuck. The parties
to the symbiosis can eat each other. They can become separated. They can
come to a sticky end at the hands of predators or parasites. The missing
partners sometimes leave behind phenotypic scars. They have been called
the "ghosts of evolution". These relationship breakdowns illustrate the
fact that symbiotic relationships can be unstable. If you want to make
sure that your wagons stay hitched together, then you want an obligate
symbiosis - or a merger.
We have a symbiotic relationship with the mitochondria inside our cells. Those were
once independent organisms, but now they are in an obligate symbiotic relationship
with the rest of us. It is not easy to imagine that relationship breaking down.
However, relationships between different parties often involve conflict. For example,
mitochondria are passed down the female line and have no interest in producing male bodies.
This leads to conflict with the nuclear genes - most of which lack this preference.
Eliminating such conflicts by refactoring these intimate relationships may become
easier in the future. For these reasons, a merger seems preferable to an obligate
symbiosis.
Framings of the issue such as "Safe AI" and "Friendly AI" seem to assume that
human nature is fixed and the problem is how to make machines fit in with it.
However, if the situation is pictured in terms of symbiology - as a mutualism -
then it becomes more obvious that accommodations could be made on both sides.
If you go into a negotiation with a "no compromise" attitude, then you might fail
completely to make a deal. A bit of flexibility may be a better approach.
There have been a variety of critical responses to the merger proposal.
Some say it's not possible. Others say there's not enough time. Some say
merging with machines would still be the end of the human race. Some say
humans won't want it - or that machines won't want it. Some have said that
merging is for microbes - with the implication that long and complex
developmental trajectories are too difficult to mess with. Some say
that relinquishment would be safer. Or that we should engineer humans
and forget about intelligent machines.
I'm not going to address all these issues here. Instead, I just want to
convince you that a merger is a possibility and it does not deserve to
be prematurely dismissed. My main pitch for considering a merger is that
it enlarges the space of possible solutions and is not obviously mistaken.
Ruling a merger out on a priori grounds seems very common to me.
Many people seem to think that if we don't solve the alignment problem,
then humans are toast. That's "us-and-them" thinking - of a kind that
humans delight in. "If you're not for us, you're against us" - as they say.
Here, I would mostly just like to say that this is not true - and
that there is another path. To me, a merger looks like a better and more
satisfactory solution than trying to keep humans and machines separate but
somehow "aligned". The "alignment problem" is a bad framing that prematurely
dismisses a range of viable solutions for no good reason. It seems to be
promoted mostly by those who stand to benefit from exaggerating the risks.
Full alignment also looks very challenging: not even identical twins have
fully aligned goals. The most viable endpoints are not considered, and the
proposed plan looks dangerous and unstable.
While goal alignment is good, it's also a matter of degree - and it is only
the first step. It is good to know where you are going, and I propose merging.
Merging with machines and merging with each other. Let's resolve our conflicts.
Let's merge!