Universal Instrumental Values

Hi! I'm Tim Tyler, and this is a video about Universal Instrumental Values.

First, though, a brief introduction to some terminology:

Instrumental values

Instrumental values are things that are not particularly intrinsically valuable, but nonetheless have a kind of intermediate value - since they contribute towards achieving ultimate goals.

One example of something with instrumental value is money. Money is just bits of paper. It has minimal intrinsic value - its main value arises from what you can do with it in a social environment.

Universal Instrumental Values

It is hypothesised that there is a class of instrumental values that are widespread amongst goal-seeking agents.

Here we will call these Universal Instrumental Values. They are things that most goal-seeking agents are likely to assign instrumental value to.

List of the values

The Universal Instrumental Values appear prominently to include control over spacetime and over mass/energy - resulting in power and security.

Technological development appears to be a Universal Instrumental Value - at least among sufficiently advanced agents. The drive to inhibit or eliminate potential competitors seems pretty universal too.

Information gathering, truth-seeking, avoiding self-deception, and being rational are other candidates. Absent alchemy, each chemical element might count as its own form of Universal Instrumental Value. This list of values could easily be extended further.


The term Universal Instrumental Value comes from Roko. However, these values have also been referred to by a variety of other names:

  • Steve Omohundro refers to them as "Basic AI Drives";
  • Richard Hollerith has a closely-related concept, which he calls "Goal System Zero".

I think Roko has the best terminology - though his "universal" term seems rather grand to me. These values are probably not literally universal in most senses of the word. However, here I will adopt Roko's terminology.

Roko's Universal Instrumental Values

Roko introduced his version of the idea as follows:

It may be true that for a very wide class of notions of intrinsic value, you always end up with the same notion of instrumental value.

Let me call this hypothesis the Universal Instrumental Value Hypothesis.
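Roko's hypothesis can be illustrated with a toy planning sketch. This is my own illustration, not from Roko: the two-action world, the resource multiplier, and the names `gather_resources` and `pursue_goal` are all assumptions made for the example. The point is just that agents with entirely different notions of intrinsic value - here, different terminal rewards - select the same instrumentally-valuable opening moves.

```python
# Toy illustration (not from Roko's writings): agents with different
# terminal values facing the same tiny planning problem. Gathering
# resources yields no terminal reward itself, but amplifies whatever
# the agent does later - so it acquires instrumental value.
from itertools import product

def achieved(terminal_reward, plan):
    """Total terminal reward yielded by a fixed-length plan."""
    resources = 1.0
    total = 0.0
    for step in plan:
        if step == "gather_resources":
            resources *= 3  # instrumental: amplifies later steps
        elif step == "pursue_goal":
            total += terminal_reward * resources
    return total

def best_plan(terminal_reward, steps=3):
    """Exhaustive search over all plans of the given length."""
    actions = ["gather_resources", "pursue_goal"]
    return max(product(actions, repeat=steps),
               key=lambda plan: achieved(terminal_reward, plan))

# Agents valuing completely different things (say, paperclips vs.
# gold) converge on the same resource-first plan:
print(best_plan(terminal_reward=1.0))
print(best_plan(terminal_reward=5.0))
```

Whatever the terminal reward, the optimal plan opens with resource gathering - a minimal instance of the convergence the hypothesis describes.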

Steve Omohundro

Steve Omohundro has written some nice papers on this topic. His list of Universal Instrumental Values included things like protecting your goals from modification, clarifying your goals to yourself, and avoiding behaviour analogous to drug-taking. He emphasised that these were things that a broad class of intelligent agents would want - unless opposing desires were explicitly programmed into them.

Richard Hollerith

Richard Hollerith claims to have first come across the idea way back in 1992!

He proposes that agents that do not employ much in the way of temporal discounting are likely to pursue Universal Instrumental Values almost exclusively for extended periods of time - before feeling secure enough to go on to do other things.
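Hollerith's point about temporal discounting can be sketched with a toy model. The "invest"/"consume" actions, the growth rate, and the horizon are my own assumptions, not Hollerith's: the sketch just shows that an agent which discounts the future heavily consumes immediately, while a patient agent spends most of its horizon pursuing the instrumental value of resource growth before cashing in.

```python
# Toy sketch (assumptions mine): at each of T steps an agent either
# "invests" (grows its resources - an instrumental value) or
# "consumes" (converts current resources into terminal reward).
# Reward received t steps in the future is discounted by gamma ** t.
from itertools import product

def plan_value(plan, gamma, growth=1.5):
    """Discounted terminal reward of a plan."""
    resources, total = 1.0, 0.0
    for t, action in enumerate(plan):
        if action == "invest":
            resources *= growth
        else:  # "consume"
            total += (gamma ** t) * resources
    return total

def leading_invest_steps(gamma, T=8):
    """How long the optimal plan pursues investment before consuming."""
    best = max(product(["invest", "consume"], repeat=T),
               key=lambda plan: plan_value(plan, gamma))
    count = 0
    for action in best:
        if action != "invest":
            break
        count += 1
    return count

# A heavy discounter consumes from the start; a patient agent
# pursues the instrumental value for most of its horizon first.
print(leading_invest_steps(gamma=0.5))   # heavy discounting
print(leading_invest_steps(gamma=0.99))  # little discounting
```

The less the agent discounts the future, the longer the optimal plan defers its terminal values in favour of instrumental ones - the pattern Hollerith describes.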

Another stimulating idea from Hollerith is to classify deviations from Universal Instrumental Values as being moral contaminants.

Handicapped superintelligences

I once made a popular presentation on the subject of moral contaminants - calling powerful agents suffering from them handicapped superintelligences.

A handicapped superintelligence would maximise something other than entropy production - perhaps gold atoms, or whatever else it valued. It might thus leave behind a low-entropy state - such as a big pile of gold atoms.

God's Utility Function

The idea of Universal Instrumental Values also links up with the - alas, still little-known - idea that living systems maximise entropy production. Agents pursuing Universal Instrumental Values would leave nothing behind them. They would care for nothing - except for the future representation of their own heritable information - and in doing so would locally maximise the increase in entropy. I refer to this idea as God's Utility Function.

The idea that we might create a handicapped superintelligence - one which leaves behind a low-entropy state and fails to maximise entropy production - runs counter to the principle of Maximum Entropy Production (MEP).


The idea of Universal Instrumental Values seems to be an important one to me. It provides an elegant, naturalistic basis for moral behaviour which is independent of the details of today's biological systems. This is something which moral philosophers seem likely to be interested in.

It gives us a guide relating to what machine intelligences are likely to want. It also gives us a guide to what values we should pursue if we are interested in maximising the chances of the long-term survival of our civilisation.



Universal Instrumental Values - Roko
The Basic AI Drives - Steve Omohundro
The nature of self improving artificial intelligence - Steve Omohundro
Goal System Zero - Richard Hollerith
Competitive Analysis of Goal System Zero - Richard Hollerith
Some Open Questions about Goal System Zero - Richard Hollerith
A Kinder, Gentler Richard Hollerith - Richard Hollerith
Handicapped superintelligence - Tim Tyler
God's Utility Function - Tim Tyler
Bright Light - Tim Tyler
Invisible Frameworks - critique by Eliezer Yudkowsky
Convergence Theories of Meta-Ethics - Perplexed - February 2011.
Instrumental value - defined and explained
Instrumental value - wikipedia

Tim Tyler | Contact | http://matchingpennies.com/