Open Source Machine Intelligence

Why open source machine intelligence?

Core infrastructure

Open source licences are a good match for core infrastructure - as demonstrated by Linux, Apache, MySQL, PHP, Python, Perl and Java.

Machine intelligence is likely to be core infrastructure. An inductive inference engine is likely to be packaged into shared libraries, to be reused in many different applications.

The match between open source and machine intelligence is widely recognised: there are already many open source machine intelligence projects.

However, there are several suggested reasons why open source may not be the best match for machine intelligence projects.

  • Funding

    One is a concern about funding. Google, Facebook and various hedge funds have well-funded commercial projects, which may have some chance of success at developing machine intelligence. These are commercial projects - so perhaps a commercial model is the way to go.

    This is a possibility. It is probably a relatively undesirable prospect for most of humanity, who may well wind up working for the benefit of an elite group of shareholders under this scenario. On the other hand, it is probably a relatively desirable outcome from the perspective of the makers, since this option gives them the most control.

  • Safety

    Another concern is safety. Essentially, it is claimed that an open source methodology is likely to result in the publication of dangerous information, which could be used by bad guys.

    I think this concern is less well founded - and mostly represents a misunderstanding of open source software.

Analysis

Shane Legg once said:

If I had to sum up my position it would be: maximise the safety of the first powerful AGI, because that's likely to be the one that matters.

This seems to be a reasonable sentiment - but it leaves open the question of which software-development methodology is best at doing that.

Probable outcomes

Rather than asking what is desirable, let's start by asking what is probable.

The inductive inference engine at the core of intelligent systems seems likely to be open source. That's mainly because it is core infrastructure: it will need to inter-operate with a very large number of systems. So, it will be built into libraries, be included in embedded systems and be widely spread around. If it isn't open source, the source code is likely to be widely available anyhow. You can't keep something like that secret.

However, I expect there to initially be a diverse range of layers added on top, to produce different types of intelligent machine. One for every type of robot needed - and then some. Some of these will be open source, others will be proprietary. It is here that pressure can be applied, and changes effected.
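To make this layering concrete, here is a toy sketch in Python. The names (InferenceEngine, VacuumBot) and the trivial prediction rule are invented for illustration - the point is only that the inductive core is a shared, reusable component, while the application-specific policy lives in a separate layer that may be open or proprietary.

    # Toy sketch of a shared inductive core with an application layer on top.
    # All names and the prediction rule are illustrative assumptions.

    class InferenceEngine:
        """Shared core: predicts the next symbol from past observations."""

        def __init__(self):
            self.history = []

        def observe(self, symbol):
            self.history.append(symbol)

        def predict(self):
            # Stand-in for real induction: guess the most frequent symbol so far.
            if not self.history:
                return None
            return max(set(self.history), key=self.history.count)


    class VacuumBot:
        """One of many possible layers built on top of the shared core."""

        def __init__(self, engine):
            self.engine = engine

        def step(self, sensor_reading):
            self.engine.observe(sensor_reading)
            # The application-specific policy lives here, not in the core.
            return "clean" if self.engine.predict() == "dirt" else "move"


    bot = VacuumBot(InferenceEngine())
    for reading in ["dirt", "dirt", "clear"]:
        print(bot.step(reading))

The same InferenceEngine could equally sit behind a chat assistant or a trading system; only the thin outer layer changes - and that layer is where open and proprietary variants would differ.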

Conventional wisdom

The conventional wisdom about the effect of open source on security is that it usually helps. For example, here is Bruce Schneier:

As a cryptography and computer security expert, I have never understood the current fuss about the open source software movement. In the cryptography world, we consider open source necessary for good security; we have for decades. Public security is always more secure than proprietary security. It's true for cryptographic algorithms, security protocols, and security source code. For us, open source isn't just a business model; it's smart engineering practice.

He goes on to say:

The exact same reasoning leads any smart security engineer to demand open source code for anything related to security. Let's review: Security has nothing to do with functionality. Therefore, no amount of beta testing can ever uncover a security flaw. The only way to find security flaws in a piece of code -- such as in a cryptographic algorithm or security protocol -- is to evaluate it. This is true for all code, whether it is open source or proprietary. And you can't just have anyone evaluate the code, you need experts in security software evaluating the code. You need them evaluating it multiple times and from different angles, over the course of years. It's possible to hire this kind of expertise, but it is much cheaper and more effective to let the community at large do this. And the best way to make that happen is to publish the source code.

I endorse this perspective.

The idea that you should not publish code because bad people might do bad things with it is usually mistaken - because the good things that result generally outweigh the bad.

For example, operating systems are powerful tools - and they are mostly morally neutral - they help bad people in much the same way as they help good ones. However, we should not refrain from publishing their source code - for fear that bad people will use the code to perform bad deeds. Instead, we just accept that such abuse will inevitably take place - and seek to minimise the disruption it causes in other ways.

Good guys vs bad guys model

The agents in the biggest rush may have a good chance of inventing intelligent machines first - and so there may be a trade-off between speed and safety.

The problem can be characterised as involving a range of actors who vary in their skills, abilities and levels of altruism. To simplify, let's represent this as a one-dimensional "good"-"bad" spectrum.

To have the best chance of a successful outcome, we want the "good" agents to be successful, and for the "bad" agents to fail.

Often "good" agents are better at cooperating - while "bad" agents are better able to make use of the thermodynamic advantage of destruction over creation.

If we assume that the "good" agents are more likely to cooperate and act unselfishly than the "bad" agents are, a strategy of cooperation may help the "good" agents to win out. If they can identify each other, trade with each other and cooperate, they will have an advantage over the more selfish "bad" agents.

To do this, they need to identify one another and preferentially trade and cooperate with each other. I expect one of the main currencies used will be knowledge about their subject area, and one of the most reliable and difficult-to-fake ways of identifying "good" agents is likely to be their propensity for sharing their knowledge in a cooperative manner.

This seems likely to create synergy among the projects of the "good" agents. If the "bad" agents can be identified, and excluded or punished, so much the better.
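This dynamic can be illustrated with a toy simulation - a sketch under strong simplifying assumptions (uniform random progress, perfect identification of cooperators, costless sharing), not a model of real research dynamics.

    # Toy model: "good" agents pool discoveries; "bad" agents work alone.
    # All parameters are illustrative assumptions.
    import random

    random.seed(0)

    class Agent:
        def __init__(self, good):
            self.good = good
            self.knowledge = 0.0

    def simulate(n_good=5, n_bad=5, rounds=50):
        agents = [Agent(True) for _ in range(n_good)] + \
                 [Agent(False) for _ in range(n_bad)]
        for _ in range(rounds):
            for a in agents:
                a.knowledge += random.random()   # private progress each round
            cooperators = [a for a in agents if a.good]
            best = max(a.knowledge for a in cooperators)
            for a in cooperators:
                a.knowledge = best               # sharing lifts every cooperator
        mean = lambda xs: sum(x.knowledge for x in xs) / len(xs)
        return (mean([a for a in agents if a.good]),
                mean([a for a in agents if not a.good]))

    good_avg, bad_avg = simulate()
    print(f"good: {good_avg:.1f}, bad: {bad_avg:.1f}")   # cooperators pull ahead

In this toy world, each cooperator ends every round with the best result found by any of them, so the pooled group compounds its knowledge faster than isolated agents of equal individual ability.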

This story is similar to the way that good guys have prospered since the rise of civilisation. As Richard Dawkins wrote in "The Selfish Gene", "Nice Guys Finish First". Just so. Most of the time, anyway.

For this kind of strategy to work most effectively, it is best that all those involved are clear from the beginning that cooperating with each other is a good thing - and that selfishness and secrecy will be seen as negative signs.

Enjoy,


Tim Tyler | Contact | http://matchingpennies.com/