Maintaining Generality

Many would like to construct a general-purpose machine intelligence.

That necessarily requires a general-purpose stream compression algorithm - to dynamically update the agent's world view.

At first glance, a matching pennies tournament does not seem to be a very good fit for this requirement - since agents must interact with an environment containing adversaries whose aims include making the sensory inputs of the agents they face closely approximate unpredictable random noise.

One proposed solution to this is a diverse array of house robots.

However, a fixed array of house robots seems undesirable - since that tends to favour a program which has a counter-strategy for each of the residents, and which can switch between them once it has figured out which one it is dealing with.

To test forecasting, a stochastic stream exhibiting varying levels of unpredictability and compressibility seems desirable.
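As a rough illustration, such a stream could be produced by mixing a predictable source with random noise, with a single parameter controlling how much noise is injected. The make_stream function and its noise_level parameter below are hypothetical choices made for the sake of the sketch, not a proposal for an actual tournament stream:

```python
import random

def make_stream(length, noise_level, seed=0):
    """Generate a byte stream mixing a predictable pattern with random noise.

    noise_level = 0.0 gives a fully repetitive (highly compressible) stream;
    noise_level = 1.0 gives pure random bytes (essentially incompressible).
    """
    rng = random.Random(seed)
    pattern = b"the quick brown fox jumps over the lazy dog "
    out = bytearray()
    for i in range(length):
        if rng.random() < noise_level:
            out.append(rng.randrange(256))         # unpredictable byte
        else:
            out.append(pattern[i % len(pattern)])  # predictable byte
    return bytes(out)
```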

The idea would be to feed the stream to different compressors - to act as a test of their modelling abilities.
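As a minimal sketch of that idea, off-the-shelf compressors from the Python standard library could stand in for the agents being tested - the smaller the compressed output, the better the compressor has modelled the stream. The score_compressors helper below is illustrative only:

```python
import bz2
import lzma
import zlib

def score_compressors(stream):
    """Return the compressed size of the stream under several compressors."""
    compressors = {
        "zlib": lambda data: zlib.compress(data, 9),
        "bz2":  lambda data: bz2.compress(data, 9),
        "lzma": lambda data: lzma.compress(data),
    }
    return {name: len(compress(stream)) for name, compress in compressors.items()}

# Usage (with make_stream from the earlier sketch): a less noisy stream
# should compress to fewer bytes across the board.
# print(score_compressors(make_stream(10000, noise_level=0.1)))
# print(score_compressors(make_stream(10000, noise_level=0.9)))
```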

The stream should be compressible - and the compression difficulty should be variable - so agents with different levels of intelligence will have varying levels of success in compressing the stream.

Another criterion is that it should be possible to detect differences in intelligence using as little test material as reasonably possible - so intelligence can be measured quickly.
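One crude way to check this criterion would be to score the candidate compressors on progressively longer prefixes of the stream, and note how much material is needed before their ranking stops changing. The helpers below assume the score_compressors function from the earlier sketch, and the prefix lengths are arbitrary:

```python
def ranking_at(stream, length):
    """Rank the compressors by compressed size on a prefix of the stream."""
    scores = score_compressors(stream[:length])  # from the earlier sketch
    return sorted(scores, key=scores.get)

def material_needed(stream, prefix_lengths=(250, 500, 1000, 2000, 4000, 8000)):
    """Return the shortest prefix length whose ranking already matches
    the ranking obtained on the full stream."""
    final_ranking = ranking_at(stream, len(stream))
    for n in prefix_lengths:
        if ranking_at(stream, n) == final_ranking:
            return n
    return len(stream)
```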

I am aware that a problem similar to this one has been treated by Legg and Hutter. They have fed random data into simple finite state machines - to produce a range of streams with varying levels of compressibility.
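A sketch in the spirit of that approach is shown below - a randomly constructed finite state machine is driven with random input bits, and its output bits form the stream. The machine size and the Mealy-style output rule are arbitrary choices made here, not details taken from Legg and Hutter's work:

```python
import random

def fsm_stream(length, n_states=4, seed=0):
    """Drive a randomly constructed finite state machine with random input
    bits and collect its output bits as a stream.  Different machines give
    streams with different amounts of exploitable structure."""
    rng = random.Random(seed)
    # Random transition and output tables: (state, input bit) -> value.
    transition = {(s, b): rng.randrange(n_states)
                  for s in range(n_states) for b in (0, 1)}
    output = {(s, b): rng.randrange(2)
              for s in range(n_states) for b in (0, 1)}
    state = 0
    bits = []
    for _ in range(length):
        bit = rng.randrange(2)            # the random input data
        bits.append(output[(state, bit)])
        state = transition[(state, bit)]
    return bits
```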

This approach apparently produces outputs which are quite biased towards simplicity - which seems rather undesirable. It doesn't really address the issue of how such systems could be combined into a single stream, either.

Matt Mahoney has looked at a similar problem in some detail, for his compression benchmarks.

The area appears to be in need of further research. However, employing strategies like this seems likely to allow a pressure towards handling generality to be maintained within the context of a matching pennies tournament.

Links

Generic Compression Benchmark - Matt Mahoney


Tim Tyler | Contact | http://matchingpennies.com/