Hi! I'm Tim Tyler, and this is a video about the scope and significance of inductive inference.
OK. So, I've done some research, and got some feedback from other people working in nearby regions, and it seems as though most of the main barriers to understanding that I encounter arise in basic areas associated with the scope and significance of compression, forecasting and inductive inference.
So, briefly, a few more words about why inductive inference is important, and how it relates to other concepts in machine intelligence.
What is inductive inference?
First of all: what is inductive inference? It is the ability that allows agents to use past sense data to predict future sense data. You've probably tried intelligence test questions that give you a sequence of numbers, and ask you what comes next in the sequence. Those tests attempt to measure directly the testee's ability to perform inductive inference.
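To make the idea concrete, here is a toy sketch of my own (not from the video): a tiny program that "induces" the next term of a number sequence by repeatedly taking differences until they become constant - a simple rule that solves many of those intelligence-test questions.

```python
def next_term(seq):
    """Predict the next term of a sequence generated by a polynomial rule."""
    # Build a difference table: stop when a row is constant.
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # Extrapolate: sum the last entry of each row, bottom to top.
    total = 0
    for row in reversed(rows):
        total += row[-1]
    return total

print(next_term([1, 4, 9, 16, 25]))  # squares: predicts 36
print(next_term([2, 4, 6, 8]))       # even numbers: predicts 10
```

This only handles polynomial rules, of course - real inductive inference is a search over a far richer space of hypotheses - but it shows the shape of the problem: fit a compact rule to past data, then run the rule forward.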
Inductive inference is a key component of machine intelligence projects. If you think of a simple cybernetic diagram of a machine intelligence, the biggest component is the one which performs inductive inference on data input streams. The next biggest is the evaluation component, and after that come various bits and pieces - tree pruning, habit formation, action generation - and so on.
By my estimate, the human brain spends about 80% of its time performing inductive inference.
Other claimed fundamental principles
There have been a lot of other things besides inductive inference that have been claimed to be fundamental principles underlying intelligent behaviour.
Neural network enthusiasts typically say that pattern recognition is a central skill of intelligent agents.
Hofstadter seems to think analogy formation is a key skill.
Jeff Hawkins says predicting the future is key.
Others say intelligent agents are Powerful Optimisation Processes, that optimisation is important and that we should measure intelligence as the ability to solve optimisation problems.
So, who are we supposed to believe?
Those - like Jeff Hawkins - who see prediction as a key factor in intelligence are on pretty much the same page as those who think inductive inference is important. Prediction and induction are close bedfellows. The hierarchical structure he proposes for preprocessing sense data is another of his ideas that fits in well with inductive inference: the sensory hierarchy he describes would yield a range of partly-preprocessed sensory input streams to which inductive inference could then be applied.
Pattern completion problems can usually be reformulated as induction problems. If you imagine seeing a pattern with its incomplete section obscured, and then the obscured part comes into view, the associated pattern completion problem is transformed into a problem involving determining what comes next - which can then be solved by inductive inference. So: pattern completion and inductive inference problems are closely equivalent.
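Here is a small illustration of that reformulation (my own toy example, not from the video): a repeating pattern with one slot obscured. We rotate the pattern so the obscured slot becomes the final position, turning "fill the gap" into "what comes next", and then induce the answer by finding the shortest period consistent with the visible data.

```python
def complete(pattern, gap):
    """Fill pattern[gap] (the obscured slot) by treating it as the next term."""
    # Rotate the cyclic pattern so the gap becomes the position after the end.
    rotated = pattern[gap + 1:] + pattern[:gap]
    # Induce the shortest period that explains all visible elements.
    for period in range(1, len(rotated) + 1):
        if all(rotated[i] == rotated[i % period] for i in range(len(rotated))):
            # Predict "what comes next" under that period.
            return rotated[len(rotated) % period]

print(complete(['a', 'b', 'c', 'a', None, 'c', 'a', 'b', 'c'], 4))  # -> 'b'
```

The rotation step is doing the work described above: once the unknown element is "what comes next", an ordinary induction routine finishes the job.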
As for analogy formation, that's a key part of developing a compressed model of the world. Rather than instantiating multiple similar representations of features of the world, it makes sense to break things down in a hierarchical manner, with base classes and derived instances. Determining which features go into the base class is the key problem that analogy formation deals with. So: analogy formation turns out to be a key skill for generating compact models of the world, which can then be used to make predictions.
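A minimal sketch of that compression view (my own illustration; the concepts and features are invented): factor the features shared by several concepts into a common base, leaving only the differences in each derived description.

```python
def factor_base(concepts):
    """Split feature dicts into a shared base plus per-concept differences."""
    # The base holds the feature/value pairs common to every concept.
    items = [set(c.items()) for c in concepts.values()]
    base = dict(set.intersection(*items))
    # Each derived description keeps only what differs from the base.
    derived = {name: {k: v for k, v in c.items() if base.get(k) != v}
               for name, c in concepts.items()}
    return base, derived

animals = {
    "sparrow": {"legs": 2, "flies": True,  "covering": "feathers"},
    "penguin": {"legs": 2, "flies": False, "covering": "feathers"},
}
base, derived = factor_base(animals)
print(base)     # the shared "bird" base: legs and feathers
print(derived)  # only the exceptional feature (flies) remains per concept
```

The total description is shorter than listing every concept in full - which is exactly the sense in which spotting the analogy compresses the model.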
Intelligent agents are pretty much defined as being goal-directed optimisation processes - at least to the extent that they act intelligently. Inductive inference contains a type of optimisation - the quest to find a short program that describes the observed data so far. However, that is not a very general form of optimisation.
So: why be interested in inductive inference? Well, we have a useful model of temporal optimisation processes in the form of expected utility maximisers. Non-temporal optimisation processes can be shoe-horned into this model as well, by embedding them in space-time. Expected utility maximisers rely on inductive inference as a subcomponent. They use it to determine the expected consequences of possible actions, to allow their utility to be calculated. Inductive inference is not the only component of such systems. There are also evaluation functions and tree-pruning, and other things. However, inductive inference represents an important sub-component.
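The relationship can be sketched in a few lines (a hedged toy of my own; the actions, outcomes and utilities are invented): an expected utility maximiser that uses a trivial inductive predictor as its subcomponent to estimate the consequences of each candidate action.

```python
def induce_outcome_probs(history):
    """Inductive subcomponent: estimate outcome probabilities from past data."""
    probs = {}
    for outcome in history:
        probs[outcome] = probs.get(outcome, 0) + 1 / len(history)
    return probs

def choose_action(actions, histories, utility):
    """Pick the action whose induced outcome distribution maximises expected utility."""
    def expected_utility(action):
        probs = induce_outcome_probs(histories[action])
        return sum(p * utility(o) for o, p in probs.items())
    return max(actions, key=expected_utility)

# Hypothetical past observations of each action's outcomes:
histories = {"left": ["food", "food", "shock"], "right": ["shock", "shock", "food"]}
utility = {"food": 1.0, "shock": -1.0}.get
print(choose_action(["left", "right"], histories, utility))  # -> left
```

Here the frequency count stands in for a real inductive inference engine, and the tree-pruning and evaluation machinery mentioned above is absent - but the structure is the one described: induction supplies the predicted consequences, and the utility calculation sits on top of it.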
So, what about deduction? Surely, you need deduction - as well as induction - to be smart. Well, deductive problems are usually regarded as being relatively trivial compared to inductive ones. A lot of deduction problems can be solved by visualising them and imagining what would happen next. A bunch of others can be solved by inductively learning the rules of deduction, and then applying them. If you have an expert at deduction available - Sherlock Holmes, for example - you can imagine what answer he would give to the problem. Doing that transforms deductive problems into inductive ones. In short, deduction doesn't seem terribly challenging to master if you can already perform inductive inference.
I hope that helps to put inductive inference in context, with respect to some other ideas about machine intelligence.
Inductive inference is not as general a concept as general intelligence, but it is a central concept in machine intelligence. Inductive inference engines have an advantage in being easier to design, test and build than general-purpose intelligences.