
Centre for Cognitive Science (COGS)

Professor Aaron Sloman: A New Approach to Philosophy of Mathematics

 

Aaron Sloman, Professor of Computer Science, Birmingham 
December 9th, 2008

Most current AI learning systems (e.g. Bayesian learning systems) merely discover empirical (e.g. statistical) generalisations. In contrast, a young human child eventually starts noticing (at first implicitly) that some things first learnt empirically are in fact non-contingent and can be derived from a theory about the environment.

Example -- adapted from Kant: a child may learn that if you walk along a row of houses, from house X to house Y, you pass them in a certain order, and then if you go back along the same route from Y to X, you pass the same houses but in reverse order. At first this is an empirical discovery, but a child can come to see that it MUST be so.
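To make the transition concrete, here is a minimal sketch in Python (the route representation and names are my own illustration, not part of the talk): a learner can first test the regularity on many particular routes, and then notice that it follows from what "walking the same route back" means, so that no further observation could refute it.

    import random

    def outward(route):
        # Houses passed walking from X to Y along the route, in order.
        return list(route)

    def return_trip(route):
        # Houses passed walking back from Y to X along the same route.
        return list(reversed(route))

    # Empirical phase: test many particular routes; no counterexample appears.
    for _ in range(1000):
        route = ["house_%d" % i for i in range(random.randint(2, 10))]
        random.shuffle(route)
        assert return_trip(route) == outward(route)[::-1]

    # Non-empirical phase: the regularity is not a statistical generalisation.
    # Reversing a finite sequence necessarily reverses the order of its
    # elements, so the check above could not have failed for any route.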

How? There are many more examples to do with counting, with shapes and processes, with topological relations. I am trying to produce a collection of previously unnoticed 'toddler theorems' that could start off as empirical discoveries and then be transformed into recognised necessities. For example:

You can't get your feet into your slippers by sliding them around on the floor.

Moving towards an open door makes more information available about the room beyond.
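The open-door example also admits a simple geometric reading (my illustration, not part of the talk, with assumed dimensions): through a doorway of fixed width, both the angle the doorway subtends at the eye and the width of the far wall that can be seen grow as the viewer's distance to the door shrinks.

    import math

    DOOR_WIDTH = 0.9   # metres (assumed)
    ROOM_DEPTH = 4.0   # metres from the doorway to the far wall (assumed)

    def visible_angle(d):
        # Angle subtended by the doorway at the eye of a viewer standing
        # centred, d metres in front of it.
        return 2 * math.atan(DOOR_WIDTH / (2 * d))

    def visible_wall_width(d):
        # Width of the strip of the far wall visible through the doorway,
        # by similar triangles on the rays grazing the door frame.
        return DOOR_WIDTH * (d + ROOM_DEPTH) / d

    for d in [4.0, 2.0, 1.0, 0.5]:
        print(d, round(math.degrees(visible_angle(d)), 1),
              round(visible_wall_width(d), 2))
    # Both quantities increase monotonically as d decreases: moving towards
    # the open door makes more of the room beyond visible.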

Making the transition from empirical discovery to perceiving necessity requires the ability to notice both the need and the opportunity to construct theories about the form and content of the environment. Such theories usually go far beyond the empirical data, but they make use of very general implicit assumptions about kinds of possible environments (assumptions produced in a small subset of species by biological evolution).

This raises questions about the forms of representation available to a young child, or robot, and the kind of information-processing architecture that can make these discoveries. I think this is deeply connected with unsolved problems in machine vision, especially the ability to see the possibility of processes that are not actually occurring, and to see many kinds of affordance (not only action affordances but also epistemic affordances and deliberative affordances).

As far as I know, this phenomenon has been largely ignored by developmental psychologists (apart from Piaget?) and also by AI researchers. I am collaborating with biologists who are interested in whether some non-human animals (corvids, primates) may also have such competences.

There are challenges here for roboticists, developmental psychologists, biologists, evolutionary theorists, and philosophers of mathematics.