University of Sussex

Centre for Cognitive Science (COGS)

2005-2006 seminars

Autumn Term 2006

  • Week 1 (3 October): Bill Bigge (Centre for Computational Neuroscience and Robotics, University of Sussex): Programmable Springs: Developing Programmable Compliance Actuators for Autonomous Robots
  • Week 2 (10 October): Professor Robin Dunbar (Evolutionary Psychology Research Group, University of Liverpool): The Evolution of the Social Brain
  • Week 3 (17 October): No Meeting
  • Week 4 (24 October): Eric Olson (Department of Philosophy, University of Sheffield): What's Intelligent in Artificial Intelligence?
  • Week 5 (31 October): Seth Bullock (School of Electronics and Computer Science, University of Southampton): Lies, Damned Lies, and Simulation: The Lure of Artificial Worlds
  • Week 6 (7 November): Professor Annette Karmiloff-Smith (Developmental Neurocognition Lab, Centre for Brain and Cognitive Development, Birkbeck, University of London): Modules, Genes and Evolution: What have we learnt from developmental disorders?
  • Week 7 (14 November): Beena Khurana (Department of Psychology, University of Sussex): Temporal Order of Strokes Primes Letter Recognition
  • Week 7 Special Session (16 November): Professor Harold Cohen (Department of Visual Arts, University of California, San Diego): AARON, Colorist: From Expert System to Expert
  • Week 8 (21 November): Michael Schmitz (Department of Philosophy, University of Konstanz; UCL): Agency: Experiencing Causality
  • Week 9 (28 November): Sarah Sawyer (Department of Philosophy, University of Sussex): There is No Viable Notion of Narrow Content
  • Week 10 (5 December): **CANCELLED** Professor John Fox (Cancer Research UK and University of Oxford): Rational medical agents: from theory to engineering



Week 1

Tuesday 3rd of October, 2006
Speaker: Bill Bigge (Centre for Computational Neuroscience and Robotics, University of Sussex)

Programmable Springs: Developing Programmable Compliance Actuators for Autonomous Robots

Conventional approaches to actuation and motion control are designed to eliminate any perturbations from the system and provide smooth precise control of speed or position and a high level of stiffness. By contrast, emerging approaches to autonomous robotics rely on exploiting the environment to aid motion. In passive dynamic systems motion is modulated by interactions between the mechanism and the environment; instead of forcing the actuators to follow pre-planned trajectories the environment is used to guide motion.

Developing real robots that can exploit these dynamics requires the use of actuators that can react to the environment, exhibiting behaviour that varies from high stiffness to complete compliance or zero impedance. I will outline our design for an electric actuator, called a programmable spring, which can be configured to emulate complex spring, damping and zero impedance systems within its range of movement and mechanical limits.

This design forms the basis for a prototype actuator intended as a cost effective 'off the shelf' component for robotics development. Our design includes a sophisticated high level control architecture that allows the actuator to exhibit complex multimodal behaviour whilst offering the user a high degree of control.
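
To make the idea of programmable compliance concrete, here is a minimal sketch (not Bigge's actual controller; the simple spring-damper law and the parameter values are illustrative assumptions) of how one motor command can emulate behaviours ranging from a stiff servo to zero impedance:

    # Illustrative sketch only, not the controller described in the talk: a
    # generic virtual spring-damper law whose parameters are "programmed" to
    # give anything from a stiff position servo to zero impedance.
    from dataclasses import dataclass

    @dataclass
    class ProgrammableSpring:
        stiffness: float   # virtual spring constant (0 = no spring)
        damping: float     # virtual damping coefficient (0 = no damping)
        rest_angle: float  # equilibrium position of the virtual spring (rad)

        def torque(self, angle: float, velocity: float) -> float:
            """Motor torque emulating a spring-damper about rest_angle."""
            return (-self.stiffness * (angle - self.rest_angle)
                    - self.damping * velocity)

    # Three qualitatively different behaviours from the same actuator,
    # obtained purely by reconfiguring parameters:
    stiff_servo    = ProgrammableSpring(stiffness=50.0, damping=5.0, rest_angle=0.0)
    soft_spring    = ProgrammableSpring(stiffness=2.0, damping=0.2, rest_angle=0.0)
    zero_impedance = ProgrammableSpring(stiffness=0.0, damping=0.0, rest_angle=0.0)

    for name, ctrl in [("stiff", stiff_servo), ("soft", soft_spring), ("free", zero_impedance)]:
        print(name, round(ctrl.torque(angle=0.3, velocity=0.1), 3))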



Week 2

Tuesday 10th of October, 2006
Speaker: Professor Robin Dunbar (Evolutionary Psychology Research Group, University of Liverpool)

The Evolution of the Social Brain

The Social Brain Hypothesis was proposed as an explanation for the unusually large brains of primates compared to other mammals. New comparative analyses suggest a more complex picture in which increases in brain size across higher vertebrates (birds and mammals) have in fact been driven by pairbonding. Primates seem to be unusual, having generalised these pairbond relationships into non-reproductive relationships so as to create more intensely cohesive groups.


Week 3

No meeting


Week 4

Tuesday 24th of October, 2006
Speaker: Eric Olson (Department of Philosophy, University of Sheffield)

What's Intelligent in Artificial Intelligence?

To produce artificial intelligence would be to produce, by artificial means, an intelligent being. Though there has been much discussion of the nature of such intelligence, there has been little about the nature of the being that would have it. I want to ask what sort of thing, metaphysically speaking, an artificially intelligent being would be. The question is hard, and possible answers have interesting implications for our own metaphysical nature.


Week 5

Tuesday 31st of October, 2006
Speaker: Seth Bullock (School of Electronics and Computer Science, University of Southampton)

Lies, Damned Lies, and Simulation: The Lure of Artificial Worlds

Driven by the availability of raw computing power, approachable programming languages and attractive modelling software, individual-based simulation modelling of systems as diverse as freshwater ecosystems, artificial chemistries, stock exchanges, traffic systems, cities, and battle-fields is on the rise. Rather than attempt to capture the interdependencies of a target system via mathematical equations and derive behaviour analytically (or numerically), modellers are increasingly tempted to code up these interdependencies and simply observe how they play out within a simulated population of entities. For some modellers such simulations are treated (and even described) as "artificial worlds".

Although there is movement towards a rigorous methodology governing the role of such models, one has yet to be established, let alone penetrate the diverse modelling communities that are relevant here. Consequently, there is considerable scope for empty or artefactual work within these communities. Rather than focus directly on the issue of methodology, this talk will take a tangential look at what it is that modellers find attractive about artificial worlds.

By combining some of the work of Dick Levins, David Marr, and Andy Clark, I hope to show why treating individual-based simulation models, and particularly those that involve adaptive processes, as artificial worlds has a particular draw, and that this draw should be resisted in general.
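
For readers unfamiliar with the style of model at issue, the sketch below (purely hypothetical rules and parameters, not any model discussed in the talk) shows what "coding up the interdependencies and observing how they play out" looks like in its simplest individual-based form:

    # Minimal individual-based simulation: encode local interdependencies as
    # per-individual rules and simply observe how the population plays out.
    # The rules and numbers are purely illustrative.
    import random

    random.seed(1)
    population = [{"energy": 10} for _ in range(50)]   # 50 identical individuals

    for step in range(100):
        next_gen = []
        for ind in population:
            ind["energy"] += random.choice([-2, 1])    # forage: stochastic gain or loss
            if ind["energy"] >= 15:                    # reproduce above a threshold
                ind["energy"] -= 5
                next_gen.append({"energy": 5})
            if ind["energy"] > 0:                      # individuals die at zero energy
                next_gen.append(ind)
        population = next_gen

    print("survivors after 100 steps:", len(population))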


Week 6

Tuesday 7th of November, 2006
Speaker: Professor Annette Karmiloff-Smith (Developmental Neurocognition Lab, Centre for Brain and Cognitive Development, Birkbeck, University of London)

Modules, Genes and Evolution: What have we learnt from developmental disorders?

In their excitement at using the human genome project to uncover the functions of specific genes, researchers have often ignored one fundamental factor: the gradual process of ontogenetic development. The view that there might be a gene for spatial cognition or language has emanated from a focus on the structure of the adult brain in neuropsychological patients whose brains were fully and normally developed until their brain insult. The developing brain is very different. The cortex starts out highly interconnected across regions and is neither localized nor specialized at birth, allowing interaction with the environment to play an important role in gene expression, brain development, and the ultimate cognitive phenotype. This talk will take a neuroconstructivist perspective, arguing that through developmental time domain-specific end states can stem from more domain-general start states, that associations may turn out to be as informative as dissociations, and that genetic mutations that alter the trajectory of ontogenetic development can inform Nature/Nurture debates if properly placed within a truly developmental context.


Week 7

Tuesday 14th of November, 2006
Speaker: Beena Khurana (Department of Psychology, University of Sussex)

Temporal Order of Strokes Primes Letter Recognition

Does the perception of objects that are the result of human actions reflect the underlying dynamic structure of the actions that gave rise to them? We tested whether the temporal order of letter strokes influences letter recognition. In behavioral experiments, participants identified letters that temporally unfolded as an additive sequence of letter strokes, either consistent or inconsistent with common writing action. Participants were significantly faster to identify letters from consistent sequences, indicating that the initial part of the sequence contained sufficient information to prime letter recognition. In order to gauge brain correlates of stroke order priming, visual ERPs (event related potentials) were measured while participants engaged in active (response required) and passive viewing of letter stroke sequences. Preliminary analyses indicate differential activation both in occipital-parietal cortex and frontal cortex as a function of stroke order. We suggest that letter perception might reflect the temporal structure of letter production; in other words, Simon sees as Simon does.

Week 7 Special Session

Thursday 16th of November, 2006
Speaker: Professor Harold Cohen (Department of Visual Arts, University of California, San Diego)

AARON, Colorist: From Expert System to Expert

For the past twenty years the AARON program has been a rule-based "expert system," steadily accumulating higher levels of expertise in coloring its images. Its rule-base has also become increasingly detailed and complex, to the point where making changes, or adding new rules, often resulted in broken code buried elsewhere, deep in the program.

A few months ago its author, Harold Cohen, abandoned this long-developed, highly successful system in favor of a remarkably simple algorithm, which not only performed as well as its predecessor, but also extended the range of AARON's coloring strategies. This algorithmic approach is now in its third version, and the program exhibits a high level of control over the "kind" of coloring it does.

In this talk, Cohen describes the color technology underlying the new approach and how twenty years of accumulated expertise were collapsed into a few lines of simple code; how and why it works as well as it does.


Week 8

Tuesday 21st of November, 2006
Speaker: Michael Schmitz (Department of Philosophy, University of Konstanz; UCL)

Agency: Experiencing Causality

Whereas in perception we experience the world as acting causally on us, in action we experience ourselves as causing things. This difference in active vs. passive causal role is arguably also reflected in the Intentional contents of the relevant states of consciousness and thus in their conditions of satisfaction. So, for example, in order for an intention to be satisfied, that is executed, it needs to cause the intended action. John Searle (1983) has suggested that the Intentional contents of both ordinary intentions and experiences of bodily action (what he calls "intentions-in-action"), as well as of perceptions are causally self-referential. That is, they determine, for example, that in order for the experience of raising my arm to be satisfied, that experience itself must cause the arm to go up. Conversely, the perceptual experience must be caused by its object. Focussing on the bodily experience of acting, I will discuss various objections to Searle's account and propose to modify it along the following lines. First, we should think of the causal role of agency as being specified through the mode of the experience of acting rather than through its content. Second, we should think of the Intentionality of the experience of acting as being nonconceptual in nature.


Week 9

Tuesday 28th of November, 2006
Speaker: Sarah Sawyer (Department of Philosophy, University of Sussex)

There is No Viable Notion of Narrow Content

Thoughts have associated contents. If the content of a thought is necessarily shared by physical duplicates it is said to be narrow; otherwise it is said to be broad. It is commonplace amongst philosophers to think-both for metaphysical and epistemological reasons-that there must be a viable notion of narrow content. In this paper I argue that there is no such viable notion and that the metaphysical and epistemological reasons in its favour are erroneous.


Week 10 **CANCELLED**

Tuesday 5th of December, 2006
Speaker: Professor John Fox (Cancer Research UK and University of Oxford)

Rational medical agents: from theory to engineering

The Advanced Computation Laboratory at Cancer Research UK has a long established programme of research into clinical decision-making and care planning, and the development of technologies to support these processes. Starting from an empirical analysis of clinical expertise we developed a formal language for describing decisions and plans (PROforma), and a practical technology to apply PROforma models in assisting doctors and other professionals. A growing body of evidence has been accumulated that this technology is clinically useful, and general enough to be used in other fields as well. The talk will review the CRUK programme, from its theoretical foundations in cognitive science and AI to the technology and evidence of its practical value.


Series organized by Dustin Stokes



 

Summer Term 2006

  • Week 1 (18 April): No meeting
  • Week 2 (25 April): Wendell Wallach (Center for Bioethics, Yale University): Machine Morality: Bottom-up and Top-down Approaches for Modeling Human Moral Faculties
  • Week 3 (2 May): Robin Banerjee (Department of Psychology, University of Sussex): Children's Differentiation Between Facts and Opinions
  • Week 4 (9 May): Hakwan Lau (Wellcome Department of Imaging Neuroscience, University College London; Oxford University): Scientific Zombie Hunt (In conjunction with the Cognition and Language Seminar Series)
  • Week 5 (16 May): Manuel Marques-Pita (School of Informatics, University of Edinburgh): Conceptual Representations of Cellular Automata That Perform the Density Classification Task
  • Week 6 (23 May): Inman Harvey (Centre for Computational Neuroscience and Robotics, University of Sussex): Cognition in the Round
  • Week 7 (30 May): Professor Stephen Stich (Department of Philosophy, Rutgers University): Philosophy, Intuition, and Culture: An Overview of a Research Program
  • Week 8 (6 June): Professor Peter Goldie (Department of Philosophy, University of Manchester): Anti-Empathy
  • Week 9 (13 June): Professor Ernest Edmonds (Department of Information Systems, University of Technology-Sydney): The Creative Process Where the Artist is Amplified or Superseded By the Computer
  • Week 10 (20 June): Jon Bird (Centre for Computational Neuroscience and Robotics, University of Sussex; Blip): Exploratory Modelling with Homeostatic Mechanisms


Week 2

Tuesday 25th of April, 2006
Speaker: Wendell Wallach (Center for Bioethics, Yale University)

Machine Morality: Bottom-up and Top-down Approaches for Modeling Human Moral Faculties


Week 3

Tuesday 2nd of May, 2006
Speaker: Robin Banerjee (Department of Psychology, University of Sussex)

Children's Differentiation Between Facts and Opinions

I will report on a study of primary school children's appreciation of the distinction between beliefs about matters of fact and beliefs about matters of opinion. Initial work suggested that 8- to 9-year-olds have only a limited grasp of the subjectivity of opinions. However, our recent work has demonstrated that even younger children can display implicit awareness of the distinction between facts and opinions. Specifically, children aged as young as 6 years were significantly more likely to conform to experts' judgements on matters of ambiguous fact than on matters of opinion, even though they showed little explicit awareness of the subjective nature of opinions. I will discuss the implications of this work for our understanding of children's epistemological development.


Week 4

Tuesday 9th of May, 2006
Speaker: Hakwan Lau (Wellcome Department of Imaging Neuroscience, University College London and Oxford University)

Scientific Zombie Hunt

Philosophers have used imaginary cases of zombies to help us think about perceptual consciousness. Some of these zombies are functionally identical to us, but do not consciously feel anything. Neuroscientists often completely ignore such possibilities. In looking for the neural correlates of consciousness, they compare experimental conditions in which the subject's performances are not matched. If consciousness is not the same thing as performance, performance difference could be considered as a major experimental confound, and thus most of the current brain imaging results could be trivialized and explained away. I propose what we need to do is to compare the conscious subject against a "zombie" baseline condition, in which the performance is at least roughly matched. I present brain imaging data collected using this approach, and also a cognitive model (called the Higher-Order Bayesian model) which could account for these results. The concept of zombie is useful even to the cognitive scientist.


Week 5

Tuesday 16th of May, 2006
Speaker: Manuel Marques-Pita (School of Informatics, University of Edinburgh)

Conceptual Representations of Cellular Automata That Perform the Density Classification Task

This paper explores cognitive mechanisms that process models of complex systems - represented in their implicit form - in order to produce 'conceptual' redescriptions, which (we hypothesise) could reveal knowledge about these models that is not accessible on the implicit level. The aim of this exploration is to support new ways of conceptualising the phenomenon of emergence, the main characterising feature of complex systems in general. Here, we focus on exemplar Cellular Automata (CA) rules developed to perform the density (majority) classification task (as defined by Mitchell et al., 1994). Conceptual representations of the best known rules for this task will be presented and we will show how the resulting abstractions can be considered suitable for the formation of "Conceptual Spaces", wherein rules that perform similar computations are positioned in close proximity.
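
As background, the density classification task itself can be sketched in a few lines; the naive local-majority rule below is purely illustrative (it usually freezes into blocks rather than converging, which is precisely why evolved rules of the kind analysed in the talk are needed):

    # The density classification task for 1-D binary cellular automata
    # (Mitchell et al., 1994): settle to all 1s if the initial configuration
    # holds a majority of 1s, and to all 0s otherwise. A naive local-majority
    # rule is used here purely for illustration.
    import random

    def step(cells, radius=3):
        n = len(cells)
        new = []
        for i in range(n):
            neighbourhood = [cells[(i + d) % n] for d in range(-radius, radius + 1)]
            new.append(1 if sum(neighbourhood) > radius else 0)  # local majority vote
        return new

    random.seed(0)
    cells = [random.randint(0, 1) for _ in range(149)]      # standard lattice size
    target = 1 if 2 * sum(cells) > len(cells) else 0        # correct global answer

    for _ in range(300):
        cells = step(cells)

    print("majority was", target, "| converged correctly:", set(cells) == {target})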


Week 6

Tuesday 23rd of May, 2006
Speaker: Inman Harvey (Centre for Computational Neuroscience and Robotics, University of Sussex)

Cognition in the Round

An organism, real or artificial, can be described in the language of mechanisms and physics; it can also be described in the language of perceptions, intentions, needs and dangers. What is the relationship between the form and dynamics of its physics, and the form and dynamics of its cognitive world?


Week 7

Tuesday 30th of May, 2006
Speaker: Professor Stephen Stich (Department of Philosophy, Rutgers University)

Philosophy, Intuition, and Culture: An Overview of a Research Program

Philosophers use intuitions in a variety of ways in a variety of projects. For the last several years, my collaborators and I have been exploring the extent to which intuitions vary across cultural groups, and attempting to explain why that cultural variation exists. In this talk I will offer an overview of this work. The talk will (i) present some of our findings about the cultural variation in philosophically important intuitions, (ii) sketch some of our work aimed at explaining that variation, and (iii) explore the implications, for a range of philosophical projects, if our findings are robust and our explanations are correct. Most of the work discussed will deal with the use of intuitions in epistemology, ethics and the philosophy of language. If time permits, the relevance of this work to metaphysics will also be explored.


Week 8

Tuesday 6th of June, 2006
Speaker: Professor Peter Goldie (Department of Philosophy, University of Manchester)

Anti-Empathy

The aim of my paper will be to challenge the idea that it is ethically a good thing to seek to empathise with other people, where 'empathy' means adopting the other person's perspective or 'perspective-shifting'.

One of the apparent attractions of perspective-shifting (often as a competitor to a more theory driven approach) is that it is supposed to explain, first, how we are able to engage in what is often called 'mind-reading', and secondly, how we are able to recognise what is ethically the right thing for us to do-by determining what we would like others to do with circumstances reversed ('do unto others'). I want to put forward an alternative, perceptual-sympathetic model, which is intended to remove the mystery from what is really very simple and natural, if not always very easy. I will try to show that the appeal of empathy as perspective-shifting diminishes once we have in place this alternative model, according to which it is possible not only immediately to perceive what someone is thinking and feeling, but also immediately to respond with emotion, motivation and action.


Week 9

Tuesday 13th of June, 2006
Speaker: Professor Ernest Edmonds (Department of Information Systems, University of Technology-Sydney)

The Creative Process Where the Artist is Amplified or Superseded By the Computer

The title was first used for a paper that I presented at a computer graphics conference in 1970. How have we moved on since then? We argued then that interaction was key, but we also looked towards what has become generative art. Where are we now with the generative and interactive arts? Do we have answers to the questions posed in 1970? How autonomous are today's generative art works? How do we understand interaction today? There is a growing field of interactive art that is increasingly being shown in public: in museums, cafes and bars. For the designers of such systems, the questions relating to audience engagement are also critical. The talk will discuss these issues, some recent work in generative art and the use of Beta_Space at Sydney's Powerhouse Museum for studies of audience engagement with them.


Week 10

Tuesday 20th of June, 2006
Speaker: Jon Bird (Centre for Computational Neuroscience and Robotics, University of Sussex; Blip)

Exploratory Modelling with Homeostatic Mechanisms

I will describe two recent projects where I have used models inspired by Ashby's Homeostat: i) an investigation of Ezequiel Di Paolo's evolutionary robotics (ER) experiments exploring the link between ultrastability and adaptive behaviour; ii) a prototype floating sculpture (Network) for Herne Bay, Kent in collaboration with artist Jane Prophet, cell biologist Neil Theise and mathematician Mark D'Inverno. I will detail the key findings from my ER experiments investigating the hypothesis that internal ultrastability can lead to adaptive behaviour, specifically the ability to adapt, or be robust to, inversion of light sensors: a. too short or long evaluation periods can put a cost on learning and lead to a selection pressure in favour of hard-wired controllers, in a manner analogous to the Baldwin effect; b. in order to evolve robot controllers that were robust to the inversion of their light sensors I found it was also necessary to:
1. introduce sufficiently variable and systematic sensor disruption during evolution;
2. use activity dependent plasticity;
3. *not* explicitly select for neuronal stability during evolution.
More generally, I will highlight some of the similarities between these ostensibly very different projects, suggesting that in exploratory modelling some of the distinctions between art and science can begin to blur.
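
For readers who have not met Ashby's Homeostat, the toy sketch below illustrates the bare principle of ultrastability that these projects build on (the dynamics and numbers are invented for illustration, not taken from the experiments): when an essential variable leaves its viable bounds, the system's parameters are randomly reconfigured until behaviour becomes stable again.

    # Bare-bones illustration of Ashby-style ultrastability (invented dynamics,
    # not the ER experiments above): a feedback gain is randomly re-set whenever
    # the essential variable strays outside its viable bounds.
    import random

    random.seed(2)
    gain = random.uniform(-1.5, 1.5)   # parameter subject to random reconfiguration
    x = 0.5                            # essential variable, viable while |x| <= 1
    resets = 0

    for t in range(500):
        x = gain * x + random.gauss(0, 0.05)   # noisy feedback dynamics
        if abs(x) > 1.0:                       # out of bounds: trigger a random change
            gain = random.uniform(-1.5, 1.5)
            x = 0.5
            resets += 1

    print("final gain:", round(gain, 2), "| reconfigurations before stability:", resets)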


Series organized by Dustin Stokes



 

Spring Term 2006

  • Week 1 (10 January): No meeting
  • Week 2 (17 January): Chris Thornton (COGS/Informatics): When is a Braitenberg Architecture All You Really Need?
  • Week 3 (24 January): Professor Ranulph Glanville (CybernEthics Research): The Black Box and the Value of Ignorance
  • Week 4 (31 January): No meeting
  • Week 5 (7 February): Professor Aaron Sloman (University of Birmingham): Orthogonal Competences in Humans, Robots, and Other Animals (Or, What Did Max Mean by "Controlled Hallucination"?)
  • Week 6 (14 February): Stephanie Sandra Pourcel (University of Sussex): Linguistic Relativity in Motion
  • Week 7 (21 February): Mercedes Lahnstein (Imperial College London): Researching the Developmental Nature of Emotions with Robots **CANCELLED**
  • Week 8 (28 February): Richard Menary (University of Hertfordshire): Parity Problems
  • Week 9 (7 March): Stephen Butterfill (University of Warwick): What Are Modules?

Week 2

Tuesday 17th of January, 2006
Speaker: Chris Thornton (COGS/Informatics)

When is a Braitenberg Architecture All You Really Need?

Braitenberg vehicles 2a (the one which shows `fear') and 2b (which shows `aggression') represent a kind of holy grail in minimal robotics. Few designs offer quite so much performance for so little architecture. But to what extent can the idea of directly connecting sensors to motors---the key idea in vehicles 2a and 2b---be used on other tasks? Experimental work can supply answers in particular cases. But in the talk I'll be arguing that information theory can go one better and supply a general, theoretical answer.
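
For reference, the "direct sensors-to-motors" idea in vehicles 2a and 2b amounts to no more than the following (the sensor readings and motor values are illustrative):

    # The entire "architecture" of Braitenberg vehicles 2a and 2b: two light
    # sensors wired straight to two motors, uncrossed (2a) or crossed (2b).

    def vehicle_2a(left_light, right_light):
        """Uncrossed wiring: the wheel nearer the light turns faster, so the
        vehicle steers away from the light ('fear')."""
        return left_light, right_light          # (left_motor, right_motor)

    def vehicle_2b(left_light, right_light):
        """Crossed wiring: the wheel farther from the light turns faster, so the
        vehicle steers toward the light ('aggression')."""
        return right_light, left_light          # (left_motor, right_motor)

    # Light is brighter on the left:
    print("2a (fear):      ", vehicle_2a(0.9, 0.2))   # left wheel faster -> veers right, away
    print("2b (aggression):", vehicle_2b(0.9, 0.2))   # right wheel faster -> veers left, toward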


Week 3

Tuesday 24th of January, 2006
Speaker: Professor Ranulph Glanville (CybernEthics Research)

The Black Box and the Value of Ignorance

The Black Box is often talked about as becoming whitened when we have built an understanding of what we believe is going on in it. Indeed, we often even talk about opening up the Black Box. This interesting approach is essentially inappropriate, because, as I shall argue, not knowing what is in the Black Box - indeed recognising it as a deceit - is essential to its functioning. I shall explore this position, talking about what we may know and what such knowing may mean to us, and how it is founded on and in ignorance. I shall then consider how this may help us understand communication and the nature of the world we describe when we use the Black Box, arguing that a basis in ignorance is important and profoundly valuable.


Week 4

No Meeting


Week 5

Tuesday 7th of February, 2006
Speaker: Professor Aaron Sloman (University of Birmingham)

Orthogonal Competences in Humans, Robots and Other Animals (Or, What Did Max Mean by "Controlled Hallucination"?)

What I am doing has deep roots in COGS, going back many years. I am
beginning to understand more of what Max Clowes, the prime mover of
COGS, might have meant when he wrote at least 35 years ago that 'Vision
is controlled hallucination', expanding Helmholtz's view of vision as
unconscious inference.

Still in the spirit in which we originally set up COGS, I have
recently been doing philosophy within an EC-funded robotics project that
is trying to make some small steps towards the long term goal of
understanding how to produce a domestic robot with an interesting subset
of the competences of a child aged somewhere between 3 and 5 years.
(Also the topic of a forthcoming AISB symposium.)

Actually, my main motivation is not to build robots but to understand
what a human child is and how it can develop -- e.g. into a plumber, a
dancer, a poet or a professor of mathematics or archaeology.
(Adult humans have much additional cultural and individual ad-hoc
clutter of less general interest, at least to me.)

I would also like to understand how magpies can build their nests, and
many other animal achievements.

The recent work is partly inspired by interactions with a biologist
(Jackie Chappell) who studies animal cognition, e.g. in crows (Betty the
famous hook-maker) and more recently parrots. We are trying to
understand evolutionary pressures and tradeoffs related to differences
between

(a) species that have almost all the competences they require
pre-programmed, including visual and other competences far beyond
any current robot (e.g. some deer can run with the herd soon after
birth and newly hatched chicks can peck for food and follow a hen --
which clobbers concept empiricism and 'symbol-grounding' theory)

and

(b) species that seem to be born a lot more stupid and helpless but
which end up with far richer and more diverse competences as adults,
(e.g. hunting mammals, nest-building birds, primates and especially,
spectacularly, humans), using mechanisms we are trying to
understand.

[There are many intermediate, mixed cases: a spectrum of possibilities
with many interesting discontinuities.]

I don't think experimental methods available to developmental
psychologists, however fashionable, are remotely able to give windows
into the complex architecture-building in virtual machines that goes on
in the first year or two, so an indirect approach is needed. Detailed
analysis of requirements for a young, developing, human-like robot in
this framework has profoundly changed my thinking in recent months, on a
number of issues on which I have been working since my DPhil 45 years
ago on the nature of mathematical reasoning, especially visual
mathematical reasoning -- also the topic of my first AI paper in 1971
(COGS CSRP 192 written because Max made me submit something to
IJCAI'71 even though I was ill with flu just before the deadline).

The recent changes include my view of the functions of vision (going
back to work that Max inspired me to do here in Sussex 30 years ago,
e.g. on the POPEYE project), and work on analysis of the concept of
'cause' -- probably the hardest unsolved problem in philosophy (though
Chris Taylor's Sussex D.Phil made some important progress, as others
have done recently).

The hardest unsolved problem in AI and psychology is explaining what
vision does and how it does it (at many levels). A major obstacle is the
difficulty of identifying all the diverse *functions* of vision that
need to be explained in an integrated theory. Most AI vision researchers
focus on a tiny subset, e.g. recognition or depth perception, or
tracking, or motor control, etc., or more recently emotion detection.

Thinking *in great detail* about requirements for a child-like domestic
robot forces reintegration and unearths new problems. (Countering what
Ron Chrisley and I referred to as 'ontological blindness' in a recent
paper.)

In the past, my work on vision focused on how to interpret static images
as providing information about 2-D and 3-D *structures* at different
levels of abstraction, in partial registration with one another and with
the original images, and how to relate affordances (collections of true
counterfactual conditionals) to parts of those structures. POPEYE did
that (for simple 2.5D scenes, without the affordances) in a mixture of
top down and bottom up processing, around 1975. Thinking about a robot
able to see and manipulate 3-D domestic objects, I've come to realise
the error of my ways: that vision is primarily (though not entirely)
concerned with perception not of structures but of *processes* at
different levels of abstraction in registration with one another and
with the optic array and the environment (not the retinal image: that
keeps changing).

(Different levels of abstraction deal with different sets of invariants,
expressed in different ontologies, some continuous, some discrete.)

A corollary is that seeing involves concurrently running (partially
complete) process simulations of various kinds with multiple
concurrently changing relationships to each other and to sensory data
and the scene. Control of the processes is, of course, simultaneously
data-driven and 'top down' (somewhat as in hallucinations). Static
scenes are just processes with nothing changing: though they are rich in
potential for change, a special case of the fact that every perceived
process is rich in potential for redirection of many kinds. (Expressible
as collections of true counterfactual conditionals.) Affordances and
also what I call proto-affordances, drop out of that naturally -- though
there are still many unsolved problems.

The existence of the ability to run such processes in a manner that can
be but need not be controlled by visual input underlies important
features of human minds, including our ability to do mathematics using
imagined diagrams. Blind people can also use their visual systems: all
that evolution is not wasted because of a peripheral defect. A less
obvious corollary of this work is that mirror neurones should probably
have been called 'abstraction neurones', causing less muddle in several
research communities.

Perceived processes can differ in many different dimensions, most,
though not all of them, dependent on what is in the environment (i.e.
'objective' environmental invariants, not just sensori-motor
invariants), e.g. various kinds of 3-D surface curvature and surface
discontinuities, rigid vs flexible objects, different kinds of stuff:
compressible, elastic, plastic, flexible like paper or flexible like
cloth, strings, rods, sheets, differences in viscosity, kinds of
texture, stickiness, etc. etc. These, in different combinations, make
possible an *enormous* variety of types of 3-D process, of which a
subset can be actions -- far more than a child can encounter in 5 years:
hence the importance of orthogonality and recombinability. Investigating
implications of all this contrasts with the excessive emphasis on
sensory-motor contingencies that loom large in 'embodiment' and
'dynamical systems' approaches, that focus on a tiny subset of human
competences, such as maintaining balance, and turning a crank handle
(also excellent for the study of insects, no doubt).

Being able to see all the things a five year old child can see requires
being able to identify which process components are involved in the
child's environment (including things temporarily out of sight, as the
child twists and turns) and how they interact. I may show a video of a
19-month-old child failing to understand hooks despite obviously having many
perceptual and manipulative competences (unlike Betty, who makes
hooks). But later he understood a great deal. It seems that multiple
independent competences have to be acquired through early exploration
and play, and represented in ways that allow them to be creatively
*recombined* in perceiving novel scenes, and also in creatively acting,
planning, reasoning and explaining, including forming new, ever more
complex units to support subsequent learning.

This seems to require powerful innate 'syntactic' (i.e.
structure-manipulating) mechanisms, perhaps implemented in kinds of
neural mechanisms that have not yet been thought of.

Examples of this ability evolved before human language (since they seem
to be manifested in chimps and corvids, for example, as well as
prelinguistic children, if there are such things). But perhaps through
the usual biological trick (in evolution) of duplication followed by
differentiation, the pre-linguistic mechanisms could have provided a
basis for human language, simultaneously providing both linguistic
mechanisms and semantic content -- after which positive feedback and
cultural evolution rapidly enhanced both the non-linguistic and
linguistic competences after they started co-evolving.

These ideas generate many research questions, e.g. the obvious ones
about which sorts of virtual and physical machines can support such
capabilities, and less obvious questions about varieties of genetic
defects or brain damage that could prevent development of specific
aspects of the ability to acquire and deploy orthogonal competences,
varieties of defect that might occur later in life, and above all what
sorts of neural mechanisms can support creative controlled
hallucinations as required for normal visual perception. Drug and
damage-induced hallucinations and synaesthesia may provide some pointers
to the mechanisms.

My talk will present a small sample of these ideas from which I hope
creative and intelligent listeners will be able to reconstruct the rest
by recombining their own orthogonal competences.

Some examples and speculations can be found in an online version that is
still under development
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0601

I accidentally discovered last month that Sharon Wood's recent work (in
her RAS04 paper) seems to be closely related to some of these ideas.

Poplog/Pop11 users can find one of the key ideas in 'TEACH FINGER',
based on an idea Oliver Selfridge discussed when he visited Sussex in
1981.
I wish Max, and Jean Piaget (another strong influence), were still
around to criticise and help.
Offers from others very welcome.


Week 6

Tuesday 14th of February, 2006
Speaker: Stephanie Sandra Pourcel (University of Sussex)

Linguistic Relativity in Motion

Motion of objects and animate beings is a pervasive aspect of daily life,
conceptualised and referred to in language by individuals. Motion is
composed of four basic dimensions, (1) a moving entity, (2) a trajectory,
(3) a spatial reference, and (4) a manner of displacement. Expressing
motion events in various languages is realised differently in semantic
forms, however, so that those dimensions may be equally or selectively
highlighted. In English, for instance, all four dimensions are typically
encoded, as in 'the dog ran across the road.' However, in Romance
languages, such as French, the manner of motion is typically left out, as
in 'the dog crossed the road' (le chien a traversé la rue).
The question I am addressing is whether differing semantic representations
entail differing conceptual representations in cognition. This question is
also known as 'linguistic relativity' - the notion that conceptualisation
is partly relative to our native languages. To address this question, I
will present comparative data from cognitive tasks on memory and inference
processes obtained from native speakers of French and English. The data
lends support to relativity; that is, it effectively shows differing
cognitive performances in reaction to the same stimuli and tasks by
speakers of different languages.


Week 7 **CANCELLED**

Tuesday 21st of February, 2006
Speaker: Mercedes Lahnstein (Imperial College London)

Researching the Developmental Nature of Emotions With Robots

This paper presents an interdisciplinary contribution to research into
emotions and describes a neurorobotic systemic approach for investigating
the dynamic elicitation and differentiation of reward-directed emotive
processes in real time. Experimental results demonstrate the temporal
differentiation of emotive component processes by the mesolimbic dopamine
system. Results show the temporal modulation of movement processes, leading
to proactive and precise movement elicitation, and the parallel temporal
modulation of sensory processes, resulting in the temporal differentiation
of simulated emotional experience. Emotional experience is proposed to be
qualitatively and quantitatively differentiated according to the temporal
value the movement is predicted to make towards the rewarding sensory
experience. Results underline the dynamic and developmental nature of
emotive processes to be driven by integration and evaluation.


Week 8

Tuesday 28th of February, 2006
Speaker: Richard Menary (University of Hertfordshire)

Parity Problems

In a jointly authored paper, Mike Wheeler and I argue that the parity principle
as a motivational tool for the extended mind is ill formulated by Clark and
Chalmers. This has led to some serious misunderstandings of the extended
mind project (see for example Adams and Aizawa 2001 and 2006, Rupert 2004).
We argue that the claim that underlies the extended mind hypothesis is not
(or should not be!) that external factors count as themselves the proper
parts of a cognitive process by being coupled in a particular way to an
already existing cognitive agent or cognitive system, but rather that
internal and external elements can, in certain circumstances, be integrated
parts of a single causal system that itself counts as a cognitive system
(what I elsewhere call cognitive integration; Menary 2006). The
consequence is that a clear understanding of this integration is required
and standard internalist criticisms turn out to have no purchase on the
extended mind.


Week 9

Tuesday 7th of March, 2006
Speaker: Stephen Butterfill (University of Warwick)

What Are Modules?

Notions of modular cognition play a central role in theories about infants'
developing understanding of people and things, and in accounts of the mind
generally. But there is much disagreement about what modules are, and no
detailed account of modularity. This talk will explain why we need to know
what modules are and outline one approach to understanding them. I'll
suggest that we can distinguish kinds of cognition, modular and nonmodular,
by reference to the types of process they involve, and that modular
cognition differs from other kinds of cognition in being a computational
process. I'll then conclude by considering what this approach to modularity
implies for the role of modules in explaining mental development.



 

Autumn Term 2005

  • Week 1 (04 October): No meeting
  • Week 2 (11 October): Professor Maggie Boden (COGS, Sussex University): War, AI and Cognitive Science.
  • Week 3 (18 October): Ron Chrisley (COGS/Informatics): Agent Smith vs. The Sentinels: Embodiment and Machine Consciousness in The Matrix.
  • Week 4 (25 October): New Directions in Cognitive Linguistics Conference http://www.cogling.org.uk
  • Week 5 (1 November): Professor Gregory Currie (University of Nottingham): Why Irony is Pretence.
  • Week 6 (8 November): Professor Geoffrey Sampson (Informatics, Sussex University): Social and Political Implications of Current Enterprise-Software Strategies.
  • Week 7 (15 November): Frédérique de Vignemont (CNRS, Paris): Egocentrism and Allocentrism in Social Cognition.
  • Week 8 (22 November): Lola Canamero (University of Hertfordshire): Modeling Affective Phenomena in Autonomous Robots.
  • Week 9 (29 November): Dustin Stokes (COGS, Sussex University): Incubated Cognition and Creativity.
  • Week 10 (6 December): Professor Ranulph Glanville (CybernEthics Research, UK): The Black Box and the Value of Ignorance.


Week 2

Tuesday 11th of October, 2005
Speaker: Professor Maggie Boden

War, AI and Cognitive Science

It's well-known that cybernetics became involved with ballistic-weapons projects, and also that the development of computers--both analogue and digital--was overwhelmingly funded by the War Department (alias Defence Dept.!) in the USA. (The sums for the Manhattan Project were checked by humans--mostly, the wives of the technical staff--because automatic methods were still unreliable; but the H-Bomb was developed thanks to machine computations.)

In fact, both early AI and early cognitive science (including psycholinguistics, the psychology of vision, work on attention, on communication, and on problem solving) happened largely through military funding. Not necessarily for obviously "military" projects, but the background aims were military nonetheless. Similarly, the huge influx of funding for AI and expert systems in response to Japan's Fifth Generation project was often justified in military (not just economic/industrial) terms. So was the sudden renaissance of DARPA funding for connectionism in the mid-1980s.

A few AI workers refused to have anything to do with military funding (with dire results for their research careers). Most were more pragmatic--for ethically-argued reasons, as well as selfish ones. (Compare the moral position of a non-vegetarian who refuses to have anything to do with abattoirs.) In the 1980s, with the rise of the Star Wars programme, they were more likely to speak out. Many AI scientists vociferously opposed this military adventure.

Key names here include Kenneth Craik, John von Neumann, Joseph Licklider, Herb Simon, Edward Feigenbaum, Benjamin Kuipers, David Parnas, and Terry Winograd.

After giving some informal background, I'll read the Section of my forthcoming book which deals with GOFAI and the military. (Military connections are so widespread that they are mentioned in several different chapters, but this is the main point at which I discuss them.) I concentrate on the historical story, but the connections of AI--and of certain aspects of computational psychology and neuroscience--with the military are still strong today.


Week 3

Tuesday 18th of October, 2005
Speaker: Ron Chrisley (COGS/Informatics)

Agent Smith vs. The Sentinels: Embodiment and Machine Consciousness in The Matrix.

Is embodiment a prerequisite for consciousness? The claim that it is - the "Embodied Consciousness" view - is becoming ever more popular in cognitive science. Drawing loosely from examples of machine consciousness depicted in The Matrix films, I observe that, contrary to this current fashion, the embodied Sentinel robots, if sentient at all, are depicted as having a much lower form of consciousness than the disembodied, simulated, purely symbolic Agents. Does this mean that the Wachowski brothers got it wrong? I look at arguments, both empirical and philosophical, and based on the views of Chalmers, Clark, Dreyfus and Searle, that attempt to reconcile the view of machine consciousness in The Matrix films with the Embodied Consciousness view. I don't find any of the arguments persuasive. I end by proposing a (perhaps surprising) different means by which a reconciliation might be achieved.


Week 4

Cognitive Linguistics Conference: http://www.cogling.org.uk


Week 5

Tuesday 1st of November, 2005
Speaker: Professor Gregory Currie

Why Irony is Pretence

All that we know of children suggests that their lives are massively enriched by pretending. But grown-ups are pretenders also, often in ways we barely recognise. Irony is a good example. I try to show off the pretence theory to best advantage, separating it from some restrictive assumptions often made about irony: that it is essentially communicative, that it is essentially linguistic, that it is essentially critical. The great competitor with the pretence theory is the echoic theory of Sperber and Wilson. We pretence theorists have something to learn from them. But they, I'll argue, are unable to make the right kinds of distinctions between irony and other "echoic" activities. I look at some empirical evidence which, it has been claimed, supports the Sperber-Wilson theory, and argue that it supports the pretence theory at least as well. I show how the pretence theory is extendible in natural ways to cover dramatic, situational and what I label "comic irony". I conclude with some thoughts about what a sensibly modest theory of irony should try to be.


Week 6

Tuesday 8th of November, 2005
Speaker: Professor Geoffrey Sampson

Social and Political Implications of Current Enterprise-Software Strategies

A major current contrast between alternative business-process automation strategies lies between so-called "integrated suite" and "best of breed" approaches, which one can associate respectively with Oracle and with IBM. Almost all discussion of this contrast has been in terms of cost-effectiveness, competitive advantage, and so forth to the individual businesses adopting either strategy. But the question which strategy is destined to predominate will make a large difference to the overall complexion of the business environment; and, since it will affect the nature of the bargain that the electorate implicitly makes with the private-enterprise system, it could also modify the political complexion of society. These issues merit wider consideration than they are at present receiving.


Week 7

Tuesday 15th of November, 2005
Speaker: Frédérique de Vignemont

Egocentrism and Allocentrism in Social Cognition.

The social world involves a first-person (the self), a second-person (the other related to the self) and a third-person component (the other unrelated to the self). The question is whether the same mechanisms are involved in all cases. The distinction between the two latter cases has often been neglected in the literature about theory of mind. I suggest here to apply the spatial distinction between egocentrism and allocentrism to social cognition. I claim that in mentalizing the other can be understood from either an egocentric stance ("you") or an allocentric stance ("he/she/they"). The others from an egocentric perspective are represented only because they are related to the self in one way or the other. By contrast, within an allocentric stance the mental states of the other person are represented independently of the self. These two stances play different roles. Only an egocentric representation of the other enables the subject to interact properly with him. However, such representation cannot provide a full grip on social understanding, contrary to allocentric theory of mind. Social interaction is based on egocentric mentalizing, while folk psychology depends on allocentric mentalizing.


Week 8

Tuesday 22nd of November, 2005
Speaker: Lola Canamero

Modeling Affective Phenomena in Autonomous Robots

Motivation and emotion are highly intertwined (e.g., emotions are often very powerful motivational factors; motivation can be seen as a consequence of emotion and vice versa, etc.) and it is not always easy to establish clear boundaries between them. Both types of phenomena are grouped under the broader category of "affect", traditionally distinguished from "cold" cognition. They lie at the heart of autonomy, adaptation, and social interaction in both biological and artificial agents. They also have a powerful and wide-ranging influence on many aspects of cognition and action. However, their roles are often considered to be complementary - as a first approximation, motivation would be concerned with the internal and external factors involved in the initiation of interaction with the environment, whereas emotion is rather concerned, among other critical factors, with evaluative aspects of the relation between an agent and its environment. In this talk I will discuss some of the approaches that can be used to model such phenomena within an embodied AI perspective. Following an approach that draws from different sources (in particular neuroscience, cybernetics and ethology), I will then illustrate some of the roles and mutual interactions of motivation and emotion in influencing different aspects of "lower-level" cognition and action in autonomous robots performing action selection. The talk will also outline some of the challenges and potential contributions of autonomous robots research to the affective sciences, in particular in terms of: (a) conceptual clarification by operationalization of theoretical concepts; (b) prevention of unnecessary anthropomorphism and over-attribution by adopting an incremental approach to the synthetic design of emotion systems; and (c) elaboration of precise quantitative criteria to assess the influence of emotions in different aspects of cognition and action, and their behavioral manifestations.
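
As a purely illustrative sketch of the kind of motivation-driven action selection referred to above (not Canamero's architecture; the variables, behaviours and numbers are invented), the core loop can be as simple as choosing the behaviour expected to reduce the most pressing homeostatic deficit:

    # Invented example of drive-based action selection, for illustration only.
    internal = {"energy": 0.3, "temperature": 0.8}   # current internal state (0..1)
    setpoint = {"energy": 1.0, "temperature": 0.5}   # homeostatic targets

    # Expected effect of each behaviour on each internal variable (hypothetical).
    effects = {
        "feed":       {"energy": +0.4, "temperature": 0.0},
        "seek_shade": {"energy": 0.0,  "temperature": -0.3},
    }

    def select_behaviour():
        deficits = {v: setpoint[v] - internal[v] for v in internal}   # signed errors
        def relief(b):
            # Total reduction in absolute error the behaviour is expected to give.
            return sum(abs(deficits[v]) - abs(deficits[v] - effects[b][v]) for v in internal)
        return max(effects, key=relief)

    print("selected behaviour:", select_behaviour())   # 'feed': the energy deficit dominates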


Week 9

Tuesday 29th of November, 2005
Speaker: Dustin Stokes

Incubated Cognition and Creativity

A traditionally acknowledged stage of creative thinking is incubation: some unconscious, non-attentive, or less attentive stage of cognitive processing that yields creative insight. This feature, combined with other ostensibly mysterious features of the phenomenon of creativity, has discouraged naturalistically minded philosophers from theorizing it. This avoidance is misguided: we can maintain incubated cognition and we can explain it in scientifically responsible ways. This paper, focusing on the effects of attention and cognitive practice on the functional networking of the brain, attempts just such an explanation. Moreover, the model that it provides is general: consistent with a variety of theories and definitions of creativity. It also serves to assuage the naturalist's skepticism about other features of creative cognition and, one would hope, should provide another good reason for philosophers of mind and cognitive scientists to return attention to the long neglected topic of creativity.


Week 10

Tuesday 6th of December, 2005
Speaker: Professor Ranulph Glanville

The Black Box and the Value of Ignorance

The Black Box is often talked about as becoming whitened when we have built an understanding of what we believe is going on in it. Indeed, we often even talk about opening up the Black Box. This interesting approach is essentially inappropriate, because, as I shall argue, not knowing what is in the Black Box - indeed recognising it as a deceit - is essential to its functioning. I shall explore this position, talking about what we may know and what such knowing may mean to us, and how it is founded on and in ignorance. I shall then consider how this may help us understand communication and the nature of the world we describe when we use the Black Box, arguing that a basis in ignorance is important and profoundly valuable.



 

Summer Term 2005

  • Week 1 (19 Apr): Rob Saunders (University of Sussex; in conjunction with the Creative Systems Lab): Computational Models of Curiosity in Art and Design.
  • Week 2 (26 Apr): Ian Cross (University of Cambridge; in conjunction with the Creative Systems Lab): Music, Meaning, Ambiguity and Evolution.
  • Week 3 (03 May): John Eacott (University of Westminster; in conjunction with the Creative Systems Lab): Playing Changes: Developing Algorithmic Music Artefacts.
  • Week 4 (10 May): Steve Torrance: The Extended Hard Problem
  • Week 5 (17 May): Tom Ziemke (University of Skövde; in conjunction with the activate.d reading group): Taking Embodiment Seriously: Integrating Cognition, Emotion and Autonomy.
  • Week 6 (24 May): Mark Sprevak (University of Cambridge): Algorithms and the Chinese Room.
  • Week 7 (31 May): Igor Aleksander (COGS; Imperial College, London): Implications of the Five Axioms of Consciousness.
  • Week 8 (07 Jun): David Gooding (University of Bath): Where's the Body? Thinking, Reasoning and Being in Science.


Week 1: Computational Models of Curiosity in Art and Design


Tuesday April 19th, 2005
Speaker: Rob Saunders (University of Sussex; in conjunction with the Creative Systems Lab)

Curiosity is an important motivation for creative individuals. Studies suggest that intrinsic motivations such as the satisfaction of curiosity are more important than extrinsic motivations such as peer recognition. Computational models of curiosity have been developed using novelty/fault detection technologies and applied to several fields of research. This talk presents a model of curiosity that has been used in the development of computational models of creative individuals and creative societies; it will explore theoretical and technical foundations, as well as some of the potential benefits of developing "curious agents". Example applications of curious agents in art and design systems will be presented.
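
A minimal illustration of the novelty-detection idea mentioned above (not Saunders' actual model; the memory, distance measure and interest curve are invented for the example): novelty is measured against what the agent has already seen, and interest peaks for artefacts that are moderately, rather than maximally, novel.

    # Invented illustration of curiosity via novelty detection, not the model
    # presented in the talk: interest is highest for moderately novel artefacts.
    import math

    memory = [0.2, 0.25, 0.8]   # previously encountered artefacts (1-D features)

    def novelty(x):
        """Distance to the nearest remembered artefact."""
        return min(abs(x - m) for m in memory)

    def interest(x, preferred_novelty=0.2, width=0.1):
        """Bell-shaped response: the over-familiar and the over-strange both score low."""
        return math.exp(-((novelty(x) - preferred_novelty) ** 2) / (2 * width ** 2))

    for candidate in [0.21, 0.45, 1.8]:
        print(candidate, "novelty:", round(novelty(candidate), 2),
              "interest:", round(interest(candidate), 2))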


Week 2: Music, Meaning, Ambiguity and Evolution

Tuesday April 26th, 2005
Speaker: Ian Cross (University of Cambridge)

From a biological perspective, humans are intensely and diversely social animals with powerful and adaptable cognitive abilities. Our capacity to interact with our environments and with each other is unparalleled in its flexibility, and underlying that fluidity is our capacity to communicate. This paper adopts an evolutionary stance in suggesting that music is likely to have played a significant role in the emergence of human cognitive and communicative abilities, and proposes that its efficacy is still evident in the development of such capacities through infancy and childhood. However, the adoption of this view, tied not to specific manifestations of music within a particular culture but to the notion of 'music' as a fundamental mode of human interaction, enforces a need to define the 'music' in question as broadly but as precisely as possible. The definition that will be employed here is that music embodies, entrains and transposably intentionalises time in sound and action. This paper will explore music as a significant component in the human communicative toolkit. It will outline the ways in which apparently universal and interactive infant proto-musical behaviours may exploit frameworks and assumptions underlying human communication and may scaffold communication by providing a supportive medium for the emergence of flexibility in cognitive and communicative capacities. It will be suggested that music's floating intentionality, or transposable aboutness, is grounded in intersubjective frameworks of embodied action and interaction and is functional in infant proto-musical behaviours in facilitating the emergence of joint attention and the development of 'theories of mind'. These proto-musical behaviours appear to underlie the development of flexible cross-domain intelligence as well as being instrumental in the rehearsal of modes of social interaction. The paper will then briefly analyse the evolutionary adaptiveness of musical and proto-musical behaviours, outlining the characteristics of humans and human societies that differentiate them from their precursors and demonstrating how the archaeological record and evolutionary cladistics indicate specific evolutionary trajectories for those attributes. It is concluded that proto-musical and musical behaviours are likely to have been functional in the emergence of many of the features that make us human.


Week 3: Playing Changes: Developing Algorithmic Music Artefacts

Tuesday May 3rd, 2005
Speaker: John Eacott (ÅÝܽ¶ÌÊÓƵ)

This talk considers ways in which algorithmic processes can be embedded into artefacts to provide music experiences which are rich, engaging and personal. The issues can be broken down into two main themes. The first is the design of artefacts, which involves not only new ways of experiencing music but also new aesthetics in which we judge music not only by its quality but by its quality of change. The second is the design of algorithms which offer the qualities of composition and responsiveness required. There are many approaches to algorithmic composition, and one used effectively here is simple probability. Unlike Xenakis' stochastic technique, however, I focus on minimizing the amount of choice that occurs and maximizing the significance of those choices. In this way the aim is for every stochastic choice to be deeply woven into many levels of the music, in the same way that a human composer may begin with a tiny seed of an idea which develops into a coherent work.
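
As a rough illustration of the "few but significant choices" idea, the fragment below makes a single weighted stochastic choice per section and lets that one choice determine several layers of the music at once: the pitch set, the tempo and the rhythmic density. This is a hypothetical sketch, not the speaker's system; the moods, scales and parameter values are assumptions for illustration only.

    import random

    def new_section():
        # One rare, weighted choice: the stochastic "seed" for the whole section.
        mood = random.choices(["calm", "tense"], weights=[0.7, 0.3])[0]
        # The single choice is woven through several musical levels at once.
        scale = [0, 2, 4, 7, 9] if mood == "calm" else [0, 1, 5, 6, 10]   # pitch classes
        tempo = 72 if mood == "calm" else 132                             # beats per minute
        density = 0.3 if mood == "calm" else 0.8                          # chance a beat sounds
        notes = [random.choice(scale) for beat in range(16) if random.random() < density]
        return mood, tempo, notes

    print(new_section())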


Week 4: The Extended Hard Problem

Tuesday May 10th, 2005
Speaker: Steve Torrance (COGS; University of Middlesex)

TBA


Week 5: Taking Embodiment Seriously: Integrating Cognition, Emotion and Autonomy

Tuesday May 17th, 2005
Speaker: Tom Ziemke (University of Skovde)

Much research in embodied AI and cognitive science emphasizes the fact that robots, like animals, but unlike the computer models of classical AI, are "embodied". However, in this talk it is argued that the physical embodiment that robots share with animals provides only one aspect of the "organismic embodiment" that underlies natural cognition. Based on Damasio's theory of emotions (as survival-related bioregulatory reactions) and feelings (mental representations of physiological changes during emotions), the talk outlines a project that aims to model the integration of cognition, emotion and autonomy (self-preservation) in robots.
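
A minimal sketch of what such organismic modelling could look like in code, assuming a single homeostatic variable; this is an illustrative assumption, not the project's actual architecture. An internal "energy" level drives an emotion-like regulatory state, which in turn biases which behaviour the robot selects.

    class Robot:
        def __init__(self):
            self.energy = 1.0                    # internal variable standing in for physiology

        def emotion(self):
            # "Emotion" as a survival-related bioregulatory reaction near the viability limit.
            return "distress" if self.energy < 0.3 else "content"

        def step(self, found_food):
            self.energy = min(1.0, self.energy + 0.5) if found_food else self.energy - 0.1
            # The "feeling" (registering the bodily change) biases behaviour selection.
            return "seek food" if self.emotion() == "distress" else "explore"

    robot = Robot()
    action = "explore"
    for t in range(15):
        # Food is found only after the robot has switched to food-seeking behaviour.
        action = robot.step(found_food=(action == "seek food"))
        print(t, round(robot.energy, 2), action)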


Week 6: Algorithms and the Chinese Room

Tuesday May 24th, 2005
Speaker: Mark Sprevak

I argue in this paper that there is a mistake in the Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church-Turing thesis. Searle assumes that the Church-Turing thesis licenses the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A possible response for Searle is considered, and then rejected. My conclusion is that it is consistent with Searle's argument for an advocate of a computational theory of mind to hold onto the claim that understanding consists in the running of a particular program.


Week 7: Implications of the Five Axioms of Consciousness

Tuesday May 31st, 2005
Speaker: Igor Aleksander (COGS; Imperial College)

In my COGS seminar in 2003 I introduced five introspective axioms that are meant to clarify the design of machine models of being conscious. These lead to a 'kernel' architecture which I have found useful in discussing aspects of being conscious. I shall summarise applications of such modelling in addressing what it is to be unconscious, what can be said about animal consciousness, and how the architecture relates to enactive vision and 'illusion' theories of volition. Inevitably, I need to comment on what the 'kernel' says about the 'hard problem'.


Week 8: Where's the Body? Thinking, Reasoning and Being in Science

Tuesday June 7th, 2005
Speaker: David Gooding (University of Bath)

Thought experiments (TEs) have a cogency and transparency to intellect that has made them central to scientific argument since the 16th century. If any aspect of scientific work escapes embodiment, surely TEs do. In this talk I'll describe some features of TEs and identify strategies of representation that they share with diagrams and physical demonstrations, in order to show how this most abstract form of demonstrative argument relies upon the embodiment of its authors and audiences.

Readings:
D. C. Gooding, 2000, "Experiment", in W. Newton-Smith, ed., A Companion to the Philosophy of Science, Oxford: Blackwell, pp. 117-126.
D. C. Gooding, 1999, "Thought Experiment", in E. Craig, ed., The Encyclopaedia of Philosophy, London: Routledge.



 

Spring Term 2005

  • Week 1 (11 Jan): No seminar
  • Week 2 (18 Jan): Sallyann Bryant: Programming as a social activity: Distributed cognition and communication in collaborative software development
  • Week 3 (25 Jan): Daniel Osorio: Colour vision and categorisation by animals
  • Week 4 (01 Feb): Murali Ramachandran: Williamson's anti-luminosity arguments: Limitations on self-knowledge
  • Week 5 (08 Feb): Ron Chrisley and Chris Thornton (COGS, Informatics): "The Hard Problem: The Science Behind The Fiction"
  • Week 6 (15 Feb): Andy Clark: Material Symbols: From Translation to Co-ordination in the Constitution of Thought and Reason
  • Week 7 (22 Feb): Ezequiel Di Paolo: Autopoiesis, adaptivity and time: the biology of agency and sense-making
  • Week 8 (01 Mar): India Morrison: The Brain's Representation of Others' Pain: fMRI Studies of Empathy
  • Week 9 (08 Mar): Peter Cheng's Professorial Lecture (RSVP; see http://www.sussex.ac.uk/cogs/1-1.php)
  • Week 10 (15 Mar): COGS Symposium: Art, Body, Embodiment


Week 1: No seminar


Week 2: Programming as a social activity: Distributed cognition and communication in collaborative software development.

Tuesday 18th of January, 2005
Speaker: Sallyann Bryant (COGS, IDEAS Lab, Informatics)

While software development may traditionally be considered a solitary task, evidence suggests it may be much more of a collaborative endeavour (e.g. Perry et al., 1994). This collaborative approach has recently been recognised in the form of 'pair programming', a practice core to the eXtreme Programming (XP) methodology (Beck, 2000). This presentation will discuss work on the effect of pair programming on software quality, and outline a set of ethnographic studies of experienced pair programmers 'in the wild' that attempt to ascertain how this improvement comes about. Particular attention will be paid to 'distributed cognition' and the roles of tools and artefacts in the collaborative software development process, along with key findings on self-rating and verbal interaction. Finally, future work focusing on the effect of verbalisation, referencing both self-explanation (Chi, 1994) and verbal overshadowing (Schooler et al., 1993), will then be discussed.


Week 3: Colour vision and categorisation by animals

Tuesday 25th of January, 2005
Speaker: Daniel Osorio (Neuroscience, CCNR and COGS)

Colour - vision, categorization and naming - poses well-known questions about the physical world and perception. Colour forms a perceptual continuum, in which stimuli can be located by their physical properties, or more usefully in terms of photoreceptor excitations. Human colour categorization and naming have long been controversial. I will outline our current understanding of unique colours and colour names (or basic colour terms). Are these terms cultural conventions, determined by physiological mechanisms, or imposed by the statistics of the visual world? I will then introduce our own work on how birds discriminate and classify colours, which shows how some of these questions can be studied in non-human animals. This work shows that poultry chicks form clear categories on the colour continuum according to certain rules, and that these are rapidly modified as the birds acquire new information about the world.


Week 4: Williamson's anti-luminosity arguments: Limitations on self-knowledge

Tuesday 1st of February, 2005
Speaker: Murali Ramachandran (Philosophy)

Timothy Williamson (Knowledge and Its Limits, OUP 2000) reckons hardly any mental states are luminous, where:

Defn. A state S is luminous if and only if: if one were in state S, one would invariably know, or at least be in a position to know, that one was in S.

In defending this claim, he presents an argument against the luminosity of feeling cold (which is meant to generalize to cover other phenomenal states, such as being in pain) and an argument against the luminosity of knowing, i.e. against what is often called the (KK)-principle: if one knows p, then one knows (or is in a position to know) that one knows p.
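
In standard epistemic-logic shorthand (my gloss for readability, not Williamson's own notation), writing $K\varphi$ for "one knows, or is in a position to know, that $\varphi$", the two claims can be stated as:

\[
  \text{Luminosity of } S:\quad \mathrm{in}(S) \rightarrow K\,\mathrm{in}(S)
  \qquad\qquad
  \text{(KK)}:\quad Kp \rightarrow KKp
\]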

While I have no strong convictions about the luminosity of these states per se, I do not think Williamson's arguments establish that they are not luminous. In this talk I hope to uncover where the arguments go wrong. The main focus will be on the anti-(KK) argument.


Week 5: "The Hard Problem: The Science Behind The Fiction"

Tuesday 8th of February, 2005
Speaker: Ron Chrisley and Chris Thornton (COGS, Informatics)

Instead of the usual format, this week's COGS Research Seminar will feature the screening of an excerpt from a documentary entitled "The Hard Problem: The Science Behind The Fiction", from the recently-released 10-DVD box set of The Matrix films. The documentary features, among others, COGS members Ron Chrisley, Andy Clark, Phil Husbands and Chris Thornton. After the screening, Ron and Chris will lead a discussion on the philosophical and technological issues raised in the documentary, and in the films themselves.


Week 6: Material Symbols: From Translation to Co-ordination in the Constitution of Thought and Reason.

Tuesday 15th of February, 2005
Speaker: Professor Andy Clark (School of Philosophy, Psychology and Language Sciences, Edinburgh University)

How, if at all, can embodied and 'representation-lite' approaches deal with traditionally cognitive phenomena? One key move may be to take very seriously the role of human activity and human-built structures in altering the way difficult problems are presented and solved. Our best practices and artifacts, on this view, enable many of the same basic strategies to tackle perception, action and high-level reason. An important challenge to this view depicts practices of 'higher-level reasoning' as themselves requiring the use of new forms of 'de-coupled' internal representation (for a sophisticated version, see Sterelny (In Press)). I explore this issue, with special attention to the cognitive role of words and public symbols.

De-coupled ways of knowing, I conclude, do not demand de-coupled internal representations in addition to the resources provided by, respectively, standard perceptually-based knowledge and representations (internal and external) of the words and symbols themselves. In this way, de-coupled knowing is itself a kind of global skill, and one that is partially constituted by our activities with a variety of cognitive artifacts including words and symbols.


Week 7: Autopoiesis, adaptivity and time: the biology of agency and sense-making

Tuesday 22nd of February, 2005
Speaker: Ezequiel Di Paolo (COGS, Informatics)

A proposal for the biological grounding of intrinsic teleology and sense-making through the theory of autopoiesis is critically evaluated. Autopoiesis provides a systemic language for speaking about intrinsic teleology but its original formulation needs to be elaborated further in order to explain sense-making. This is done by introducing adaptivity, a many-layered property that allows organisms to regulate themselves with respect to their conditions of viability. Adaptivity leads to more articulated concepts of behaviour, agency, sense-construction, health, and temporality than those given so far by autopoiesis and enaction. These and other implications for understanding the organismic generation of values are explored.


Week 8: The Brain's Representation of Others' Pain: fMRI Studies of Empathy

Tuesday 1st of March, 2005
Speaker: India Morrison (Wolfson Centre for Clinical and Cognitive Neuroscience, University of Wales Bangor)

The ability to represent another person's pain in subjective terms is an important component of empathy. Neuroimaging evidence suggests that brain areas crucial for the firsthand experience of pain also respond to the observation of others' pain. In particular, the anterior cingulate cortex (ACC) appears to play a primary role in this link between feeling and seeing pain. This talk discusses the possible nature of the ACC's role in "vicarious pain," and what insight this may lend to our theoretical conception of empathy. It covers evidence from fMRI experiments addressing the specificity and level of processing of this response. This evidence is situated within a larger framework of ACC function, ultimately relating it to a central functional claim: that the ACC's role in representing observed pain is intimately bound to motivational processes involved in producing appropriate behavioral responses to pain-related events.


Week 9: Diagrams: Cognition, Discovery and Invention

Tuesday 8th of March, 2005
Speaker: Professor Peter Cheng (COGS, Informatics)

Diagrams, mathematical notations, graphs, tables, numbers, maps, natural language, computer interfaces: inhabiting a world of symbolic representations is something that makes us uniquely human. Good representations are critical for understanding, problem solving, conceptual learning and discovery. Studying the nature of representations, such as diagrams, and how they impact upon thinking and learning is at the heart of studies in Cognitive Science. We can gain insights into the nature of human cognition by discovering why some representations are effective and others not.

This lecture will explore the properties and benefits of good representations and the problems caused by poor representations. Examples will be drawn from everyday life, the history of science, engineering, mathematics and science education, and computer interfaces. Novel diagrammatic representations that I have invented, to improve complex problem solving and enhance conceptual learning, will also be presented.

To RSVP, please contact Sue Hepburn on (01273) 678258, or email S.J.Hepburn@sussex.ac.uk

The professorial lectures are free and open to all.


Week 10: COGS Symposium: Art, Body, Embodiment