ESSCS: 20th ANNUAL WORKSHOP, 29-31 August 2003
Freiburg im Breisgau, Germany

The 20th Annual Workshop will take place in Freiburg im Breisgau, Germany
from Friday 29 August (10 a.m.) to Sunday 31 August afternoon
(depending on the number of papers).

There will be a welcome party on Thursday 28 August, from 18.00 hours (6.00 p.m.).
It will be possible to stay from Thursday 28 August to Monday 1 September.


Provisional set of abstracts of papers to be presented.
Abstracts in alphabetical order.


Robin Allott, Seaford, UK

Language is a skilled activity. In the development and acquisition of
the skill, imitation may play different roles. Imitation in language may
be related to and throw light on the role and functioning of imitation
in other areas including imitation in robotics. What part does imitation
play in the child's acquisition of its mother language? What role did
imitation play in the evolutionary origin and diversification of
language? How much has imitation to do with the sources of the words we
use and the ways those words are put together? These questions can be
considered at different levels, the surface forms of language and
speech, the underlying systematicities of language and speech, the problem
of speech at the articulatory level and beyond or beneath that the
problem of the functioning of imitation at the neural level. Imitation
of any kind involves a relation between motor and perceptual
functioning, between the motor system of the brain and the visual and
other sensory systems. Language and speech also require interaction and
coordination between motor activity and perceptual activity. The role
and functioning of imitation in language and speech are subjects of
study in many different disciplines, not only linguistics proper but
also child development, neurology, evolutionary theory, social
psychology. A central idea in this paper is a new emphasis on the
bodily basis of language in relation to imitated speech and gesture, and
more specifically on cerebral motor organisation as providing a possible
new approach to the symbol-grounding problem.

Introspection and Communications in a Symbolic Interactionist Framework
Iain D. Craig, Univ. of Northampton, UK

Communicative acts, with the exception of subliminal and chemical ones,
are produced in order to achieve some end or goal. In order to
maximise their effectiveness, communicative acts must have the right
context, be articulated in the right form and at the right time to the
right audience. In this paper, we will argue that the Symbolic
Interactionist model, with its notions of Self, Other and situation
provides an interesting framework within which to study communicative
acts. As part of this, we will argue that some measure of
introspection is required in order to produce the right content at the
right time and to the right people.

Event-Based Introspection and Communication
Iain D. Craig, Univ. of Northampton, UK

Over recent years, we have been developing a new introspective
architecture that is based upon events or broadcast messages, rather
than on procedure calls. The architecture is interaction-oriented in
the sense of Milner and can naturally support oracular computation, as
required by the impasse-resolution mechanisms of architectures like
SOAR; it can also engage in planning its own activities. As an
interactive architecture, the system naturally supports rich forms of
communication with outside entities. In this talk, we describe the
architecture and explore a few of its properties. We also outline how
its introspective features can be used in communications with
external environments.

An uncertainty relation between time and progress in evolutionary systems
J. Becker, Universitaet der Bundeswehr, Muenchen, Germany

For various systems an uncertainty relation between the time
horizon and the temporal change of information has been established.
We prove such a relation between the present ("Gegenwartszeitraum", the
span of the present) and the learning speed (measured as the time
derivative of pragmatic information)
for populations of evolutionary systems. The proof relies on the
Cramér-Rao inequalities. We shall also discuss possible interpretations of
these results.
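For reference, the classical Cramér-Rao bound that such proofs typically draw on can be stated as follows (standard notation; the authors' specific formulation for evolutionary systems may differ):

```latex
\operatorname{Var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(
\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{2}\right]
```

Here \(\hat{\theta}\) is any unbiased estimator of the parameter \(\theta\) and \(I(\theta)\) is the Fisher information of the observation model \(f(X;\theta)\); a bound of this shape is what links an observation window to the maximal rate of information change.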

Correspondences between mental events and neural processes
G.J. Dalenoort, Dept. of Psychology (E&A), Univ. of Groningen, The Netherlands

There are overwhelming arguments that we cannot scientifically
reduce mental events to neural events, whatever such a reduction might mean.
The question is relevant because many influential scientists, in
philosophy as well as biological psychology, are still firmly
convinced that such a reduction is possible. The question
then arises what 'scientific reduction' means, since subjective
feelings and neural processes belong to different categories.
Instead of the traditional form of scientific reduction, a more
modest form will be described, that only attempts to state the
necessary and sufficient conditions that must be satisfied at the
neural level, in order that a certain mental event can occur. This
form can be considered as one of a complementary pair, the other of
which is functional or goal-directed explanation. This view also
allows us to get rid of the strong distinction between so-called emergent
and "ordinary" properties.
A more general formulation makes use of the concept of correspondences,
which has strong roots in the physics of thermodynamics, quantum
mechanics and the theory of relativity. It allows us to eliminate many of
the inconsistencies and paradoxes of what is usually called the
mind-body problem; it keeps the apparent distinction between mental
events and brain events.

Design for Intelligent Access to Cultural Heritage Information
Geert de Haan, Maastricht University, The Netherlands

This contribution describes a novel way to access cultural heritage
information. The approach followed in the EC IST project "I-Mass" utilises the
digitisation of cultural heritage objects in combination with metadata
descriptions generated by experts at museums and libraries to create a digital
reference room; much like the reference rooms in big institutions like the
British Library.
The first objective is to create the means to solve the problems of syntactic
and semantic interoperability of metadata standards and to reach
interoperability of content. With interoperability of content it is possible to
provide access to cultural-heritage objects in a uniform way, irrespective of
the particular storage forms, storage formats and description standards. In
addition to the direct advantages of interoperability of content, a uniform way
to access information also enables software agents to distinguish between
information that is related and not-related to the objects at stake; in this
way it is possible to provide information in context so that, for instance, a
painting is accompanied by information about the painter, the techniques used,
etc. In addition, the I-Mass project aims to provide end-users of different
backgrounds with uniform and universal access to a wide variety of cultural
heritage information by means of an adaptable and adaptive user interface which
allows for access to contextualised information according to different
perspectives, levels and dimensions. With a user interface that is adaptable to
the specific characteristics of the user, it is possible to adapt the content
and the form of the information to what is most appropriate to the particular
user. The presentation and the paper will further focus on the user-centred
scenario-based design approach that was used, on the methods and the results of
user-scenario analysis and on the design and results of the usability
evaluation.
Keywords: Cultural heritage, metadata, user interfaces for all,
user interface design, syntactic interoperability, semantic interoperability,
multimedia, information contextualisation, adaptivity and adaptability,
virtual reference rooms.

McLuhan Institute, Maastricht University
P.O.Box 616, 6200 MD Maastricht, The Netherlands
Tel. +31-43-3882714

Dual Trace and Dual Stable State Memory
Christian R. Huyck, London

Cell Assemblies (CAs) are a description of human memory at a level above the
neural level. It is well known that CAs exhibit a dual trace mechanism that
gives an explanation of both short-term and long-term memory.
A CA becomes active when some of its neurons fire and lead to a cascade of
firing of neurons in the CA. At any given time a net with CAs will only have
a small number of active CAs and these represent short-term memories. A CA
is formed via a Hebbian learning mechanism that changes synaptic weight and
this represents a long-term memory.
Each of these processes, CA ignition and CA learning, represents a type of
stable state. Short-term memory is a stable state because initial
activation leads to the neurons in the CA firing. These neurons then cause
reverberating firing among the neurons in the CA that can persist
practically indefinitely. In an untrained network, there are typically no
CAs; that is, activation does not persist. During training, items are
presented to the network, and neurons that co-fire have the weight of their
synaptic connections increased. Eventually, this leads to neurons causing
other neurons in the CA to fire. Once this reverberation occurs, the
neurons will co-fire even more frequently, leading to ever-increasing
synaptic strength. Once this CA has been learned, it is very hard to reduce
the synaptic strength between neurons in the CA. That is, the system has
arrived at a stable state of synaptic strength.
Our current simulation work focuses on a fatiguing leaky integrator neural
model. For short-term memories a pattern is presented to the trained
network by activating neurons. Neurons then interact and settle into a
stable state of neurons being active and this relates to issues of ambiguous
stimuli. We have constructed networks that can represent hierarchical
concepts and a range of other concepts. This work has drawn on the stable
state work of Hopfield networks. Our CA networks are more biologically
plausible and more complex than Hopfield nets, but they are quite similar.
One difference is that the network does not settle into an entirely stable
state. After repeated firing, neurons fatigue and are then unable to fire
for several cycles. Still, a large portion of neurons in the CA fire, and
it is easy to determine which CAs are active. The understanding of the
short-term stable state of one of our networks is relatively advanced.
The long-term stable state is more complex. If a simple Hebbian learning
rule is used and there is no spread of activation, it is clear that synaptic
weights should approximate the rate at which the pre- and
post-synaptic neurons co-fire. That is, if neuron i fires 50% of the time
when neuron j is firing, the synaptic weight from i to j should be .5
assuming a pre-not-post anti-Hebbian learning rule. If the external
activation is understood, it should be easy to calculate the synaptic
weights of such a system.
Unfortunately, this clarity is lost when either spread of activation is
introduced or a more complex Hebbian learning rule is introduced. For some
presentation patterns, it is clear what the desired state should be, and
that some states are poor. For instance, if a square network is presented
with patterns of neurons that are either from the top or from the bottom,
but never from both, then the stable states should be the top neurons, and
the bottom neurons. One poor stable state would be that presenting a
pattern from the top fired all of the neurons, and another poor stable state
would be that no neurons fired.
What more can be said about the eventual long-term stable
state? Moreover, how can these short- and long-term stable
states address the capacity and noise tolerance of a given net?
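The weight dynamics described above can be illustrated with a toy sketch (not the authors' actual simulation; the function name, learning rate and firing probabilities are assumptions): under a simple Hebbian rule with pre-not-post anti-Hebbian decay and no spread of activation, the synaptic weight converges to the co-firing probability.

```python
import random

def train_weight(co_fire_prob, steps=20000, rate=0.01, seed=1):
    """Toy Hebbian / pre-not-post anti-Hebbian rule: whenever the
    pre-synaptic neuron fires, the weight moves toward 1 if the
    post-synaptic neuron co-fired, and decays toward 0 if it did not.
    The weight therefore converges to the co-firing probability."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        if rng.random() < 0.5:                  # pre-synaptic neuron fires
            post = rng.random() < co_fire_prob  # post neuron co-fires?
            if post:
                w += rate * (1.0 - w)           # Hebbian increase
            else:
                w -= rate * w                   # anti-Hebbian decrease
    return w

# If the post neuron co-fires on half of the pre neuron's spikes,
# the weight settles near 0.5, as the abstract describes.
print(train_weight(0.5))
```

This convergence to the co-firing rate is exactly the "stable state of synaptic strength" described above; as the abstract notes, the picture becomes far less tractable once spread of activation or richer Hebbian rules are introduced.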

Comparative Cognitive Robotics: A methodology to develop empirical
autonomous agent models of animal learning
Roul Sebastian John, University of Osnabrueck
Christian W. Werner, Heinrich-Heine University of Duesseldorf

In recent years, there has been a growing interest among artificial
intelligence researchers in the construction of so-called autonomous
agents. Autonomous agents should be able to operate in largely unknown
and unstructured dynamical environments without any need for human
intervention. Previous techniques in AI were found to be inappropriate
in the construction of such agents. This led to a rethinking of the
underlying metaphors, and to the emergence of alternative perspectives
on intelligent behavior. While the importance of this extended view on
cognition developed in autonomous agent research is now widely
recognized in cognitive science, autonomous agents themselves have
received little attention as possible models of cognitive systems.
This might be due to the fact that most of the autonomous agents that
were developed until now are related to natural agents (animals,
humans) only in a very broad, metaphorical sense. Autonomous agents
are still rarely used as empirical models which aim at reproducing
empirical data gathered from animals or humans. We propose a new
methodology, called comparative cognitive robotics, which makes it
possible to use autonomous agents as empirical models in comparative psychology
and cognitive science. As an application of our new methodology, we
developed an autonomous agent model of visual discrimination learning
in chickens (Gallus gallus fd). Chickens were tested in experiments
inspired by human psychology. The main focus of our empirical work
with the animals and of the robot model is to demonstrate that a
number of learning phenomena which were originally believed to
indicate analytical stimulus processing and feature-selective
attention can indeed be explained by a much simpler, exemplar-based
learning mechanism. To evaluate our autonomous agent model as an
empirical model, the robot and the animals were tested in exactly the
same experiments, within the same experimental environment, and the
same methods of measurement and data analysis were applied to both.
This way, we do not need to resort to the abstracted descriptions of an
observer to generate input to our model and to interpret its output.
We discuss this and other advantages of using autonomous agents as
empirical models in comparative psychology and cognitive science.

Roul Sebastian John, University of Osnabrueck,
Institute of Cognitive Science, Katharinenstr. 24,
D-49069 Osnabrueck, Germany

Christian W. Werner, Heinrich-Heine University of Duesseldorf,
C. and O. Vogt Institute for Brain Research, Universitaetsstr. 1,
D-40225 Duesseldorf, Germany

Narrow and wide content in Marr's theory of vision
Basileios Kroustallis, Corfu, Greece

David Marr's (1982) theory of computational vision has been generally
regarded as the paradigm of interdisciplinary research in cognitive science. At
the same time, regarding the meaning of visual representations, it is not clear
whether the processes he advocates provide a determining role for the specific
physical environment (wide content), or representations can only be thought of
as agent abstraction over distal environmental properties (narrow content).
It seems that Marr's environmental restrictions on computational processes,
although causally relevant, do not constitute the meaning of each visual stage.
On the other hand, narrow content is a construction that does not explain the
agent interaction with the visual environment; as a consequence, visual
computational processes may be only syntactic processes, without any relevance
to intentional visual performance (Egan, 1996).
The ascription of content to visual states so far proceeds on the assumptions
that the environment is stable throughout computational theory, and that all
distinct visual stages can be characterized in the same way, either narrow or
wide. However, although Marr (1982) does not discriminate between different
physical environments (Segal, 1989), he individuates states according to
different physical phenomena in the same environment. Constraining assumptions
at successive visual stages limit possible environmental options by selecting
exactly those computational processes that match the criteria. The hypothesis
of geometrical origin imposes a boundary to differently grouped areas; this
assumption selects, among other options, only series of elements with an
extremely regular geometrical structure. But not every processing stage is
immediately determined by a restraining assumption. Narrow and wide content are
distinguished by the mediate or immediate influence of a natural constraint in
terms of processing order. Prior to boundary formation, element-grouping
processes are not environmentally restrained; nevertheless, they are based on a
previous processing stage, which is immediately restrained under a different
natural constraint. This alternation permeates Marr's theory of vision, from
the detection of zero-crossings up to the construction of the 2.5-D sketch.
Egan, Frances (1996). Intentionality and the theory of vision. In Kathleen
Akins (Ed.), Perception. New York: Oxford University Press.
Marr, David (1982). Vision. San Francisco: W.H. Freeman.
Segal, Gabriel (1989). Seeing what is not there. Philosophical Review,
98, 189-214.

Theoretical Corner-Stones and Applications of
Socio-Constructivism in Virtual Learning
Pirkko Hyvönen, University of Lapland, Finland
Jaana Lahti, University of Helsinki, Finland
Esko Marjomaa and Markku Tukiainen, University of Joensuu

Socio-constructivism can be defined as an approach according to which
interpersonal knowledge can only be achieved by social construction of it.
Especially relevant in this respect are the communication processes
occurring in situations where at least two persons are trying to solve
a problem. In order to clarify the theoretical basis for socio-constructivism,
especially concerning virtual environments, we introduce four important
background factors affecting knowledge creation: (i) communication
infrastructures, (ii) mirror neurons, (iii) intentionality, and (iv) social
awareness. We also state testable hypotheses for these theoretical
cornerstones.
Keywords: virtual collaboration, mental infrastructures, augmented
cognition, imitation neurons, intentionality

Dynamic Image Generating Semantics: Understanding reference in
discourse as a perception-based processing of natural language
texts by SCIPS
Burghard B. Rieger, Computational Linguistics, University of Trier, Germany

Inspired by information systems theory, Semiotic Cognitive Information
Processing (SCIP) is grounded in (natural/artificial) system-environment
situations. SCIP systems' knowledge-based processing of information makes them
cognitive; their sign and symbol generation, manipulation, and understanding
capabilities render them semiotic. Based upon structures whose representational
status is not a presupposition to, but a result from recursive processing, SCIP
algorithms initiate and modify the structure they are operating on to realize
(rather than simulate) language understanding by machine as a process of meaning
constitution. Thus, the symbolic (de)composition of propositional structures in
traditional semantics is complemented by SCIP, which models learning and
understanding dynamically by visualizing what is understood in a perception-
based, sub-symbolic, multi-resolutional way of processing natural language
discourse. An experimental 2-dim scenario with stationary object locations
described relative to a mobile agent's varying positions will allow us to
demonstrate the SCIP systems' performance and to test it against human natural
language understanding in a controlled way.
FB II: Computational Linguistics University of Trier, Germany

Encoding and decoding: one or two?
(Utterances in language)
Pieter A.M. Seuren, Max Planck Institut fuer Psycholinguistik, Nijmegen, The Netherlands

The relation between the procedures for encoding and decoding of
utterances is still unclear. In my view, encoding takes place through
a fully automatic grammar module, which 'prints' a semantic input in
the form of a syntactic output, whereas decoding is essentially an
input-constrained process of reconstruction-by-hypothesis (analysis-by-
synthesis), based not only on morpho-syntactic clues but also on general,
scenario and contextual knowledge, in ways that still escape algorithmic
(modular) modelling. My reasons for thinking so are:
1. Both in encoding and in decoding, conscious control (monitoring) is
always output monitoring, not input monitoring. This indicates a
top-down flow in the grammar module.
2. Multiple possible analyses are normally resolved immediately. This
suggests an early identification of key lexical items, which enables
an automatic activation of the grammar module, which 'prints' the
surface structure.
3. Real-life comprehension is often based on defective input or
on insufficient knowledge of the language in question on the part of
the listener. This makes it unlikely that decoding is an automatic
algorithmic procedure. Reconstruction-by-hypothesis seems more plausible.
This is also the view of Townsend & Bever (2001). Yet these authors
regard their model as consistent with minimalist syntax (Chomsky 1995),
which, in their view, comes out as 'a rather compelling model'
(p. 178-9). I argue that the opposite is true. Their model of decoding
agrees much more naturally with mediational forms of grammar, in which
a grammar is seen as a module taking a semantic input and delivering
a surface structure output. I will present detailed diagrams of what
I see as the systems for encoding and decoding of utterances.

The biology of semantics
P.H. de Vries, Department of Psychology, University of Groningen, NL.

The primary concepts on which our semantics is based must
have their origin in early individual development. One of these
concepts is the notion of an object. Characteristic of this
concept is the so-called A-not-B error. The error occurs in six-
to eight-month-old infants who persist in reaching for location A,
where an object was hidden several times, even though the
correct location is B, where the object is actually hidden. In
order to explain the error a conceptual network is described as
a biologically plausible model of human cognition and in which
binding plays a crucial role. According to the model, infants make
the error because binding processes have not yet developed
sufficiently in the network.
The binding processes involved will be generalized to explain
neuropsychological syndromes in which patients can describe
the semantics of an object in language but fail to use it in the
proper manner or in which they do use it properly but fail to
report semantic properties of the object. The logistics of neural
processes in a conceptual network will be described on the
basis of computer simulations.

Dr. P.H. de Vries, Dept. of Psychology (E&A), Univ. of Groningen
Grote Kruisstraat 2/1, 9712 TS Groningen NL
Tel. +31 50 3636454 (6472); Fax. +31 50 3636304

Opportunities for Cognitive Science Research in Europe - An Update
Leopold A. Ziegler, Co-ordinator - Cognitive Science Research Initiative
Ministry for Education, Science, and Culture, Vienna, Austria

In the first part of this talk a new activity area of the
European Commission within the 6th Framework Programme regarding
New and Emerging Science and Technology (NEST) will be presented.
NEST is intended to provide a flexible policy instrument for
funding projects in emerging research fields.
In particular, research topics proposed under this scheme will
not be restricted to existing priority themes. Rather, projects
will be favoured which are characterised by significantly challenging
objectives, a potential for high impact results, and the employment
of truly innovative approaches. The chances for cognitive science
research projects offered by NEST will be critically examined.
The second part of the presentation will focus on existing and
planned initiatives for cognitive science research in individual
EU member countries. As an example first results of the current
Austrian initiative will be presented. Finally, the question of
fine-tuning national and European research funding will be
addressed from the perspective of the cognitive science community.

Dr. Leopold A. Ziegler
Rosengasse 4
A-1014 Wien
Phone: +431 53120 5532
Fax: +431 53120 7109

ESSCS - European Society for the Study of Cognitive Systems