COGNITIVE SYSTEMS, Volume 5, Issue 2
Contents and Abstracts
R.P. Würtz,
Neural networks as a model for visual perception: What is lacking? 103-112
D. Gernert,
Cognitive aspects of very large knowledge-based systems 113-122
N. Hayes,
Some cognitive implications of human social evolution 123-135
V.A. French, E. Anderson, G. Putman, T. Alvager,
The Yerkes-Dodson law simulated with an artificial neural network 136-147
A. Greco and A. Cangelosi,
Language and the acquisition of implicit and explicit knowledge: A pilot study using neural networks 148-165
G. Grössing,
We have in mind - A review of The Journal of Mind and Behavior, Vol. 18, Nos. 2 & 3 (Spring and Summer 1997), a special issue on 'Advances in chaos theory, quantum theory, and consciousness in psychology' 166-172
G.J. Dalenoort,
Neural networks: The missing intermediate level - A review of The Journal of Mind and Behavior, Vol. 18, Nos. 2 & 3 (Spring and Summer 1997), a special issue on 'Advances in chaos theory, quantum theory, and consciousness in psychology' 173-196
COGNITIVE ASPECTS OF VERY LARGE KNOWLEDGE-BASED SYSTEMS
Dieter Gernert
Technische Universität München, Germany
Very large knowledge-based systems occur in various fields, both in science and in practical computer applications. This paper focuses on the cognitive aspects of the design and use of such systems: it is shown how working with the system can be made easier and more efficient. The report is based on practical experience with a specific system, but is written for independent reading. The experience laid down here may be useful for all who operate very large knowledge bases, as well as for system designers and experts in human-computer interfaces.
SOME COGNITIVE IMPLICATIONS OF HUMAN SOCIAL EVOLUTION
Nicky Hayes
The University of Surrey, U.K.
This paper explores some of the cognitive implications which emerge as a consequence of the evolution of human beings as social animals. Animal research allows us to identify four distinctive themes in cognitive evolution: adaptation, critical and sensitive periods, preparedness in learning, and levels of learning. It is argued that each of these themes is apparent in human cognitive development, and has shaped the nature of human cognition. Research into infant sociability reveals a predisposition to respond more strongly to input from other human beings than to other input, and research into social learning shows how this continues in later life. Vygotskyan theory argues that social aspects of language are ontologically distinct from its semantic aspects, and that language continues to be used for separate social and cognitive purposes throughout life. Social representations define and structure social experience so powerfully that human responses to experiences need have no logical connection with direct input; and identification with social groups colours and shapes human appraisal of stimuli and events.
It is argued that in modelling human cognitive architecture it is necessary to perceive it as occurring within a social matrix which is much more than a set of data inputs, since it shapes the way that incoming data are interpreted. It is also argued that this social matrix occurs as a direct consequence of the human evolution as a social animal.
THE YERKES-DODSON LAW SIMULATED WITH AN ARTIFICIAL NEURAL NETWORK
Valentina A. French(1), Eric Anderson(2), Gregory Putman(3) and Torsten Alvager(4)
(1) Department of Physics
(2) Department of Psychology
(3) Department of Physics
(4) Department of Physics and Department of Life Sciences
Indiana State University, USA
The Yerkes-Dodson law is simulated using a recirculation neural network. The network is trained to reproduce sequences of three-letter strings that vary in difficulty. Difficulty is varied by changing the percentage of random strings in a sequence from 8%, to 20%, to 50% random content. Arousal is simulated by manipulating the bias input, which is varied in increments of 0.1 from 0.1 to 1. The number of hidden units is also varied to examine the influence of the computational resources of the network. The performance of the network is inversely proportional to the reconstruction error. Results of the simulations clearly show that performance is an inverted U-shaped function of bias and that optimal performance decreases with increasing task difficulty. Thus, the Yerkes-Dodson effect is demonstrated. Simulations on networks with different numbers of hidden units show that the capacity of the network to learn the letter strings decreases with fewer hidden units. The results suggest an interpretation of the Yerkes-Dodson law in terms of data compression. The capacity of a network to learn difficult tasks breaks down at high levels of arousal because the computation of network weights associated with the arousal and other inputs strains computational resources. Easy tasks require less computation, and hence the network can perform well at higher levels of arousal.
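The difficulty manipulation described above (varying the fraction of random three-letter strings in a sequence) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the structured items and the one-hot encoding are hypothetical, and the recirculation network itself is not reproduced.

```python
import random
import string

def make_sequence(n_strings=50, pct_random=0.20, seed=0):
    """Build a sequence of three-letter strings in which a fraction
    pct_random is random (the harder material) and the rest cycle
    through a fixed set of structured items (easier to reconstruct)."""
    rng = random.Random(seed)
    pattern = ["cat", "dog", "sun"]  # hypothetical structured items
    seq = []
    for i in range(n_strings):
        if rng.random() < pct_random:
            seq.append("".join(rng.choice(string.ascii_lowercase)
                               for _ in range(3)))
        else:
            seq.append(pattern[i % len(pattern)])
    return seq

def one_hot(s):
    """Encode a three-letter string as a flattened 3 x 26 one-hot
    vector, a plausible input format for a reconstruction network."""
    vec = [0.0] * (3 * 26)
    for pos, ch in enumerate(s):
        vec[pos * 26 + ord(ch) - ord("a")] = 1.0
    return vec

# Three difficulty levels as in the abstract: 8%, 20% and 50% random.
easy, medium, hard = (make_sequence(pct_random=p) for p in (0.08, 0.20, 0.50))
```

Reconstruction error on such sequences would then be measured while sweeping the bias input from 0.1 to 1, yielding one performance curve per difficulty level.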
LANGUAGE AND THE ACQUISITION OF IMPLICIT AND EXPLICIT KNOWLEDGE: A PILOT STUDY USING NEURAL NETWORKS
Alberto Greco(1) and Angelo Cangelosi(2)
(1) University of Genoa, Italy
(2) University of Plymouth, UK
In experimental psychology there is ample evidence that language supports thinking. How this support works, however, is still not clear. One hypothesis is that categorization is easier when linguistic labels are available, because implicitly detected similarities and rules can be made explicit. We test this hypothesis using a neural-network simulation.
Language is not an ordinary sensory input, but acts as a comment on the world (Parisi, 1994). When linguistic labels are systematically coupled with objects, either of the two inputs can elicit a single response (e.g. articulating a name). In real situations labels can be names for the objects or may denote specific features or functions of them.
We constructed a neural-network simulation which learned to label a small set of stimuli in three input conditions (visual features, label, label + visual features), classifying them according to colour, category, and object name. Network internal representations were analyzed using cluster analysis in order to show the influence of linguistic cues on categorization. A single object was represented very similarly across the three input conditions, but had different representations in the label + features condition, depending on the label. These results support evidence for the mediating role of linguistic labels. Future lines of development and model improvements are discussed.
In order to test how the representational redescription hypothesis (Clark & Karmiloff-Smith, 1993) can be implemented to make previously acquired knowledge explicit, we augmented the model, requiring the network to use the already acquired knowledge to extract the explicit semantic structure of the stimulus set for each of the three subtasks. The hidden-unit layer was connected to a new module with three clusters of output units, and the new connection weights were trained with the competitive-learning algorithm.
The results show that the network is able to exploit previously acquired knowledge and to make the semantic structure of the stimuli explicit using the hidden (implicit) representation. This structure corresponds to that obtained by cluster analysis in the previous research. This method can be considered a first step in testing the representational redescription hypothesis; it will require further exploration and testing of more complete models. Some related issues are discussed, such as the controversial need for hybrid models.
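The competitive-learning step described above can be illustrated with a minimal winner-take-all sketch. This is a toy reconstruction, not the authors' model: the "hidden representations" here are synthetic two-dimensional clusters standing in for the implicit codes of two stimulus classes, and the update rule is plain winner-take-all competitive learning.

```python
import numpy as np

def competitive_learning(data, n_units=2, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: for each input pattern,
    only the unit whose weight vector lies closest moves toward it,
    so units come to stand for clusters in the data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])
    return w

# Synthetic stand-ins for hidden-layer representations: two tight
# clusters, as if two stimulus classes had distinct implicit codes.
rng = np.random.default_rng(1)
cluster_a = np.array([-1.0, -1.0]) + 0.05 * rng.normal(size=(20, 2))
cluster_b = np.array([1.0, 1.0]) + 0.05 * rng.normal(size=(20, 2))
data = np.vstack([cluster_a, cluster_b])

weights = competitive_learning(data)
```

After training, each output unit's weight vector sits near one cluster centre: the implicit cluster structure has been made explicit as discrete unit responses, which is the sense in which such a module can "redescribe" hidden-layer knowledge.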
WE HAVE IN MIND
ON CHAOS AND DOUBTS ABOUT THE QUANTUM WORLD
Gerhard Grössing
Austrian Institute for Nonlinear Studies
Review I of The Journal of Mind and Behavior, Vol. 18, Nos. 2 + 3 (Spring and Summer 1997), Understanding Tomorrow's Mind: Advances in Chaos Theory, Quantum Theory, and Consciousness in Psychology; a special issue edited by Larry Vandervert.
NEURAL NETWORKS: THE MISSING INTERMEDIATE LEVEL
G.J. Dalenoort
University of Groningen
Review II of The Journal of Mind and Behavior, Vol. 18, Nos. 2 + 3 (Spring and Summer 1997), Understanding Tomorrow's Mind: Advances in Chaos Theory, Quantum Theory, and Consciousness in Psychology; a special issue edited by Larry Vandervert.
ESSCS | European Society for the Study of Cognitive Systems |