Biol. Cybernetics 36, 193-202 (1980)
Biological Cybernetics
© by Springer-Verlag 1980

Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position

Kunihiko Fukushima
NHK Broadcasting Science Research Laboratories, Kinuta, Setagaya, Tokyo, Japan

Abstract. A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, without being affected by their positions. This network is given the nickname "neocognitron". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of "S-cells", which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of "C-cells", similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: no "teacher" is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has come to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all, nor is it affected by a small change in the shape or size of the stimulus pattern.

1. Introduction

The mechanism of pattern recognition in the brain is little known, and it seems almost impossible to reveal it by conventional physiological experiments alone. So we take a slightly different approach to this problem: if we could make a neural network model which has the same capability for pattern recognition as a human being, it would give us a powerful clue to understanding the neural mechanism in the brain. In this paper, we discuss how to synthesize a neural network model in order to endow it with an ability of pattern recognition like that of a human being.

Several models have been proposed with this intention (Rosenblatt, 1962; Kabrisky, 1966; Giebel, 1971; Fukushima, 1975). The response of most of these models, however, was severely affected by a shift in position and/or by distortion in shape of the input patterns, so their ability for pattern recognition was not very high. In this paper, we propose an improved neural network model. The structure of this network has been suggested by that of the visual nervous system of the vertebrate. The network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, without being affected by their position or by small distortions of their shape.
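To make the module cascade described in the Abstract concrete, the following is a minimal structural sketch (Python/NumPy) of a single S-layer/C-layer module: S-cells match their local receptive field against modifiable templates, and C-cells pool the responses of S-cells of the same feature plane to gain tolerance to shifts. The thresholded correlation used for the S-cells, the max-pooling used for the C-cells, and all sizes are illustrative assumptions, not the paper's actual cell equations.

```python
# Illustrative sketch only: a single S-layer / C-layer module.
# The operations and sizes are assumptions, not Fukushima's equations.
import numpy as np

def s_layer(inp, templates, threshold=0.5):
    """S-cells: each cell correlates its local receptive field with a
    learned template (the modifiable afferent synapses) and responds
    only when the match exceeds a threshold."""
    k = templates.shape[-1]                      # receptive-field size
    h = inp.shape[0] - k + 1
    out = np.zeros((len(templates), h, h))
    for t, tmpl in enumerate(templates):
        for i in range(h):
            for j in range(h):
                resp = np.sum(inp[i:i + k, j:j + k] * tmpl)
                out[t, i, j] = max(resp - threshold, 0.0)
    return out

def c_layer(s_out, pool=2):
    """C-cells: each cell pools S-cells of one feature plane over a small
    neighbourhood, making the response tolerant to shifts in position."""
    n, h, _ = s_out.shape
    hh = h // pool
    out = np.zeros((n, hh, hh))
    for t in range(n):
        for i in range(hh):
            for j in range(hh):
                out[t, i, j] = s_out[t, i * pool:(i + 1) * pool,
                                        j * pool:(j + 1) * pool].max()
    return out

# One module = S-layer followed by C-layer; the full network cascades several
# such modules, so cells in higher stages see larger effective receptive fields.
pattern = np.zeros((8, 8)); pattern[2:6, 3] = 1.0    # toy "stimulus pattern"
templates = np.random.rand(4, 3, 3)                  # stand-in for learned synapses
module_output = c_layer(s_layer(pattern, templates))
print(module_output.shape)                           # (4, 3, 3)
```

In this sketch the self-organization step is omitted entirely; the templates are random placeholders, whereas in the paper the afferent synapses of the S-cells are strengthened through repeated unsupervised presentation of the stimulus set.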
This network is given the nickname "neocognitron"¹, because it is a further extension of the "cognitron", which is also a self-organizing multilayered neural network model proposed earlier by the author (Fukushima, 1975). Incidentally, the conventional cognitron also had an ability to recognize patterns, but its response depended on the position of the stimulus patterns: the same pattern presented at different positions was taken as a different pattern by the conventional cognitron. In the neocognitron proposed here, however, the response of the network is little affected by the position of the stimulus patterns.

¹ A preliminary report of the neocognitron has already appeared elsewhere (Fukushima, 1979a, b).

The neocognitron has a multilayered structure, too. It also has an ability of unsupervised learning: no "teacher" is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. After completion of self-organization, the network acquires a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel (1962, 1965).

According to the hierarchy model of Hubel and Wiesel, the neural network in the visual cortex has a hierarchical structure: LGB (lateral geniculate body) → simple cells → complex cells → lower order hypercomplex cells → higher order hypercomplex cells. It is also suggested that the neural network between lower order hypercomplex cells and higher order hypercomplex cells has a structure similar to the network between simple cells and complex cells. In this hierarchy, a cell in a higher stage generally tends to respond selectively to a more complicated feature of the stimulus pattern and, at the same time, has a larger receptive field and is more insensitive to a shift in position of the stimulus pattern.

It is true that the hierarchy model of Hubel and Wiesel does not hold in its original form. In fact, there are several experimental data contradictory to the hierarchy model, such as monosynaptic connections from the LGB to complex cells. This does not, however, completely invalidate the hierarchy model, if we consider that it represents only the main stream of information flow in the visual system. Hence, a structure similar to the hierarchy model is introduced in our model.

Hubel and Wiesel do not say what kind of cells exist in the stages higher than hypercomplex cells. Some cells in the inferotemporal cortex (i.e., one of the association areas) of the monkey, however, are reported to respond selectively to

