Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography

Juyang Weng and Matthew D. Luciw
Department of Computer Science and Engineering
Michigan State University
East Lansing, MI 48824 USA

Abstract— Cortical self-organization during open-ended development is a core issue for perceptual development. Traditionally, unsupervised learning and supervised learning are two different types of learning conducted by different networks. However, there is no evidence that the biological nervous system handles them in such a disjoint way. The computational model presented here integrates both types of learning using a new biologically inspired network whose learning is in-place. By in-place learning, we mean that each neuron in the network learns on its own while interacting with other neurons; there is no need for a separate learning network. We present in this paper the Multi-layer In-place Learning Network (MILN) for regression and classification. This work concentrates on its two-layer version for global pattern detection (without incorporating an attention selection mechanism) and reports properties concerning limited cells, sparse coding, and cortical topography. The network enables both unsupervised and supervised learning to occur concurrently. Within each layer, the adaptation of each neuron is nearly optimal in the sense of the least possible estimation error given the observations. Experimental results are presented to show the effects of the properties investigated.

Index Terms— Biological cortical learning, statistical efficiency, minimum error, self-organization, incremental learning

I. INTRODUCTION

What are the possible mechanisms that lead to the emergence of orientation cells in V1? Since V1 takes input from the retina, the LGN, and other cortical areas, the question points to the developmental mechanisms behind the formation and adaptation of the multi-layer pathways of visual processing.

Well-known unsupervised learning algorithms include the Self-Organizing Map (SOM), vector quantization, Principal Component Analysis (PCA), Independent Component Analysis (ICA), Isomap, and Non-negative Matrix Factorization (NMF). Only a few of these algorithms have been expressed in in-place versions (e.g., SOM and PCA [11]).

Supervised learning networks include feed-forward networks with back-propagation learning, radial basis function networks with iterative model fitting (based on gradient or similar principles), the Cascade-Correlation Learning Architecture [2], support vector machines (SVM), and Hierarchical Discriminant Regression (HDR) [3].

However, it is not convincing that biological networks use two different types of networks for unsupervised and supervised learning, which occur in an intertwined way during development. When a child learns to draw, a parent may hold the child's hand during some periods to guide the hand movement (i.e., supervised), but leave the child to practice alone during other periods (i.e., unsupervised). Does the brain switch between two totally different networks, one for supervised moments and the other for unsupervised moments? The answer to this type of question is not clear at the current stage of knowledge. However, there is evidence that the cortex has widespread projections both bottom-up and top-down [8] (pages 99-103). For example, cells in layer 6 of V1 project back to the lateral geniculate nucleus [5] (page 533). Can projections from later cortical areas be used as supervision signals?

Currently there is a lack of biologically inspired networks that integrate these two learning modes within a single learning network. The network model proposed here enables unsupervised and supervised learning to take place at the same time throughout the network.

One of the major advantages of supervised learning is the development of certain invariant representations. Some networks have built-in (programmed-in) invariance, whether spatial, temporal, or of some other signal property. Other networks have no built-in invariance; the required global invariance must then be learned object by object. Such networks cannot share the invariance of subparts (i.e., locally invariant features) across different objects. Consequently, the number of samples needed to reach the desired global invariance in object recognition is very large.

This paper proposes a new, general-purpose, multi-layer network that learns invariance from experience. The network is biologically inspired and has multiple layers; later layers take the responses of earlier layers as their input. This work concentrates on two layers. The network accepts supervision through two types of projections: (a) supervision from the succeeding layer; (b) supervision from other cortical regions (e.g., attention selection signals). The network self-organizes using unsupervised signals (input data) from the bottom up and supervised signals (motor signals, attention selection, etc.) from the top down.

From a mathematical point of view, in each layer of the network, unsupervised learning lets the nodes (neurons) generate a self-organized map that approximates the statistical distribution of the bottom-up signals (the input vector space), while supervised learning adjusts the node density in that map so that areas of the input space that are unrelated (or weakly related) to the layer's output receive no (or fewer) nodes. More nodes in each layer therefore respond to output-relevant input components. This property leads to increasing invariance from one layer to the next in a multi-layer network; finally, global invariance emerges at the last (motor) layer.
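The density-adjustment mechanism above can be illustrated with a minimal sketch. This is an assumption for illustration, not the update rule the paper derives later: it realizes the supervision effect by appending a scaled top-down signal z to the bottom-up input x before winner selection, so that node density is pulled toward output-relevant regions. The class name SupervisedMapSketch, the weighting parameter alpha, and the single-winner update are all illustrative choices.

```python
import numpy as np

class SupervisedMapSketch:
    """Minimal sketch (assumed, not the paper's rule): a layer whose node
    density is biased by a top-down signal appended to the bottom-up input."""

    def __init__(self, n_nodes, x_dim, z_dim, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.alpha = alpha  # relative weight of the top-down part
        # One weight vector per node, over the joint (bottom-up + top-down) space.
        self.w = rng.normal(size=(n_nodes, x_dim + z_dim))

    def update(self, x, z, lr=0.05):
        """One step: each node adapts from its own weights and the shared
        competition result; no separate learning network is involved."""
        v = np.concatenate([x, self.alpha * z])   # joint input vector
        winner = np.argmin(np.linalg.norm(self.w - v, axis=1))
        # Only the winner moves (a one-node neighborhood, for brevity), so
        # nodes accumulate where inputs co-occur with output-relevant
        # top-down activity.
        self.w[winner] += lr * (v - self.w[winner])
        return winner
```

With alpha = 0 the sketch degenerates to purely unsupervised vector quantization of the input distribution; increasing alpha pulls nodes toward input regions that matter for the output, which is the node-density effect described in the preceding paragraph.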
Furthermore, we require in-place learning. By in-place learning, we mean that the signal-processing network itself handles its own adaptation through its own internal physiological mechanisms and its interactions with other connected networks; thus, there is no need for an extra network that accomplishes the learning (adaptation). It is apparent that designing an in-place learning, biologically inspired network that integrates both unsupervised and supervised learning is not trivial.

In what follows, we first present the network structure in Section II. Then, in Section III, we explain the in-place learning mechanism within each layer. Experimental examples that demonstrate the effects of the discussed principles are presented in Section IV. Section V provides some concluding remarks.

II. THE MULTI-LAYER IN-PLACE LEARNING NETWORK

This section presents the architecture of the new Multi-layer In-place Learning Network.
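The remainder of Section II falls outside this preview, so the in-place requirement can only be illustrated rather than quoted. The sketch below assumes a Hebbian-like incremental average with a 1/n learning-rate schedule; the function name and the schedule are hypothetical stand-ins, not the update rule the paper develops in Section III.

```python
import numpy as np

def in_place_neuron_update(w, n, x, y):
    """Hypothetical in-place update: the neuron adapts its weight vector w
    using only quantities local to itself -- its input x, its own
    (post-competition) response y, and its firing count n."""
    n += 1
    lr = 1.0 / n                        # plasticity decays as the neuron matures
    w = (1.0 - lr) * w + lr * y * x     # Hebbian-like incremental average
    return w, n
```

Because each call touches only the neuron's own state, a layer reduces to lateral competition followed by one such update per responding neuron; that locality is what makes a scheme in-place, since no separate network exists whose job is to adapt this one.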