UT Dallas CS 6375 - Neural Network Basics


Source: http://www.webpages.ttu.edu/dleverin/neural_network/neural_networks.html (retrieved 10/14/2014)

A Basic Introduction to Feedforward Backpropagation Neural Networks
David Leverington, Associate Professor of Geosciences

The Feedforward Backpropagation Neural Network Algorithm

Although the long-term goal of the neural-network community remains the design of autonomous machine intelligence, the main modern application of artificial neural networks is in the field of pattern recognition (e.g., Joshi et al., 1997). In the subfield of data classification, neural-network methods have been found to be useful alternatives to statistical techniques such as those which involve regression analysis or probability density estimation (e.g., Holmström et al., 1997). The potential utility of neural networks in the classification of multisource satellite-imagery databases has been recognized for well over a decade, and today neural networks are an established tool in the field of remote sensing.

The most widely applied neural network algorithm in image classification remains the feedforward backpropagation algorithm. This web page is devoted to explaining the basic nature of this classification routine.

1 Neural Network Basics

Neural networks are members of a family of computational architectures inspired by biological brains (e.g., McClelland et al., 1986; Luger and Stubblefield, 1993). Such architectures are commonly called "connectionist systems", and are composed of interconnected and interacting components called nodes or neurons (these terms are generally considered synonyms in connectionist terminology, and are used interchangeably here). Neural networks are characterized by a lack of explicit representation of knowledge; there are no symbols or values that directly correspond to classes of interest. Rather, knowledge is implicitly represented in the patterns of interactions between network components (Luger and Stubblefield, 1993).
A graphical depiction of a typical feedforward neural network is given in Figure 1. The term "feedforward" indicates that the network has links that extend in only one direction. Except during training, there are no backward links in a feedforward network; all links proceed from input nodes toward output nodes.

Figure 1: A typical feedforward neural network.

Individual nodes in a neural network emulate biological neurons by taking input data and performing simple operations on the data, selectively passing the results on to other neurons (Figure 2). The output of each node is called its "activation" (the terms "node values" and "activations" are used interchangeably here). Weight values are associated with each vector and node in the network, and these values constrain how input data (e.g., satellite image values) are related to output data (e.g., land-cover classes). Weight values associated with individual nodes are also known as biases. Weight values are determined by the iterative flow of training data through the network (i.e., weight values are established during a training phase in which the network learns how to identify particular classes by their typical input data characteristics). A more formal description of the foundations of multilayer, feedforward, backpropagation neural networks is given in Section 5.

Once trained, the neural network can be applied toward the classification of new data. Classifications are performed by trained networks through 1) the activation of network input nodes by relevant data sources [these data sources must directly match those used in the training of the network], 2) the forward flow of this data through the network, and 3) the ultimate activation of the output nodes. The pattern of activation of the network's output nodes determines the outcome of each pixel's classification.
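The three classification steps above (input activation, forward flow, output activation) can be sketched in a few lines of Python. The layer structure, the made-up weights, and the logistic activation function are illustrative assumptions; the text has not yet specified an activation function at this point.

```python
import math

def sigmoid(z):
    """Logistic activation function, squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward_pass(x, layers):
    """Propagate input activations forward through a feedforward network.

    `layers` is a list of (weights, biases) pairs, one per layer; the
    names are illustrative, not taken from the text. Each node computes a
    weighted sum of the previous layer's activations plus its bias, then
    applies the activation function.
    """
    a = list(x)
    for weights, biases in layers:
        a = [sigmoid(sum(w * v for w, v in zip(row, a)) + b)
             for row, b in zip(weights, biases)]
    return a

# A tiny 2-3-2 network with made-up weights: two input nodes (e.g., two
# satellite image bands), one hidden layer, and two output nodes (e.g.,
# two land-cover classes).
hidden = ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.0, 0.0])
output = ([[0.7, -0.5, 0.2], [0.1, 0.3, -0.6]], [0.0, 0.0])

activations = forward_pass([1.0, 0.0], [hidden, output])
# The most strongly activated output node determines the classification.
winner = max(range(len(activations)), key=lambda i: activations[i])
```

Note that the data flows strictly forward: each layer's activations depend only on the layer before it, matching the definition of "feedforward" above.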
Useful summaries of fundamental neural network principles are given by Rumelhart et al. (1986), McClelland and Rumelhart (1988), Rich and Knight (1991), Winston (1991), Anzai (1992), Luger and Stubblefield (1993), Gallant (1993), and Richards and Jia (2005). Parts of this web page draw on these summaries. A brief historical account of the development of connectionist theories is given in Gallant (1993).

Figure 2: Schematic comparison between a biological neuron and an artificial neuron (after Winston, 1991; Rich and Knight, 1991). For the biological neuron, electrical signals from other neurons are conveyed to the cell body by dendrites; resultant electrical signals are sent along the axon to be distributed to other neurons. The operation of the artificial neuron is analogous to (though much simpler than) the operation of the biological neuron: activations from other neurons are summed at the neuron and passed through an activation function, after which the value is sent to other neurons.

2 McCulloch-Pitts Networks

Neural computing began with the development of the McCulloch-Pitts network in the 1940s (McCulloch and Pitts, 1943; Luger and Stubblefield, 1993). These simple connectionist networks, shown in Figure 3, are standalone "decision machines" that take a set of inputs, multiply these inputs by associated weights, and output a value based on the sum of these products. Input values (also known as input activations) are thus related to output values (output activations) by simple mathematical operations involving weights associated with network links. McCulloch-Pitts networks are strictly binary; they take as input and produce as output only 0's or 1's. These 1's and 0's can be thought of as excitatory and inhibitory entities, respectively (Luger and Stubblefield, 1993).
If the sum of the products of the inputs and their respective weights is greater than or equal to 0, the output node returns a 1 (otherwise, a 0 is returned). The value of 0 is thus a threshold that must be equalled or exceeded if the output of the system is to be 1. The above rule, which governs the manner in which an output node maps input values to output values, is known as an activation function (meaning that this function is
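The McCulloch-Pitts rule described above (weighted sum, threshold at 0, strictly binary inputs and outputs) is simple enough to sketch directly. The AND-gate weights below are a standard illustrative example, not taken from the text; the third input, fixed at 1, plays the role of a bias.

```python
def mcculloch_pitts(inputs, weights):
    """A McCulloch-Pitts unit: binary inputs, weighted sum, threshold at 0.

    Returns 1 when the sum of the products of the inputs and their
    respective weights is greater than or equal to 0, and 0 otherwise,
    exactly as the activation rule in the text specifies.
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= 0 else 0

# Illustrative weights realizing an AND gate: with an always-on third
# input weighted -2, the sum a + b - 2 reaches the 0 threshold only
# when both a and b are 1.
and_weights = [1, 1, -2]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b, 1], and_weights))
```

Because inputs and outputs are restricted to 0's and 1's, a single unit like this can realize simple logical functions, but nothing requiring a non-linear decision boundary.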

