SBU CSE 634 - Presentation on Neural Network

CSE 634 Data Mining Techniques

Slide outline: References · Overview · Basics of Neural Networks · Neural Networks · Similarity with Biological Networks · Synapses, the Basis of Learning and Memory · Neural Network · Neural Network Classifier · Data Normalization · Example of Max-Min Normalization · Decimal Scaling Normalization · One Neuron as a Network · Bias of a Neuron · Neuron with Activation · Why We Need Multiple Layers · A Multilayer Feed-Forward Neural Network · Neural Network Learning · Classification by Backpropagation · Steps in the Backpropagation Algorithm · Propagation through a Hidden Layer (One Node) · Propagate the Inputs Forward · Backpropagate the Error · Update Weights and Biases · Terminating Conditions · Backpropagation Formulas · Example of Backpropagation · Net Input and Output Calculation · Calculation of Error at Each Node · Calculation of Weight and Bias Updates · Advanced Features of Neural Networks · Variants of Neural Network Learning · Training with Subsets · Modular Neural Networks · Evolving Network Architectures · Constructive vs. Destructive Algorithms · Training Process of the MLP · Faster Convergence · Applications I-II · Summary · Questions

Presentation on Neural Network
Jalal Mahmud (105241140), Hyung-Yeon Gu (104985928)
Course Teacher: Prof.
Anita Wasilewska, State University of New York at Stony Brook

References
- Data Mining: Concepts and Techniques, Chapter 7.5 [Jiawei Han and Micheline Kamber, Morgan Kaufmann Publishers, 2002]
- Professor Anita Wasilewska's lecture notes
- www.cs.vu.nl/~elena/slides03/nn_1light.ppt
- Xin Yao, Evolving Artificial Neural Networks: http://www.cs.bham.ac.uk/~xin/papers/published_iproc_sep99.pdf
- informatics.indiana.edu/larryy/talks/S4.MattI.EANN.ppt
- www.cs.appstate.edu/~can/classes/5100/Presentations/DataMining1.ppt
- www.comp.nus.edu.sg/~cs6211/slides/blondie24.ppt
- www.public.asu.edu/~svadrevu/UMD/ThesisTalk.ppt
- www.ctrl.cinvestav.mx/~yuw/file/afnn1_nnintro.PPT

Overview
- Basics of neural networks
- Advanced features of neural networks
- Applications I-II
- Summary

Basics of Neural Networks
- What is a neural network
- Neural network classifier
- Data normalization
- Neuron and bias of a neuron
- Single-layer feed-forward networks and their limitation
- Multi-layer feed-forward networks
- Backpropagation

Neural Networks
What is a neural network? It is a biologically motivated approach to machine learning. The fundamental processing element of a neural network is a neuron, which:
1. receives inputs from other sources,
2. combines them in some way,
3. performs a generally nonlinear operation on the result, and
4. outputs the final result.

Similarity with Biological Networks
The fundamental processing element of a neural network is a neuron. For a sense of scale: a human brain has about 100 billion neurons, while an ant brain has about 250,000. Synapses are the basis of learning and memory.

Neural Network
A neural network is a set of connected input/output units, where each connection has a weight associated with it. Neural network learning is also called connectionist learning because of the connections between units. It is a case of supervised, inductive, classification learning.

A neural network learns by adjusting its weights so that it can correctly classify the training data and hence, after the testing phase, classify unknown data. A neural network needs a long time for
training.Neural Network has a high tolerance to noisy and incomplete dataNeural Network ClassifierInput: Classification data It contains classification attributeData is divided, as in any classification problem. [Training data and Testing data]All data must be normalized. (i.e. all values of attributes in the database are changed to contain values in the internal [0,1] or[-1,1]) Neural Network can work with data in the range of (0,1) or (-1,1)Two basic normalization techniques [1] Max-Min normalization [2] Decimal Scaling normalizationData NormalizationAnewAnewAnewAAAvv min_)min_max_(minmaxmin' [1] Max- Min normalization formula is as follows:[minA, maxA , the minimun and maximum values of the attribute A max-min normalization maps a value v of A to v’ in the range {new_minA, new_maxA} ]Example of Max-Min NormalizationAnewAnewAnewAAAvv min_)min_max_(minmaxmin' Max- Min normalization formulaExample: We want to normalize data to range of the interval [0,1].We put: new_max A= 1, new_minA =0.Say, max A was 100 and min A was 20 ( That means maximum and minimum values for the attribute ).Now, if v = 40 ( If for this particular pattern , attribute value is 40 ), v’ will be calculated as , v’ = (40-20) x (1-0) / (100-20) + 0 => v’ = 20 x 1/80 => v’ = 0.4Decimal Scaling Normalization[2]Decimal Scaling NormalizationNormalization by decimal scaling normalizes by moving the decimal point of values of attribute A.jvv10'Here j is the smallest integer such that max|v’|<1. Example : A – values range from -986 to 917. Max |v| = 986.v = -986 normalize to v’ = -986/1000 = -0.986One Neuron as a NetworkHere x1 and x2 are normalized attribute value of data. y is the output of the neuron , i.e the class label.x1 and x2 values multiplied by weight values w1 and w2 are input to the neuron x. 
The value of x1 is multiplied by the weight w1, and the value of x2 by the weight w2. Given that w1 = 0.5 and w2 = 0.5, and say the value of x1 is 0.3 and the value of x2 is 0.8, the weighted sum is:

    sum = w1*x1 + w2*x2 = 0.5*0.3 + 0.5*0.8 = 0.55

One Neuron as a Network (cont.)
The neuron receives the weighted sum as input and calculates the output as a function of that input: y = f(x), where f(x) is defined as

    f(x) = 0  when x < 0.5
    f(x) = 1  when x >= 0.5

For our example, x (the weighted sum) is 0.55, so y = 1; that means the corresponding input attribute values are classified in class 1. If, for another set of input values, x = 0.45, then f(x) = 0, so we would conclude that those input values are classified in class 0.

Bias of a Neuron
We need a bias value b added to the weighted sum ∑ w_i x_i so that we can shift the function away from the origin:

    v = ∑ w_i x_i + b,  where b is the bias.

[Figure: decision lines x1 - x2 = -1, x1 - x2 = 0 and x1 - x2 = 1, shifted from the origin by the bias.]

Bias as Extra Input
[Figure: the bias modeled as an extra input; attribute values enter through weights into a summing function.]
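The single neuron described above (weighted sum, optional bias, step activation) can be sketched as follows; this is a minimal illustration under the slides' numbers, and the function name and parameters are ours:

```python
def neuron(inputs, weights, bias=0.0, threshold=0.5):
    """One neuron: weighted sum of inputs plus bias, passed through a
    step activation. Returns class 1 if the net input reaches the
    threshold, else class 0."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net >= threshold else 0

# Slide example: w1 = w2 = 0.5, x1 = 0.3, x2 = 0.8 -> weighted sum 0.55 -> class 1
print(neuron([0.3, 0.8], [0.5, 0.5]))  # 1
# A weighted sum of 0.45 (e.g. x2 = 0.6) falls below the 0.5 threshold -> class 0
print(neuron([0.3, 0.6], [0.5, 0.5]))  # 0
```

Adding a nonzero bias shifts the decision boundary away from the origin, which is exactly the role the Bias of a Neuron slide describes: with bias = 0.1, the same inputs [0.3, 0.6] produce a net input of 0.55 and are classified in class 1.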

