Neural Networks - II

Mihir Mohite, Jeet Kulkarni, Rituparna Bhise, Shrinand Javadekar
Data Mining CSE 634
Prof. Anita Wasilewska

Contents
- References
- Basics of a Neural Network
- A Single Neuron
- Bias of a Neuron; Bias as an Input
- A Multilayer Feed-Forward Neural Network
- Inputs to a Neural Network
- Net Weighted Input
- Binary Activation Function
- Squashing Activation Function
- Learning in Neural Networks
- Learning Using Error Correction
- Hebbian Learning Formula
- Competitive Learning
- The Discrete Perceptron; Single Discrete Perceptron Training Algorithm (SDPTA)
- The Continuous Perceptron; Single Continuous Perceptron Training Algorithm (SCPTA)
- R-Category Discrete Perceptron Training Algorithm (RDPTA)
- What is Backpropagation?; Architecture: Backpropagation Network; EBPTA
- Generalisation
- Handwritten Text Recognition: Steps for Classification, Input Representation, Preprocessing, Segmentation Using ANN, Identifying Characters
- Recurrent Networks; Some Parameters; Training Problems and Solutions; Advantages/Disadvantages
- Effective Data Mining Using Neural Networks; Criticism of Neural Networks; Neural Network Based Data Mining; Rule Extraction Algorithm
- Future Enhancements

References
- http://www.csse.uwa.edu.au/teaching/units/233.407/lectureNotes/Lect4-UWA.pdf
- http://www.comp.glam.ac.uk/digimaging/neural.htm
- http://www.nbb.cornell.edu/neurobio/linster/lecture4.pdf
- Lecture slides prepared by Jalal Mahmud and Hyung-Yeon Gu under the guidance of Prof. Anita Wasilewska

Basics of a Neural Network
A neural network is a set of connected input/output units, where each connection has a weight associated with it. The network learns by adjusting these weights so that it correctly classifies the training data and hence, after the testing phase, can classify unknown data.

Input: classification data containing a classification attribute. The data is divided, as in any classification problem, into training data and testing data. All data must be normalized, i.e. all attribute values in the database are rescaled to lie in the interval [0,1] or [-1,1], because a neural network works with data in the range (0,1) or (-1,1).

Min-max normalization maps a value v of an attribute A to

  v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A

Example: we want to normalize data to the interval [0,1], so we put new_max_A = 1 and new_min_A = 0. Say max_A was 100 and min_A was 20 (the maximum and minimum values of the attribute). Now, if v = 40 (the attribute value for this particular pattern), v' is calculated as

  v' = (40 - 20) x (1 - 0) / (100 - 20) + 0 = 20 / 80 = 0.25
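The min-max formula above is easy to turn into code. Below is a minimal Python sketch (the function name min_max_normalize and the NumPy dependency are our own choices, not from the slides); it reproduces the worked example:

```python
import numpy as np

def min_max_normalize(v, min_a, max_a, new_min=0.0, new_max=1.0):
    """Min-max normalization: map v from [min_a, max_a] onto [new_min, new_max]."""
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

# The worked example from the slides: min_A = 20, max_A = 100, v = 40.
print(min_max_normalize(40, 20, 100))          # -> 0.25

# The same function also normalizes a whole attribute column at once.
column = np.array([20.0, 40.0, 100.0])
print(min_max_normalize(column, 20, 100))      # -> [0.   0.25 1.  ]
```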
A Single Neuron
Here x1 and x2 are normalized attribute values of the data, and y is the output of the neuron, i.e. the class label. The values x1 and x2, multiplied by the weights w1 and w2 respectively, form the input to the neuron.

Given w1 = 0.5 and w2 = 0.5, and say x1 = 0.3 and x2 = 0.8, the weighted sum is

  sum = w1 * x1 + w2 * x2 = 0.5 x 0.3 + 0.5 x 0.8 = 0.55

The neuron receives the weighted sum as input and calculates the output as a function of that input:

  y = f(x), where f(x) = 0 when x < 0.5 and f(x) = 1 when x >= 0.5

For our example the weighted sum x is 0.55, so y = 1; the corresponding input attribute values are classified into class 1. If for another input x = 0.45, then f(x) = 0, so those input values are classified into class 0. (A code sketch of this computation appears at the end of these notes.)

Bias of a Neuron
We add a bias value to the weighted sum Σ w_i x_i so that the decision boundary can be translated away from the origin. [Figure: parallel decision lines x1 - x2 = -1, x1 - x2 = 0, and x1 - x2 = 1 in the (x1, x2) plane.]

Bias as an Input
[Figure: the bias modeled as an extra input x0 = +1 with weight w0; the inputs x0, x1, ..., xn with weights w0, w1, ..., wn feed a summing function followed by an activation function that produces the output class.]

A Multilayer Feed-Forward Neural Network
[Figure: an input record x_i enters the input nodes, which feed the hidden nodes, which in turn feed the output nodes that produce the output class. The w_ij are the connection weights; the network is fully connected.]

Inputs to a Neural Network
INPUT: records without the class attribute, with normalized attribute values.
INPUT VECTOR: X = {x1, x2, ..., xn}, where n is the number of (non-class) attributes.
WEIGHT VECTOR: W = {w1, w2, ..., wn}, where n is the number of (non-class) attributes.
INPUT LAYER: there are as many nodes as non-class attributes, i.e. as the length of the input vector.
HIDDEN LAYER: the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.

Net Weighted Input
Given a unit j in a hidden or output layer, the net input is

  I_j = Σ_i w_ij O_i + θ_j

where w_ij is the weight of the connection from unit i in the previous layer to unit j, O_i is the output of unit i from the previous layer, and θ_j is the bias of the unit.

Binary Activation Function
Given a net input I_j to unit j, the output O_j = f(I_j) of unit j is computed as

  O_j = 1 if I_j > T
  O_j = 0 if I_j <= T

where T is known as the threshold.

Squashing Activation Function
Each unit in the hidden and output layers takes its net input and applies an activation function to it. The function symbolizes the activation of the neuron represented by the unit; it is also called a logistic, sigmoid, or squashing function. Given a net input I_j to unit j, the output O_j = f(I_j) of unit j is computed as

  O_j = 1 / (1 + e^(-I_j))

(A sketch computing the net input and sigmoid output of a whole layer appears at the end of these notes.)

Learning in Neural Networks
- Learning in neural networks: what is it?
- Why is learning required?
- Supervised and unsupervised learning
- It takes a long time to train a neural network.
- A well-trained network is tolerant to noise in the data.

Using Error Correction
Used for supervised learning.
- Perceptron learning formula: for a binary-valued response function.
- Delta learning formula: for a continuous-valued response function.

Perceptron learning formula:

  Δw_i = c [d_i - o_i] x_i

So the value of Δw_i is either 0 (when the expected output d_i and the actual output o_i are the same) or ±2c x_i (when d_i - o_i is +2 or -2, which happens for bipolar outputs in {-1, +1}).
(Illustration: http://www.csse.uwa.edu.au/teaching/units/233.407/lectureNotes/Lect4-UWA.pdf)

Delta learning formula:

  Δw_i = c [d_i - o_i] x_i * o'_i

where o'_i is the derivative of the activation function. In the case of a unipolar continuous (sigmoid) activation function, o'_i = o_i (1 - o_i).
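As promised above, here is a minimal sketch of the single-neuron computation: weighted sum followed by the binary threshold function. The function names and the default threshold of 0.5 are our own illustration, not part of the slides:

```python
def weighted_sum(xs, ws):
    """Net input of a single neuron: sum of inputs times their weights."""
    return sum(x * w for x, w in zip(xs, ws))

def binary_activation(x, threshold=0.5):
    """Step activation: class 1 if x >= threshold, else class 0."""
    return 1 if x >= threshold else 0

# The slides' example: w1 = w2 = 0.5, x1 = 0.3, x2 = 0.8.
s = weighted_sum([0.3, 0.8], [0.5, 0.5])
print(s)                        # 0.55
print(binary_activation(s))     # 1 -> classified into class 1
print(binary_activation(0.45))  # 0 -> classified into class 0
```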
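Similarly, the net weighted input I_j = Σ_i w_ij O_i + θ_j and the squashing function can be computed for an entire layer at once. A minimal NumPy sketch, with layer sizes, variable names, and random initial weights of our own choosing:

```python
import numpy as np

def layer_output(prev_outputs, weights, biases):
    """Outputs of one layer: I_j = sum_i w_ij * O_i + theta_j, then O_j = sigmoid(I_j).

    prev_outputs: shape (n_prev,)          - outputs O_i of the previous layer
    weights:      shape (n_prev, n_units)  - w_ij, connection from unit i to unit j
    biases:       shape (n_units,)         - theta_j, one bias per unit
    """
    net_input = prev_outputs @ weights + biases    # I_j for every unit j
    return 1.0 / (1.0 + np.exp(-net_input))        # squashing (sigmoid) function

# Toy 2-3-1 feed-forward pass with arbitrary weights.
x = np.array([0.3, 0.8])                   # normalized input record
w_hidden = np.random.uniform(-1, 1, (2, 3))
b_hidden = np.random.uniform(-1, 1, 3)
w_out = np.random.uniform(-1, 1, (3, 1))
b_out = np.random.uniform(-1, 1, 1)

hidden = layer_output(x, w_hidden, b_hidden)
print(layer_output(hidden, w_out, b_out))  # network output, always in (0, 1)
```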
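Finally, the two error-correction formulas differ only in the response function and the extra derivative factor o'_i. A minimal sketch of a single weight update under each rule; the learning rate c = 0.1, the function names, and the example targets are our own assumptions:

```python
import numpy as np

def perceptron_update(w, x, d, c=0.1):
    """Perceptron rule: delta_w = c * (d - o) * x, with a bipolar step output."""
    o = 1 if np.dot(w, x) >= 0 else -1   # binary-valued (bipolar) response
    return w + c * (d - o) * x           # d - o is 0 or +/-2, so delta_w is 0 or +/-2cx

def delta_update(w, x, d, c=0.1):
    """Delta rule: delta_w = c * (d - o) * o' * x, with a unipolar sigmoid output."""
    o = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # continuous-valued response
    o_prime = o * (1 - o)                    # sigmoid derivative, unipolar case
    return w + c * (d - o) * o_prime * x

w = np.array([0.5, 0.5])
x = np.array([0.3, 0.8])
print(perceptron_update(w, x, d=-1))  # desired class -1: weights move away from x
print(delta_update(w, x, d=0.0))      # desired output 0: weights decrease slightly
```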

