Neural Networks - II

Mihir Mohite
Jeet Kulkarni
Rituparna Bhise
Shrinand Javadekar

Data Mining CSE 634
Prof. Anita Wasilewska

References

http://www.csse.uwa.edu.au/teaching/units/233.407/lectureNotes/Lect4-UWA.pdf
http://www.comp.glam.ac.uk/digimaging/neural.htm
http://www.nbb.cornell.edu/neurobio/linster/lecture4.pdf
Lecture slides prepared by Jalal Mahmud and Hyung-Yeon Gu under the guidance of Prof.
Anita Wasilewska.

Basics of a Neural Network

A neural network is a set of connected input/output units, where each connection has a weight associated with it.
The network learns by adjusting the weights so that it correctly classifies the training data and hence, after the testing phase, can classify unknown data.

Basics of a Neural Network

Input: classification data; it contains a classification attribute.
The data is divided, as in any classification problem, into training data and testing data.
All data must be normalized, i.e. all attribute values in the database are mapped into the interval [0,1] or [-1,1], since a neural network works with data in the range (0,1) or (-1,1).

Basics of a Neural Network

Min-max normalization maps a value v of attribute A to

    v' = ((v - minA) / (maxA - minA)) * (new_maxA - new_minA) + new_minA

Example: we want to normalize data to the interval [0,1], so we put new_maxA = 1 and new_minA = 0.
Say maxA was 100 and minA was 20 (the maximum and minimum values for the attribute).
Now, if v = 40 (the attribute value for this particular pattern), v' is calculated as:
    v' = (40 - 20) x (1 - 0) / (100 - 20) + 0 = 20 / 80 = 0.25

A Single Neuron

Here x1 and x2 are normalized attribute values of the data, and y is the output of the neuron, i.e. the class label.
The values x1 and x2, multiplied by the weights w1 and w2, are the input to the neuron x.
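The min-max normalization step above can be sketched as a short Python function (the function name is ours, not from the slides):

```python
# Min-max normalization: maps v from [min_a, max_a] into [new_min, new_max].
def min_max_normalize(v, min_a, max_a, new_min=0.0, new_max=1.0):
    return (v - min_a) * (new_max - new_min) / (max_a - min_a) + new_min

# The worked example: min_a = 20, max_a = 100, v = 40.
print(min_max_normalize(40, 20, 100))  # -> 0.25
```

Note that the endpoints map to the new range boundaries: v = 20 gives 0.0 and v = 100 gives 1.0.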
The value of x1 is multiplied by the weight w1, and the value of x2 is multiplied by the weight w2.

A Single Neuron

Given w1 = 0.5 and w2 = 0.5, and say the value of x1 is 0.3 and the value of x2 is 0.8, the weighted sum is:
    sum = w1 * x1 + w2 * x2 = 0.5 * 0.3 + 0.5 * 0.8 = 0.55

A Single Neuron

The neuron receives the weighted sum as input and calculates the output as a function of the input as follows:
    y = f(x), where f(x) = 0 when x < 0.5 and f(x) = 1 when x >= 0.5
For our example, x (the weighted sum) is 0.55, so y = 1; that means the corresponding input attribute values are classified into class 1.
If for another set of input values x = 0.45, then f(x) = 0, so we conclude those input values are classified into class 0.

Bias of a Neuron

We need a bias value added to the weighted sum ∑ wi xi so that we can shift the decision boundary away from the origin.
[Figure: decision boundary lines x1 - x2 = 0, x1 - x2 = 1, and x1 - x2 = -1 in the (x1, x2) plane.]

Bias as an Input

The bias can be treated as an extra input x0 = +1 with weight w0, fed into the summing function along with x1, ..., xn and w1, ..., wn; the activation function then produces the output class.
[Figure: a neuron with inputs x0 = +1 through xn, weights w0 through wn, a summing function, and an activation function.]

A Multilayer Feed-Forward Neural Network

[Figure: an input record xi feeds the input nodes, which connect through hidden nodes Oj to output nodes Ok; the wij are the connection weights. The network is fully connected.]

Inputs to a Neural Network

INPUT: records without the class attribute, with normalized attribute values.
INPUT VECTOR: X = {x1, x2, ..., xn}, where n is the number of (non-class) attributes.
WEIGHT VECTOR: W = {w1, w2, ..., wn}, where n is the number of (non-class) attributes.
INPUT LAYER: there are as many nodes as non-class attributes, i.e.
as the length of the input vector.
HIDDEN LAYER: the number of nodes in the hidden layer and the number of hidden layers depend on the implementation.

Net Weighted Input

Given a unit j in a hidden or output layer, the net input is
    Ij = ∑i wij Oi + θj
where wij is the weight of the connection from unit i in the previous layer to unit j, Oi is the output of unit i from the previous layer, and θj is the bias of the unit.

Binary Activation Function

Given a net input Ij to unit j, then Oj = f(Ij), the output of unit j, is computed as:
    Oj = 1 if Ij > T
    Oj = 0 if Ij <= T
where T is known as the threshold.

Squashing Activation Function

Each unit in the hidden and output layers takes its net input and then applies an activation function. The function symbolizes the activation of the neuron represented by the unit. It is also called a logistic, sigmoid, or squashing function.
Given a net input Ij to unit j, then Oj = f(Ij), the output of unit j, is computed as:
    Oj = 1 / (1 + e^(-Ij))

Learning in Neural Networks

What is learning in neural networks?
Why is learning required?
Supervised and unsupervised learning.
It takes a long time to train a neural network, but a well-trained network is tolerant to noise in the data.

Using Error Correction

Used for supervised learning.
Perceptron learning formula: for a binary-valued response function.
Delta learning formula: for a continuous-valued response function.

Using Error Correction

Perceptron learning formula:
    ∆wi = c [di - oi] xi
So the value of ∆wi is either 0 (when the expected output and the actual output are the same) or ±2c xi (when di - oi = ±2, which occurs for bipolar outputs di, oi ∈ {-1, +1}).

Using Error Correction

Perceptron learning formula.
[Figure from: http://www.csse.uwa.edu.au/teaching/units/233.407/lectureNotes/Lect4-UWA.pdf]

Using Error Correction

Delta learning formula:
    ∆wi = c [di - oi] xi * o'i
where o'i is the derivative of the activation function. In case of a unipolar
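The single-neuron computation described above (weighted sum, then an activation function) can be sketched in Python; the function names are ours, and the weights and inputs are the worked-example values from the slides (w1 = w2 = 0.5, threshold 0.5):

```python
import math

# Weighted sum of inputs: sum_i w_i * x_i.
def weighted_sum(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Binary (threshold) activation from the slides: f(x) = 1 iff x >= t.
def threshold_activation(s, t=0.5):
    return 1 if s >= t else 0

# Squashing (sigmoid) activation from the slides: O = 1 / (1 + e^-I).
def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

s = weighted_sum([0.5, 0.5], [0.3, 0.8])  # 0.5*0.3 + 0.5*0.8 = 0.55
print(threshold_activation(s))            # 0.55 >= 0.5, so class 1
```

With the sigmoid in place of the threshold, the same net input yields a value strictly between 0 and 1 rather than a hard class label.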
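A single step of the perceptron learning formula ∆wi = c [di - oi] xi can be sketched as follows; the learning rate c, the threshold, and the sample values are illustrative choices of ours, and here the outputs are binary (0/1) rather than bipolar:

```python
# One perceptron learning step: w_i <- w_i + c * (d - o) * x_i,
# where d is the desired output and o the actual thresholded output.
def perceptron_step(w, x, d, c=0.1, threshold=0.5):
    o = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
    return [wi + c * (d - o) * xi for wi, xi in zip(w, x)]

w = [0.5, 0.5]
x = [0.3, 0.8]
# Weighted sum is 0.55 >= 0.5, so the neuron outputs o = 1. If the desired
# output is d = 0, each weight decreases by c * x_i, pushing the neuron
# toward class 0 on this input; if d = 1, the weights are left unchanged.
w_new = perceptron_step(w, x, d=0)
```

Repeating this step over the training set until no weight changes occur is the usual perceptron training loop for linearly separable data.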