Multilayer Networks
Q550: Models in Cognitive Science

Single-Layer Perceptron (McCulloch-Pitts neuron)

[Figure: an "artificial retina" of input nodes x_1 ... x_n feeding a summation unit through weights w_1 ... w_n, with a thresholded +1/-1 output]

• If y = t, do nothing
• If y ≠ t, apply the delta update:

    \Delta w_{ij} = \eta (t_i - y_i) x_j

Multiclass Single-Layer Perceptron

[Figure: the same input nodes feeding several summation/threshold units, one per class]

Heteroassociator and Autoassociator

[Figure: input, hidden, and output layers]

Net input to unit i (from input units i_j, or from hidden units h_j):

    net_i = \sum_{j=1}^{n} i_j w_{ij}        net_i = \sum_{j=1}^{n} h_j w_{ij}

Logistic sigmoid transform:

    o_i = \frac{1}{1 + \exp(-net_i)}

Training the Network

• Backward propagation of errors ("backprop")

    DO i = 1 to N_Training_Examples
      • Present a training example and compute the output
      • Compare the actual output to the desired output; determine the error for each node
      • For each node, calculate what the output should have been, and a scaling factor to produce the desired output
      • Adjust the weights of each node to minimize the error
      • Assign "blame" for the error to nodes at the previous level, giving more blame to nodes more responsible for the error
      • Repeat for the previous layer, using its blame as the error
    ENDDO

Forward pass (input units i_i, hidden units h_j, output units o_k; t_k is the desired output):

    h_j = \sum_{i=1}^{I} i_i w_{ji}

    net_k = \sum_{j=1}^{J} h_j w_{kj}

    o_k = \frac{1}{1 + \exp(-net_k)}

Output error for each node:

    \delta_k = (t_k - o_k)\, o_k (1 - o_k)

Update the hidden-to-output weights (H → O):

    \Delta w_{jk} = \eta \delta_k h_j

Error for each hidden node:

    \delta_j = h_j (1 - h_j) \sum_{k=1}^{K} w_{kj} \delta_k

Update the input-to-hidden weights (I → H):

    \Delta w_{ij} = \eta \delta_j i_i

Then present a new training pattern and backpropagate the errors until the system is "trained up."

[Figure: an autoassociator of 14 numbered units storing the example pattern E = [1 0 0 1 1 1 0 0 1 0 0 0 1 0]]

• Recurrent Networks
• PDP++ Software
• Random Walk/Diffusion Model
• DMDX for behavioral
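The delta-rule update for the single-layer perceptron can be sketched in code. This is a minimal illustration, not part of the original slides: NumPy, the bias weight, the learning rate, and the OR task are all assumptions added here.

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, epochs=20):
    """Delta-rule training of a single-layer perceptron with a +1/-1 threshold output."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])   # weights w_1 ... w_n
    b = 0.0                                      # bias weight (an added assumption)
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1 if w @ x + b > 0 else -1       # thresholded output
            if y != target:                      # if y = t, do nothing
                w += eta * (target - y) * x      # delta update: Δw = η (t − y) x
                b += eta * (target - y)
    return w, b

# Learn logical OR with +1/-1 targets (linearly separable, so this converges)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, t)
preds = [1 if w @ x + b > 0 else -1 for x in X]  # → [-1, 1, 1, 1]
```

Because OR is linearly separable, the perceptron convergence theorem guarantees the loop stops making updates once all four patterns are classified correctly.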
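The backprop equations above (forward pass, δ_k, δ_j, and the two weight updates) can be sketched as a small NumPy program. This is an illustrative implementation, not the slides' own code: the sigmoid on the hidden units, the constant-1 bias units, the XOR task, and the learning-rate/epoch settings are assumptions added here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W_ih, W_ho, x):
    """Forward pass: h_j = sigma(sum_i i_i w_ji), o_k = 1/(1 + exp(-net_k))."""
    h = np.append(sigmoid(W_ih @ x), 1.0)   # hidden activations plus a constant bias unit
    return h, sigmoid(W_ho @ h)

def train_backprop(X, T, n_hidden=4, eta=0.5, epochs=5000, seed=0):
    """One-hidden-layer network trained with the backprop updates from the slides."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])            # constant-1 bias input unit
    W_ih = rng.normal(scale=0.5, size=(n_hidden, Xb.shape[1]))    # w_ji: input -> hidden
    W_ho = rng.normal(scale=0.5, size=(T.shape[1], n_hidden + 1)) # w_kj: hidden -> output
    for _ in range(epochs):
        for x, t in zip(Xb, T):
            h, o = forward(W_ih, W_ho, x)
            delta_k = (t - o) * o * (1 - o)              # δ_k = (t_k − o_k) o_k (1 − o_k)
            delta_j = h * (1 - h) * (W_ho.T @ delta_k)   # δ_j = h_j (1 − h_j) Σ_k w_kj δ_k
            W_ho += eta * np.outer(delta_k, h)           # Δw_jk = η δ_k h_j
            W_ih += eta * np.outer(delta_j[:-1], x)      # Δw_ij = η δ_j i_i (skip bias unit)
    return W_ih, W_ho

def sse(W_ih, W_ho, X, T):
    """Summed squared error over all training patterns."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(sum(np.sum((t - forward(W_ih, W_ho, x)[1]) ** 2) for x, t in zip(Xb, T)))

# XOR is not linearly separable, so the hidden layer is essential here
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
err_before = sse(*train_backprop(X, T, epochs=0), X, T)  # error at the random initial weights
err_after = sse(*train_backprop(X, T), X, T)             # error after training
```

Note that the bias hidden unit's activation is pinned at 1, so its error term h(1 − h) is 0 and it never propagates blame backward; only the genuine hidden units do.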