UT Dallas CS 6375 - Explain Biased and non biased neurons


Biased and non-biased neurons
Source: http://galaxy.agh.edu.pl/vlsi/AI/bias/bias_eng.html (accessed 10/14/2014)

Theoretical preface

The neuron is the basic element of a neural network. The main feature of a single neuron is that it has many inputs and only one output. From the mathematical point of view, a neuron is an element that realizes the function

    y = f( Σᵢ wᵢ·xᵢ )

where f is the activation function, wᵢ are the weights of the inputs, and xᵢ are the neuron input values. The neuron sums all elements of the input vector multiplied by their weights, and the result is used as the argument of the activation function; this is how the neuron output value is created.

In most applications the neuron inputs and weights are normalized. Geometrically, this is equivalent to moving the input vector points onto the surface of an N-dimensional sphere with unit radius, where N is the size of the input vector. In the simplest case, for a two-dimensional vector, normalization moves all input points onto the edge of the unit-radius circle. Normalization can be written as

    x̂ᵢ = xᵢ / sqrt( Σⱼ xⱼ² )

where xᵢ is the coordinate to normalize and xⱼ ranges over all coordinates of the vector. Applying normalization to either the input vectors or the input weights of the neuron improves its learning properties.

We can use a linear or a nonlinear function as the activation function. For a linear neuron the equation reduces to

    y = Σᵢ wᵢ·xᵢ

This is one of the simplest neuron models, and it is only occasionally used in practice, because most phenomena in the surrounding world have nonlinear characteristics; biological neurons are an example.

A neuron can be biased, meaning that it has an additional input with a constant value. The weight of that input is modified during the learning process like the other neuron weights. Generally we take the bias input equal to one; in this case the neuron equation can be written as

    y = f( w₀ + Σᵢ wᵢ·xᵢ )

where f is the activation function, wᵢ are the weights of the inputs, xᵢ are the neuron input values, and
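The two equations above, the biased and the non-biased neuron, can be sketched in a few lines of Python (all names here are hypothetical, chosen for illustration; the sigmoid is one possible nonlinear activation):

```python
import math

def sigmoid(s):
    # A common nonlinear activation function: f(s) = 1 / (1 + e^-s).
    return 1.0 / (1.0 + math.exp(-s))

def neuron(weights, inputs, bias_weight=0.0, f=sigmoid):
    # Weighted sum of the inputs; the bias is an extra input fixed at 1,
    # so it contributes bias_weight * 1 to the sum. With bias_weight = 0
    # this is exactly the non-biased neuron.
    s = bias_weight + sum(w * x for w, x in zip(weights, inputs))
    return f(s)

# Non-biased neuron: the bias weight is zero.
y0 = neuron([0.5, -0.3], [1.0, 2.0])
# Biased neuron: same weights and inputs, but w0 = 0.8 shifts the sum.
y1 = neuron([0.5, -0.3], [1.0, 2.0], bias_weight=0.8)
```

With a positive bias weight the argument of the activation function grows, so for the same inputs the biased neuron's output is larger than the non-biased one's.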
w₀ is the weight of the bias input. When we take the bias input value to be zero, we obtain the equation of the non-biased neuron. Now we should explain what the bias is for.

One-dimensional case

The simplest way to describe the function of the bias is its graphical interpretation for a single-input neuron with two activation functions. This interpretation is shown in the figures below.

fig. 1a: Neuron without bias, activation functions signum and sigmoid
fig. 1b: Neuron with bias, activation functions signum and sigmoid

In these two figures we can see that the bias makes it possible to move the activation threshold along the x-axis. When the bias is negative the threshold moves to the right, and when the bias is positive it moves to the left. The conclusion is that the biased neuron can learn even input vectors that the non-biased neuron is not able to learn. The additional weight costs us more computation, but it improves the neuron's properties.

Normalization makes no sense for a single-input neuron, because every normalized point could take only three values: -1, 0, or 1. Let's look at normalization for a biased single-input neuron. Normalizing the input vectors (the bias being an input equal to 1) with their weights moves all points onto the edge of the unit-radius circle. The result of that operation is shown in the figure below.

fig. 2: Result of the normalization operation for a single-input neuron

Depending on the sign of the bias, normalization moves all points to the corresponding part of the circle: for a positive bias to the upper half, for a negative bias to the lower half. Because the dimension has increased, we can simply draw a line separating points with different neuron responses. This straight line passes through the origin of the coordinate system, and its slope depends on w₀, the bias
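The threshold shift described above can be checked numerically. For a single-input signum neuron, the output flips where w₀ + w₁x = 0, i.e. at x = -w₀/w₁; a positive bias therefore moves the threshold to the left, as in fig. 1b (a minimal sketch, function names are mine):

```python
def sign_neuron(w1, x, w0=0.0):
    # Single-input neuron with signum activation:
    # the output flips sign where w0 + w1*x = 0, i.e. at x = -w0/w1.
    s = w0 + w1 * x
    return 1 if s >= 0 else -1

# Without bias the threshold sits at x = 0.
# With w1 = 1 and a positive bias w0 = 2 it moves left, to x = -2:
outputs = [(x, sign_neuron(1.0, x, w0=2.0)) for x in (-3, -2, -1, 0)]
# outputs -> [(-3, -1), (-2, 1), (-1, 1), (0, 1)]
```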
weight. So the bias shifts the problem into an additional dimension and makes some otherwise unsolvable problems solvable.

Two-dimensional case

Now let's look at the case of a two-input neuron. In the geometrical interpretation, all input vectors lie in the OXY plane and the neuron output is the third dimension, so the activation function is a surface in three-dimensional space; an example with the sigmoid function is shown in the figure below.

fig. 3: Activation function of a two-input neuron

Normalization of the input vectors moves all of them onto the edge of the unit-radius circle, with one exception: the point (0, 0), which stays in place. Now we might consider how the bias works in the two-input neuron, looking first at the activation function only. As we know from the previous section, the bias input is responsible for shifting the activation function. In the two-dimensional case, the bias moves the activation function in the direction perpendicular to the line given by the equation w₁x₁ + w₂x₂ = 0. Examples of the activation function for a neuron with and without bias are shown in the figure below.

fig. 4: Biased and non-biased activation functions

For a neuron with bias, the addition of the extra weight moves the input vectors from two-dimensional into three-dimensional space. All points then lie on a sphere: for a positive bias on the top half, for a negative bias on the bottom half. This is a result of the normalization of the input vectors; the third coordinate is constant, and that separates the points for negative and positive bias. The point (0, 0) maps to (0, 0, 1), the highest point of the sphere, or to (0, 0, -1), the lowest one. In some cases the use of bias is necessary to reach any result at all. An example of the solution of one
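The mapping from the plane onto the sphere can be sketched directly: append the constant bias input as a third coordinate, then normalize. With a positive bias input every point lands on the upper hemisphere, and (0, 0) maps to (0, 0, 1), as described above (a minimal sketch; the function name is mine):

```python
import math

def normalize_with_bias(x1, x2, bias=1.0):
    # Append the constant bias input as a third coordinate, then
    # scale the resulting vector onto the unit sphere in 3D.
    v = (x1, x2, bias)
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# The origin (0, 0) maps to the top of the sphere for a positive bias:
p = normalize_with_bias(0.0, 0.0)   # -> (0.0, 0.0, 1.0)
# Any point with a positive bias input lands on the upper hemisphere:
q = normalize_with_bias(3.0, -4.0)  # third coordinate stays positive
```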
problem using a biased and a non-biased neuron is shown below.

fig. 5a: For the non-biased neuron no solution exists
fig. 5b: For the biased neuron a solution exists

In these figures we can see that the points are chosen in such a way that, for the non-biased neuron, no straight line through the origin of the coordinate system separates the different values of the neuron response. Neuron responses for each point was
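The situation in fig. 5 can be reproduced with two labeled points lying on the same ray through the origin: no boundary through the origin can put them on different sides, but a biased boundary can (a sketch with hypothetical weights; the boundary x₁ + x₂ = 3 is one choice that works):

```python
def classify(w1, w2, x1, x2, w0=0.0):
    # Two-input threshold unit: sign of w0 + w1*x1 + w2*x2.
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else -1

# Two points on the same ray through the origin, with opposite labels:
points = [((1.0, 1.0), 1), ((2.0, 2.0), -1)]

# Without bias the decision boundary passes through the origin, so both
# points always fall on the same side: no (w1, w2) separates them.
# With bias, the boundary x1 + x2 = 3 (w0 = 3, w1 = w2 = -1) succeeds:
ok = all(classify(-1.0, -1.0, x1, x2, w0=3.0) == label
         for (x1, x2), label in points)
```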

