NCSU MAE 208 - LINEAR ALGEBRAIC EQUATIONS

1. How to Minimize a Function of Several Variables
2. Matrix-Vector Calculus
3. Types of Linear Algebraic Equations
4. Under-Determined Systems: Minimum Norm Solutions
5. Uniquely Determined Systems: Exact Solutions
6. Over-Determined Systems: Least Squares Solutions
7. Weighting
8. Singular Value Decomposition

MAE 461: DYNAMICS AND CONTROLS

14. LINEAR ALGEBRAIC EQUATIONS

Systems of linear algebraic equations arise in all walks of life. They represent the most basic type of system of equations, and they're taught to everyone as far back as the 8th grade. Yet the complete story about linear algebraic equations is usually not taught. What happens when there are more equations than unknowns, or fewer equations than unknowns? What happens when some of the equations are repeated? These are precisely the questions answered in this chapter. Before proceeding, we have some background material to learn. We first need to discuss ways to minimize a function of several variables. Then, we'll need to understand how to do this using a matrix-vector notation. After this is done, we'll look at linear algebraic equations.

1. How to Minimize a Function of Several Variables

The best way to introduce this topic is with an example. Let's minimize the function

(14 - 1)    J = x^2 + y^2

where x and y are both constrained to lie on the line (see Fig. 14 - 1)

(14 - 2)    y = ax + b

Figure 14 - 1

This problem amounts to finding the point on the line that is closer to the origin than any other point. There are several ways to solve this problem. Let's look at the following three ways.

Ordinary Geometry

The first way to solve this problem is to use ordinary geometry. Draw the perpendicular to the line that passes through the origin. The equation of the perpendicular line is y = -x/a (see Fig. 14 - 2).
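The geometric construction can be checked numerically. Below is a minimal sketch (the values a = 2, b = 1 are arbitrary illustrative choices, not from the text) that intersects the line y = ax + b with its perpendicular through the origin and compares the result with the closed-form intercepts of Eq. (14 - 3):

```python
# Closest point on the line y = a*x + b to the origin, found by
# intersecting the line with its perpendicular through the origin,
# y = -x/a.  The values a = 2, b = 1 are illustrative only.
a, b = 2.0, 1.0

# Intersection: a*x + b = -x/a  =>  x*(a + 1/a) = -b
x0 = -b / (a + 1.0 / a)        # algebraically equal to -a*b / (1 + a**2)
y0 = -x0 / a                   # the point lies on the perpendicular

# The intersection point agrees with the intercepts of Eq. (14 - 3)
assert abs(x0 - (-a * b / (1 + a**2))) < 1e-12
assert abs(y0 - (b / (1 + a**2))) < 1e-12

# It also lies on the original line
assert abs(y0 - (a * x0 + b)) < 1e-12
print(x0, y0)   # -> -0.4 0.2
```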
Figure 14 - 2

Substituting the equation of the perpendicular line into the equation of the line yields the intercepts

(14 - 3)    x_0 = \frac{-ab}{1+a^2}, \qquad y_0 = \frac{b}{1+a^2}

Calculus with Substitution

The second way to solve this problem is to recognize that it is a constrained optimization problem: a problem in which a function needs to be minimized while being subjected to a constraint. The constrained minimization problem is converted into an unconstrained minimization problem. This is done using a substitution step. The constraint, Eq. (14 - 2), is substituted into the minimizing function, Eq. (14 - 1), to get

(14 - 4)    J = x^2 + (ax + b)^2

The minimum of J is now found by taking the derivative of J with respect to x and setting it to zero. This yields

(14 - 5)    0 = 2x + 2(ax + b)a

which again leads to the answer given in Eq. (14 - 3).

Calculus with No Substitution

The third way of solving this problem is also done by converting the constrained minimization problem into an unconstrained minimization problem. However, this time, no substitution step is needed to create the unconstrained minimization problem. This third method, which is the method that we'll later employ to solve linear algebraic equations, proceeds by first writing the constraint as

(14 - 6)    f = y - ax - b = 0

We then define the augmented minimizing function

(14 - 7)    J_a = J + \lambda f = x^2 + y^2 + \lambda (y - ax - b)

where \lambda is a new variable. The minimum of J_a subject to no constraints is the same as the minimum of J subject to the constraint, Eq. (14 - 2). (This will be shown to be true in a moment.) In other words, we have replaced the constrained minimization problem (Eqs. (14 - 1) and (14 - 2)) with an unconstrained minimization problem (Eq. (14 - 7)). Although J_a and J are different functions, their values are the same at the minimum, implying from Eq. (14 - 7) that the constraint is satisfied at the minimum (f = 0).
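The substitution method lends itself to a quick numerical sanity check. A small sketch, again with the illustrative values a = 2, b = 1: it evaluates J of Eq. (14 - 4) near the stationary point obtained from Eq. (14 - 5) and confirms that the point is a minimum.

```python
# Substitution method: with y eliminated, J(x) = x**2 + (a*x + b)**2,
# per Eq. (14 - 4).  Setting dJ/dx = 0, per Eq. (14 - 5), gives the
# stationary point below.  a = 2, b = 1 are illustrative values.
a, b = 2.0, 1.0

def J(x):
    return x**2 + (a * x + b)**2

# Solving 0 = 2x + 2(ax + b)a for x:
x_star = -a * b / (1 + a**2)

# J increases in every direction away from x_star, so it is a minimum
for dx in (-0.1, -0.01, 0.01, 0.1):
    assert J(x_star + dx) > J(x_star)

# The minimum value x0^2 + y0^2 equals b**2 / (1 + a**2) here
assert abs(J(x_star) - b**2 / (1 + a**2)) < 1e-12
print(x_star, J(x_star))
```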
Let's now show that indeed J = J_a at the minimum and that we get the same answer as with the first two methods. To do this, first notice that J_a is a function of three variables: x, y, and \lambda. Thus, the minimum of J_a satisfies the three conditions

(14 - 8a-c)
    0 = \frac{\partial J_a}{\partial x} = \frac{\partial J}{\partial x} + \lambda \frac{\partial f}{\partial x}
    0 = \frac{\partial J_a}{\partial y} = \frac{\partial J}{\partial y} + \lambda \frac{\partial f}{\partial y}
    0 = \frac{\partial J_a}{\partial \lambda} = f

We already see in Eq. (14 - 8c) that the constraint is satisfied at the minimum. In order to conclude that minimizing J_a also minimizes J, we'll need to show that Eqs. (14 - 8a,b) imply that dJ/dx = 0. From Eqs. (14 - 8a,b) we get

    \frac{dJ}{dx} = \frac{\partial J}{\partial x} + \frac{\partial J}{\partial y}\frac{dy}{dx}
                  = \frac{\partial J}{\partial x} - \frac{\partial J}{\partial y}\,\frac{\partial f/\partial x}{\partial f/\partial y}
                  = \frac{\partial J}{\partial x} - \frac{\partial J}{\partial y}\,\frac{\partial J/\partial x}{\partial J/\partial y} = 0

(Along the constraint f = 0 we have dy/dx = -(\partial f/\partial x)/(\partial f/\partial y), and Eqs. (14 - 8a,b) give \lambda = -(\partial J/\partial x)/(\partial f/\partial x) = -(\partial J/\partial y)/(\partial f/\partial y), so the ratio of the f-derivatives equals the ratio of the J-derivatives.) Hence, the minima of J_a and J are the same. From Eqs. (14 - 7) and (14 - 8), the specific minimizing conditions are

(14 - 9a-c)
    0 = 2x - \lambda a
    0 = 2y + \lambda
    0 = y - ax - b

The solution yields Eq. (14 - 3), as expected.

2. Matrix-Vector Calculus

The functions that we'll be minimizing shortly will be expressed using a compact matrix-vector notation. We'll come across functions like

(14 - 10a-c)    J_1 = \mathbf{x}^T \mathbf{x}, \qquad J_2 = \mathbf{x}^T \mathbf{y}, \qquad J_3 = \mathbf{x}^T A \mathbf{x}

where

    \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}, \qquad
    \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ y_n \end{pmatrix}, \qquad
    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}

The T means transpose rows and columns. Notice that by transposing a vector or a matrix twice we get back the original vector or matrix. Also, notice that the transpose of a 1 x 1 vector (which is called a scalar) yields the original scalar. The derivatives of the functions J with respect to the coordinates x_1, x_2, x_3, ..., x_{n-1}, x_n will be placed in a vector as follows:

(14 - 11)    \frac{\partial J}{\partial \mathbf{x}} = \begin{pmatrix} \partial J/\partial x_1 \\ \partial J/\partial x_2 \\ \vdots \\ \partial J/\partial x_{n-1} \\ \partial J/\partial x_n \end{pmatrix}

Thus, you can verify quite easily that the derivatives of the functions J given in Eqs.
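Because the minimizing conditions of Eq. (14 - 9) are linear in x, y, and \lambda, they can be solved by hand and checked numerically. A minimal sketch, with illustrative values a = 2, b = 1 (not from the text):

```python
# Eq. (14 - 9a-c):  0 = 2x - lam*a,  0 = 2y + lam,  0 = y - a*x - b.
# Eliminating by hand: lam = -2y, x = -a*y, then y = b / (1 + a**2).
a, b = 2.0, 1.0    # illustrative values

y = b / (1 + a**2)
x = -a * y
lam = -2 * y

# All three conditions hold simultaneously...
assert abs(2 * x - lam * a) < 1e-12
assert abs(2 * y + lam) < 1e-12
assert abs(y - a * x - b) < 1e-12

# ...and (x, y) reproduces the intercepts of Eq. (14 - 3)
assert abs(x - (-a * b / (1 + a**2))) < 1e-12
assert abs(y - (b / (1 + a**2))) < 1e-12
print(x, y, lam)
```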
(14 - 10) with respect to \mathbf{x} are

(14 - 12a-c)    \frac{\partial J_1}{\partial \mathbf{x}} = 2\mathbf{x}, \qquad \frac{\partial J_2}{\partial \mathbf{x}} = \mathbf{y}, \qquad \frac{\partial J_3}{\partial \mathbf{x}} = (A + A^T)\mathbf{x}

Finally, we'll need the following two properties of the inverse of a square matrix and of the transpose of a matrix. First, for any product of matrices, we have

(14 - 13)
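The three gradient formulas for the functions of Eq. (14 - 10) can be verified with central finite differences. A self-contained sketch in plain Python; the 3-vectors and the 3 x 3 matrix below are arbitrary illustrative data:

```python
# Finite-difference check of the gradients of J1 = x'x, J2 = x'y,
# J3 = x'Ax from Eq. (14 - 10).  Data below are arbitrary examples.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def grad_fd(J, x, h=1e-6):
    """Central-difference approximation of dJ/dx, one coordinate at a time."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((J(xp) - J(xm)) / (2 * h))
    return g

x = [1.0, -2.0, 0.5]
y = [0.3, 1.1, -0.7]
A = [[1.0, 2.0, 0.0],
     [0.5, -1.0, 3.0],
     [2.0, 0.0, 1.5]]
At = [list(col) for col in zip(*A)]   # transpose of A

checks = [
    (lambda v: dot(v, v),            [2 * xi for xi in x]),          # dJ1/dx = 2x
    (lambda v: dot(v, y),            y),                             # dJ2/dx = y
    (lambda v: dot(v, matvec(A, v)),
     [p + q for p, q in zip(matvec(A, x), matvec(At, x))]),          # dJ3/dx = (A + A')x
]
for J, analytic in checks:
    for num, ana in zip(grad_fd(J, x), analytic):
        assert abs(num - ana) < 1e-4
print("all gradient formulas confirmed")
```

Note that for a symmetric A the third formula reduces to the familiar 2Ax, mirroring the scalar derivative of x^2.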

