MIT 10 34 - Lecture Notes

This preview shows pages 1-3 of the 10-page document; the code listing below is truncated where the preview ends.

Save
View full document
View full document
Premium Document
Do you want full access? Go Premium and unlock all 10 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 10 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 10 pages.
Access to all documents
Download any document
Ad free experience
Premium Document
Do you want full access? Go Premium and unlock all 10 pages.
Access to all documents
Download any document
Ad free experience

Unformatted text preview:

Produced using MATLAB® software.

% simple_nonlinear_LS.m
%
% This MATLAB function uses only standard MATLAB
% routines to implement the nonlinear fitting
% of parameters using the Marquardt method.
% The errors for each response measurement are
% assumed to be independently distributed
% according to a normal distribution with a
% mean of zero and a common variance.
%
% Once the parameter vector minimizing the
% residual sum of squared errors is calculated,
% approximate confidence intervals are generated
% using the linearized design matrix around the
% minimum. The normal equations are solved using
% singular value decomposition.
%
% INPUT:
% ======
% y - the vector of observed responses
%
% calc_yhat - the name of a function that takes as input
%    a given estimate of the model parameters and returns
%    as output the vector of model predictions
%
% theta_guess - the initial guess of the vector of parameters
%
% alpha_CI - determines the width of the confidence intervals;
%    a value of 0.05 returns a 95% confidence interval.
%
% Options - a data structure that provides information to
%    override the default options of the solver routine.
%    Fields are (with default values in parentheses):
%    .atol - convergence tolerance of gradient norm (1e-8)
%    .max_iter - max # of Gauss-Newton iterations (100)
%    .max_line - max # of weak line steps (10)
%    .tau_LM - the initial guess of the positive
%       value added to the diagonal elements of the
%       approximate Hessian in the Levenberg-Marquardt
%       method (1e-3)
%    .search_angle_max - the maximum angle, in degrees, allowed
%       between the search direction and the direction of
%       steepest descent (80)
%    .verbose - if 0, don't print results to screen. If 1,
%       print final results. If >1, print results of
%       each iteration (0)
%
% Param_fix - a data structure of fixed parameters that may be
%    passed to the function calc_yhat
%
% OUTPUT:
% =======
% theta - the parameter vector found as a local minimum
%    of the residual sum of squared errors
%
% theta_CI - a vector containing the half-widths
%    of the confidence intervals for each parameter
%
% yhat - the vector of model predictions using the
%    fitted values of the model parameters
%
% yhat_CI - a vector containing the half-widths of
%    the confidence intervals for each model
%    prediction.
%
% Stats - a data structure containing various measures
%    of fit of the data
%    .RSS - the residual sum of squared errors
%    .sample_var - the sample variance
%    .sample_std - the sample standard deviation
%    .R2_corr - the R^2 correlation coefficient that
%       reports the quality of the fit
%
% iflag - an integer that takes a value of 1 upon
%    successful completion of the routine
%
% K. Beers
% MIT ChE
% 12/8/2001

function [theta,theta_CI,yhat,yhat_CI,Stats,iflag] = ...
    simple_nonlinear_LS(...
        y,calc_yhat,theta_guess,alpha_CI,Options,Param_fix);

iflag = 0;

% We extract the number of measurements and
% the number of parameters to be fitted.
num_exp = length(y);
num_param = length(theta_guess);

% From the initial guess we now calculate the initial
% model predictions. From this, we then calculate the
% residual error vector.
theta = theta_guess;
[yhat,iflag2] = feval(calc_yhat,theta,Param_fix);
if(iflag2 <= 0)
    iflag = -1;
    error(['calc_yhat returned error : ', ...
        int2str(iflag2)]);
end
residual = y - yhat;
RSS = dot(residual,residual);

% We now use finite differences to estimate the linearized
% design matrix X.
[X,iflag2] = approx_X_FD(theta,yhat,calc_yhat,Param_fix);
if(iflag2 <= 0)
    iflag = -2;
    error(['approx_X_FD returned error : ', ...
        int2str(iflag2)]);
end

% We now set the parameters that direct the optimization
% algorithm.
if(~exist('Options'))
    Options.null = 0;
end
atol = 1e-8;
if(isfield(Options,'atol'))
    atol = Options.atol;
end
max_iter = 100;
if(isfield(Options,'max_iter'))
    max_iter = Options.max_iter;
end
max_line = 10;
if(isfield(Options,'max_line'))
    max_line = Options.max_line;
end
tau_LM = 1e-3;
if(isfield(Options,'tau_LM'))
    tau_LM = Options.tau_LM;
end
search_angle_max = 80;
if(isfield(Options,'search_angle_max'))
    search_angle_max = Options.search_angle_max;
end
search_cos_min = cos(search_angle_max/180*pi);
verbose = 0;
if(isfield(Options,'verbose'))
    verbose = Options.verbose;
    if(verbose < 0)
        verbose = 0;
    end
end

if(verbose)
    disp(' ');
    disp(['Initial RSS = ', num2str(RSS)]);
end

% We now begin the iterations of the Gauss-Newton method,
% using the Levenberg-Marquardt method to modify the
% approximate Hessian if the search direction is too far
% away from the direction of steepest descent.
if(verbose)
    disp(' ');
    disp('Iteration  RSS  |grad.|  tau_LM');
end
converged = 0;
for iter=1:max_iter

    % First, we store the old value of the
    % residual sum of squares to use in the
    % descent criterion.
    RSS_old = RSS;

    % We now calculate the search direction using the
    % Levenberg-Marquardt method of adding a diagonal
    % matrix (here we use Levenberg's choice of I) to
    % bias the search direction towards the steepest
    % descent direction when the approximate Hessian
    % behaves poorly. Since we are adding a positive
    % diagonal matrix, problems with near-singularity
    % are somewhat reduced. We use a QR decomposition
    % to solve.

    % We use a while loop to keep calculating search
    % directions until we find one that is less than
    % the maximum angle away from the direction of
    % steepest descent.
    accept_search = 0;
    d = X'*residual;
    while(~accept_search)
        [Q,R] = qr(X'*X + tau_LM*eye(num_param));
        p = R\(Q'*X'*residual);
        try
            search_cos = dot(p,d)/sqrt(dot(p,p))/ ...
                sqrt(dot(d,d));
            if(search_cos >= search_cos_min)
                accept_search = 1;
                tau_LM = 0.1*tau_LM;
            else
                tau_LM = 10*tau_LM;
            end
        catch
            accept_search = 1;
        end
    end

    % Now that we have selected the search direction, we begin
    % a weak line search to force acceptance of the descent
    % criterion.
    ifound_descent = 0;
    for iter_line = 0:max_line

        % We try the following fractional step length.
        alpha = 2^-iter_line;

        % We now calculate the model predictions for this
        % new estimate of the parameter vector.
        yhat = feval(calc_yhat,theta+alpha*p,Param_fix);
        RSS = dot(y-yhat,y-yhat);

        % If this fractional step satisfies the descent
        % criterion, accept it. Otherwise, try a smaller
        % fractional step.
        if(RSS < RSS_old)
            theta = theta + alpha*p;
            X = approx_X_FD(theta, ...
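The core of the routine above is the Levenberg-Marquardt update: solve the damped normal equations (X'X + tau*I) p = X'r, shrink tau when a step lowers the residual sum of squares, and grow it otherwise. The following is a minimal Python/NumPy sketch of that loop, not a translation of the full MATLAB function: the names `lm_fit` and `jacobian_fd` are mine, the search-angle test and verbose reporting are omitted, and `jacobian_fd` is only a plausible stand-in for what `approx_X_FD` (whose source is not shown in this preview) computes.

```python
import numpy as np

def jacobian_fd(theta, yhat, calc_yhat, eps=1e-6):
    # Forward-difference estimate of the linearized design matrix,
    # X[i, j] = d(yhat_i)/d(theta_j) -- the role approx_X_FD plays above.
    X = np.empty((yhat.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        h = eps * max(1.0, abs(theta[j]))
        tp[j] += h
        X[:, j] = (calc_yhat(tp) - yhat) / h
    return X

def lm_fit(y, calc_yhat, theta0, tau=1e-3, atol=1e-8, max_iter=100):
    # Levenberg-Marquardt with Levenberg's damping matrix tau*I and the
    # same accept/reject rule on tau as the MATLAB routine.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        yhat = calc_yhat(theta)
        r = y - yhat
        X = jacobian_fd(theta, yhat, calc_yhat)
        g = X.T @ r                      # steepest-descent direction for RSS
        if np.linalg.norm(g) < atol:
            break
        # Damped normal equations: (X'X + tau*I) p = X'r
        p = np.linalg.solve(X.T @ X + tau * np.eye(theta.size), g)
        r_new = y - calc_yhat(theta + p)
        if r_new @ r_new < r @ r:        # descent criterion
            theta = theta + p
            tau *= 0.1                   # good step: trust Gauss-Newton more
        else:
            tau *= 10.0                  # bad step: lean toward steepest descent
    return theta
```

For example, fitting `a*exp(-b*x)` to noise-free data generated with a = 2, b = 0.5 from the starting guess (1, 1) recovers the true parameters.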

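The inner loop at the point where the listing cuts off is a weak line search: try step lengths alpha = 2^-k for k = 0, 1, ..., max_line and accept the first one that strictly reduces the RSS. Isolated in plain Python (the function name and return convention are mine, not from the notes):

```python
def weak_line_search(rss, theta, p, max_line=10):
    # Backtracking by halving: accept the first fractional step
    # alpha = 2**-k along p that lowers the residual sum of squares,
    # mirroring the descent criterion in the MATLAB loop above.
    rss_old = rss(theta)
    for k in range(max_line + 1):
        alpha = 2.0 ** -k
        theta_try = theta + alpha * p
        if rss(theta_try) < rss_old:
            return theta_try, True
    return theta, False   # no descent found within max_line halvings
```

On a 1-D quadratic `rss(t) = (t - 3)^2` with theta = 0 and an overshooting step p = 8, the full step (to 8) is rejected and the half step (to 4) is accepted.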

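The header promises approximate confidence intervals from the design matrix linearized at the minimum, with the normal equations handled by singular value decomposition; the pages showing that part of the code are not in this preview. The sketch below is one standard construction consistent with that description, not the notes' actual code: the parameter covariance s^2 (X'X)^-1 is formed from the SVD of X, and half-widths use the Student-t quantile implied by alpha_CI. The function name `linearized_ci` is mine.

```python
import numpy as np
from scipy.stats import t

def linearized_ci(X, residual, alpha=0.05):
    # Half-widths of (1 - alpha) confidence intervals for the fitted
    # parameters and the model predictions, from the design matrix X
    # linearized at the minimum of the residual sum of squares.
    n, p = X.shape
    dof = n - p
    s2 = residual @ residual / dof               # sample variance of the fit
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    # (X'X)^-1 = V diag(1/sv^2) V', so Cov(theta_hat) = s2 * V diag(1/sv^2) V'
    cov = (Vt.T * (1.0 / sv**2)) @ Vt * s2
    tcrit = t.ppf(1.0 - alpha / 2.0, dof)
    theta_hw = tcrit * np.sqrt(np.diag(cov))
    # Var(yhat_i) = x_i' Cov(theta_hat) x_i, i.e. diag(X Cov X')
    yhat_hw = tcrit * np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))
    return theta_hw, yhat_hw
```

Forming the covariance through the SVD rather than inverting X'X directly keeps the computation stable when X is nearly rank-deficient, which matches the routine's stated use of singular value decomposition.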