10.34 Numerical Methods Applied to Chemical Engineering
MATLAB Tutorial
Kenneth Beers
Department of Chemical Engineering
Massachusetts Institute of Technology
August 1, 2001

The Nature of Scientific Computing

This course focuses on the use of computers to solve problems in chemical engineering. We will learn how to solve the partial differential equations that describe momentum, energy, and mass transfer, integrate the ordinary differential equations that model a chemical reactor, and simulate the dynamics and predict the minimum-energy structures of molecules. These problems are expressed in terms of mathematical operations such as partial differentiation and integration that computers do not understand. All that they know how to do is store numbers at locations in their memory and perform simple operations on them like addition, subtraction, multiplication, division, and exponentiation. Somehow, we need to translate our higher-level mathematical description of these problems into a sequence of these basic operations.

It is logical to develop simulation algorithms that decompose each problem into sets of linear equations of the following form:

      a11*x1 + a12*x2 + ... + a1n*xn = b1
      a21*x1 + a22*x2 + ... + a2n*xn = b2
      ...
      an1*x1 + an2*x2 + ... + ann*xn = bn

A computer understands how to do the operations found in this system (multiplication and addition), and we can represent this set of equations very generally by the matrix equation Ax = b, where A = {aij} is the matrix of coefficients on the left-hand side, x is the solution vector, and b is the vector of coefficients on the right-hand side. This general representation allows us to pass along, in a consistent language, our system-specific linear equation sets to prewritten algorithms that have been optimized to solve them very efficiently. This saves us the effort of coding a linear solver every time we write a new program. This method of relegating repetitive tasks to reusable, prewritten subroutines is what makes using a computer to solve complex technical problems feasible. It also allows us to take advantage of the decades of applied mathematics research that have gone into developing efficient numerical algorithms.

A typical scientific program contains problem-specific sections that read in parameters, write out results, and phrase the problem as a series of linear algebraic systems; most of the execution time is then spent solving those linear systems. This course focuses primarily on understanding the theory and concepts fundamental to scientific computing, but we also need to know how to translate these concepts into working programs and to combine our problem-specific code with prewritten routines that efficiently perform the desired numerical operations.

So, how do we instruct the computer to solve our specific problem? At a basic level, all a computer does is follow instructions that tell it to retrieve numbers from specified memory locations, perform some simple algebraic operations on them, and store them in some (possibly new) places in memory. Rather than force computer users to deal with details like memory addresses or the passing of data from memory to the CPU, computer scientists develop for each type of computer a program called a compiler that translates "human-level" code into the set of detailed machine-level instructions (contained in an executable file) that the computer will perform to accomplish the task.
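Before turning to compiled languages, it is worth seeing what "passing a linear system to a prewritten solver" looks like in practice. The short sketch below uses MATLAB, the subject of this tutorial; the backslash operator invokes MATLAB's built-in linear-equation solver, and the particular coefficient values are invented purely for illustration.

      % Coefficient matrix A and right-hand-side vector b
      % (numerical values invented for illustration)
      A = [ 4 -2  1;
           -2  4 -2;
            1 -2  4];
      b = [11; -16; 17];

      % Solve the linear system A*x = b using the prewritten solver
      % behind MATLAB's backslash operator
      x = A \ b

Only the lines defining A and b are problem-specific; the work of actually solving the system is delegated entirely to the library routine.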
Using a compiler, it is easy to write code that tells a computer to do the following:

1. Find a space in memory to store a real number x
2. Find a space in memory to store a real number y
3. Find a space in memory to store a real number z
4. Set the value of x to 2
5. Set the value of y to 4
6. Set the value stored at location z equal to 2*x + 3*y, where the symbol * denotes multiplication

In FORTRAN, the first modern scientific programming language (which, in modified form, commonly FORTRAN 77, is still in wide use today), you can accomplish these tasks by writing the code:

      REAL x, y, z
      x = 2
      y = 4
      z = 2*x + 3*y

By itself, this code performs the desired task but does not provide any means for the user to view the results. A full FORTRAN program that performs the task and writes the result to the screen is:

      IMPLICIT NONE
      REAL x, y, z
      x = 2
      y = 4
      z = 2*x + 3*y
      PRINT *, 'z = ', z
      END

When this code is compiled with a FORTRAN 77 compiler, the output to the screen from running the executable is:

      z = 16.0000

(For comparison, a MATLAB version of this calculation is sketched at the end of this section.)

Compiled programming languages allow only the simple output of text, numbers, and binary data, so any graphing of results must be performed by a separate program. In practice, this requirement of writing the code, storing the output in a file with the appropriate format, and reading this file into a separate graphing or analysis program leads one, for small projects, to fall back on "canned" software such as EXCEL that is ill-suited for technical computing; after all, EXCEL is intended for business spreadsheets!

Other compiled programming languages exist, most of them more powerful than FORTRAN 77, a legacy of the past that is retained mostly because of the highly efficient numerical routines already written in the language. While FORTRAN 77 lacks the functionality of more modern languages, in terms of execution speed it usually has the advantage. In the 80's and 90's, C and C++ became highly popular within the broader computer science community because they allow one to organize and structure data more conveniently and to write highly modular code for large programs. C and C++ have never gained the same level of popularity within the scientific computing community, mainly because their implementations have focused more on robustness and generality than on execution speed, and many scientific programs have comparatively simple structures for which execution speed is the primary concern. This situation is changing somewhat today, however: the introduction of FORTRAN 90 and its update, FORTRAN 95, has given the FORTRAN language a new lease on life. FORTRAN 90/95 includes many of the data-structuring capabilities of C/C++, but was written with a technical audience in mind. It is the language of choice for parallel scientific computing, in which tasks are parceled out during execution to one or more CPUs. With the growing popularity of dual-processor workstations and Beowulf-type clusters, FORTRAN 90/95 and variants such as High
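For comparison with the FORTRAN example above, the same z = 2*x + 3*y calculation can be carried out interactively in MATLAB, the subject of this tutorial, with no type declarations and no separate compile step; a minimal sketch:

      % Assign values and compute z = 2*x + 3*y
      x = 2;
      y = 4;
      z = 2*x + 3*y;

      % Display the result on the screen
      disp(['z = ', num2str(z)])

Typing these lines at the MATLAB prompt prints z = 16, and the result can then be plotted or analyzed further in the same session, without writing intermediate files or invoking a separate graphing program.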

