MIT 6 971 - Study Guide

CHARMM Element doc/parallel.doc 1.1

This is Info file parallel.doc, produced by Makeinfo-1.61 from the
input file parallel.texi.

#File: Parallel, Node: Top, Up: (chmdoc/charmm.doc), Next: (chmdoc/commands.doc), Prev: (chmdoc/changelog.doc)

                 Parallel Implementation of CHARMM

CHARMM has been modified to allow computationally intensive simulations
to be run on parallel machines using a replicated data model. This
version, though employing a full communication scheme, uses an efficient
divide-and-conquer algorithm for global sums and broadcasts.

Currently the following hardware platforms are supported:

  1. Cray T3D/T3E
  2. Cray C90, J90
  3. SGI Power Challenge
  4. Convex SPP-1000 Exemplar
  5. Intel iPSC/860 gamma
  6. Intel Delta machine
  7. Intel Paragon machine
  8. Thinking Machines CM-5
  9. IBM SP1/SP2 machines
 10. Parallel Virtual Machine (PVM)
 11. Workstation clusters (SOCKET)
 12. Alpha Servers (SMP machines, PVMC)
 13. TERRA 2000
 14. HP SMP machines
 15. Convex SPP-2000
 16. SGI Origin
 17. LoBoS (any Beowulf)

* Menu:

* Installation::   Installing CHARMM on parallel systems
* Running::        Running CHARMM on parallel systems
* PARAllel::       Command PARAllel controls parallel communication
* Status::         Parallel Code Status (as of September 1998)
* Using PVM::      Parallel Code implemented with PVM
* Implementation:: Description of implementation of parallel code

#File: Parallel, Node: Installation, Next: Running, Prev: Top, Up: Top

The CMPI keyword was added to support many parallel communication
libraries. To get the old communication routines, always specify CMPI;
otherwise MPI is the default choice (see the recommended keyword
combination for each specific platform).
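As a concrete illustration of the keyword choice above, the following is a minimal sketch of selecting the old CMPI routines. The file name and location are illustrative assumptions; in a real build, pref.dat is generated by install.com under the machine-specific build directory.

```shell
# Illustrative sketch: ensure the CMPI keyword is present in pref.dat so
# the old communication routines are compiled in; without it, the MPI
# collective routines are the default.
PREF=${PREF:-pref.dat}   # real builds keep this under build/<machine>/
touch "$PREF"            # stand-in; install.com normally creates the file
grep -qx 'CMPI' "$PREF" || echo 'CMPI' >> "$PREF"
```

The `grep -qx` guard keeps the keyword from being appended twice if the sketch is re-run.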
On some platforms the recommended preflx directives prepare code whose
communication is much faster; e.g., on a 128-node T3E, CMPI is 4 times
faster than MPI.

This is a complete list of the supported combinations of message
passing libraries implemented in parallel CHARMM.

Combinations of pref.dat keywords for the MPI library (can be specified
on any platform that supports MPI):

  1. <no extra keywords>        (calls to MPI collective routines)
  2. CMPI MPI                   (non-blocking cube topology using send/receive from MPI)
  3. CMPI MPI GENCOMM           (non-blocking ring topology, MPI send/receive)
  4. CMPI MPI SYNCHRON          (blocking cube topology, MPI send/receive)
  5. CMPI MPI GENCOMM SYNCHRON  (blocking ring topology, MPI send/receive)

Native library options:

  6. CMPI DELTA  (for Intel Paragon)
  7. CMPI IBMSP  (for IBM SP2)
  8. TERRA       (for TERRA 2000)
  9. CMPI CM5    (for CM5)
 10. CSPP        (Convex version of MPI)

Workstation clusters using SOCKET:

 11. CMPI SOCKET SYNCRON          (blocking cube topology)
 12. CMPI SOCKET SYNCRON GENCOMM  (blocking ring topology)

PVM library:

 13. CMPI PVMC SYNCHRON           (blocking cube, PVM send/receive)
 14. CMPI PVMC GENCOMM SYNCHRON   (blocking ring, PVM send/receive)

Combinations 1, 8, and 10 are currently implemented in
machdep/paral1.src, so there is no need for the paral2.src and
paral3.src files, which will eventually become unnecessary. The
efficiency of the different topologies also varies with the number of
nodes.

On some platforms the EXPAND keyword is recommended in combination with
the fastest FAST option in the CHARMM input script; e.g., for IBM SP:
EXPAND (fast parvect).

The installation script now installs a default configuration for any
parallel platform. If one of X, G, P, M, 1, 2, 64, Q, or S is
specified, the size keyword must be specified too. Run install.com
without parameters for the current set of options.

Installation commands for parallel machines, with relevant options:

  1. Cray T3E
     install.com t3e [size] [Q] [P] or [M]

  2. Cray T3D
     install.com t3d [size] [Q] [P] or [M]

  3. Cray C90, J90
     install.com t3d [size]
  4. SGI Power Origin
     install.com sgi64 size M [Q] [X]
     uname -a: IRIX64 icpsg1 6.2 03131016 IP25

  5. SGI Power Challenge
     install.com sgi size P 64 [Q] [X]
     uname -a: IRIX64 icpsg1 6.2 03131016 IP25

  4a. SGI Origin
     install.com sgi64 size M 64
       /usr/include
       /usr/lib64
     uname -a: IRIX64 atlas 6.5 04131233 IP27

  6. Convex SPP-1000 or SPP-2000
     install.com cspp size P or M [Q]

  7. Intel Paragon machine
     install.com intel
     uname -a: Paragon OSF/1 timewarp 1.0.4 R1_4 paragon

  8. IBM SP1/SP2 machines
     install.com ibmsp size [Q]
     uname -a: AIX f1n3 1 4 000104697000

  8a. IBM SP3 machines
     install.com ibmsp3 size [Q]

  9. Generic Parallel Virtual Machine (PVM)
     install.com machine size P

 10. TERRA 2000
     install.com terra size

 11. Workstation clusters
     install.com machine size S [Q] [X]

 12. Alpha Servers (SMP)
     install.com alphamp size M

 13. Cluster of PCs using the GNU/Linux OS - Beowulf class of machines

     A. Using RedHat-6.0:
        Get and install the official LAM MPI rpm package:
          rpm -i http://www.mpi.nd.edu/downloads/lam/lam-6.31b1-tcp.1.i386.rpm
          install.com gnu size M [Q] [X]
        This asks 2 questions; the answers are:
          /usr/local/lam-6.3-b1/include
          /usr/local/lam-6.3-b1/lib

     B. Using Debian-potato:
        One can use g77 with either lam or mpich (preferred).
          install.com gnu size M [Q] [X]
        This asks 2 questions; the answers are:
          /usr/include/lam
          /usr/lib/lam/lib
        or
          install.com gnu size M mpich [Q] [X]
        This asks 2 questions; the answers are:
          /usr/lib/mpich/build/LINUX/ch_p4/include
          /usr/lib/mpich/build/LINUX/ch_p4/lib

This small performance table, run on a single-processor Pentium
II/450MHz machine, might help you decide which system/compiler is best
for your needs:

  B1 = 50 steps of MbCO dynamics + water with spherical cutoffs
  B2 = 25 steps of MbCO dynamics + water with PM Ewald
  B3 = 10 steps of minimization of QM/MM for alanine

All timings are in seconds of elapsed time on empty machines using the
above install procedure.
(This table was made July 31, 1999.)

Benchmark | g77/RH-6.0 | g77*/Debian | f2c/Debian |  pgf77  | f77/Absoft
=========================================================================
    B1    |  290.6 s   |   197.6 s   |  211.1 s   | 189.5 s |  196.0 s
-------------------------------------------------------------------------
    B2    |  223.3 s   |   193.7 s   |  234.6 s   | 199.2 s |  211.3 s
-------------------------------------------------------------------------
    B3    |   70.5 s   |    64.3 s   |   74.3 s   |  59.8 s | not working
=========================================================================

g77*/Debian is the newest g77-2.95 compiler from July 31, 1999. pgf77
and f77/Absoft are also the most recent versions.

[NOTE: pgf77 and MPI don't work out of the box. One has to recompile
the MPI library with
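One way to read the table is to normalize each timing against the fastest compiler. A quick check with awk, using the B1 numbers copied from the table (pgf77, at 189.5 s, is the baseline):

```shell
# Relative B1 cost of each compiler versus pgf77 (189.5 s); order of the
# values: g77/RH-6.0, g77*/Debian, f2c/Debian, pgf77, f77/Absoft.
awk 'BEGIN {
    n = split("290.6 197.6 211.1 189.5 196.0", t, " ")
    best = 189.5
    for (i = 1; i <= n; i++) printf "%.2f\n", t[i] / best
}'
```

So, for example, g77 under RedHat 6.0 runs B1 roughly 1.5 times slower than pgf77.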

