Outline
- Introduction to Parallel Processing
- Computing Elements
- Two Eras of Computing
- History of Parallel Processing
- Why Parallel Processing?
- Human Architecture! Growth Performance
- Computational Power Improvement
- Parallel Program has & needs ...
- Processing Elements Architecture
- Processing Elements
- SISD: A Conventional Computer
- The MISD Architecture
- SIMD Architecture
- MIMD Architecture
- Shared Memory MIMD machine
- Distributed Memory MIMD
- Laws of caution ...
- Caution ...
- Types of Parallel Systems
- Operating Systems for PP
- Monolithic Operating System
- Layered OS
- Traditional OS
- New trend in OS design
- Microkernel/Client Server OS (for MPP Systems)
- Few Popular Microkernel Systems
- Reference

Introduction to Parallel Processing
CS 147
November 12, 2004
Johnny Lai

[Figure: a multi-processor computing system — applications run as processes and threads over a threads interface, on a microkernel operating system, over multiple processors and the hardware]

Computing Elements
[Figure: layered computing elements — Applications, Programming paradigms / P.S.Es, System Software/Compiler, Architectures — shown for both the sequential and parallel settings]

Two Eras of Computing
[Figure: timeline from 1940 to 2030 — the Sequential Era followed by the Parallel Era, each passing through R & D, Commercialization, and Commodity phases]

History of Parallel Processing
- Parallel processing can be traced to a tablet dated around 100 BC.
- The tablet has 3 calculating positions.
- We infer that the multiple positions were used for reliability and/or speed.

Why Parallel Processing?
- Computation requirements are ever increasing: visualization, distributed databases, simulations, scientific prediction (earthquakes), etc.
- Sequential architectures are reaching physical limitations (speed of light, thermodynamics).

Human Architecture! Growth Performance
[Figure: human growth vs. age (5 to 45): growth is vertical early in life, then horizontal — an analogy for uniprocessor vs. multiprocessor scaling]

Computational Power Improvement
[Figure: C.P.I. vs. number of processors — the multiprocessor curve rises while the uniprocessor line stays flat]

Why Parallel Processing? (cont.)
- The technology of parallel processing is mature and can be exploited commercially; there is significant R & D work on the development of tools and environments.
- Significant developments in networking technology are paving the way for heterogeneous computing.

Why Parallel Processing? (cont.)
- Hardware improvements such as pipelining and superscalar execution are non-scalable and require sophisticated compiler technology.
- Vector processing works well only for certain kinds of problems.

Parallel Program has & needs ...
- Multiple "processes" active simultaneously solving a given problem, generally on multiple processors.
- Communication and synchronization between its processes (this forms the core of parallel programming efforts).

Processing Elements Architecture
- Simple classification by Flynn (number of instruction and data streams):
  - SISD: conventional
  - SIMD: data parallel, vector computing
  - MISD: systolic arrays
  - MIMD: very general, multiple approaches
- Current focus is on the MIMD model, using general-purpose processors (no shared memory).

Processing Elements
[Figure: the four Flynn classes of processing elements]

SISD: A Conventional Computer
- Speed is limited by the rate at which the computer can transfer information internally.
- Ex: PC, Macintosh, workstations.
[Figure: a single processor with one instruction stream, one data input, and one data output]

The MISD Architecture
- More of an intellectual exercise than a practical configuration.
- A few have been built, but none are commercially available.
[Figure: a single data input stream feeding processors A, B, and C, each driven by its own instruction stream, producing a single data output stream]

SIMD Architecture
- Ex: CRAY vector processing machines, Thinking Machines CM*, Intel MMX (multimedia support).
- Every processor executes the same instruction on its own data: Ci <= Ai * Bi.
[Figure: one instruction stream driving processors A, B, and C, each with its own data input stream and data output stream]

MIMD Architecture
- Unlike SISD and MISD machines, a MIMD computer works asynchronously.
- Shared memory (tightly coupled) MIMD.
- Distributed memory (loosely coupled) MIMD.
[Figure: processors A, B, and C, each with its own instruction stream and data input/output streams, connected by a memory bus]

Shared Memory MIMD machine
- Communication: a source PE writes data to global memory and the destination PE retrieves it.
- Easy to build; conventional OSes for SISD machines can easily be ported.
- Limitation: reliability and expandability — a memory component or any processor failure affects the whole system.
- Increasing the number of processors leads to memory contention.
- Ex: Silicon Graphics supercomputers.
[Figure: processors A, B, and C sharing a Global Memory System over a memory bus]

Distributed Memory MIMD
- Communication: IPC (inter-process communication) over a high-speed network.
- The network can be configured as a tree, mesh, cube, etc.
- Unlike shared-memory MIMD:
  - easily/readily expandable
  - highly reliable (a CPU failure does not affect the whole system)
[Figure: processors A, B, and C, each with its own local memory system, connected by IPC channels]

Laws of caution ...
- The speed of computers is proportional to the square of their cost, i.e. speed = cost^2, so cost = sqrt(speed).
- Speedup by a parallel computer increases as the logarithm of the number of processors: Speedup = log2(no. of processors).
[Figure: speedup S vs. number of processors P (S = log2 P), and speed S vs. cost C (speed = cost^2)]

Caution ...
- Very fast developments in parallel processing and related areas have blurred concept boundaries, causing a lot of terminological confusion: concurrent computing/programming, parallel computing/processing, multiprocessing, distributed computing, etc.
- It's hard to imagine a field that changes as rapidly as computing.
- Even well-defined distinctions like shared memory and distributed memory are merging due to new advances in technology.
- Good environments for development and debugging are yet to emerge.

Caution ...
- There are no strict delimiters for contributors to the area of parallel processing: computer architecture, operating systems, high-level languages, databases, and computer networks all have a role to play.
- This makes it a hot topic of research.

Types of Parallel Systems
- Shared Memory Parallel
  - Smallest extension to existing systems
  - Program conversion is incremental
- Distributed Memory Parallel
  - Completely new systems
  - Programs must be reconstructed
- Clusters
  - A slow-communication form of distributed memory parallel systems

Operating Systems for PP
- MPP systems having thousands of processors require an OS radically different from current ones.
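The two MIMD organizations above differ mainly in how processing elements communicate: through a shared global memory, or by passing messages between private local memories. A minimal Python sketch of the two styles follows — illustrative only, since real shared-memory machines use hardware shared memory and distributed-memory machines use a network; here threads with a lock stand in for PEs sharing global memory, and a queue stands in for the IPC channel.

```python
import threading
import queue

# --- Shared-memory style: PEs communicate through a common variable. ---
# The lock provides the synchronization the slides call essential.
shared = {"total": 0}
lock = threading.Lock()

def shared_memory_worker(values):
    for v in values:
        with lock:                      # serialize access to "global memory"
            shared["total"] += v

# --- Message-passing style: PEs exchange data over an explicit channel. ---
# The queue plays the role of the IPC channel between local memories.
channel = queue.Queue()

def sender(values):
    for v in values:
        channel.put(v)                  # "send" over the IPC channel
    channel.put(None)                   # sentinel: no more messages

def receiver(result):
    while True:
        v = channel.get()               # "receive" from the IPC channel
        if v is None:
            break
        result.append(v)

data = [1, 2, 3, 4]

t1 = threading.Thread(target=shared_memory_worker, args=(data[:2],))
t2 = threading.Thread(target=shared_memory_worker, args=(data[2:],))
t1.start(); t2.start(); t1.join(); t2.join()

received = []
ts = threading.Thread(target=sender, args=(data,))
tr = threading.Thread(target=receiver, args=(received,))
ts.start(); tr.start(); ts.join(); tr.join()

print(shared["total"])       # 10: sum accumulated in "shared memory"
print(sorted(received))      # [1, 2, 3, 4]: data moved by message passing
```

Note how the shared-memory version needs explicit locking to avoid contention on the common variable, while the message-passing version has no shared state at all — mirroring the trade-off between the two MIMD designs described in the slides.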
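The two "laws of caution" reduce to simple arithmetic; this sketch merely evaluates the rules of thumb stated in the slides (speed = cost^2 and speedup = log2 of the processor count) to show how pessimistic the logarithmic speedup rule is.

```python
import math

def speedup(processors: int) -> float:
    # Rule of thumb from the slides: speedup grows as log2 of processor count.
    return math.log2(processors)

def cost_for_speed(speed: float) -> float:
    # speed = cost^2  =>  cost = sqrt(speed)
    return math.sqrt(speed)

# Diminishing returns: 1024 processors yield only a 10x speedup by this rule.
print(speedup(2))      # 1.0
print(speedup(1024))   # 10.0

# Quadrupling the speed only doubles the cost under speed = cost^2.
print(cost_for_speed(4.0) / cost_for_speed(1.0))  # 2.0
```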