Organization of a Simple Computer

Computer Systems Organization
The CPU (Central Processing Unit) is the “brain” of the computer.
•Fetches instructions from main memory.
•Examines them, and then executes them one after another.
•The components are connected by a bus, a collection of parallel wires for transmitting address, data, and control signals. Buses can be external to the CPU, connecting memory and I/O devices, but also internal to the CPU.

Processors
The CPU is composed of several distinct parts:
•The control unit fetches instructions from main memory and determines their type.
•The arithmetic logic unit (ALU) performs operations, such as addition and boolean AND, needed to carry out the instructions.
•A small, high-speed memory made up of registers, each of which has a certain size and function.
The most important register is the Program Counter (PC), which points to the next instruction to be fetched.
The Instruction Register (IR) holds the instruction currently being executed.

CPU Organization
An important part of the organization of a computer is called the data path.
•It consists of the registers, the ALU, and several buses connecting the pieces.
•The ALU performs simple operations on its inputs, yielding a result in the output register.
Later the result can be stored into memory, if desired.
•Most instructions can be divided into two categories:
Register-memory instructions allow memory words to be fetched into registers, where they can be used as inputs in subsequent instructions, for example.

CPU Organization
•Register-register instructions fetch two operands from the registers, bring them into the ALU input registers, perform an operation, and store the result back in a register.
The process of running two operands through the ALU and storing the result is called the data path cycle.
•The faster the data path cycle, the faster the machine.

A von Neumann Machine

Instruction Execution
•The CPU executes each instruction as a series of small steps:
1. Fetch the next instruction from memory into the IR.
2. Change the PC to point to the following instruction.
3. Determine the type of the instruction just fetched.
4. If the instruction uses a word in memory, determine where it is.
5. Fetch the word, if needed, into a CPU register.
6. Execute the instruction.
7. Go to step 1 to execute the next instruction.

Instruction Execution
A program that fetches, examines, and executes the instructions of another program is called an interpreter. Interpretation (as opposed to direct hardware implementation) of instructions has several benefits:
•Incorrectly implemented instructions can be fixed in the field.
•New instructions can be added at minimal cost.
•Structured design permits efficient development, testing, and documenting of complex instructions.

Instruction Execution
By the late 1970s, the use of simple processors running interpreters was widespread.
The interpreters were held in fast read-only memories called control stores.
In 1980, a group at Berkeley began designing VLSI CPU chips that did not use interpretation.
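The seven-step cycle above can be sketched as a tiny software interpreter for a made-up accumulator machine. The LOAD/ADD/HALT opcodes and the memory layout are invented for illustration only; they are not from any real instruction set:

```python
# Minimal sketch of the fetch-decode-execute cycle for a hypothetical
# accumulator machine. Instructions and data share one memory, as in
# a von Neumann machine.
def run(memory):
    pc, acc = 0, 0                 # Program Counter and accumulator
    while True:
        ir = memory[pc]            # 1. fetch the next instruction into the IR
        pc += 1                    # 2. advance the PC to the following instruction
        op, operand = ir           # 3. determine the instruction type
        if op == "HALT":
            return acc
        word = memory[operand]     # 4-5. locate and fetch the memory word
        if op == "LOAD":           # 6. execute the instruction
            acc = word
        elif op == "ADD":
            acc += word
        # 7. loop back to step 1 for the next instruction

# Program: acc = mem[4] + mem[5]  (data stored after the code)
program = [("LOAD", 4), ("ADD", 5), ("HALT", 0), None, 7, 35]
print(run(program))  # 42
```

Note how fixing a buggy opcode here means editing one branch of the interpreter, which mirrors the "fixed in the field" benefit of microcoded control stores.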
They used the term RISC for this concept.
RISC stands for Reduced Instruction Set Computer, as contrasted with CISC (Complex Instruction Set Computer).

The RISC Design Principles
Certain of the RISC design principles have now been generally accepted as good practice:
•All instructions are executed directly by hardware.
•Maximize the rate at which instructions are issued.
Use parallelism to execute multiple slow instructions in a short time period.
•Instructions should be easy to decode.
•Only loads and stores should reference memory.
Since memory access time is unpredictable, memory references make parallelism difficult.
•Provide plenty of registers, since accessing memory is slow.

Instruction-Level Parallelism
•Parallelism comes in two varieties:
Instruction-level parallelism exploits parallelism within individual instructions to get more instructions/second.
Processor-level parallelism allows multiple CPUs to work together on a problem.
•Fetching instructions from memory is a bottleneck.
Instructions can be fetched in advance and stored in a prefetch buffer.

Pipelining
Prefetching breaks instruction execution into two parts: fetch and execute.
In pipelining, we break an instruction up into many parts, each one handled by a dedicated hardware unit, with all the units running in parallel.
Each unit is called a stage. After the pipeline is filled, an instruction completes at every time interval equal to the length of the longest stage. This time interval is the clock cycle of the CPU. The time to fill the pipeline is called the latency.

Pipelining

Superscalar Architectures
We can also imagine having multiple pipelines.
•One possibility is to have multiple equivalent pipelines with a common instruction fetch unit. The Pentium adopted this approach with two pipelines. Complex rules must be used to determine that the two instructions do not conflict. Pentium-specific compilers produced compatible pairs of instructions.
•Another approach is to have a single pipeline with multiple functional units.
This approach is called a superscalar architecture and is used on high-end CPUs (including the Pentium II).

Superscalar Architecture

Processor-Level Parallelism
Instruction-level parallelism speeds up execution by a factor of five or ten. To get speed-ups of 50, 100, or more, we need to use multiple CPUs.
Array processors consist of a large number of identical processors that perform the same sequence of instructions on different sets of data.
•The first array processor was the ILLIAC IV (1972), with an 8x8 array of processors.

Processor-Level Parallelism
A vector processor is similar to an array processor, but while the array processor has as many adders as data elements, in a vector processor the addition operations are performed in a single, highly pipelined adder.
Vector processors use vector registers, which are sets of conventional registers.
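A rough, illustrative calculation of why a single highly pipelined adder can keep up with a vector of operands (the five-stage adder and cycle counts below are assumptions for the sketch, not figures from the slides): pushing n additions through a k-stage pipelined adder takes about k + n - 1 cycles, the latency k to fill the pipeline plus one result per cycle thereafter, versus n * k cycles if each addition had to finish before the next began.

```python
def unpipelined_cycles(n_ops: int, stages: int) -> int:
    # Each addition occupies the whole adder for `stages` cycles.
    return n_ops * stages

def pipelined_cycles(n_ops: int, stages: int) -> int:
    # The first result appears after `stages` cycles (the latency);
    # after that, one result completes every clock cycle.
    return stages + (n_ops - 1) if n_ops else 0

# A 64-element vector add through a hypothetical 5-stage adder:
print(unpipelined_cycles(64, 5))  # 320 cycles
print(pipelined_cycles(64, 5))    # 68 cycles
```

For large vectors the pipelined adder approaches one result per cycle, which is why it can rival an array processor's many adders at far lower hardware cost.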