Lecture Notes on Ints
15-122: Principles of Imperative Computation
Frank Pfenning
Lecture 2, August 26, 2010

Contents: Introduction; Binary Representation of Natural Numbers; Modular Arithmetic; An Algorithm for Binary Addition; Two's Complement Representation; Hexadecimal Notation; Bitwise Operations on Ints; Implementing Addition; Integer Division and Modulus; Shifts; Representing Colors

1 Introduction

Two fundamental types in almost any programming language are booleans and integers. Booleans are comparatively straightforward: they have two possible values (true and false) and conditionals to test boolean values. We will return to their properties in a later lecture.

Integers ..., -2, -1, 0, 1, 2, ... are considerably more complex, because there are infinitely many of them. Because memory is finite, only a finite subrange of them can be represented in computers. In this lecture we discuss how integers are represented, how we can deal with the limited precision in the representation, and how various operations are defined on these representations.

2 Binary Representation of Natural Numbers

For the moment, we only consider the natural numbers 0, 1, 2, ... and we do not yet consider the problems of limited precision. Number notations have a base b. To write down numbers in base b we need b distinct digits. Each digit is multiplied by an increasing power of b, starting with b^0 at the right end. For example, in base 10 we have the ten digits 0-9, and the string 9380 represents the number 9*10^3 + 3*10^2 + 8*10^1 + 0*10^0. We call numbers in base 10 decimal numbers. Unless it is clear from context that we are talking about a certain base, we use a subscript [b] to indicate a number in base b.

In computer systems, two bases are of particular importance. Binary numbers use base 2, with digits 0 and 1, and hexadecimal numbers (explained more below) use base 16, with digits 0-9 and A-F. Binary numbers are so important because the basic digits, 0 and 1, can be modeled inside the computer by two different voltages, usually "off" for 0 and "on" for 1. To find the number represented by a sequence of binary digits we multiply each digit by the appropriate power of 2 and add up the results. In general, the value of a bit sequence is

  b_{n-1} ... b_1 b_0 [2] = b_{n-1}*2^{n-1} + ... + b_1*2^1 + b_0*2^0 = \sum_{i=0}^{n-1} b_i*2^i

For example, 10011[2] represents 1*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 16 + 2 + 1 = 19.

We can also calculate the value of a binary number in a nested way, exploiting Horner's rule for evaluating polynomials.

  10011[2] = (((1*2 + 0)*2 + 0)*2 + 1)*2 + 1 = 19

In general, if we have an n-bit number with bits b_{n-1} ... b_0, we can calculate

  (...((b_{n-1}*2 + b_{n-2})*2 + b_{n-3})*2 + ... + b_1)*2 + b_0

For example, taking the binary number 10010110[2], we write the digits from most significant to least significant and calculate the cumulative value from left to right, writing it top to bottom.

   1         =   1
   1*2 + 0   =   2
   2*2 + 0   =   4
   4*2 + 1   =   9
   9*2 + 0   =  18
  18*2 + 1   =  37
  37*2 + 1   =  75
  75*2 + 0   = 150

Reversing this process allows us to convert a number into binary form. Here we start with the number and successively divide by two, calculating the remainder. At the end, the least significant bit is at the top. For example, converting 198 to binary form proceeds as follows:

  198 = 99*2 + 0
   99 = 49*2 + 1
   49 = 24*2 + 1
   24 = 12*2 + 0
   12 =  6*2 + 0
    6 =  3*2 + 0
    3 =  1*2 + 1
    1 =  0*2 + 1

We read off the answer from bottom to top, arriving at 11000110[2].
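As a concrete illustration of the two conversions just described, here is a small sketch in C. The course itself uses C0, this code is not part of the original notes, and the function names binary_to_value and value_to_binary are our own.

  #include <stdio.h>

  /* Evaluate bits b[0..n-1], most significant bit first, with Horner's
     rule: start at 0, then repeatedly multiply by 2 and add the next bit. */
  int binary_to_value(const int b[], int n) {
    int value = 0;
    for (int i = 0; i < n; i++)
      value = 2 * value + b[i];     /* multiply by the base, add next digit */
    return value;
  }

  /* Convert x >= 0 to binary by repeatedly dividing by 2 and recording the
     remainders.  The remainders come out least significant bit first, so we
     store them from the right end of b[0..max-1] and then shift them to the
     front.  Returns the number of bits produced (at least 1, at most max). */
  int value_to_binary(int x, int b[], int max) {
    int n = 0;
    do {
      b[max - 1 - n] = x % 2;       /* remainder = next bit */
      x = x / 2;
      n++;
    } while (x > 0 && n < max);
    for (int i = 0; i < n; i++)     /* move bits to the front of the array */
      b[i] = b[max - n + i];
    return n;
  }

  int main(void) {
    int bits[32];
    int n = value_to_binary(198, bits, 32);
    for (int i = 0; i < n; i++) printf("%d", bits[i]);   /* 11000110 */
    printf(" = %d\n", binary_to_value(bits, n));         /* 198 */
    return 0;
  }

Running this prints 11000110 = 198, matching the hand calculations above.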
3 Modular Arithmetic

Within a computer, there is a natural size of words that can be processed by single instructions. In early computers, the word size was typically 8 bits; now it is 32 or 64. In programming languages that are relatively close to machine instructions, like C or C0, this means that the native type int of integers is limited to the size of machine words. In C0, we decided that the values of type int occupy 32 bits.

This is very easy to deal with for small numbers, because the more significant digits can simply be 0. According to the formula that yields their number value, these bits do not contribute to the overall value. But we have to decide how to deal with large numbers, when operations such as addition or multiplication would yield numbers that are too big to fit into a fixed number of bits. One possibility would be to raise overflow exceptions. This is somewhat expensive (since the overflow condition must be explicitly detected), and has other negative consequences. For example, (n+n)-n is no longer equal to n+(n-n), because the former can overflow while the latter always yields n and does not overflow. Another possibility is to carry out arithmetic operations modulo the number of representable integers, which would be 2^32 in the case of C0. We say that the machine implements modular arithmetic.

In higher-level languages, one would be more inclined to think of the type int as being inhabited by integers of essentially unbounded size. This means that a value of this type would consist of a whole vector of machine words whose size may vary as computation proceeds. Basic operations such as addition no longer map directly onto machine instructions, but are implemented by small programs. Whether this overhead is acceptable depends on the application.

Returning to modular arithmetic, the idea is that any operation is carried out modulo 2^p for the precision p. Even when the modulus is not a power of two, many of the usual laws of arithmetic continue to hold, which makes it possible to write programs confidently without having to worry, for example, about whether to write x + (y + z) or (x + y) + z. We have the following properties of the abstract algebraic class of rings, which are shared between ordinary integers and integers modulo a fixed number n.

  Commutativity of addition         x + y = y + x
  Associativity of addition         (x + y) + z = x + (y + z)
  Additive unit                     x + 0 = x
  Additive inverse                  x + (-x) = 0
  Cancellation                      -(-x) = x
  Commutativity of multiplication   x * y = y * x
  Associativity of multiplication   (x * y) * z = x * (y * z)
  Multiplicative unit               x * 1 = x
  Distributivity                    x * (y + z) = x * y + x * z
  Annihilation                      x * 0 = 0

Some of these laws, such as associativity and distributivity, do not hold for so-called floating point numbers that approximate real numbers. This significantly complicates the task of reasoning about programs that use floating point numbers.
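The observation about (n+n)-n and n+(n-n) can be checked concretely. The following sketch is in C rather than C0 and is not part of the original notes; because signed overflow is undefined behavior in standard C, it uses the unsigned type uint32_t, whose arithmetic the C standard defines to be modulo 2^32, matching the modular arithmetic described above.

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
    uint32_t n = 3000000000u;      /* n + n does not fit in 32 bits */

    uint32_t a = (n + n) - n;      /* n + n wraps around modulo 2^32 ... */
    uint32_t b = n + (n - n);      /* ... but subtracting n wraps back   */
    printf("(n+n)-n = %u\n", (unsigned)a);   /* 3000000000 */
    printf("n+(n-n) = %u\n", (unsigned)b);   /* 3000000000: they agree */

    /* Associativity and distributivity also hold modulo 2^32. */
    uint32_t x = 0xDEADBEEFu, y = 0x12345678u, z = 0xCAFEF00Du;
    printf("%d\n", (x + y) + z == x + (y + z));    /* 1 */
    printf("%d\n", x * (y + z) == x * y + x * z);  /* 1 */
    return 0;
  }

Under modular arithmetic both groupings yield n, which is exactly why the ring laws above can be relied upon when reasoning about C0's int.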

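The failure of associativity for floating point numbers, by contrast, shows up already in tiny examples: when a small summand is grouped with a huge one of opposite sign, it is lost to rounding. Again, this is an illustrative sketch in C, not part of the original notes.

  #include <stdio.h>

  int main(void) {
    double a = 1e20, b = -1e20, c = 1.0;
    /* In the second grouping, c is absorbed by rounding when added to b. */
    printf("(a + b) + c = %g\n", (a + b) + c);   /* prints 1 */
    printf("a + (b + c) = %g\n", a + (b + c));   /* prints 0 */
    return 0;
  }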
