Math 55 - Spring 2004 - Lecture notes # 5 - Feb 3 (Tuesday)

Read: Sections 2.1-2.3
Note: we will not cover binary search, sorting, or greedy
algorithms, which are covered elsewhere (CS61B).

Homework: Due Feb 11 in section.

1) Show that if the first 14 positive integers are placed around a
   circle in any order, there exist 5 integers in consecutive
   locations around the circle whose sum is 38 or greater.
   Hint: Use the result of question 1.5-71.

2) A real number is called "algebraic" if it is the root of some
   polynomial with integer coefficients and degree at least 1. Let
   A be the set of all algebraic numbers. So A includes sqrt(2),
   cuberoot((5-sqrt(2))/sqrt(3)), any other such expression with
   roots and integers, and many other real numbers besides. This
   exercise will show that A is countable.
   2.1) Show that if r is a root of a polynomial with rational
        coefficients, it is also a root of a polynomial with
        integer coefficients. So we won't miss any real numbers by
        restricting ourselves to polynomials with integer
        coefficients.
   2.2) Show that the set P_d of polynomials of degree d >= 1 and
        with integer coefficients is countable. (A polynomial has
        degree d if it can be written as
           a_d*x^d + a_{d-1}*x^{d-1} + ... + a_1*x + a_0
        with a_d nonzero.)
   2.3) Show that the set A_d of all real roots of all polynomials
        in P_d is countable.
   2.4) Show that the set A of all algebraic numbers is countable.
   2.5) Conclude that there are a great many more real numbers
        that are not roots of polynomials with integer
        coefficients than real numbers that are roots of such
        polynomials.

3) Simplify O(f(n)) where f(n) is given below. Your expression
   should be both as simple and as accurate as possible (it should
   not overestimate f(n) by more than a constant factor). All
   logarithms are base pi.
      f(n) = [ 9^(2^(n^.3)) - 2^(9^(n^.3)) ] *
             [ .99^(n^3) + log(log(log(log n))) ] *
             [ (log(log n))^(log(log(log n))) +
               (log(log(log n)))^(log(log n)) ] *
             [ 3^n * n^4 - 4^n * n^3 ] *
             [ 89*n^4 + 1234*n*(log n)^18 ]

4) Show that log_(1.75) 3.5 must be irrational.
   Hint: proof by contradiction.

5) sec 2.2: 20, 36, 60, 62

6) sec 2.3: 8

7) Modify the algorithm in the last question (2.3-8) to compute
   the derivative of the given polynomial. How many additions and
   multiplications does your algorithm take (ignoring additions to
   increment the loop variable)?

Goals for today: Expressing algorithms
                 How do we measure and compare running times?
                 Big-O notation
                 Introduction to Complexity Theory

Algorithms:

ASK&WAIT: What does this program do?

   prog1(integer n, integer array a(1),...,a(n))
      M = a(1)
      for i = 2 to n
         M = max(M, a(i))
      end for
      return M

ASK&WAIT: What does this program do?

   prog2(integer n, integer array a(1),...,a(n))
      for i = 1 to n
         M = a(i)
         for j = 1 to n except i
            if a(j) > M goto next
         end for
         return M
         next:
      end for

ASK&WAIT: Which program do you think is faster? How much faster?

How do we determine how long prog1 takes to run?

Approach 1: run it and measure the run time in seconds.
ASK&WAIT: What are the pros and cons of this approach?

Approach 2: for each operation performed by the program, find out
how many seconds it takes, and add them all up:
   n-1: max
   n-1: increment i
   n-1: test to see if loop is done
   plus the start-up cost of M = a(1), etc.
ASK&WAIT: What are the pros and cons of this approach?

Approach 3: it takes time proportional to n, i.e. about k*n for
some constant k, once n is large enough that the start-up cost is
negligible.
ASK&WAIT: What are the pros and cons of this approach?
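To try Approach 1 concretely, here is one possible Python
translation of prog1 and prog2 (a sketch, not part of the original
notes: the timing harness and the random test array are my own
illustrative choices, and the goto in prog2 is replaced by
Python's all()):

   import random
   import time

   def prog1(a):
       # One pass: keep the largest element seen so far (about n steps).
       M = a[0]
       for i in range(1, len(a)):
           M = max(M, a[i])
       return M

   def prog2(a):
       # Try each a[i] as a candidate: return it if no other element
       # is larger.  The "goto next" becomes a failed all() check.
       n = len(a)
       for i in range(n):
           M = a[i]
           if all(a[j] <= M for j in range(n) if j != i):
               return M

   # Approach 1: just run each program and measure wall-clock seconds.
   # Note that prog2's time depends heavily on the contents of a.
   a = [random.randint(0, 10**6) for _ in range(5000)]
   for f in (prog1, prog2):
       start = time.perf_counter()
       result = f(a)
       print(f.__name__, result, time.perf_counter() - start, "seconds")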
ASK&WAIT: How long does prog2 take to run, i.e. proportional to
what function of n? If it depends on the values of a(1),...,a(n),
what is the worst case? Is prog2 or prog1 faster, in the worst
case?

Big-O notation

Motivation: given a complicated function f(n), which may represent
how long a program runs on a problem of size n (or how much memory
it takes), quickly approximate it by a much simpler function g(n)
that roughly bounds how fast f(n) increases as a function of n.

Ex: Consider f(n) = (pi+7)/3*n^2 + n + 1 + sin(n). When n is
large, n^2 is so much larger than n+1+sin(n) that we would like to
ignore these terms. Furthermore, (pi+7)/3 is just a constant, and
for many purposes we don't care exactly what the constant is. So
we'd like some notation to say that f(n) grows like some constant
times n^2: we will write "f(n) is O(n^2)".

Introduce notation to mean "proportional to n", or any other g(n):

DEF: Let f and g map integers (or reals) to reals. We say that
f(x) is O(g(x)) (read "Big-O of g") if there are constants C > 0
and k >= 0 such that |f(x)| <= C*|g(x)| whenever x > k.
Intuitively, if f(x) grows as x grows, then g(x) grows at least as
fast.

EG: f(x) = 100*x and g(x) = x^2; then for x > 100, g(x) > f(x),
and f(x) = O(g(x)).

EG: if n >= 1 then
       f(n) = (pi+7)/3*n^2 + n + 1 + sin(n)
           <= (pi+7)/3*(n^2 + n^2 + n^2 + n^2)
            = 4*(pi+7)/3*n^2
    so f(n) = O(n^2), with k = 1 and C = 4*(pi+7)/3 in the
    definition. (Draw pictures to illustrate.)

Remark: Sometimes we write f(x) = O(g(x)), but this is misleading
notation, because f1(x) = O(g(x)) and f2(x) = O(g(x)) does not
mean f1(x) = f2(x); for example, x = O(x) and 2*x = O(x).

EG: f(n) = run-time of prog1 for input of size n = O(n)

ASK&WAIT: What is the (worst-case) running time of

   input: x, array a(1),...,a(n)
   found = false
   for i = 1 to n
      if x = a(i)
         found = true
         exit loop
      end if
   end for

In some of the most important applications of O(), we never have
an exact formula for f(n), such as when f(n) is the exact running
time of a program. In such cases all we can hope for is a simpler
function g(n) such that f(n) is O(g(n)). But to teach you how to
simplify f(n) to get g(n), we will use exact expressions f(n) as
examples.

Goals of O() are:
1) simplicity: O(n^2) is simpler than (pi+7)/3*n^2 + n + 1 +
   sin(n); O(n) is simpler than the actual run time of prog1.
2) reasonable "accuracy":
   EG: Consider f(n) = (pi+7)/3*n^2 + n + 1 + sin(n).
       f(n) is both O(n^2) and O(n^3).
   ASK&WAIT: Why?
   ASK&WAIT: Which is a "better" answer, to say f(n) is O(n^2) or
   O(n^3), since both are true?

EX: Suppose we have two programs for the same problem and want to
pick the fastest. Suppose prog1 runs in exactly time 10*n and
prog2 runs in time n^2, so prog1 is clearly faster when n > 10.
But if we are "sloppy" and say that both run in time O(n^2), then
we can't compare them.

So we would like rules that make it easy to find a simple and
accurate g(x) such that f(x) = O(g(x)) for complicated f(x), rules
that avoid ever needing explicit values of C and k from the
definition of Big-O.

Rule 1: if c is a constant, then c*f(x) is O(f(x)).
ASK&WAIT: What are C and k in the definition of O() that prove
this?
EX: Given any a, b > 0, we have log_a x = O(log_b x).
ASK&WAIT: Why?

Rule 2: x^a = O(x^b) if 0 < a < b.
ASK&WAIT: What are C and k in the definition of O() that prove
this?
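To see the definition in action, here is a small Python sketch (my
own illustrative check, not from the notes; the cutoff of 10^5 is
an arbitrary choice) that brute-force verifies the constants
k = 1 and C = 4*(pi+7)/3 found in the worked example above:

   from math import pi, sin

   def f(n):
       # The running example: f(n) = (pi+7)/3*n^2 + n + 1 + sin(n).
       return (pi + 7) / 3 * n**2 + n + 1 + sin(n)

   C = 4 * (pi + 7) / 3
   k = 1

   # Check |f(n)| <= C*|n^2| for every integer n with k < n <= 10^5.
   # (A finite check only illustrates the bound; the proof above
   # covers all n >= 1.)
   for n in range(k + 1, 100001):
       assert abs(f(n)) <= C * n**2, f"bound fails at n = {n}"
   print("bound |f(n)| <= C*n^2 holds for 1 < n <= 100000")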

