Lecture Notes CMSC 251

Analysis: What remains is to analyze the running time of MergeSort. First let us consider the running time of the procedure Merge(A, p, q, r). Let n = r − p + 1 denote the total length of both the left and right subarrays. What is the running time of Merge as a function of n? The algorithm contains four loops (none nested inside another). It is easy to see that each loop can be executed at most n times. (If you are a bit more careful you can actually see that all the while-loops together can only be executed n times in total, because each execution copies one new element to the array B, and B only has space for n elements.) Thus the running time to Merge n items is Θ(n). Let us write this without the asymptotic notation, simply as n. (We'll see later why we do this.)
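To make the loop-counting concrete, here is a minimal Python sketch of Merge (an illustrative rendering of the lecture's pseudocode, not the course's official code; the names A, B, p, q, r follow the notes). The three while-loops together append exactly n elements to B, and the final loop copies them back, so the whole procedure runs in Θ(n) time:

    # A direct rendering of Merge(A, p, q, r): merge the sorted runs A[p..q]
    # and A[q+1..r] (inclusive indices, as in the lecture) via a scratch list B.
    def merge(A, p, q, r):
        B = []
        i, j = p, q + 1
        while i <= q and j <= r:          # loop 1: both runs nonempty
            if A[i] <= A[j]:
                B.append(A[i]); i += 1
            else:
                B.append(A[j]); j += 1
        while i <= q:                     # loop 2: left-run leftovers
            B.append(A[i]); i += 1
        while j <= r:                     # loop 3: right-run leftovers
            B.append(A[j]); j += 1
        for k, x in enumerate(B):         # loop 4: copy B back into A[p..r]
            A[p + k] = x

    # Example: A = [2, 5, 7, 1, 3, 8]; merge(A, 0, 2, 5) leaves A = [1, 2, 3, 5, 7, 8].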
Now, how do we describe the running time of the entire MergeSort algorithm? We will do this through the use of a recurrence, that is, a function that is defined recursively in terms of itself. To avoid circularity, the recurrence for a given value of n is defined in terms of values that are strictly smaller than n. Finally, a recurrence has some basis values (e.g. for n = 1), which are defined explicitly.

Let's see how to apply this to MergeSort. Let T(n) denote the worst-case running time of MergeSort on an array of length n. For concreteness we could count whatever we like: number of lines of pseudocode, number of comparisons, number of array accesses, since these will only differ by a constant factor. Since all of the real work is done in the Merge procedure, we will count the total time spent in the Merge procedure.

First observe that if we call MergeSort with a list containing a single element, then the running time is a constant. Since we are ignoring constant factors, we can just write T(n) = 1. When we call MergeSort with a list of length n > 1, e.g. MergeSort(A, p, r) where r − p + 1 = n, the algorithm first computes q = ⌊(p + r)/2⌋. The left subarray A[p..q] then contains q − p + 1 elements. You can verify (by some tedious floor-ceiling arithmetic, or more simply by just trying an odd example and an even example) that it is of size ⌈n/2⌉. Thus the remaining subarray A[q+1..r] has ⌊n/2⌋ elements in it. How long does it take to sort the left subarray? We do not know this, but because ⌈n/2⌉ < n for n > 1, we can express this as T(⌈n/2⌉). Similarly, we can express the time that it takes to sort the right subarray as T(⌊n/2⌋). Finally, merging both sorted lists takes n time, by the comments made above. In conclusion we have

    T(n) = 1                               if n = 1,
    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n         otherwise.
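The subarray sizes can be spot-checked exactly as the paragraph above suggests: try an odd and an even example instead of grinding through the floor-ceiling arithmetic. A few lines of illustrative Python do that check for every n up to 100 (taking p = 0 without loss of generality):

    from math import ceil, floor

    # For q = floor((p + r)/2), the left subarray A[p..q] has q - p + 1 elements
    # and the right subarray A[q+1..r] has r - q; compare against ceil(n/2) and
    # floor(n/2) for odd and even n alike.
    for n in range(2, 101):
        p, r = 0, n - 1
        q = (p + r) // 2
        assert q - p + 1 == ceil(n / 2)    # left subarray size
        assert r - q == floor(n / 2)       # right subarray size
    print("split sizes match ceil(n/2) and floor(n/2) for n = 2..100")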
Lecture 7: Recurrences
(Tuesday, Feb 17, 1998)

Read: Chapt. 4 on recurrences. Skip Section 4.4.

Divide and Conquer and Recurrences: Last time we introduced divide-and-conquer as a basic technique for designing efficient algorithms. Recall that the basic steps in a divide-and-conquer solution are (1) divide the problem into a small number of subproblems, (2) solve each subproblem recursively, and (3) combine the solutions to the subproblems into a global solution. We also described MergeSort, a sorting algorithm based on divide-and-conquer.

Because divide-and-conquer is an important design technique, and because it naturally gives rise to recursive algorithms, it is important to develop mathematical techniques for solving recurrences, either exactly or asymptotically. To do this, we introduced the notion of a recurrence, that is, a recursively defined function. Today we discuss a number of techniques for solving recurrences.

MergeSort Recurrence: Here is the recurrence we derived last time for MergeSort. Recall that T(n) is the time to run MergeSort on a list of size n. We argued that if the list is of length 1, then the total sorting time is a constant Θ(1). If n > 1, then we must recursively sort two sublists, one of size ⌈n/2⌉ and the other of size ⌊n/2⌋, and the nonrecursive part takes Θ(n) time: splitting the list takes constant time and merging the lists takes Θ(n) time. Thus, the total running time for MergeSort can be described by the following recurrence:

    T(n) = 1                               if n = 1,
    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n         otherwise.

Notice that we have dropped the Θ()'s, replacing Θ(1) and Θ(n) by just 1 and n, respectively. This is done to make the recurrence more concrete. If we had wanted to be more precise, we could have replaced these with more exact functions, e.g., c₁ and c₂n for some constants c₁ and c₂. The analysis would have been a bit more complex, but we would arrive at the same asymptotic formula.

Getting a feel: We can try to get a feeling for what this recurrence means by plugging in some values and expanding the definition.

    T(1)  = 1                                    (by the basis)
    T(2)  = T(1) + T(1) + 2    = 1 + 1 + 2     = 4
    T(3)  = T(2) + T(1) + 3    = 4 + 1 + 3     = 8
    T(4)  = T(2) + T(2) + 4    = 4 + 4 + 4     = 12
    T(5)  = T(3) + T(2) + 5    = 8 + 4 + 5     = 17
    ...
    T(8)  = T(4) + T(4) + 8    = 12 + 12 + 8   = 32
    ...
    T(16) = T(8) + T(8) + 16   = 32 + 32 + 16  = 80
    ...
    T(32) = T(16) + T(16) + 32 = 80 + 80 + 32  = 192.

It's hard to see much of a pattern here, but here is a trick. Since the recurrence divides by 2 each time, let's consider powers of 2, since the function will behave most regularly for these values. If we consider the ratios T(n)/n for powers of 2, an interesting pattern emerges:

    T(1)/1 = 1        T(8)/8   = 4
    T(2)/2 = 2        T(16)/16 = 5
    T(4)/4 = 3        T(32)/32 = 6.

This suggests that for powers of 2, T(n)/n = (lg n) + 1, or equivalently, T(n) = (n lg n) + n, which is Θ(n log n). This is not a proof, but at least it provides us with a starting point.
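The table and the ratios above are easy to recompute mechanically. Here is a short, illustrative Python sketch; the memoization via functools.lru_cache is only there to keep the evaluation fast and is not part of the recurrence itself:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(1) = 1; T(n) = T(ceil(n/2)) + T(floor(n/2)) + n otherwise.
        # In integer arithmetic, (n + 1) // 2 is ceil(n/2) and n // 2 is floor(n/2).
        if n == 1:
            return 1
        return T((n + 1) // 2) + T(n // 2) + n

    # Reproduce the ratio table: prints (1, 1), (2, 2), (4, 3), (8, 4), (16, 5), (32, 6).
    for k in range(6):
        n = 2 ** k
        print((n, T(n) // n))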
Logarithms in Θ-notation: Notice that I have broken away from my usual convention of saying lg n and have just said log n inside the Θ(). The reason is that the base really does not matter when it is inside the Θ. Recall the change of base formula:

    log_b n = (log_a n) / (log_a b).

If a and b are constants then log_a b is a constant. Consequently log_b n and log_a n differ only by a constant factor. Thus, inside the Θ() we do not need to differentiate between them. Henceforth, I will not be fussy about the bases of logarithms if asymptotic results are sufficient.

Eliminating Floors and Ceilings: One of the nasty things about recurrences is that floors and ceilings are a pain to deal with. So whenever it is reasonable to do so, we will just forget about them, and make whatever simplifying assumptions we like about n to make things work out. For this case, we will make the simplifying assumption that n is a power of 2. Notice that this means that our analysis will only be correct for a very limited (but infinitely large) set of values of n, but it turns out that as long as the algorithm does not behave very differently on the values of n in between (as is the case for MergeSort), the asymptotic bounds we derive carry over to all n.
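Under that power-of-2 assumption the floors and ceilings coincide (⌈n/2⌉ = ⌊n/2⌋ = n/2), so the recurrence simplifies to T(n) = 2T(n/2) + n, and the earlier guess T(n) = n lg n + n can be checked against it directly. A small illustrative sketch:

    # With n a power of 2 the recurrence is T(n) = 2*T(n/2) + n; check that the
    # guessed closed form T(n) = n*lg(n) + n satisfies it for n = 1, 2, ..., 1024.
    def T(n):
        return 1 if n == 1 else 2 * T(n // 2) + n   # n assumed to be a power of 2

    for k in range(11):
        n = 2 ** k
        assert T(n) == n * k + n                    # lg n = k when n = 2**k
    print("T(n) = n*lg(n) + n matches the recurrence at every power of 2 tested")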
