Lecture Notes CMSC 251
Lecture 6: Divide and Conquer and MergeSort (Thursday, Feb 12, 1998)
Read: Chapt. 1 (on MergeSort) and Chapt. 4 (on recurrences).

Divide and Conquer: The ancient Roman politicians understood an important principle of good algorithm design (although they were probably not thinking about algorithms at the time). You divide your enemies (by getting them to distrust each other) and then conquer them piece by piece. This is called divide-and-conquer. In algorithm design, the idea is to take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then combine the piecewise solutions into a global solution. But once you have broken the problem into pieces, how do you solve these pieces? The answer is to apply divide-and-conquer to them, thus further breaking them down. The process ends when you are left with pieces so tiny (e.g. one or two items) that they are trivial to solve.

Summarizing, the main elements of a divide-and-conquer solution are
• Divide (the problem into a small number of pieces),
• Conquer (solve each piece, by applying divide-and-conquer recursively to it), and
• Combine (the pieces together into a global solution).

There are a huge number of computational problems that can be solved efficiently using divide-and-conquer. In fact, the technique is so powerful that when someone first suggests a problem to me, the first question I usually ask (after "what is the brute-force solution?") is "does there exist a divide-and-conquer solution for this problem?"

Divide-and-conquer algorithms are typically recursive, since the conquer part involves invoking the same technique on a smaller subproblem. Analyzing the running times of recursive programs is rather tricky, but we will show that there is an elegant mathematical concept, called a recurrence, which is useful for analyzing the sort of recursive programs that naturally arise in divide-and-conquer solutions. For the next couple of lectures we will discuss some examples of divide-and-conquer algorithms and how to analyze them using recurrences.

MergeSort: The first example of a divide-and-conquer algorithm which we will consider is perhaps the best known. This is a simple and very efficient algorithm for sorting a list of numbers, called MergeSort. We are given a sequence of n numbers A, which we will assume is stored in an array A[1..n]. The objective is to output a permutation of this sequence, sorted in increasing order. This is normally done by permuting the elements within the array A.

How can we apply divide-and-conquer to sorting? Here are the major elements of the MergeSort algorithm.

Divide: Split A down the middle into two subsequences, each of size roughly n/2.
Conquer: Sort each subsequence (by calling MergeSort recursively on each).
Combine: Merge the two sorted subsequences into a single sorted list.

The dividing process ends when we have split the subsequences down to a single item. A sequence of length one is trivially sorted. The key operation, where all the work is done, is the combine stage, which merges together two sorted lists into a single sorted list. It turns out that the merging process is quite easy to implement.

The following figure gives a high-level view of the algorithm. The "divide" phase is shown on the left. It works top-down, splitting up the list into smaller sublists. The "conquer and combine" phases are shown on the right.
They work bottom-up, merging sorted lists together into larger sorted lists.

[Figure 4: MergeSort. The divide phase (left) splits the input 7 5 2 4 1 6 3 0 top-down into single items; the conquer-and-combine phase (right) merges them bottom-up into the sorted output 0 1 2 3 4 5 6 7.]

MergeSort: Let's design the algorithm top-down. We'll assume that the procedure that merges two sorted lists is available to us; we'll implement it later. Because the algorithm is called recursively on sublists, in addition to passing in the array itself, we will pass in two indices, which indicate the first and last indices of the subarray that we are to sort. The call MergeSort(A, p, r) will sort the subarray A[p..r] and return the sorted result in the same subarray.

Here is the overview. If r = p, then there is only one element to sort, and we may return immediately. Otherwise (if p < r) there are at least two elements, and we invoke the divide-and-conquer. We find the index q, midway between p and r, namely q = (p + r)/2 (rounded down to the nearest integer). Then we split the array into subarrays A[p..q] and A[q+1..r]. (We need to be careful here. Why would it be wrong to do A[p..q-1] and A[q..r]? Suppose r = p + 1.) Call MergeSort recursively to sort each subarray. Finally, we invoke a procedure (which we have yet to write) which merges these two subarrays into a single sorted array.

MergeSort(array A, int p, int r) {
    if (p < r) {                  // we have at least 2 items
        q = (p + r)/2
        MergeSort(A, p, q)        // sort A[p..q]
        MergeSort(A, q+1, r)      // sort A[q+1..r]
        Merge(A, p, q, r)         // merge everything together
    }
}

Merging: All that is left is to describe the procedure that merges two sorted lists. Merge(A, p, q, r) assumes that the left subarray, A[p..q], and the right subarray, A[q+1..r], have already been sorted. We merge these two subarrays by copying the elements to a temporary working array called B. For convenience, we will assume that the array B has the same index range as A, that is, B[p..r]. (One nice thing about pseudocode is that we can make these assumptions and leave it up to the programmer to figure out how to implement them.) We have two indices, i and j, that point to the current elements of each subarray. We move the smaller element into the next position of B (indicated by index k) and then increment the corresponding index (either i or j). When we run out of elements in one array, we just copy the rest of the other array into B. Finally, we copy the entire contents of B back into A. (The use of the temporary array is a bit unpleasant, but this is impossible to overcome entirely. It is one of the shortcomings of MergeSort, compared to some of the other efficient sorting algorithms.)

In case you are not aware of C notation, the operator i++ returns the current value of i, and then increments this variable by one.

Merge(array A, int p, int q, int r) {    // merges A[p..q] with A[q+1..r]
    array B[p..r]
    i = k = p                            // initialize pointers
    j = q+1
    while (i <= q and j <= r) {          // while both subarrays
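The preview ends partway through the Merge pseudocode. As a rough guide to how the remaining steps fit together, here is a self-contained C sketch that follows the prose description above: two indices scan the sorted halves, the smaller element is copied into a temporary array, any leftovers are copied over, and the temporary array is copied back into A. It uses 0-based C arrays and a 0-based temporary rather than the notes' 1-based A[p..r] and B[p..r] convention, and the function names merge and merge_sort are illustrative, not taken from the notes.

#include <stdio.h>
#include <stdlib.h>

/* Merge the sorted runs A[p..q] and A[q+1..r] into one sorted run,
   using a temporary array B, following the description in the notes. */
static void merge(int A[], int p, int q, int r) {
    int n = r - p + 1;
    int *B = malloc(n * sizeof(int));    /* temporary working array */
    if (B == NULL) { perror("malloc"); exit(EXIT_FAILURE); }

    int i = p;        /* current element of the left run  A[p..q]   */
    int j = q + 1;    /* current element of the right run A[q+1..r] */
    int k = 0;        /* next free position in B                    */

    while (i <= q && j <= r) {           /* while both runs are nonempty */
        if (A[i] <= A[j]) B[k++] = A[i++];
        else              B[k++] = A[j++];
    }
    while (i <= q) B[k++] = A[i++];      /* copy leftovers from the left run  */
    while (j <= r) B[k++] = A[j++];      /* copy leftovers from the right run */

    for (k = 0; k < n; k++)              /* copy B back into A[p..r] */
        A[p + k] = B[k];
    free(B);
}

/* Sort A[p..r]: split at the midpoint, sort each half recursively, merge. */
static void merge_sort(int A[], int p, int r) {
    if (p < r) {                         /* at least two items */
        int q = (p + r) / 2;             /* midpoint, rounded down */
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}

int main(void) {
    int A[] = {7, 5, 2, 4, 1, 6, 3, 0};  /* the example input from Figure 4 */
    int n = sizeof A / sizeof A[0];

    merge_sort(A, 0, n - 1);

    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    return 0;
}

Compiled and run, this prints 0 1 2 3 4 5 6 7, the sorted output shown in Figure 4.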

