6.006 Intro to Algorithms, Recitation 11, March 11, 2011

Counting Sort

Counting sort is an algorithm that takes an array A of n elements in the range {1, 2, ..., k} and sorts the array in O(n + k) time. Counting sort uses no comparisons; it exploits the fact that the n elements lie in a limited range to beat the Ω(n log n) lower bound of comparison sorts.

Algorithm: Counting sort keeps an auxiliary array C with k elements, all initialized to 0. We make one pass through the input array A, and for each element i in A that we see, we increment C[i] by 1. After we iterate through the n elements of A and update C, the value at index j of C corresponds to how many times j appeared in A. This step takes O(n) time. Once we have C, we can construct the sorted version of A by iterating through C and inserting each element j a total of C[j] times into a new list (or into A itself). Iterating through C takes O(k) time. The end result is a sorted A, and in total it took O(n + k) time.

Note that this does not permute the elements of A into a sorted list. If A had two 3s, for example, there is no record of which original 3 maps to which 3 in the sorted result; we just counted two 3s and arbitrarily placed two 3s in the sorted list. This is perfectly fine in many cases, but you will see later, in radix sort, why it is sometimes preferable to provide a permutation that transforms A into a sorted version of itself.

To do this, we continue from the point where C[j] holds how many times j appears in A. We transform C into an array where C[j] holds how many elements are ≤ j. We do this by iterating through C and adding the value at the previous index to the value at the current index, since the number of elements ≤ j equals the number of elements ≤ j − 1 (the value at the previous index) plus the number of elements equal to j (the value at the current index). The final result is an array C where C[j] is the number of elements ≤ j in A.

Now we iterate through A backwards, starting from the last element. For each element i we see, we check C[i] to find out how many elements are ≤ i. From this information, we know exactly where to put i in the sorted array. Once we insert i into the sorted array, we decrement C[i] so that if we see a duplicate element, we know to insert it right before the previous i. Once we finish iterating through A, we get a sorted list as before, but this time we also have a mapping from each element of A to its position in the sorted list. Since we iterated through A backwards and decremented C[i] every time we saw i, we preserve the order of duplicates in A: if there are two 3s in A, the first 3 maps to an earlier index than the second 3. A sort with this property is called a stable sort. We will need the stability of counting sort when we use radix sort.

Iterating through C to change C[j] from the number of times j appears in A to the number of elements ≤ j takes O(k) time. Iterating through A to map its elements to the sorted list takes O(n) time. Since filling up C to begin with also took O(n) time, the total runtime of this stable version of counting sort is O(n + k + n) = O(2n + k) = O(n + k).
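To make the three passes concrete, here is a minimal Python sketch of the stable version just described. The function name, the choice to return a new list B, and the convention that elements lie in {1, ..., k} are our own for illustration, not code from the recitation handout.

    def counting_sort_stable(A, k):
        # Stable counting sort of A, whose elements lie in {1, 2, ..., k}.
        # Returns a new sorted list B; runs in O(n + k) time.
        n = len(A)
        C = [0] * (k + 1)            # index 0 unused since elements start at 1
        for x in A:                  # pass 1: C[j] = number of times j appears in A
            C[x] += 1
        for j in range(2, k + 1):    # pass 2: C[j] = number of elements <= j
            C[j] += C[j - 1]
        B = [None] * n
        for x in reversed(A):        # pass 3: walk A backwards so duplicates keep their order
            B[C[x] - 1] = x          # C[x] elements are <= x, so x belongs at index C[x] - 1
            C[x] -= 1                # the next copy of x goes one slot earlier
        return B

For example, counting_sort_stable([3, 1, 3, 2], 3) returns [1, 2, 3, 3], with the first 3 of the input placed before the second.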
Radix Sort

The downfall of counting sort is that it may not be practical if the range of the elements is too large. For example, if the range of the n elements we need to sort is 1 to n^3, then simply creating the auxiliary array C takes O(n^3) time, and counting sort asymptotically does worse than insertion sort. It also takes O(n^3) space, which is significantly larger than the space used by any other sorting algorithm we have learned so far.

Radix sort helps solve this problem by sorting the elements digit by digit. The idea is that we can sort integers by first sorting them by their least significant digit (the ones digit), then sorting the result of that sort by the next most significant digit (the tens digit), and so on until we have sorted the integers by their most significant digit.

How do we sort these elements by digit? Counting sort! Note that the k factor in the counting sort on each digit is restricted to the range of a single digit rather than the range of the elements. We have to use the stable variant of counting sort in radix sort. If we used the first version of counting sort, we would not have a mapping from element to element: for example, when sorting a list containing 457 and 657 by the ones digit, we could put the digits in order, but we would have no indication of whether the first 7 corresponds to 457 or to 657. By using the stable variant, we get a mapping from element to element and can map the first 7 to 457, since 457 appeared earlier in the list than 657. Stability also ensures that as we sort more and more significant digits, duplicate digits stay in the right order according to the less significant digits, preserving the correctness of radix sort.

If there are n integers to sort, the integers are represented in base k (i.e. there are k possible values for each digit), and the maximum length of the integers is d digits, then radix sort executes an O(n + k) counting sort for each of the d digits, giving an O(d(n + k)) runtime for radix sort.

Note that the base we choose to represent the integers in affects both k and d. Large bases give shorter integers digit-wise, but the counting sort on each digit takes longer; the opposite holds for small bases.

Knowing n, the ideal base for radix sort is k = n. Say the integers range from 0 to u − 1. The number of digits in base k is at most log_k u. Substituting this into the runtime for radix sort gives O((n + k) log_k u). Increasing k up to n shrinks the number of digits log_k u while keeping the n + k factor at Θ(n); pushing k beyond n makes n + k grow faster than log_k u shrinks, so the best choice of k to minimize this runtime is k = n. In this case, the runtime of radix sort is O(n log_n u). When u = n^O(1), the log_n u term becomes a constant and the runtime of radix sort turns out to be O(n), giving us a linear-time sorting algorithm whenever the range of the integers we are sorting is polynomial in the number of integers we are sorting. (A short code sketch of this procedure appears at the end of this excerpt.)

Graph Basics

A graph contains a set of vertices V and a set of edges E. Edges are defined to be a mapping from a vertex to a vertex, often …
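Tying the two sections together, here is a minimal Python sketch of the radix sort described above, built on a stable counting sort keyed on one digit per pass. The function names, the default choice of base = n, and the restriction to non-negative integers are our assumptions for illustration, not code from the recitation handout.

    def radix_sort(A, base=None):
        # LSD radix sort of non-negative integers: repeatedly apply a stable
        # counting sort keyed on one digit, least significant digit first.
        # base defaults to n, the choice analyzed above (but at least 2).
        if not A:
            return []
        if base is None:
            base = max(len(A), 2)
        max_val = max(A)

        def stable_sort_by_digit(lst, exp):
            # Stable counting sort on the digit (x // exp) % base, in O(n + base) time.
            C = [0] * base
            for x in lst:
                C[(x // exp) % base] += 1
            for d in range(1, base):
                C[d] += C[d - 1]           # C[d] = number of keys with digit <= d
            out = [None] * len(lst)
            for x in reversed(lst):        # backwards pass keeps equal digits in order
                d = (x // exp) % base
                out[C[d] - 1] = x
                C[d] -= 1
            return out

        exp = 1
        while exp <= max_val:              # one counting sort per digit: O(d(n + k)) total
            A = stable_sort_by_digit(A, exp)
            exp *= base
        return A

For example, radix_sort([457, 3, 657]) returns [3, 457, 657], with each pass sorting stably on one digit of the base-3 representations (base 3 since n = 3 here).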

