On this page:
28.1 Quicksort: a usually fast way to sort
28.1.1 Quicksort using swapspace
28.1.2 Quicksort in place
28.1.3 Runtime analysis of quicksort
28.2 Mergesort: sorting in a guaranteed better worst-case time
28.2.1 Runtime analysis of mergesort
28.3 Divide-and-conquer algorithms
28.3.1 Time/space tradeoffs
28.4 Discussion

Lecture 28: Quicksort and Mergesort🔗

Divide-and-conquer algorithms and their big-O behaviors

In the last lecture we considered two simple sorting algorithms: the natural insertion-sort algorithm for our standard ILists, and the natural selection-sort for ArrayLists. Both algorithms were straightforward applications of the design recipe to their respective data. But unfortunately, the performance of those algorithms was not very good: in the worst case, they took time quadratic in the size of their inputs.

Can we do better? The problem with both algorithms is that they’re too “forgetful”: they compare the same pairs of values many times, and they don’t take advantage of a key property of comparisons. If we know that \(a < b\) and \(b < c\), then we do not need to compare \(a\) and \(c\), because by transitivity we already know the answer. Let’s see if we can use this fact to our advantage.

28.1 Quicksort: a usually fast way to sort🔗

Suppose we could guess the median value (the “middle-most” value, such that half the values are less than it, and half are greater) from a list of values. We could use that fact to divide our data into a list of “small values” and a list of “big values”. Once we’ve done that, we never need to compare any “small” value to any “big” value ever again, thereby cutting out a potentially huge number of wasteful comparisons!

28.1.1 Quicksort using swapspace🔗

As our running example, let’s use the following list of strings, and say we want to sort them in increasing lexicographic order:
Index:   0       1      2      3        4         5       6       7     8
Data: [grape, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
We need to guess a median value, which we’ll call the pivot value. We might as well use the very first value we find (at index 0), because it’s pretty likely for that value to be a “middle” value, and not a maximum or minimum.

Once we’ve identified a pivot value, we need to find all the values less than it and move them to the front of the list, find all the values greater than it and move them to the back, and then place the pivot between them. To do this, we’ll use a temporary array in which to rearrange the values:
SOURCE:
Index:   0       1      2      3        4         5       6       7     8
Data: [grape, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]

   == move all smaller values left, and all bigger values right ==>
TEMP:
Index:   0       1      2      3    4      5     6       7           8
Data: [cherry, apple, banana, fig, date, grape, kiwi, watermelon, honeydew]
Now indices 0 through 4 of the temporary array contain values less than the pivot, indices 6 through 8 contain values greater than the pivot, and the pivot is in its final, sorted position. Even better, we know that since indices 0 through 4 contain values less than the pivot, we do not need to compare them to anything in indices 6 through 8. Essentially, we now have two smaller sub-lists that need sorting. If only we had a technique to sort them... Of course: we’re writing one now!

Let’s try to implement this algorithm. We want to implement a method in our ArrayUtils class with signature
<T> void quicksortCopying(ArrayList<T> arr, IComparator<T> comp);
but this signature will not be general enough for our recursive calls.

Do Now!

Why not? What helper method will be needed?

We need to supply a temporary array to hold the temporary values, and we’ll need to supply the indices between which we are sorting:
// In ArrayUtils
// EFFECT: Sorts the given ArrayList according to the given comparator
<T> void quicksortCopying(ArrayList<T> arr, IComparator<T> comp) {
  // Create a temporary array
  ArrayList<T> temp = new ArrayList<T>();
  // Make sure the temporary array is exactly as big as the given array
  for (int i = 0; i < arr.size(); i = i + 1) {
    temp.add(arr.get(i));
  }
  quicksortCopyingHelp(arr, temp, comp, 0, arr.size());
}

<T> void quicksortCopyingHelp(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int hiIdx) {
  ...
}

The helper method must:
  1. Select the pivot element,

  2. Partition: Copy all elements less than the pivot into the lower half of the temp list, copy all elements greater than the pivot into the upper half of the temp list, and place the pivot between them,

  3. Copy the entire list back from temp to source, and

  4. Sort the upper and lower halves of the list

Step 3 is crucial, to ensure that the source list ends up sorted, and not just the temp list!
// EFFECT: sorts the source array according to comp, in the range of indices [loIdx, hiIdx)
<T> void quicksortCopyingHelp(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int hiIdx) {
  // Step 0: check for completion
  if (loIdx >= hiIdx) {
    return; // There are no items to sort
  }
  // Step 1: select pivot
  T pivot = source.get(loIdx);
  // Step 2: partition items to lower or upper portions of the temp list
  int pivotIdx = partitionCopying(source, temp, comp, loIdx, hiIdx, pivot);
  // Step 4: sort both halves of the list
  quicksortCopyingHelp(source, temp, comp, loIdx, pivotIdx);
  quicksortCopyingHelp(source, temp, comp, pivotIdx + 1, hiIdx);
}
// Returns the index where the pivot element ultimately ends up in the sorted source
// EFFECT: Modifies the source and temp lists in the range [loIdx, hiIdx) such that
// all values to the left of the pivot are less than (or equal to) the pivot
// and all values to the right of the pivot are greater than it
<T> int partitionCopying(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int hiIdx, T pivot) {
  int curLo = loIdx;
  int curHi = hiIdx - 1;
  // Notice we skip the loIdx index, because that's where the pivot was
  for (int i = loIdx + 1; i < hiIdx; i = i + 1) {
    if (comp.compare(source.get(i), pivot) <= 0) { // lower
      temp.set(curLo, source.get(i));
      curLo = curLo + 1; // advance the current lower index
    }
    else { // upper
      temp.set(curHi, source.get(i));
      curHi = curHi - 1; // advance the current upper index
    }
  }
  temp.set(curLo, pivot); // place the pivot in the remaining spot
  // Step 3: copy all items back into the source
  for (int i = loIdx; i < hiIdx; i = i + 1) {
    source.set(i, temp.get(i));
  }
  return curLo;
}
The interesting work happens in partitionCopying. We maintain two indices, curLo and curHi, that indicate where the next small value or next large value should be placed. When we’re done, curLo equals curHi, and they both equal the index where the pivot should be placed. Finally, partitionCopying also copies the items from temp back into their appropriate locations in source.

28.1.2 Quicksort in place🔗

All this copying back and forth seems wasteful. Can we eliminate the extra ArrayList? The key idea is to use the curLo and curHi indices more cleverly. Let’s look again at our example:
Index:   0       1      2      3        4         5       6       7     8
Data: [grape, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
Once we select "grape" as our pivot, we can pretend that "grape" has been removed from the list and we have a “hole” where it used to be:
Index:   0       1      2      3        4         5       6       7     8
Data: [-----, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
Remember that we need to move all the “large” values to the right, and all the “small” values to the left, but we don’t yet know where to place the pivot. So let’s use curLo and curHi to keep track of indices of “values we know to be lower than the pivot” and “values we know to be higher than the pivot”, and if we ever find a pair of values out of place, we swap them. Concretely, we start curLo at 1, and curHi at 8.
               curLo                                                  curHi
                 v                                                      v
Index:   0       1      2      3        4         5       6       7     8
Data: [-----, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
Since "cherry" is less than "grape", we increase curLo:
                      curLo                                           curHi
                        v                                               v
Index:   0       1      2      3        4         5       6       7     8
Data: [-----, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
Since "apple" is less than "grape", we increase curLo again:
                             curLo                                    curHi
                               v                                        v
Index:   0       1      2      3        4         5       6       7     8
Data: [-----, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
Since "kiwi" is not less than "grape", we stop, and turn our attenion to curHi. Since "fig" is not greater than "grape", we stop. We now know that "kiwi" belongs to the right of the pivot, wherever it winds up, and "fig" belongs to the left, so we swap them:
                            curLo                                     curHi
                              v                                         v
Index:   0       1      2     3        4         5       6       7      8
Data: [-----, cherry, apple, fig, watermelon, banana, honeydew, date, kiwi]
And now we resume. Since "fig" is less than "grape", we advance curLo.
                                     curLo                            curHi
                                       v                                v
Index:   0       1      2     3        4         5       6       7      8
Data: [-----, cherry, apple, fig, watermelon, banana, honeydew, date, kiwi]
Since "watermelon" is not less than "grape", we stop, and turn our attention to curHi. Since "kiwi" is greater than "grape", we advance curHi:
                                     curLo                     curHi
                                       v                         v
Index:   0       1      2     3        4         5       6       7      8
Data: [-----, cherry, apple, fig, watermelon, banana, honeydew, date, kiwi]
Since "date" is not greater than "grape", we stop, and swap the two values at positions 4 and 7:
                                  curLo                     curHi
                                    v                         v
Index:   0       1      2     3     4      5       6          7         8
Data: [-----, cherry, apple, fig, date, banana, honeydew, watermelon, kiwi]
Since "date" is less than "grape", we advance curLo. Since "banana" is less than "grape", we advance curLo again:
                                                 curLo      curHi
                                                   v          v
Index:   0       1      2     3     4      5       6          7         8
Data: [-----, cherry, apple, fig, date, banana, honeydew, watermelon, kiwi]
Since "honeydew" is not less than "grape", we stop, and turn our attention to curHi. Since "watermelon" is greater than "grape", we advance curHi. Since "honeydew" is greater than "grape", we advance curHi again.
                                         curHi   curLo
                                           v       v
Index:   0       1      2     3     4      5       6          7         8
Data: [-----, cherry, apple, fig, date, banana, honeydew, watermelon, kiwi]
Since curHi is now less than curLo, we must have found the crossover point. We need to place "grape" at index curHi, which means we need to move "banana" somewhere else. Conveniently, there’s room for it at index 0, where "grape" used to be!
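After that final swap, the partition is complete: "grape" sits in its final sorted position at index 5 (the index our partitioning will report), every smaller value is to its left, and every larger value is to its right:
Index:   0       1      2     3     4      5       6          7         8
Data: [banana, cherry, apple, fig, date, grape, honeydew, watermelon, kiwi]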

Let’s see if we can implement this algorithm in Java.

Do Now!

What kind of loop should be used?

Since we do not know in advance how many times to run the loop, but we do know to stop the loop once curLo is greater than curHi, we should use a while loop:
// Returns the index where the pivot element ultimately ends up in the sorted source
// EFFECT: Modifies the source list in the range [loIdx, hiIdx) such that
// all values to the left of the pivot are less than (or equal to) the pivot
// and all values to the right of the pivot are greater than it
<T> int partition(ArrayList<T> source, IComparator<T> comp,
    int loIdx, int hiIdx, T pivot) {
  int curLo = loIdx;
  int curHi = hiIdx - 1;
  while (curLo < curHi) {
    // Advance curLo until we find a too-big value (or overshoot the end of the list)
    while (curLo < hiIdx && comp.compare(source.get(curLo), pivot) <= 0) {
      curLo = curLo + 1;
    }
    // Advance curHi until we find a too-small value (or undershoot the start of the list)
    while (curHi >= loIdx && comp.compare(source.get(curHi), pivot) > 0) {
      curHi = curHi - 1;
    }
    if (curLo < curHi) {
      swap(source, curLo, curHi);
    }
  }
  swap(source, loIdx, curHi); // place the pivot in the remaining spot
  return curHi;
}
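The partition method relies on the swap helper we wrote in the last lecture. For reference, a minimal version might look like this — two reads and two writes, as the runtime analysis below assumes:

// In ArrayUtils
// EFFECT: exchanges the values at the two given indices of the given list
<T> void swap(ArrayList<T> arr, int idx1, int idx2) {
  T oldValueAtIdx1 = arr.get(idx1); // read both values first...
  T oldValueAtIdx2 = arr.get(idx2);
  arr.set(idx1, oldValueAtIdx2);    // ...then write them back, exchanged
  arr.set(idx2, oldValueAtIdx1);
}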
Now we can tweak our original quicksort method to use partition, and avoid the temp list altogether:
// In ArrayUtils
// EFFECT: Sorts the given ArrayList according to the given comparator
<T> void quicksort(ArrayList<T> arr, IComparator<T> comp) {
  quicksortHelp(arr, comp, 0, arr.size());
}

// EFFECT: sorts the source array according to comp, in the range of indices [loIdx, hiIdx)
<T> void quicksortHelp(ArrayList<T> source, IComparator<T> comp,
    int loIdx, int hiIdx) {
  // Step 0: check for completion
  if (loIdx >= hiIdx) {
    return; // There are no items to sort
  }
  // Step 1: select pivot
  T pivot = source.get(loIdx);
  // Step 2: partition items to the lower or upper portion of the list
  int pivotIdx = partition(source, comp, loIdx, hiIdx, pivot);
  // Step 3: sort both halves of the list
  quicksortHelp(source, comp, loIdx, pivotIdx);
  quicksortHelp(source, comp, pivotIdx + 1, hiIdx);
}
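To see quicksort in action, here is a small usage sketch. The comparator below is hypothetical — this lecture never defines one — and it assumes IComparator<T> declares a single method int compare(T t1, T t2), as the calls above suggest:

import java.util.ArrayList;
import java.util.Arrays;

// A sample comparator for this sketch: orders strings lexicographically
class StringLexCompare implements IComparator<String> {
  public int compare(String s1, String s2) {
    return s1.compareTo(s2);
  }
}

...

// In an examples class:
ArrayList<String> fruits = new ArrayList<String>(Arrays.asList(
    "grape", "cherry", "apple", "kiwi", "watermelon",
    "banana", "honeydew", "date", "fig"));
new ArrayUtils().quicksort(fruits, new StringLexCompare());
// fruits is now [apple, banana, cherry, date, fig, grape, honeydew, kiwi, watermelon]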

28.1.3 Runtime analysis of quicksort🔗

We’ve successfully avoided allocating any memory at all for quicksort, so we know for certain that \(M_{quicksort}(n) = 0\). But what about the runtime?
  • The runtime for swap is simple: it performs two reads and two writes, so \(T_{swap}(n) = 4\), regardless of the size of the input.

  • The runtime for partition depends on the size of the interval \(n = hiIdx - loIdx\). It checks every item in that range against the pivot, and potentially swaps all of them. So

    \begin{equation*}\begin{aligned} T_{partition}^{best}(n) &= n \\ T_{partition}^{worst}(n) &= 3n \end{aligned}\end{equation*}

    In other words, the performance is always linear in the size of the interval.

  • The runtime for quicksortHelp depends on the size of the interval \(n = hiIdx - loIdx\) as well. But it has two recursive calls, whose runtimes depend on how good a pivot we chose. In the best case, the pivot is very close to the median value, and divides the list exactly in half:

    \begin{equation*}T_{quicksortHelp}^{best}(n) = T_{partition}^{best}(n) + 2T_{quicksortHelp}^{best}(n/2)\end{equation*}

    In the worst case, the pivot is either the minimum or maximum, and divides the list into an empty piece and a nearly-complete piece:

    \begin{equation*}T_{quicksortHelp}^{worst}(n) = T_{partition}^{worst}(n) + T_{quicksortHelp}^{worst}(0) + T_{quicksortHelp}^{worst}(n-1)\end{equation*}

    We’ve seen this latter recurrence before, when we determined the runtime of selection-sort: it’s \(O(n^2)\). The former recurrence is a trickier one. We can expand the recurrence a few times:

    \begin{equation*}\begin{aligned} T_{quicksortHelp}^{best}(n) &= T_{partition}^{best}(n) + 2T_{quicksortHelp}^{best}(n/2) \\ &= n + 2(T_{partition}^{best}(n/2) + 2T_{quicksortHelp}^{best}(n/4)) \\ &= n + 2(n/2 + 2(T_{partition}^{best}(n/4) + 2T_{quicksortHelp}^{best}(n/8))) \\ &= n + 2(n/2) + 4(n/4) + 8(n/8) + \cdots + n(1) \end{aligned}\end{equation*}

    How many terms are there in the final expanded line? It’s however many times we can divide \(n\) in half before reaching one. This is familiar: it’s \(\log_2 n\). Accordingly, our runtime is \(T_{quicksortHelp}^{best}(n) = n\log_2 n\).
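    Written as a summation: at depth \(i\) of the recursion there are \(2^i\) subproblems, each of size \(n/2^i\) and each costing \(n/2^i\) to partition, so

    \begin{equation*}T_{quicksortHelp}^{best}(n) = \sum_{i=0}^{\log_2 n} 2^i \cdot \frac{n}{2^i} = \sum_{i=0}^{\log_2 n} n = n(\log_2 n + 1)\end{equation*}

    which is \(O(n \log_2 n)\).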

From the best and worst case results we can conclude that

Runtime for quicksort

                      Best-case                 Worst-case
\(T_{quicksort}\)     \(\Omega(n \log_2 n)\)    \(O(n^2)\)
\(M_{quicksort}\)     \(\Omega(1)\)             \(O(1)\)

Do Now!

What inputs produce the best and worst case behaviors of quicksort?

Almost all inputs produce nearly-best-case behavior for quicksort. The edge case happens when the pivot that’s picked is repeatedly either the maximum or the minimum — in such cases, the “partition” doesn’t split the list evenly at all. This occurs when the input list is already sorted (or reverse-sorted).
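For example, on an already-sorted input, the chosen pivot is the minimum of its range every single time:
Index:   0       1       2      3
Data: [apple, banana, cherry, date]
Choosing "apple" as the pivot, partition returns index 0: the “lower” piece is empty, while the “upper” piece still contains the remaining \(n - 1\) items. Every recursive call shrinks the problem by only one element, which is exactly the \(O(n^2)\) recurrence above.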

Can we do better? Can we create a sorting algorithm that never has any bad cases?

28.2 Mergesort: sorting in a guaranteed better worst-case time🔗

If we allow ourselves the luxury of a temporary ArrayList again, we can do even better. Recall the merge method we implemented in Assignment 6: it allows us to build up a larger sorted list from two smaller sorted lists. We can implement a similar method for ArrayLists. But how to use it? Let’s conceptually divide our ArrayList exactly at the middle index, into a low-index half and a high-index half. If only we could sort those two halves, then we could use merge to combine them into a sorted ArrayList containing the entire contents of the original ArrayList, and we’d be done! How can we sort those two halves? Well, we can divide them in half, at their middle indices, and magically sort their respective halves, and then merge their halves. We can clearly repeat this process, but when does it stop? Conveniently, a list containing at most one item is always sorted — so we have a base case for our recursion. This approach defines the mergesort algorithm.
// In ArrayUtils
// EFFECT: Sorts the provided list according to the given comparator
<T> void mergesort(ArrayList<T> arr, IComparator<T> comp) {
  // Create a temporary array
  ArrayList<T> temp = new ArrayList<T>();
  // Make sure the temporary array is exactly as big as the given array
  for (int i = 0; i < arr.size(); i = i + 1) {
    temp.add(arr.get(i));
  }
  mergesortHelp(arr, temp, comp, 0, arr.size());
}
// EFFECT: Sorts the provided list in the region [loIdx, hiIdx) according to the given comparator.
// Modifies both lists in the range [loIdx, hiIdx)
<T> void mergesortHelp(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int hiIdx) {
...
}
Our helper routine must:
  1. Find the middle index of the current range. (If the range contains at most one item, stop.)

  2. Recursively sort the lower range, and the higher range.

  3. Merge the two ranges.

Let’s elaborate this sketch into code:
// EFFECT: Sorts the provided list in the region [loIdx, hiIdx) according to the given comparator.
// Modifies both lists in the range [loIdx, hiIdx)
<T> void mergesortHelp(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int hiIdx) {
  // Step 0: stop when finished
  if (hiIdx - loIdx <= 1) {
    return; // nothing to sort
  }
  // Step 1: find the middle index
  int midIdx = (loIdx + hiIdx) / 2;
  // Step 2: recursively sort both halves
  mergesortHelp(source, temp, comp, loIdx, midIdx);
  mergesortHelp(source, temp, comp, midIdx, hiIdx);
  // Step 3: merge the two sorted halves
  merge(source, temp, comp, loIdx, midIdx, hiIdx);
}

Do Now!

Design the merge helper method to complete the algorithm above.

Now all we need to do is define merge. When we defined merge over IList, we used dynamic dispatch to give us the first items of each of the two lists, as needed. Here, we do not have Cons objects, so we must maintain two indices, and advance them as needed. Recall our running example:
Index:   0       1      2      3        4         5       6       7     8
Data: [grape, cherry, apple, kiwi, watermelon, banana, honeydew, date, fig]
And suppose that at the next-to-last stage, we had sorted indices 0 through 4, and indices 5 through 8:
Index:   0       1      2      3      4           5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]
We now need to merge those two sub-lists. We maintain three indices: curLo keeps track of the next item in the lower half-list, curHi keeps track of the next item in the upper half-list, and curCopy keeps track of where to insert into the temp storage. We loop until either curLo runs off the end of the lower half-list or curHi runs off the end of the upper half-list; at each step we copy the smaller of the two current values into the temp storage, and advance the relevant index. Concretely, curLo starts off at loIdx, curHi starts off at midIdx, and curCopy starts off at loIdx:
       curLo                                     curHi
SOURCE   v                                        v
Index:   0       1      2      3       4          5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

      curCopy
TEMP     v
Index:   0       1      2      3      4     5     6      7           8
Data: [-----, ------, -----, -----, -----, ----, ----, ------, -----------]
Since "apple" is less than "banana", we copy "apple", and advance curLo and curCopy:
               curLo                            curHi
SOURCE           v                                v
Index:   0       1      2      3       4          5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

              curCopy
TEMP             v
Index:   0       1      2      3      4     5     6      7           8
Data: [apple, ------, -----, -----, -----, ----, ----, ------, -----------]
Since "cherry" is greater than "banana", we copy "banana", and advance curHi and curCopy:
               curLo                                   curHi
SOURCE           v                                       v
Index:   0       1      2      3      4           5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

                      curCopy
TEMP                     v
Index:   0       1       2      3      4     5     6      7           8
Data: [apple, banana, ------, -----, -----, ----, ----, ------, -----------]
Since "cherry" is less than "date", we copy "cherry", and advance curLo and curCopy:
                      curLo                            curHi
SOURCE                  v                                v
Index:   0       1      2      3       4          5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

                             curCopy
TEMP                            v
Index:   0       1       2      3      4     5     6      7           8
Data: [apple, banana, cherry, -----, -----, ----, ----, ------, -----------]
Continuing in this manner, we eventually get to the following state:
                             curLo                                          curHi
SOURCE                         v                                              v
Index:   0       1      2      3       4          5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

                                                         curCopy
TEMP                                                        v
Index:   0       1       2      3    4     5       6        7        8
Data: [apple, banana, cherry, date, fig, grape, honeydew, ----, ----------]
At this point, we have nothing left in the upper half-list, so we copy the remaining items from the lower half-list, because they are already sorted, and evidently are greater than everything in the upper half-list:
                                                curLo                       curHi
SOURCE                                            v                           v
Index:   0       1      2      3       4          5      6    7      8
Data: [apple, cherry, grape, kiwi, watermelon, banana, date, fig, honeydew]

                                                                           curCopy
TEMP                                                                          v
Index:   0       1       2      3    4     5       6        7        8
Data: [apple, banana, cherry, date, fig, grape, honeydew, kiwi, watermelon]
Since curLo now equals midIdx, we’ve finished the lower half-list, and so we’re done. All that remains is to copy everything back from temp into source.

// Merges the two sorted regions [loIdx, midIdx) and [midIdx, hiIdx) from source
// into a single sorted region according to the given comparator
// EFFECT: modifies the region [loIdx, hiIdx) in both source and temp
<T> void merge(ArrayList<T> source, ArrayList<T> temp, IComparator<T> comp,
    int loIdx, int midIdx, int hiIdx) {
  int curLo = loIdx;    // where to start looking in the lower half-list
  int curHi = midIdx;   // where to start looking in the upper half-list
  int curCopy = loIdx;  // where to start copying into the temp storage
  while (curLo < midIdx && curHi < hiIdx) {
    if (comp.compare(source.get(curLo), source.get(curHi)) <= 0) {
      // the value at curLo is smaller, so it comes first
      temp.set(curCopy, source.get(curLo));
      curLo = curLo + 1; // advance the lower index
    }
    else {
      // the value at curHi is smaller, so it comes first
      temp.set(curCopy, source.get(curHi));
      curHi = curHi + 1; // advance the upper index
    }
    curCopy = curCopy + 1; // advance the copying index
  }
  // copy everything that's left -- at most one of the two half-lists still has items in it
  while (curLo < midIdx) {
    temp.set(curCopy, source.get(curLo));
    curLo = curLo + 1;
    curCopy = curCopy + 1;
  }
  while (curHi < hiIdx) {
    temp.set(curCopy, source.get(curHi));
    curHi = curHi + 1;
    curCopy = curCopy + 1;
  }
  // copy everything back from temp into source
  for (int i = loIdx; i < hiIdx; i = i + 1) {
    source.set(i, temp.get(i));
  }
}

28.2.1 Runtime analysis of mergesort🔗

Do Now!

What is the runtime of mergesort, in the best and worst cases?

First let’s consider the runtime of merge, as a function of the difference \(n = hiIdx - loIdx\). The three while loops together examine every item in the given range of indices and copy them exactly once into the temp list. This takes \(O(n)\) time to complete. The final counted-for loop again iterates over the given range of indices, and copies every item back into source; this takes another \(O(n)\) time to complete. Therefore, \(T_{merge} \in O(n)\) in all cases — there is no difference between best or worst-case behavior.

Next, let’s look at mergesortHelp, again as a function of the difference \(n = hiIdx - loIdx\). We can see from the code that it makes two recursive calls, followed by one call to merge. Crucially, the computation of midIdx guarantees that both recursive calls are to subproblems that are at most half as big as the current one. This yields a recurrence

\begin{equation*}T_{mergesortHelp}(n) = 2T_{mergesortHelp}(n/2) + T_{merge}(n)\end{equation*}

This is the same recurrence as the best-case scenario for quicksort above, and by similar reasoning, we conclude that we can subdivide the list in half at most \(\log_2 n\) times before reaching a base case, so our total running time is

\begin{equation*}T_{mergesortHelp}(n) = n\log_2 n\end{equation*}

There is no best or worst case to consider: all cases have the same performance.
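To make the “similar reasoning” concrete, the recurrence unrolls exactly as in quicksort’s best case, taking \(T_{merge}(n) = n\):

\begin{equation*}\begin{aligned} T_{mergesortHelp}(n) &= n + 2T_{mergesortHelp}(n/2) \\ &= n + 2(n/2) + 4T_{mergesortHelp}(n/4) \\ &= n + 2(n/2) + 4(n/4) + \cdots + n(1) = n\log_2 n \end{aligned}\end{equation*}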

Finally, we look at mergesort itself. It requires a temporary ArrayList of the same size as the input, and then makes one call to mergesortHelp. So we obtain

Runtime for mergesort

                      Best-case                 Worst-case
\(T_{mergesort}\)     \(\Omega(n \log_2 n)\)    \(O(n \log_2 n)\)
\(M_{mergesort}\)     \(\Omega(n)\)             \(O(n)\)

28.3 Divide-and-conquer algorithms🔗

The quicksort algorithm above is an example of what’s known as a divide-and-conquer algorithm. By splitting the input into nearly equally-sized chunks and processing them independently, we can compute answers with much better performance than if we processed the inputs strictly via structural recursion. Divide-and-conquer algorithms are an example of generative recursion: they require a leap of insight to figure out the best way to decompose the problem; once that insight is gained, the rest of the solution follows from the same design strategies we’ve been following all semester.

How much better, really, is \(O(n \log_2 n)\) than \(O(n^2)\)? For small values, there isn’t much difference. But when \(n = 256\), \(\log_2 n = 8\), which means \(O(n \log_2 n)\) is already 32 times better than \(O(n^2)\). For \(n = 2^{10} = 1024\), that factor improves to over 100; for \(n = 2^{20} = 1048576\), that factor improves to over 50000, and it keeps getting better. Conversely, for \(n = 2^{20}\), \(O(n \log_2 n)\) is only twenty times worse than \(O(n)\):

[Figure: a plot comparing the growth rates of \(n\), \(n \log_2 n\), and \(n^2\)]
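If you’d like to verify those ratios yourself, here is a quick standalone sketch (not part of the lecture’s ArrayUtils code; the class name is made up for this example):

// Prints how many times smaller n*log2(n) is than n^2, for several powers of two
public class GrowthComparison {
  public static void main(String[] args) {
    for (int exp = 8; exp <= 20; exp = exp + 4) {
      long n = 1L << exp;               // n = 2^exp, so log2(n) is exactly exp
      double nLogN = (double) n * exp;
      double nSquared = (double) n * n;
      System.out.println("n = 2^" + exp + ": n^2 is "
          + (nSquared / nLogN) + " times bigger than n*log2(n)");
    }
  }
}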

28.3.1 Time/space tradeoffs🔗

Mergesort is another divide-and-conquer algorithm, and it gets a better worst-case behavior bound than quicksort. But to accomplish this, it requires the use of \(O(n)\) additional memory. This is a classic example of what’s known as a time/space tradeoff: frequently we can speed up algorithms by allowing them additional space in which to store temporary results. (Conversely, if we are constrained for space, we can sometimes improve our memory usage by recomputing results as needed instead of storing them.) Many, many algorithms and their variations result from exploring this tradeoff in various ways; for more details, there are entire courses devoted to studying algorithms!

In the next lecture, we will see one more sorting algorithm that again achieves \(O(n\log_2 n)\) worst-case time performance, but uses no additional storage. To accomplish this, we’ll make use of a novel data structure, which is of widespread utility (besides merely sorting efficiently!), and whose construction combines several of the data structures we’ve seen so far.

28.4 Discussion🔗

At this point, after going through four sorting algorithms, several helper functions and even more recurrence relations, we might well ask, “Wait! Do we really need to keep coming up with more algorithms? Isn’t one of these good enough?” In addition to using sorting algorithms to illustrate how to analyze algorithms, this sequence of four algorithms also illustrates how to come up with improved algorithms in the first place.
  • We arrived at insertion-sort pretty much by following the design recipe for the IList data type.

  • We arrived at selection-sort by asking “could we improve things if only we allowed mutation?” The central insight for selection-sort was that we could build an in-place sorting algorithm that didn’t require allocating so many intermediate results.

  • We arrived at quicksort by looking at selection-sort’s worst case behavior and asking, “couldn’t we avoid doing all those comparisons?” The central insight here is that because comparisons are transitive, we could pick a “middle-most value” and let it partition the data for us, and then we’d never have to compare any of the smaller numbers to any of the larger numbers.

  • We arrived at mergesort by looking at quicksort’s worst case behavior and asking, “could we ensure we never make a bad choice, and always split the list cleanly in half?” The central insight of mergesort was to do away with the pivot altogether, because the key to quicksort’s good behavior was the divide-and-conquer approach.

In general, algorithm designers often look at the solutions we currently know to problems, and look at their worst cases and ask, “why is this the worst case? What’s triggering it?” The leap of intuition here is precisely the same as the leap needed to come up with a generatively recursive algorithm, and can be just as elusive at first. By studying many related algorithms, and identifying their central insights, we actually can get some guidance as to where to keep looking for improvements.