
Contents

  1. Code
  2. Analysis

Code
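As one possible solution, here is a minimal sketch of the unordered-list version; the names member?, insert, and delete match the operations discussed in the analysis below, but the exact definitions are an assumption:

  (define (member? x lst)                 ; cdr down the list until x turns up
    (cond ((null? lst) #f)
          ((equal? x (car lst)) #t)
          (else (member? x (cdr lst)))))

  (define (insert x lst)                  ; unordered: just cons onto the front
    (cons x lst))

  (define (delete x lst)                  ; cdr down, consing the kept elements back up
    (cond ((null? lst) '())
          ((equal? x (car lst)) (cdr lst))
          (else (cons (car lst) (delete x (cdr lst))))))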

This is hardly the only solution.

Analysis

The running time of each of these algorithms is O(n) and also Theta(n); that is, n is both an upper bound and a tight bound. This is because each algorithm has to cdr down the list until the element is found (insert and delete also cons back up an answer, but that work is also proportional to n). In the worst case, we cdr all the way to the end of the list; in the average case, we expect to go only about halfway. Either way, the running time is proportional to the length of the list. (The best case, finding the element at the front, is O(1).)

Said a different way, if I double the length of the list, I'd expect the running time of each algorithm to double.

The major benefit of keeping the list in order, from the perspective of the collection ADT, is that member? and delete are expected to take only half as long when the element is absent. (That is, we've cut the constant factor in half.) This is because you can tell when you've gone too far in an ordered list, but you must keep searching to the end of an unordered list. There's no expected benefit when the element is present: in both an ordered and an unordered list, you'd expect to find it about halfway through, on average.
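For instance, the ordered version of member? (assuming numbers kept in increasing order) can give up as soon as it has cdr'd past the spot where the element would have to be:

  (define (member? x lst)
    (cond ((null? lst) #f)
          ((= x (car lst)) #t)
          ((< x (car lst)) #f)            ; gone past where x would be: stop early
          (else (member? x (cdr lst)))))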

A second benefit is that an ordered list is already sorted. Sorting isn't part of the collection ADT, but it's a nice property to get for free. Since sorting an unordered list is O(n log n) (for the better algorithms), this could make a difference (see below).
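To make that concrete, handing back the elements in sorted order is free for the ordered representation, while the unordered one has to pay for a sort (sort here stands for whatever O(n log n) sorting procedure your Scheme provides):

  ;; ordered representation: the list is already the sorted answer
  (define (elements-in-order ordered-lst)
    ordered-lst)

  ;; unordered representation: must sort first, O(n log n)
  (define (elements-in-order* unordered-lst)
    (sort unordered-lst <))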

The major cost of keeping the list in order is that insertion goes from an O(1) operation (a single cons) to an O(n) operation (finding the right place in the list).
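A sketch of ordered insertion (again assuming numbers in increasing order) shows where the extra work comes from: we have to cdr to the right position and cons the skipped elements back on:

  (define (insert x lst)
    (cond ((null? lst) (list x))
          ((<= x (car lst)) (cons x lst))           ; found the spot: splice x in here
          (else (cons (car lst) (insert x (cdr lst))))))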

Times I'd prefer the ordered list implementation:
  * when member? and delete are common and the element searched for is often absent
  * when I also want the elements in sorted order

Times I'd prefer the unordered list implementation:
  * when insert is the dominant operation, since cons is O(1)

The truth is that there are other collection implementations (like hash tables or binary trees) that are more efficient for many usage profiles.

