# 1. Longest Increasing Subsequence

### The Problem

For any sequence Q, a subsequence of Q is another sequence obtained from Q by deleting zero or more elements, without changing the order of the remaining elements; the elements of the subsequence need not be contiguous in Q.

For example, when you drive down a major street in any city, you drive through a sequence of intersections with traffic lights, but you only have to stop at a subsequence of those intersections, where the traffic lights are red. If you’re very lucky, you never stop at all: the empty sequence is a subsequence of Q. On the other hand, if you’re very unlucky, you may have to stop at every intersection: Q is a subsequence of itself.

Here are some examples:

- `BOAT` is a subsequence of `BOX-OF-CHOCOLATES`.
- `OOF` is a subsequence of `BOX-OF-CHOCOLATES`, but `OO-F` is not.
- `O-O-OOLA` is a subsequence of `BOX-OF-CHOCOLATES`, but `FOX` is not.
- `FOOL` and `BHOLA` are subsequences of `BOX-OF-CHOCOLATES`, but `LAX` and `BETA` are not.
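The examples above can be checked mechanically with a single greedy left-to-right scan. Here is a minimal sketch (the function name `is_subsequence` is illustrative, not from the text):

```python
def is_subsequence(s, q):
    """Return True if s is a subsequence of q: the characters of s appear
    in q in order, but not necessarily contiguously."""
    i = 0  # number of characters of s matched so far
    for ch in q:
        if i < len(s) and ch == s[i]:
            i += 1
    return i == len(s)
```

Greedily matching each character of `s` against the earliest remaining occurrence in `q` is safe here: taking an earlier match never rules out matches for later characters.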

Now suppose we are given a sequence of integers, and we need to find the length of a longest subsequence whose elements are in increasing order:

- the input is an integer array A[1 .. n], and
- we need to compute the length of a longest possible sequence of indices 1 ≤ i_1 < i_2 < ⋯ < i_ℓ ≤ n such that A[i_k] < A[i_{k+1}] for all k.
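To make the specification concrete, here is a brute-force check that literally tries every choice of indices. It runs in exponential time and is useful only for intuition and for testing faster algorithms against; the function name is illustrative:

```python
from itertools import combinations

def lis_length_bruteforce(A):
    # Try every choice of indices i_1 < i_2 < ... < i_l and keep the
    # longest one whose values are strictly increasing.  Exponential time.
    best = 0
    for l in range(len(A) + 1):
        for idx in combinations(range(len(A)), l):
            if all(A[idx[k]] < A[idx[k + 1]] for k in range(len(idx) - 1)):
                best = max(best, l)
    return best
```

For example, `[3, 1, 4, 1, 5, 9, 2, 6]` has the increasing subsequence `3, 4, 5, 9` of length 4, and no longer one.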

We will use `LIS` to refer to the phrase *longest increasing subsequence*. Note that there may be multiple increasing subsequences, and even multiple longest ones. The approaches below that correctly find the length of a LIS are easily adapted to find an actual LIS as well.
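One common way to recover an actual LIS, not just its length, is to store a predecessor link alongside each table entry and walk the links backwards at the end. A sketch under one standard formulation, where T[i] is the length of a longest increasing subsequence *ending* at A[i] (this choice of T is an assumption for illustration, not necessarily the one used later in this chapter):

```python
def longest_increasing_subsequence(A):
    # T[i] = length of a longest increasing subsequence ending at A[i];
    # parent[i] = index of the previous element in one such subsequence.
    n = len(A)
    T = [1] * n
    parent = [None] * n
    for i in range(n):
        for j in range(i):
            if A[j] < A[i] and T[j] + 1 > T[i]:
                T[i] = T[j] + 1
                parent[i] = j
    if n == 0:
        return []
    # Walk the parent links back from an index achieving the largest T-value.
    i = max(range(n), key=lambda k: T[k])
    out = []
    while i is not None:
        out.append(A[i])
        i = parent[i]
    return out[::-1]
```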

### The Solution — Take 0 (a dead end)

### The Solution — Take 1 (building on the bad take)

### The Solution — Take 2 (a different approach)

### Time and Space Complexity

For the first approach:

- We filled out O(n^2) entries at O(1) cost each, so the overall running time of the algorithm is O(n^2).
- We store O(n^2) entries, although with a careful implementation the storage can be limited to O(n).
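The first approach's accounting (O(n^2) entries, O(1) work each) is consistent with a two-index memoized recurrence. The following sketch is an assumption about its shape, not a transcription of the text's recurrence: the subproblem `L(i, j)` asks for the length of a LIS of A[j..n] using only values larger than A[i], with `i == -1` meaning "no previous element chosen yet":

```python
from functools import lru_cache

def lis_length_2d(A):
    n = len(A)

    @lru_cache(maxsize=None)
    def L(i, j):
        # Length of a LIS of A[j:] whose elements all exceed A[i]
        # (i == -1 means there is no lower bound yet).
        if j == n:
            return 0
        skip = L(i, j + 1)                    # don't include A[j]
        if i == -1 or A[i] < A[j]:
            return max(skip, 1 + L(j, j + 1))  # include A[j]
        return skip

    return L(-1, 0)
```

There are O(n^2) distinct pairs (i, j), and each is resolved with a constant amount of work beyond its recursive calls, matching the stated bounds.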

For the second approach:

- We filled out O(n) entries at O(n) cost each, so the overall running time of the algorithm is O(n^2) again.
- We store O(n) values.
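The second approach's accounting (O(n) entries, O(n) work each), together with the footnote's remark that T[i] is recovered from T[j] for j > i, suggests a one-index recurrence filled from right to left. Defining T[i] as the length of a longest increasing subsequence *starting* at A[i] is an assumption consistent with that description:

```python
def lis_length_1d(A):
    # T[i] = length of a longest increasing subsequence starting at A[i].
    # Filled right to left: T[i] = 1 + max{ T[j] : j > i and A[j] > A[i] }.
    n = len(A)
    T = [1] * n
    for i in range(n - 1, -1, -1):   # from T[n-1] back to T[0]
        for j in range(i + 1, n):    # O(n) work per entry
            if A[j] > A[i]:
                T[i] = max(T[i], 1 + T[j])
    return max(T, default=0)
```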

### Correctness

The correctness of the T-values computed by both approaches above is implicit in the design of their recurrences. However, we provide an explicit proof for the second approach here. Correctness proofs for most of the other recurrences we will discuss follow a similar pattern and are left as exercises.

## Footnotes

Indeed, since we already have a starting point in the entry T[n], if we can figure out a way of recovering the value of T[i] from knowledge of T[j] for j > i, then we are done: we can use this method to work backwards from T[n] all the way to T[1].↩︎