What is dynamic programming

Dynamic programming is a method of solving a large problem by dividing it into small sub-problems for which we know how to build an optimal solution, and then building on those solutions to solve the problem at any size.
There are three standard approaches to dynamic programming:

  • Bottom-up
  • Top-down
  • Half-and-Half

Bottom-up

When we try to solve a problem with the bottom-up approach, we start by finding how to solve the most basic cases. We work on problems of size zero, one, two, … until we find a pattern for building the solution for the next size from the previous solutions.

For example, if we know how to solve problems of size two and three, perhaps the problem of size four can be solved using those results that we already know.
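As a rough sketch of this pattern (the names solveBottomUp, base, and combine below are placeholders I am introducing for illustration, not something from a library), the loop usually looks like this:

fun solveBottomUp(
    n: Int,
    base: (Int) -> Int,          // direct answers for the smallest sizes
    combine: (Int, Int) -> Int   // builds size i from sizes i - 1 and i - 2
): Int {
    val solved = IntArray(n + 1)
    solved[0] = base(0)
    if (n >= 1) solved[1] = base(1)
    for (size in 2..n) {
        solved[size] = combine(solved[size - 1], solved[size - 2])
    }
    return solved[n]
}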

Top-down

When we try to find an algorithm for our task using the top-down method, we start from the large problem and try to divide it into smaller sub-problems.

For example, we have a problem of size “n”, and we think about how it can be split into smaller sub-problems, for example four problems of size n/4 each. We do this recursively until we find a pattern.

Half-and-half

This approach is close to top-down, but this time we divide the problem in half.

An example of this approach is merge sort, where we split a large array into two smaller arrays and keep splitting the subarrays until a subarray has only one or two elements. Then we sort each small subarray and build the solution by merging the sorted subarrays.
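A minimal Kotlin sketch of that idea (the function names mergeSort and merge are mine, just to illustrate the split-and-merge structure):

fun mergeSort(input: List<Int>): List<Int> {
    // A subarray of one element (or none) is already sorted.
    if (input.size <= 1) return input

    // Split the problem in half and solve each half recursively.
    val middle = input.size / 2
    val left = mergeSort(input.subList(0, middle))
    val right = mergeSort(input.subList(middle, input.size))

    // Build the solution by merging the two sorted halves.
    return merge(left, right)
}

fun merge(left: List<Int>, right: List<Int>): List<Int> {
    val result = ArrayList<Int>(left.size + right.size)
    var i = 0
    var j = 0
    while (i < left.size && j < right.size) {
        result.add(if (left[i] <= right[j]) left[i++] else right[j++])
    }
    while (i < left.size) result.add(left[i++])
    while (j < right.size) result.add(right[j++])
    return result
}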

Optimization

Dynamic programming problems tend to grow quite quickly, so we usually have to optimize them. There are two methods that help to reduce the amount of work:

  • Memoization for top-down approach
  • Tabulation for bottom-up approach

Memoization

This approach is based on storing sub-problem results and reusing them whenever possible instead of solving them again. It can be achieved with a simple HashMap, where the key is a sub-problem definition (for example the problem size) and the value is the result. We populate the hash map as we solve each sub-problem.

Tabulation

Tabulation is used for the bottom-up approach: we start from the smallest problems, store each result, and accumulate them until we reach the answer for our problem.

Fibonacci number

This is a very simple example that shows how DP is used in practice. It will also show how quickly a relatively small problem grows and starts taking minutes to solve if we do not use any optimization technique (memoization or tabulation).

Fib(n) = Fib(n-1) + Fib(n-2)
Fib(0) = 0
Fib(1) = 1

The Fibonacci function for the n-th element takes the results of the two previous calls (sizes n-1 and n-2) and adds them together.

Example: F(5)
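The recursion tree for F(5) expands like this; every call spawns two more calls until it hits F(1) or F(0):

F(5)
├── F(4)
│   ├── F(3)
│   │   ├── F(2)
│   │   │   ├── F(1)
│   │   │   └── F(0)
│   │   └── F(1)
│   └── F(2)
│       ├── F(1)
│       └── F(0)
└── F(3)
    ├── F(2)
    │   ├── F(1)
    │   └── F(0)
    └── F(1)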

As we can see this creates quite a big tree. The height of the tree is equal to the index of the Fibonacci number we are trying to calculate, so for this tree the height is 5. If we wanted to calculate F(15), the height of the tree would be 15.

Since it is a binary tree, each node can have at most two branches. If we treated the Fibonacci tree as a perfectly balanced tree, we would say that it has about 2^h nodes, so the upper bound would be O(2^h), where “h” is the height of the Fibonacci tree. Since we know this tree is not perfectly balanced, this value is too high.

Calculating an upper bound for Fibonacci

Feel free to skip this chapter; in it I will try to calculate a tighter upper bound for Fibonacci.

Let's define the cost T(n) as a recursive function:

\[ T(n) = \left\{\begin{array}{lr} 1, & \text{for } n \leq 1\\ T(n-1) + T(n-2) + 1, & \text{for } n \geq 2 \end{array}\right. \]

For Fib(0) and Fib(1) we return direct values, so the cost is constant: T(0) = T(1) = 1.
For Fib(n) we need to calculate Fib(n-1) and Fib(n-2), which take T(n-1) and T(n-2) respectively, and then we add the two values, which takes O(1) time.

Let’s calculate T for a few values:
T(2) = T(1) + T(0) + 1 = 3
T(3) = T(2) + T(1) + 1 = T(1) + T(0) + 1 + T(1) + 1 =
= 2T(1) + T(0) + 2 = 5
T(4) = T(3) + T(2) + 1 = 5 + 3 + 1 = 9
T(5) = T(4) + T(3) + 1 = 9 + 5 + 1 = 15
T(6) = T(5) + T(4) + 1 = 15 + 9 + 1 = 25
T(7) = T(6) + T(5) + 1 = 25 + 15 + 1 = 41
T(8) = 67   T(9) = 109   T(10) = 177
T(15) = 1973   T(20) = 21’891   T(30) = 2’692’537

Since we know that for a regular binary tree we would have O(2^h), we will try to fit an a·b^h regression function. We can use this site: https://planetcalc.com/

The result of this regression is T(h) = 1.1329 * 1.6403^h => O(1.64^h).
This value increases much more slowly than our previous approximation.

Now we can compare our new upper bound to the previous one.
T(50) is the actual value.

T(50) = 40’730’022’147
O(2^50) = 1’125’899’900’000’000
O(1.64^50) = 55’232’207’639

Now we can clearly see that the previous approximation overshot the real value more than 27’642 times.
The new approximation overshoots it only about 1.36 times.

Brute force approach to Fibonacci number

fun fib(n: Long): Long =
    when {
        n <= 0L -> 0
        n == 1L -> 1
        else -> fib(n - 1) + fib(n - 2)
    }

If we try to calculate fib(50) using this simple function, we will have to wait quite a long time.

Using our previous approximation we know that this requires creating a tree with roughly 55’232’207’639 nodes. On my laptop it took 46 seconds, which is way too long, but we can fix it with dynamic programming.

First we will try to guess how long it will take to calculate F(55) using our formula:
T(55) = O(1.64^55) = 655’256’959’995
This is 11.86 times more work, so it should take 46 * 11.86 = 545 seconds. Our upper bound for this execution is about 9 minutes.

The real execution for this value took 506 seconds.

How long would it take to get F(100)?
1.64^100 / 1.64^50 = 1.64^50 times more work than F(50), so our upper time bound is
46 sec * 1.64^50 ≈ 80’564 years
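Spelling that arithmetic out (one year is roughly 3.15 × 10^7 seconds):

\[ 46\ \text{s} \times 1.64^{50} \approx 46 \times 5.5 \times 10^{10}\ \text{s} \approx 2.5 \times 10^{12}\ \text{s} \approx 80\,564\ \text{years} \]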

Fibonacci memoization

If we look again at our tree for F(5), we can see that we repeat a lot of work in each branch. In this example we calculate F(2) three times and F(3) two times.

Instead of calculating them every time, we could store them in a HashMap where the key would be our “n” and the value F(n).

This is how our recursive calls look with memoization; the values marked in red come from the cache, so we do not need to start another recursive call for them.

import java.math.BigInteger

fun fib(n: Int, memo: MutableMap<Int, BigInteger> = mutableMapOf()): BigInteger =
    when {
        n <= 0 -> BigInteger.ZERO
        n == 1 -> BigInteger.ONE
        memo.containsKey(n) -> memo[n]!!
        else -> (fib(n - 1, memo) + fib(n - 2, memo))
                 .apply { memo[n] = this }
    }

The fib method has a new parameter, “memo”; this is our hash map where we are going to store each value we have already calculated. Initially this hash map is empty.
I also changed the result type from Long to BigInteger, because the new function can easily overflow a Long.

Before we start the recursion, we check our hash map for a cached result:

memo.containsKey(n) -> memo[n]!!

If there is no result in the hash map, we need to calculate it and store it there. However, if we already have a value, we just return it; there is no need to recurse.

else -> (fib(n - 1, memo) + fib(n - 2, memo)).apply { memo[n] = this }

This else branch calls the recursion, stores the result, and returns the value.

There are three very important steps here:

  • Pass “memo” to the recursion as the second argument; without that, each recursive call would have a new memo instance, and results would not be reused
  • Calculate the new value
  • Store the new value in memo. For anyone not familiar with Kotlin syntax, “apply” passes the left side as “this” into the braces { memo[n] = this } and then returns “this”.

Using this function, calculating F(1000) takes only 3 ms on my laptop.
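For reference, this is roughly how such a timing can be measured; measureTimeMillis is part of the Kotlin standard library, fib here is the memoized version above, and the exact number will of course differ from machine to machine:

import kotlin.system.measureTimeMillis

fun main() {
    val elapsed = measureTimeMillis {
        fib(1000)   // memoized version defined above
    }
    println("fib(1000) took $elapsed ms")
}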

You could think that this is the holy grail for all recursive problems, but unfortunately when I try to execute F(10’000) I get:

Exception in thread “main” java.lang.StackOverflowError

The recursive function makes too many nested calls and overflows the call stack. We can solve this issue with the tabulation approach.

Fibonacci tabulation

Tabulation is less intuitive to use, but done correctly it should be faster and usually more memory efficient than memoization. The iterative approach also removes the StackOverflowError issue caused by too many nested calls.

In the previous approach we tried to split the problem into smaller sub-problems while maintaining a map of already solved sub-problems.

Tabulation is a bottom-up approach: this time we start from the sub-problems and build our way to the top, using previous results stored in an array.

If you look again at the memoized F(5) tree, you should see that to calculate the N-th Fibonacci number all we need are the two previous values. We will do that by calculating the elements in order 1, 2, 3, … until we reach N.

import java.math.BigInteger

fun fibTab(n: Int): BigInteger {
    if (n <= 0) return BigInteger.ZERO

    // out[i] holds F(i); the table is filled from the bottom up.
    val out = Array(n + 1) { BigInteger.ZERO }
    out[0] = BigInteger.ZERO
    out[1] = BigInteger.ONE

    for (i in 2..n) {
        out[i] = out[i - 1] + out[i - 2]
    }

    return out[n]
}

This time we start by creating an array for all our elements (n + 1 of them, since we count from 0 to N). The array element at position “i” holds the value of F(i). We initialize the first two elements by hand:
out[0] = 0
out[1] = 1
Now we can iterate and compute the “i”-th value using the previously calculated “i-1” and “i-2” values.
The last array element is our Fibonacci number.

The previous memoization solution took 3 ms to compute F(1000), and it could not reach F(10’000) at all. This method does the same work about 3x faster and can reach F(10’000) and even F(100’000) in only 410 ms.

In this sample we could store just the last two values instead of creating a large array, but the sample was written this way to show how tabulation works.
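A sketch of that space optimization might look like this (fibTwoVars is a name I am introducing here; it keeps the same bottom-up order but only remembers the last two values):

import java.math.BigInteger

fun fibTwoVars(n: Int): BigInteger {
    if (n <= 0) return BigInteger.ZERO

    var previous = BigInteger.ZERO   // F(0)
    var current = BigInteger.ONE     // F(1)
    for (i in 2..n) {
        val next = previous + current   // F(i) = F(i - 1) + F(i - 2)
        previous = current
        current = next
    }
    return current   // F(n)
}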

Summary

Dynamic programming is a powerful approach to algorithm design. Many problems can be solved by dividing them into smaller ones, and DP gives you guidelines on how to do it correctly.

DP also provides a recipe for turning a brute-force algorithm into an optimal solution using one of the optimization techniques.
