Dynamic programming is a fancy name for *storing intermediate results and re-using the stored results instead of re-computing them each time.*

We'll first see an example of a dynamic programming problem. Technical definitions are introduced at the end of the tutorial.

# Tiling problem

**Problem**: You are given a grid of size $n \times 2$ and $n$ tiles of size $2 \times 1$. In how many different ways can you tile the grid such that the entire grid is covered and no tiles overlap? (The tiles look identical to each other. Two ways of tiling are different based on whether the tiles are placed horizontally or vertically.)

**Example**: There are 3 possible ways of tiling a $3 \times 2$ grid.

Tiling 1: three vertical tiles

```
 __ __ __
|  |  |  |
|__|__|__|
```

Tiling 2: 1 vertical tile, followed by 2 horizontal tiles

```
 __ _____
|  |_____|
|__|_____|
```

Tiling 3: 2 horizontal tiles, followed by 1 vertical tile

```
 _____ __
|_____|  |
|_____|__|
```

## First solution (recursion)

In any tiling of the $n \times 2$ grid, the bottom-left square will be covered either with a vertical tile or with a horizontal tile.

- If it is covered with a vertical tile, then what remains is an $(n-1) \times 2$ grid (e.g. tilings 1 and 2 above). We can look at this grid, yet to be tiled, as a *smaller* version of the same problem.
- If it is covered with a horizontal tile, the only way to cover the top-left square (and the square to its immediate right) is to use another horizontal tile, leaving us with an $(n-2) \times 2$ grid (e.g. tiling 3 above).

Since the bottom-left square can only be tiled in these two distinct ways, the number of ways to tile an $n \times 2$ grid is the sum of the number of ways to tile an $(n-1) \times 2$ grid and the number of ways to tile an $(n-2) \times 2$ grid.

Hence, we can define a function $f(n)$ = number of ways of tiling an $n \times 2$ grid, which satisfies $f(n) = f(n-1) + f(n-2)$.

This gives us a recursive solution, which needs a few base cases. Here, $f(1) = 1$ (a $1 \times 2$ grid can only be covered by one vertical tile) and $f(2) = 2$ (a $2 \times 2$ grid can be covered by two vertical tiles or two horizontal tiles).

**Code:**

```python
def tile(n):
    # base cases
    if n == 1: return 1
    if n == 2: return 2
    # recursive case
    return tile(n - 1) + tile(n - 2)
```
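As a quick sanity check, the recursion can be run for small inputs; the function is repeated here so the snippet is self-contained:

```python
def tile(n):
    # base cases: a 1x2 grid has 1 tiling, a 2x2 grid has 2
    if n == 1: return 1
    if n == 2: return 2
    # recursive case: vertical tile first, or two horizontal tiles first
    return tile(n - 1) + tile(n - 2)

# f(3) = 3 matches the three tilings drawn above
print([tile(n) for n in range(1, 8)])  # [1, 2, 3, 5, 8, 13, 21]
```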

## How slow is our solution?

Now, let's see what happens when we call tile(7). This is how the recursion expands.

```
                               tile(7)
                              /       \
                             /         \
                            /           \
                     tile(6)             tile(5)
                    /       \           /       \
             tile(5)     tile(4)    tile(4)   tile(3)
             /     \      /    \     /   \     /   \
       tile(4) tile(3) tile(3) tile(2) tile(3) tile(2) tile(2) tile(1)
          .       .       .       .       .
          .       .       .       .       .
```

In total, tile(6) will be called once, tile(5) will be called 2 times, tile(4) is called 3 times, tile(3) is called 5 times, tile(2) is called 8 times ...

In general, the size of the above recursion tree is exponential in n. Our solution is really slow.
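To see the repetition concretely, one can instrument the function with a call counter (the `calls` counter below is our own addition for illustration, not part of the solution):

```python
from collections import Counter

calls = Counter()  # calls[k] = number of times tile(k) is invoked

def tile(n):
    calls[n] += 1
    if n == 1: return 1
    if n == 2: return 2
    return tile(n - 1) + tile(n - 2)

tile(7)
print(calls[3])  # 5 -- tile(3) is recomputed five times
print(calls[2])  # 8
```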

## Dynamic Programming solution (memoization)

Notice that in the above solution, we calculate the same value repeatedly. For example, we calculate tile(3) 5 times. There's a simple fix for this: *store each value after calculating it*.

```python
# solution with memoization
answer = [None] * (n + 1)  # one slot per subproblem, for the largest n we will query

def tile(n):
    if answer[n] is None:          # answer is not stored (*)
        # calculate the answer ......................... (*)
        if n == 1:
            result = 1             # base case
        elif n == 2:
            result = 2             # base case
        else:
            # recursive case
            result = tile(n - 1) + tile(n - 2)
        answer[n] = result         # store the answer ..... (*)
    return answer[n]               # return stored answer . (*)
```

The important lines in the above code are marked with a (*).

The new recursion tree looks as follows:

```
           tile(7)
          /       \
         /         \
        /           \
   tile(6)          tile(5)
   /     \
tile(5)  tile(4)
  /   \
tile(4) tile(3)
   .      .
   .      .
```

This is because tile(5) calls tile(4) and tile(3) exactly once. Once a value is calculated, subsequent calls to it do no further work. Notice the large time savings here: the entire subtree below the tile(5) on the right is missing.
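As an aside, in Python the same memoization can be obtained with the standard-library decorator functools.lru_cache, whose cache plays the role of the answer array:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache of previously computed results
def tile(n):
    if n == 1: return 1
    if n == 2: return 2
    return tile(n - 1) + tile(n - 2)

print(tile(7))   # 21
print(tile(90))  # much larger inputs are now fast: one computation per distinct n
```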

## Dynamic Programming solution (bottom-up)

There's another way to implement the same solution, bottom-up.

```python
# bottom-up implementation
def tile(n):
    answer = [None] * max(n + 1, 3)
    # base cases
    answer[1] = 1
    answer[2] = 2
    # recursive case
    for i in range(3, n + 1):
        answer[i] = answer[i - 1] + answer[i - 2]
    return answer[n]
```

Note that although implementing the bottom-up solution is much simpler for this problem, in general, which solution is easier to implement (memoization vs bottom-up) depends on the specific problem.
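Since answer[i] depends only on the two previous entries, the bottom-up version can also drop the array and keep just two variables. A sketch of this common space optimization:

```python
def tile(n):
    if n == 1:
        return 1
    prev2, prev1 = 1, 2  # answer[i-2] and answer[i-1], starting at i = 3
    for _ in range(3, n + 1):
        prev2, prev1 = prev1, prev2 + prev1
    return prev1

print(tile(7))  # 21 -- same answer, O(1) extra space
```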

## How fast is the dynamic programming solution?

Both dynamic programming implementations run in O(n), i.e. linear, time. This might be slightly tougher to see in the memoization implementation, but notice that each node gets expanded only once, and each expansion creates at most two children. Hence, the total number of nodes is definitely smaller than 3n = O(n).
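The linear bound can be checked empirically by counting calls in the memoized version (the counter is again our own instrumentation):

```python
calls = 0                  # total tile() invocations, counted for illustration
answer = [None] * 101      # cache sized for n up to 100

def tile(n):
    global calls
    calls += 1
    if answer[n] is None:
        if n <= 2:
            answer[n] = n  # base cases: answer[1] = 1, answer[2] = 2
        else:
            answer[n] = tile(n - 1) + tile(n - 2)
    return answer[n]

tile(100)
print(calls)  # 197 calls for n = 100: linear, not exponential
```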

# Technical definitions (theory)

It is useful to be familiar with some technical definitions related to dynamic programming.

The two important properties a solution needs to have for it to be considered **dynamic programming** are:

- **Optimal sub-structure**: The optimal solution to a problem can be composed from optimal solutions to similar smaller problems. In our tiling example, tile(n) = tile(n-1) + tile(n-2).
- **Overlapping sub-problems**: If implemented naively, the *same* smaller problem is solved many times when trying to solve the original problem. In our tiling example, tile(3) was solved 5 times when we tried to solve tile(7).

A dynamic programming solution consists of two main parts.

- **State**: The parameters that define the subproblems. In the tiling example, tile(i) for i in 1 .. n are the states.
- **Recursive formula**: Defines the relation between the larger problems and the smaller problems. In the tiling example, tile(i) = tile(i-1) + tile(i-2) is the recursive formula.

There are, in general, two ways of implementing dynamic programming solutions to problems. The first method is **recursive** with **memoization**, and the second method is **bottom-up** and **iterative**.

# Next steps

For **trivia** about why Dynamic Programming is named so, a **video**, and other information, check out the posts in this discussion.

The best way to get better at dynamic programming is to solve different kinds of dynamic programming problems. This is what we will be doing in the upcoming tutorials.

# References

- Dynamic Programming - Tiling problem | IARCS Online Study Material