Dynamic programming is a fancy name for storing intermediate results and re-using the stored results instead of re-computing them each time.
Let's see an example of a dynamic programming problem. Once we solve the problem using dynamic programming, the formal technical definitions will be easier to follow.
Problem: You are given a grid of size n \times 2 and n tiles of size 2 \times 1. In how many different ways can you tile the grid such that the entire grid is covered and no tiles overlap? (The tiles are identical to each other; two tilings are considered different based on whether the tiles are placed horizontally or vertically.)
Example: There are 3 possible ways of tiling a 3 \times 2 grid.
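To make "storing and re-using intermediate results" concrete, here is a minimal sketch of how this problem can be solved (the names countTilings and ways are just illustrative). It uses the observation that the last column of an n \times 2 grid is covered either by one vertical tile, leaving an (n-1) \times 2 grid, or by two horizontal tiles, leaving an (n-2) \times 2 grid, so ways(n) = ways(n-1) + ways(n-2).

```cpp
#include <cstdio>
#include <vector>

// Number of ways to tile an n x 2 grid with 2 x 1 tiles.
// ways[i] stores the answer for an i x 2 grid, so each sub-answer
// is computed once and re-used instead of being re-computed.
long long countTilings(int n) {
    std::vector<long long> ways(n + 1);
    ways[0] = 1;                 // the empty grid has exactly one tiling
    if (n >= 1) ways[1] = 1;     // a 1 x 2 grid fits a single vertical tile
    for (int i = 2; i <= n; ++i) {
        // Last column is one vertical tile (leaves an (i-1) x 2 grid),
        // or the last two columns are two horizontal tiles (leaves an (i-2) x 2 grid).
        ways[i] = ways[i - 1] + ways[i - 2];
    }
    return ways[n];
}

int main() {
    std::printf("%lld\n", countTilings(3));  // prints 3, matching the example above
    return 0;
}
```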
Computers can perform only a limited number of operations in a given amount of time, usually about a billion operations per second. Here, by an individual operation, we mean an addition, a multiplication, assigning a value to a variable, and so on. Hence, if we want our algorithm to finish executing in a second or so, we need to make sure that the total number of operations we are asking it to do is less than about a billion.
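To get a feel for this budget, here is a small experiment (a sketch, not a benchmark): a loop that performs roughly a billion additions and measures how long it takes. On typical hardware it finishes in about a second, though the exact figure depends on the compiler, optimization flags, and CPU.

```cpp
#include <chrono>
#include <cstdio>

int main() {
    // Roughly one billion simple operations: additions plus loop bookkeeping.
    const long long N = 1000000000LL;

    auto start = std::chrono::steady_clock::now();
    volatile long long sum = 0;  // volatile keeps the loop from being optimized away
    for (long long i = 0; i < N; ++i) {
        sum = sum + 1;
    }
    auto end = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(end - start).count();
    std::printf("sum = %lld, elapsed = %.2f s\n", (long long)sum, seconds);
    return 0;
}
```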
Estimating the exact run-time of an algorithm without implementing and executing it is very tough, since it depends on many factors - the programming language being used, the compiler, the hardware of the computer itself, and more. Instead, what we'd like to do most of the time is to estimate the run-time approximately.
Rate of growth
In algorithms, we focus on estimating the execution ...