When I learned dynamic programming, I was told it stands in contrast to simple memoization: memoization is just a cache of computed results, whereas dynamic programming involves a "choice" at each step.
For example, take the Fibonacci sequence, F(n) = F(n-1) + F(n-2): naively you just fill in a one-dimensional table with the results [1, 1, 2, 3, 5, 8, 13, ...], and it feels static because there is nothing to choose or decide.
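The table-filling for Fibonacci can be sketched like this (a minimal Python sketch, using the 1-indexed convention F(1) = F(2) = 1 from the sequence above):

```python
def fib_table(n):
    # Bottom-up: fill a 1-D table; each entry is fully determined
    # by the previous two entries -- there is no decision to make.
    table = [1, 1]
    for i in range(2, n):
        table.append(table[i - 1] + table[i - 2])
    return table[:n]

print(fib_table(7))  # [1, 1, 2, 3, 5, 8, 13]
```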
In contrast, the recurrence for the longest common subsequence looks like this:
lcs[m, n] = 0 if m = 0 or n = 0
lcs[m, n] = lcs[m-1, n-1] + 1 if X[m] = Y[n]
lcs[m, n] = max(lcs[m-1, n], lcs[m, n-1]) if X[m] != Y[n]
At every step there is a "choice" to be made depending on the situation. Functions like max() are quite typical of dynamic programming. At least that is how I interpret the "dynamic" part.
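Translating the recurrence directly into a table-filling loop makes the "choice" visible (a Python sketch; the function name and 0-indexed table layout are my own, with row and column 0 standing in for the empty-prefix base case):

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # (m+1) x (n+1) table; lcs[i][0] and lcs[0][j] stay 0 (base case).
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # Characters match: extend the diagonal subproblem.
                lcs[i][j] = lcs[i - 1][j - 1] + 1
            else:
                # The "choice": take whichever subproblem is better.
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
    return lcs[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```

Unlike the Fibonacci table, each cell here is not a fixed function of its neighbors; the branch taken depends on the input strings.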