Here is a little riddle which shows how well dynamic programming can sometimes
work.

In Europe, the currency is made up of euros and cents. There are eight different coins in circulation: 1c, 2c, 5c, 10c, 20c, 50c, 1€ (100c), and 2€ (200c).

**How many different ways can 2€ be made, using as many coins as you want?**

Before we start, what is dynamic programming?

Sometimes, when dealing with a complex problem, you have to, as Descartes said, divide it into a collection of simpler parts.

"diviser chacune des difficultés que j'examinerais, en autant de parcelles qu'il se pourrait, et qu'il serait requis pour les mieux résoudre"

"divide each of the difficulties that I examine into as many parts as possible, and as may be required to solve them best."

-- René Descartes, Discours de la méthode

Dynamic programming, **DP** for short, is a method for tackling a
complex problem by breaking it down into a collection of simpler subproblems.
DP is usually based on a recurrence formula and one or more starting states.

Here, we have to express the solution of the original problem in terms of solutions to smaller problems. We can split the possibilities for making the target amount of 2€ into two categories: those using a 2€ coin and those not using one.

- 0 * 200c + 200c with coins lower than 200c
- 1 * 200c + 0

Let **F** be the function calculating the number of ways we can make a certain
amount. It takes two inputs: the total amount and the largest coin used.

We can see that `F(200, 200) = F(200, 100) + 1`, because there is only one way to make 2€ with a 2€ coin!

Furthermore, we can decompose `F(200, 100)` into multiple categories:

- 0 * 100c + 200c with coins smaller than 100c
- 1 * 100c + 100c with coins smaller than 100c
- 2 * 100c + 0 with coins smaller than 100c
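Written as an equation (the next coin smaller than 100c is 50c), this decomposition reads:

```
F(200, 100) = F(200, 50) + F(100, 50) + F(0, 50)
```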

Let's define:

- `t` is the target amount,
- `c` is the value of the largest available coin,
- `p(s)` is the value of the next coin smaller than `s`,
- `F(t, c)` is the number of ways to make the target amount `t` with coins of value `c` and/or smaller coins.

`t = 0` or `c = 1` are the initial states.
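Putting these definitions together (this is just the decomposition above written as a formula; the code below implements it directly):

```
F(t, c) = Σ  F(t − i·c, p(c))   for i = 0 .. ⌊t/c⌋
F(t, c) = 1                     if t = 0 or c = 1
```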

Instead of creating a function giving the next smaller coin, `c` will be
replaced by its index `m` in the sorted array, and `p(c)` by `m-1`:

```
int arr[8] = {1, 2, 5, 10, 20, 50, 100, 200};

int F(int t, int m) {
    if (arr[m] == 1 || t == 0) {
        return 1;
    }
    int total = 0;
    for (int i = 0; i <= t / arr[m]; i++) {
        total += F(t - i * arr[m], m - 1);
    }
    return total;
}
```

Unfortunately, the larger `t` and `m` are, the more calls to the function are needed. Worse, the same values are calculated several times. To compute the count for an amount of 10€, the function is called 325 195 472 times and it takes nearly 4s.

To fix that, we can simply use memoization. Since the result depends only on the two inputs (the function is deterministic), it can be stored in a table. Every time a result has already been calculated, it is retrieved from the table instead of being recomputed.

```
int memo[10100][10]; // Initialized by memset(memo, 0, sizeof(memo));
                     // sized so the amounts benchmarked below fit
int arr[8] = {1, 2, 5, 10, 20, 50, 100, 200};

int F(int t, int m) {
    if (memo[t][m]) {
        return memo[t][m];
    }
    if (arr[m] == 1 || t == 0) {
        return memo[t][m] = 1;
    }
    int total = 0;
    for (int i = 0; i <= t / arr[m]; i++) {
        total += F(t - i * arr[m], m - 1);
    }
    return memo[t][m] = total;
}
```

Great! We have our solution. The number of function calls was reduced to 66 254 and
it took less than 1s. But wait a minute, couldn't we improve this algorithm even
more?
We've just seen the top-down approach. Let's take a look at the dependencies
for one example, `F(5, 10)`.

To find a way to reduce the dependencies, we can reformulate `F`. Instead of summing over every possible count of the largest coin, the new function `G` either uses one more coin of the current value or moves on to the next smaller coin: `G(t, c) = G(t - arr[c], c) + G(t, c - 1)`. `G` yields this dependency tree:

As you can see, the total number of arrows is greater: 8 instead of 6. However,
every `G(t, c)` call depends on at most two other elements:

- a previous element of its column `c`, and
- the element at the same row `t` in the previous column `c-1`.

Consequently, we can calculate one column at a time while we overwrite the previous column.

```
int G(int t, int m) {
    int tab[10100];
    memset(tab, 0, sizeof(tab));
    tab[0] = 1;
    for (int i = 0; i <= m; i++) {
        for (int j = arr[i]; j <= t; j++) {
            tab[j] += tab[j - arr[i]];
        }
    }
    return tab[t];
}
```

This is the bottom-up approach to dynamic programming. By using only loops instead of recursive calls, this solution is very efficient: more than 100 times faster than the previous one.

F(1000, 200) | Time (ms) | Recursive calls |
---|---|---|
Algo 1 | 3849.650 | 325 195 472 |
Algo 2 | 1.240 | 66 254 |
Algo 3 | 0.066 | 0 |

F(10000, 200) | Time (ms) | Recursive calls |
---|---|---|
Algo 2 | 93.945 | 6 568 754 |
Algo 3 | 0.247 | 0 |

DP helps us solve this problem efficiently. However, DP is not applicable to all optimization problems, and sometimes such a strategy may end up performing a full enumeration.