Dynamic Programming
Dynamic programming applies when subproblems share sub-subproblems. A dynamic-programming algorithm is typically developed in four steps:
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution, typically in a bottom-up fashion.
- Construct an optimal solution from computed information.
- When to use dynamic programming?
- Optimal substructure: an optimal solution contains optimal solutions to subproblems.
- Overlapping subproblems (otherwise a greedy algorithm may suffice).
- Requirements
- Subproblems must be independent, in that they do not share resources.
- Subproblems must be overlapping, in that the same set of relatively small subproblems is solved again and again.
Solution methods
- Top-down method with memoization
    - Recursive procedure.
    - Store the result of each subproblem when it is first solved; later calls return the cached value.
- Some subproblems may not need to be solved at all.
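As a minimal sketch of the top-down method, a memoized Fibonacci in Python (the function and memo names are illustrative, not from the notes):

```python
def fib(n, memo=None):
    # Top-down: recurse as usual, but store each result upon first encounter.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]        # subproblem already solved: reuse the stored value
    if n < 2:
        result = n            # base cases F(0) = 0, F(1) = 1
    else:
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result          # record before returning
    return result
```

Without the memo table the recursion takes exponential time; with it, each subproblem is solved once, giving linear time in `n`.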
- Bottom-up method
- Solve the subproblems in increasing order of size.
    - Suitable when there is a natural ordering of subproblems by size.
    - May have a smaller constant factor than the memoization method due to lower recursion overhead.
- Utilize known structure of subproblems to speed up table access.
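The bottom-up method for the same recurrence, sketched in Python (name `fib_bottom_up` is illustrative):

```python
def fib_bottom_up(n):
    # Bottom-up: fill the table in increasing size; table[i] depends only
    # on table[i-1] and table[i-2], so a left-to-right pass suffices.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Because each entry only looks back two positions, the known structure could be exploited further by keeping just the last two values instead of the whole table.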
- Subproblem graph
- A directed graph representing the dependencies between subproblems and sub-subproblems.
- The running time of a dynamic-programming algorithm is typically linear in the number of vertices and edges of this graph.
- Top-down dynamic programming with memoization can be viewed as a depth-first search of this graph.
- To reconstruct an optimal solution, record not only the optimal value of each subproblem but also the choice that achieved it.
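A sketch of recording choices for reconstruction, using the classic rod-cutting problem (the price table in the test is a hypothetical example; `prices[i]` is the price of a piece of length `i`, with `prices[0]` unused):

```python
def cut_rod(prices, n):
    # Bottom-up rod cutting. Alongside the optimal revenue for each rod
    # length, record the length of the first piece cut in an optimal
    # solution, so the full cut list can be reconstructed afterwards.
    revenue = [0] * (n + 1)
    choice = [0] * (n + 1)   # best first-piece length for each rod size
    for j in range(1, n + 1):
        best = -1
        for i in range(1, j + 1):
            if prices[i] + revenue[j - i] > best:
                best = prices[i] + revenue[j - i]
                choice[j] = i
        revenue[j] = best
    # Reconstruct: repeatedly cut off the recorded first piece.
    pieces = []
    while n > 0:
        pieces.append(choice[n])
        n -= choice[n]
    return revenue[-1], pieces
```

Storing only the optimal values would answer "how much revenue?", but the `choice` table is what makes "which cuts?" recoverable without re-solving subproblems.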