Dynamic programming
In computer science, dynamic programming is a method for solving problems that exhibit the properties of overlapping subproblems and optimal substructure, and it typically takes far less time than naïve methods.
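As a minimal illustration of these two properties, consider computing Fibonacci numbers: the naïve recursion recomputes the same subproblems exponentially many times, while caching (memoizing) each result reduces the work to linear time. The sketch below is illustrative only; the function names are chosen for this example and are not part of any standard library beyond `functools.lru_cache`.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Naive recursion: the same subproblems are recomputed many times."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Dynamic programming via memoization: each subproblem is solved once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(90))  # 2880067194370816120, computed with ~90 subproblem evaluations
```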
The term was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he had refined this to the modern meaning. The field was founded as a systems analysis and engineering topic that is recognized by the IEEE.
The word "programming" in "dynamic programming" has no particular connection to computer programming at all. A program is, instead, the plan for action that is produced. For instance, a finalized schedule of events at an exhibition is sometimes called a program. Programming, in this sense, is finding an acceptable plan of action.
Algorithms that use dynamic programming
- Many string algorithms, including longest common subsequence
- The Cocke-Younger-Kasami (CYK) algorithm, which determines whether and how a given string can be generated by a given context-free grammar
- The use of transposition tables and refutation tables in computer chess
- The Viterbi algorithm (used for hidden Markov models)
- The Earley algorithm (a type of chart parser)
- The Needleman-Wunsch and other sequence alignment algorithms used in bioinformatics
- Levenshtein distance (edit distance); a short DP sketch appears after this list
- Floyd's all-pairs shortest path algorithm
- Optimizing the order for chain matrix multiplication
- The subset sum algorithm
- The knapsack problem; a short 0/1 knapsack sketch appears after this list
- The dynamic time warping algorithm for computing the global distance between two time series
- The Selinger (a.k.a. System R) algorithm for relational database query optimization
- De Boor algorithm for evaluating B-spline curves
- The Duckworth-Lewis method for recalculating target scores when cricket matches are interrupted
- The value iteration method for solving Markov decision processes
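As referenced at the Levenshtein distance entry above, the following is a minimal sketch of the classic table-filling dynamic program for edit distance. The function name and the example strings are chosen for illustration.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic DP table.

    d[i][j] holds the edit distance between a[:i] and b[:j].
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all i characters of a
    for j in range(n + 1):
        d[0][j] = j  # insert all j characters of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```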
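Similarly, as noted at the knapsack entry above, the 0/1 knapsack problem admits a standard dynamic program over items and remaining capacity. The function name and the sample data below are illustrative assumptions, not a fixed convention.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: best[c] is the maximum value achievable with capacity c
    using the items considered so far."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Example: items worth 60, 100, 120 with weights 10, 20, 30 and capacity 50.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```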