Friday, 15 June 2012

Introduction To Dynamic Programming

Overview
Dynamic programming is a technique for solving problems with overlapping subproblems. It solves complex problems by breaking them down into simpler subproblems. Typically, these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type. Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which we can then obtain a solution to the original problem.


In simple words, to solve a given problem we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memoized"; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.
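As an illustration (not part of the original post), the classic memoization example is the Fibonacci numbers: the naive recursion recomputes the same subproblems exponentially many times, while caching each result makes every subproblem cost constant time after its first computation.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Return the nth Fibonacci number, storing each subproblem's
    answer so it is computed only once (memoization)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Without memoization this would take roughly 2^40 calls;
# with it, only about 40 distinct subproblems are solved.
print(fib(40))
```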



Basic Elements of Dynamic Programming
  1. Optimal substructure
  2. Table structure
  3. Bottom-up computation


Applications

  1. Fibonacci numbers
  2. Longest increasing subsequence
  3. Minimum weight triangulation
  4. The partition problem
  5. Approximate string matching

Steps of Solving a Dynamic Programming Problem
  • Develop a mathematical relation that can express any solution and subsolutions for the problem.
  • Prove the principle of optimality.
  • Develop a recurrence relation that relates a solution to its subsolutions, using the mathematical notation of step 1.
  • Write an algorithm to compute the recurrence relation.
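As a sketch (not part of the original post), the four steps above can be followed for one of the listed applications, the longest increasing subsequence: define L[i] as the length of the longest increasing subsequence ending at a[i], note the recurrence L[i] = 1 + max{L[j] : j < i and a[j] < a[i]} (or 1 if no such j exists), and fill the table bottom-up:

```python
def lis_length(a):
    """Length of the longest strictly increasing subsequence of a,
    computed bottom-up in O(n^2) time.
    L[i] = length of the longest increasing subsequence ending at a[i]."""
    n = len(a)
    L = [1] * n
    for i in range(1, n):
        for j in range(i):
            if a[j] < a[i]:              # a[i] can extend the subsequence ending at a[j]
                L[i] = max(L[i], L[j] + 1)
    return max(L) if L else 0

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # e.g. 1, 4, 5, 9
```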
All-Pairs Shortest Paths


Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks us to find the distances (the lengths of the shortest paths) from each vertex to all other vertices. It is convenient to record the lengths of shortest paths in an n-by-n matrix D called the distance matrix: the element dij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex (1 ≤ i, j ≤ n). We can generate the distance matrix with an algorithm called Floyd's algorithm. It is applicable to both undirected and directed weighted graphs provided that they do not contain a cycle of negative length.



Floyd's algorithm computes the distance matrix of a weighted graph with n vertices through a series of n-by-n matrices D(0), D(1), . . . , D(k), . . . , D(n).


Each of these matrices contains the lengths of shortest paths with certain constraints on the paths considered for the matrix. Specifically, the element in the ith row and the jth column of matrix D(k) (k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k. In particular, the series starts with D(0), which does not allow any intermediate vertices in its paths; hence, D(0) is nothing but the weight matrix of the graph.

The last matrix in the series, D(n), contains the lengths of the shortest paths among all paths that can use all n vertices as intermediate and hence is nothing but the distance matrix
being sought.
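The series of matrices described above can be computed in place, updating one matrix as k grows. A minimal sketch (the example graph is an assumption for illustration; the graph must contain no negative-length cycle):

```python
INF = float('inf')  # stands for "no edge"

def floyd(W):
    """Floyd's algorithm: W is the n-by-n weight matrix of the graph.
    Returns the distance matrix D = D(n)."""
    n = len(W)
    D = [row[:] for row in W]      # D(0) is the weight matrix itself
    for k in range(n):             # now allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# A small 4-vertex directed example (hypothetical weights):
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
print(floyd(W))
```

Each pass over k produces exactly the matrix D(k) from the text: shortest paths whose intermediate vertices are numbered not higher than k.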





Optimal Binary Search Tree

A binary search tree is a tree where the key values are stored in the internal nodes, the external nodes (leaves) are null nodes, and the keys are ordered lexicographically. That is, for each internal node, all the keys in the left subtree are less than the key in the node, and all the keys in the right subtree are greater.
When we know the probability of searching for each of the keys, it is quite easy to compute the expected cost of accessing the tree. An optimal binary search tree (OBST) is a BST with minimal expected cost.
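As a sketch of the DP formulation (not given in the original post): let C[i][j] be the minimal expected cost of a BST built on keys i..j. Trying every key k in that range as the root gives the recurrence C[i][j] = min over k of (C[i][k-1] + C[k+1][j]) + sum of the probabilities p[i..j], with C[i][i-1] = 0. The probabilities below are assumed for illustration and cover successful searches only.

```python
def optimal_bst_cost(p):
    """Minimal expected search cost for n sorted keys whose search
    probabilities are p[0..n-1] (1-indexed keys inside the table)."""
    n = len(p)
    # C[i][j] = cost of an optimal BST on keys i..j; empty ranges cost 0.
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]                     # single-key tree
    for length in range(2, n + 1):             # fill by increasing range size
        for i in range(1, n - length + 2):
            j = i + length - 1
            total = sum(p[i - 1:j])            # every key in i..j adds one level
            C[i][j] = min(C[i][k - 1] + C[k + 1][j]
                          for k in range(i, j + 1)) + total
    return C[1][n]

# Hypothetical example: four keys with probabilities 0.1, 0.2, 0.4, 0.3.
print(optimal_bst_cost([0.1, 0.2, 0.4, 0.3]))
```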




0/1 Knapsack problem
Problem statement: A thief is trying to rob a bank and can carry a maximal weight of W in his knapsack. There are n items; the ith item weighs wi and is worth vi dollars. Which items should the thief take?
We can express this fact in the following formula: define c[i, w] to be the value of an optimal solution for
items 1, 2, . . . , i and maximum weight w. Then


c[i, w] = 0                                               if i = 0 or w = 0
          c[i-1, w]                                       if i > 0 and wi > w
          max{vi + c[i-1, w-wi], c[i-1, w]}               if i > 0 and w ≥ wi
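The recurrence translates directly into a bottom-up table fill. A minimal sketch (the example items and capacity are assumptions for illustration):

```python
def knapsack(weights, values, W):
    """0/1 knapsack via the recurrence above:
    c[i][w] = best total value using items 1..i with capacity w."""
    n = len(weights)
    c = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 / column 0 are the base case
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:
                c[i][w] = c[i - 1][w]          # item i does not fit
            else:                              # take the better of skipping or taking item i
                c[i][w] = max(c[i - 1][w], vi + c[i - 1][w - wi])
    return c[n][W]

# Hypothetical instance: 4 items, capacity 5.
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))
```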
