Saturday, February 27, 2010

Old wine, new bottle: dynamic programming.


As part of my grad coursework, I came across this topic... dynamic programming. There's already a lot of description of this topic all over the web, and some of it is truly cool. But when I hear about dynamic programming, I just think it's not a new trick at all... it's kind of an improved version of divide and conquer, one that reduces the time requirement by storing the results of the so-called "common subproblems".

Well, I didn't realize something as old as dynamic programming could be fun to read about. I mean, one of the descriptions that I read was about dividing a problem into subproblems. This is basically argued using what is called "cut and paste", and the argument in general is:

"Suppose I have an optimal solution; then I view this solution as the output of a function whose arguments are two optimal sub-solutions."

Optimal (best) answer to main problem =
Combine(optimal (best) answer to subproblem 1, optimal (best) answer to subproblem 2)



And to prove this argument, we generally say:

If the answers to subproblems 1 and 2 were not optimal, then we could have taken other solutions that were optimal, and the resulting combined answer would be better.
But this can't be, because we assumed that our main answer is already optimal.


E.g. consider a shortest path problem.

We say: I know the optimal answer, the shortest path from a to z.

Then I say: I know that this path is a combination of the shortest path from a to g and the shortest path from g to z.

SP(a,z) = Combine(SP(a,g), SP(g,z))

Now why should the combination of SP(a,g) and SP(g,z) always result in the shortest path? Why?

The subproblems are independent, i.e. the shortest path from a to g does not change based on the selection of the shortest path from g to z.
Because of the above statement, in this case, we can simply add the distances of SP(a,g) and SP(g,z), and that gives us the shortest distance through g.
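A minimal sketch of this combine step, Floyd-Warshall style (the graph and weights below are made up just for illustration):

```python
# Repeatedly apply SP(a,z) = min(SP(a,z), SP(a,g) + SP(g,z))
# for every candidate intermediate vertex g.
INF = float("inf")

def shortest_paths(n, edges):
    """All-pairs shortest distances for vertices 0..n-1.

    edges: list of (u, v, weight) directed edges.
    """
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    # g is the intermediate vertex: combine SP(a,g) and SP(g,z)
    for g in range(n):
        for a in range(n):
            for z in range(n):
                if dist[a][g] + dist[g][z] < dist[a][z]:
                    dist[a][z] = dist[a][g] + dist[g][z]
    return dist

# tiny example: going 0 -> 1 -> 2 (cost 3) beats the direct edge 0 -> 2 (cost 10)
d = shortest_paths(3, [(0, 1, 1), (1, 2, 2), (0, 2, 10)])
print(d[0][2])  # 3
```

Note how the combine is a plain addition, exactly because choosing SP(a,g) never interferes with choosing SP(g,z).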


But is this always the case?

Not really. Consider now, for example, the longest path problem. In this we need to find the longest path from a to z which has no cycles, i.e. a simple path.

Now suppose we pick an intermediate vertex g.
Then LP(a,g) is the longest simple path from a to g,
and LP(g,z) is the longest simple path from g to z.

Now just imagine that both of these paths actually go through a vertex k....

Now let's try to join the two paths:

(a-----k-----g) + (g-----k-----z) = a-----k-----g-----k-----z


Now clearly k----g----k is a cycle, and hence this solution is not valid (we don't want cycles).

Shitty feeling, isn't it?

Now what is really happening here is that the optimal solutions to the subproblems aren't independent.

That is, if LP(a,g) involves k, then LP(g,z) must not involve k, for every intermediate vertex k other than a, g and z.


This is a dependency among subproblems, and such subproblems are called dependent subproblems.
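You can check this with a tiny brute-force search (the four-vertex graph below is made up purely to reproduce the a/k/g/z situation above):

```python
# Undirected toy graph: a--k, k--g, k--z. Vertex k sits on every path.
GRAPH = {
    "a": {"k"},
    "k": {"a", "g", "z"},
    "g": {"k"},
    "z": {"k"},
}

def longest_simple_path(src, dst):
    """Longest cycle-free path from src to dst, by brute-force DFS."""
    best = None

    def dfs(node, path):
        nonlocal best
        if node == dst:
            if best is None or len(path) > len(best):
                best = list(path)
            return  # a simple path ends the moment it reaches dst
        for nxt in GRAPH[node]:
            if nxt not in path:  # never revisit a vertex
                path.append(nxt)
                dfs(nxt, path)
                path.pop()

    dfs(src, [src])
    return best

print(longest_simple_path("a", "g"))  # ['a', 'k', 'g']  -- goes through k
print(longest_simple_path("g", "z"))  # ['g', 'k', 'z']  -- also goes through k
print(longest_simple_path("a", "z"))  # ['a', 'k', 'z']  -- NOT the two joined
```

Both sub-answers use k, so gluing them together would visit k twice; the true LP(a,z) has to be computed with that dependency in mind.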




Now what next? As far as this goes, dynamic programming is similar to divide and conquer. The difference is that dynamic programming makes use of previously calculated results in order to generate newer ones, thus avoiding repeated calculation. In our shortest path example this isn't so visible, but for problems like the knapsack, you can easily see it.
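Here's a rough top-down knapsack sketch showing that reuse (item weights and values are invented for the example):

```python
# 0/1 knapsack, top-down: each (i, cap) subproblem is solved once,
# cached, and reused every time the recursion hits it again.
from functools import lru_cache

def knapsack(weights, values, capacity):
    """Max total value of items fitting in `capacity`, each item used at most once."""

    @lru_cache(maxsize=None)  # this cache is the whole "dynamic" part
    def best(i, cap):
        if i == len(weights):
            return 0
        skip = best(i + 1, cap)                       # don't take item i
        if weights[i] <= cap:
            take = values[i] + best(i + 1, cap - weights[i])  # take item i
            return max(skip, take)
        return skip

    return best(0, capacity)

print(knapsack((2, 3, 4), (3, 4, 5), 5))  # 7: take items 0 and 1
```

Delete the `lru_cache` line and it degrades into plain divide and conquer, recomputing the same (i, cap) states exponentially many times; that cache is the only real difference.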


So now, how to excel at "dynamic programming"? IMHO you just need to know divide and conquer by heart in order to do this stuff... after all,

dynamic programming isn't something new, it's just divide and conquer with some optimizations.