# Surfing on Greedy Algorithms and Dynamic Programming

Authors : Jeyakumaran , Anantha Raam, Sakthivel, Pushkaran, Swarna

Edited by : Swarna S

There are different methods to solve a problem, like the brute force method, the divide and conquer method, etc. I’m pretty sure you would have heard of these terms.

In this article we will learn two new algorithmic approaches, namely the greedy approach and the dynamic programming approach. We will later compare the two and try to understand which one is better.

SO IS IT GREEDY BECAUSE IT IS ONLY FOR CERTAIN TYPES OF PROBLEMS??

No!!! That’s not the reason for its name.

The greedy method can be used only for optimization problems. An optimization problem is any problem that involves maximizing or minimizing some quantity. Minimizing the amount of money to be spent on travelling is an optimization problem.

Let’s see how this approach works.

1. Split the given problem into a set of different steps
2. Obtain the output of each step. If it is feasible, add it to the solution set.
3. From the resulting solution set, select a solution that satisfies the given condition.

Why should we be greedy?

Suppose you want to buy a car. How would you go about doing it ?

We first choose the car brand. We then filter the cars under that brand depending on our liking, adding another criterion at each stage. Finally we arrive at our desired car.

In this situation, our problem (choosing a car) is divided into stages (filtering the cars based on a condition), and at each stage we get an output based on which the next stage is carried out. Finally we arrive at our desired solution (the best car). In this example we have actually followed a greedy process to solve the problem.

This is a simple real-life example to help you understand the idea. The Knapsack problem and the Optimal Merge problem are a few famous problems solved using this approach. Another one, the job sequencing problem, is discussed below.

# A Greedy Problem — Job Sequencing Problem

In this problem we have n jobs j1, j2, …, jn, each with an associated deadline d1, d2, …, dn and profit p1, p2, …, pn. Profit is earned only if the job is completed on or before its deadline. Assuming that each job takes unit time to complete and only one job can be executed at a time, we have to find a subset of jobs that can be completed before their deadlines with maximum total profit.

Suppose we are given the following jobs as input:

We can solve the above problem by using the greedy approach. First, we sort the jobs according to their profits in the descending order.

Note! If two or more jobs have the same profit, sort them as per their order in the job list.

The second step is to find the maximum deadline among them and make that many time slots available for performing the jobs. For instance, over here the maximum of the given deadlines is 9, so we make 9 time slots available for performing the jobs (the time slots correspond to the available deadlines).

The last step is to place the jobs that we sorted earlier into the given time slots. If the time slot for a given job is empty, place the job in that slot. If it is not empty, move on to the next job.

For example, here the jobs 5, 6, 3 and 1 are placed successfully. Job 9 cannot be done since job 5 has occupied slot 4, so we move on to the next job, job 8.

In a similar way, you can fill up the slots and find out the maximum number of jobs that you can perform within the deadlines.

The maximum profit after finishing the job can be calculated by finding the corresponding sum of the profits of the jobs in the set S.
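The greedy procedure above can be sketched in Python. The job list here is a hypothetical example (id, deadline, profit), not the table from the article; each job is placed in the latest free slot on or before its deadline, which keeps earlier slots open for tighter jobs:

```python
def job_sequencing(jobs):
    """jobs: list of (job_id, deadline, profit) tuples."""
    # Sort by profit, descending; Python's sort is stable,
    # so ties keep their original order in the job list.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)  # slot 0 is unused
    total_profit = 0
    for job_id, deadline, profit in jobs:
        # Place the job in the latest free slot on or before its deadline.
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = job_id
                total_profit += profit
                break
    return [j for j in slots if j is not None], total_profit

# Hypothetical input: (id, deadline, profit)
jobs = [("J1", 2, 100), ("J2", 1, 19), ("J3", 2, 27),
        ("J4", 1, 25), ("J5", 3, 15)]
print(job_sequencing(jobs))  # (['J3', 'J1', 'J5'], 142)
```

The set S of scheduled jobs and the total profit fall out of the same pass over the sorted list.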

You can also solve this problem by using the Gantt chart. Consider another example.

The table given below is the Gantt chart. Jobs which meet their deadline are added.

Since the jobs are already sorted in order of decreasing profit and the deadline of J4 is 2, we place it in the nearest empty slot on or before 2.

Next, since the deadline of J1 is 5, we place it in the nearest empty cell on or before 5.

Now, we take job J3. Since its deadline is 3, we place it in the nearest empty cell on or before 3.

Consider job J2. Its deadline is 3. We’d like to place it in an empty slot on or before 3, but since slot 3 is already occupied, we place it in the nearest earlier empty slot.

Now, finally choosing job J5 over job J6, we place it in the empty slot on or before its deadline 4.

Now since all the slots are filled, we have job J6 which cannot be completed. Thus the optimal schedule is J2,J4,J3,J5,J1.

# Hold on…so are you saying being greedy is good ?

The answer is no. Let’s see why. You can compare the characteristics of the greedy approach to those of the ant and the man in the stories of Ramakrishna.

According to the story, the man saves a huge bucket of sweets offered to god by placing a small sugar cube a short distance away from the bucket, in the path of the ant. The ant is satisfied with the sugar cube and leaves the bucket of sweets alone. Similarly, we humans also get satisfied with small rewards along the way to victory, and don’t bother about the loss we have to pay in the long term.

The greedy approach picks the local best at each step in the hope of reaching the globally best solution. However, this approach may not always yield the best solution, since locally optimal choices can lead to a globally suboptimal answer.

One such example is the minimum coin exchange problem.

Problem: Suppose we have an infinite supply of {1, 2, 5, 10, 20, 50, 100, 500, 1000} valued coins/notes. What is the minimum number of coins and/or notes needed to make the change? The user gives the change to be made as input.

Test Case 1: 70

Test Case 2: 121

The greedy approach for the given problem is stated below.
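A minimal sketch in Python, assuming the standard strategy of repeatedly taking the largest denomination that still fits:

```python
def greedy_change(denominations, amount):
    """Repeatedly take the largest coin/note that fits into the remaining amount."""
    result = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change([1, 2, 5, 10, 20, 50, 100, 500, 1000], 70))   # [50, 20]
print(greedy_change([1, 2, 5, 10, 20, 50, 100, 500, 1000], 121))  # [100, 20, 1]
```

For the test cases above, the answers are 2 notes (50 + 20) and 3 notes (100 + 20 + 1).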

Here, picking the best note at each point works fine. But when you change the problem a bit, it doesn’t. Let’s see how.

Problem: Suppose we have an infinite supply of {1, 3, 4, 5} valued coins/notes. What is the minimum number of coins and/or notes needed to make the change?

Test Case 1: 7

Test Case 2: 12

By Greedy :

Test Case 1: 3 {5, 1, 1}

Test Case 2: 4 {5, 5, 1, 1}

But the optimum solution is

Test Case 1 : 2 {4,3}

Test Case 2 : 3 {4,4,4}

Therefore, too much greed leads to too much loss!!!!

How can we manage this fault? Well, there are many methods you can use instead. One of them is Dynamic Programming (DP).

What is Dynamic Programming alias DP ?

Let us say that we have a machine, and to determine its state at time t we have certain quantities called state variables. At certain times we have to make a decision that changes these variables and thus the state of the system; the consequences may or may not be known to us in advance. These decisions are equivalent to transformations of the state variables, and the results of previous decisions help us in choosing the future ones.

What do we conclude from this? We need to break a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems. If you are given a problem which can be broken down into smaller sub-problems, and these can still be broken into smaller ones, and you find that some of these sub-problems overlap, then you’ve encountered a Dynamic Programming problem.

There are 2 ways of solving such a problem:

1) Memoization (top-down)

2) Tabulation (bottom-up)

Let’s take an example first: calculating the Fibonacci series for n = 5.

Here we can see that the left subtree of F(4) and the right subtree of F(5) are the same. Like this, we end up doing unnecessary computations again and again.

This is where dynamic programming comes into play. It stops those unnecessary computations and decreases the complexity of solving a given problem.
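For instance, the memoization (top-down) variant caches each Fibonacci value the first time it is computed, so every sub-problem is solved only once. A minimal Python sketch, using the convention F(0) = 0, F(1) = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down: each fib(k) is computed once, then looked up from the cache."""
    if n <= 1:
        return n  # base cases: F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```

Without the cache, the call tree repeats F(3), F(2), etc.; with it, each value is computed exactly once.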

Tabulation:

It is a bottom-up approach where we solve a dynamic programming problem by first filling up a table and then computing the solution to the original problem from the table. Here we simply fill the look-up table first.

Steps:

• We begin by initializing the base values.
• We then run a loop that iterates over the remaining values of i.
• At every iteration, we update the ith entry in the look-up table by combining the solutions to previously solved sub-problems.
• Finally, fib(n) returns table[n].
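The steps above can be sketched as (again with F(0) = 0, F(1) = 1):

```python
def fib_tab(n):
    """Bottom-up: fill the look-up table from the base cases upward."""
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1  # base values
    for i in range(2, n + 1):
        # Combine the solutions to previously solved sub-problems.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(5))  # 5
```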

## Let’s see a real-life application of DP: the Longest Common Subsequence (LCS) problem.

The problem :

We are given two strings: string S of length n, and string T of length m. Our goal is to find the longest common subsequence: the longest sequence of characters that appears left-to-right (but not necessarily in a contiguous block) in both strings.

Consider the two strings S = ABCAB and T = ACEABD where n = 5 and m = 6 respectively.

If you observe, ACAB is the longest subsequence common to both of the given strings.

Note that there are many more common subsequences, but we are interested in the one of maximum length. For instance, A, C, B, AA and AAB are also common, but not of maximum length. Only ACAB is.

Hmm..the problem sounds good. But why should we learn to solve this problem ?

This problem is a very good example for understanding the power of dynamic programming.

The LCS technique is widely used in genetics for analyzing the DNA strands of various organisms. It helps us in understanding the differences in these creatures at the cellular level.

Moving on, let’s try to frame a solution. This problem can be solved by brute force: generate all possible subsequences of one string, check whether each is present in the other, and pick the longest one.

For such an approach, the time complexity is of the order O(n · 2^n). This is because one string has 2^n possible subsequences, and checking whether each one is present takes O(n).

This process takes a lot of time, and as n grows it quickly becomes intractable. So how can we make it better? Let’s try the DP approach here.

Say LCS[i,j] is the length of the LCS of S[1..i] with T[1..j].

How can we solve for LCS[i,j] in terms of the LCS’s of the smaller problems?

Case 1: what if S[i] != T[j]?

Then the desired subsequence has to ignore one of S[i] or T[j], so we have:

LCS[i, j] = max(LCS[i − 1, j], LCS[i, j − 1])

Case 2: what if S[i] = T[j]?

Then the LCS of S[1..i] and T[1..j] might as well match them up. For instance, if I gave you a common subsequence that matched S[i] to an earlier location in T, you could always match it to T[j] instead. So, in this case we have:

LCS[i, j] = 1 + LCS[i − 1, j − 1]

So, we can just run two loops (over the values of i and j), filling in the LCS table using these rules. Here’s what it looks like pictorially for the example S = ABAZDC and T = BACBAD, with S along the leftmost column and T along the top row.

We store the values of the recursive algorithm in a matrix for reading out the final output. The final answer (the length of the LCS of S and T) is in the lower-right corner.

The time taken to fill out this matrix is O(mn), where n is the size of string S and m is the size of string T. Thus we have achieved a better time complexity than the first solution.
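A sketch of the table-filling in Python. Strings are 0-indexed here, so S[i − 1] plays the role of the 1-indexed S[i] above:

```python
def lcs_length(S, T):
    n, m = len(S), len(T)
    # table[i][j] = length of the LCS of S[:i] and T[:j]
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if S[i - 1] == T[j - 1]:
                # Matching characters: extend the diagonal sub-problem.
                table[i][j] = 1 + table[i - 1][j - 1]
            else:
                # Otherwise ignore one of the two characters.
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]  # the lower-right corner

print(lcs_length("ABAZDC", "BACBAD"))  # 4
```

For the earlier example, lcs_length("ABCAB", "ACEABD") also returns 4, the length of ACAB.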

But this isn’t the end! You can optimize this solution further and reduce it to even better complexities. The important part is that the DP approach makes your problem easier to solve and faster to run.

Food for thought: in the LCS problem, we have seen a bottom-up solution. Try to come up with a top-down one.

Now we can solve the minimum exchange problem using DP and get the desired solution.

Let’s use the tabulation method…..

List the given coins along one axis of the table and the amounts along the other. Fill each inner cell with the minimum number of coins required to make that amount using only the coins considered so far. The value in the last cell of the table is the solution.

For example, in the row where only the coins 1 and 3 are available, to make 4 we take one coin of value 3 plus the coins needed for 4 − 3 = 1, which is one more coin. Therefore two coins for four. (Once the coin 4 itself is considered, this improves to a single coin.)
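A compact sketch of the tabulation in Python, using a one-dimensional table (a simplification of the two-dimensional table described above, keeping only the best answer per amount):

```python
def min_coins(denominations, amount):
    INF = float("inf")
    # table[a] = minimum number of coins needed to make amount a
    table = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in denominations:
            # Try taking this coin plus the best solution for the remainder.
            if coin <= a and table[a - coin] + 1 < table[a]:
                table[a] = table[a - coin] + 1
    return table[amount]

print(min_coins([1, 3, 4, 5], 7))   # 2, e.g. {4, 3}
print(min_coins([1, 3, 4, 5], 12))  # 3, e.g. {4, 4, 4}
```

Unlike the greedy version, this finds the true optimum for the {1, 3, 4, 5} test cases.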

# Which is the best one ?

Wow! We have learnt two approaches, but which one is better? Well, there’s no single answer. Depending on the situation, the programmer needs to apply the appropriate one.

We hope that you now have an idea of when you should use greedy and when you should not. There are many other interesting problems like the ones above that you can check out.

Happy Learning!!
