# What are Greedy Algorithms?

These refer to a class of algorithms that solve a given problem with a “heuristic” – your best guess! If we think of the solution as a sequence of steps, a greedy algorithm makes the locally optimal choice at each step, in an attempt to arrive at a globally optimal solution to the problem.

Also, as they are heuristics, there is no guarantee that they always produce an optimal solution. There may exist corner cases or data sets on which a given greedy algorithm does not work.

My familiarity with this class of algorithms comes mainly from my work as a graduate student, where I implemented various routing algorithms. I worked on prototyping a novel hop-by-hop routing protocol called GSTAR for the Future Internet Architecture project. We also tried to extend the unicast protocol to do efficient multicast – using a greedy approach!

So, without further ado, let’s start decoding the basics of this algorithmic paradigm.

# Components of Greedy Approach

There are two components to a greedy algorithm:

1. Optimal substructure: the optimal solution to a problem of size ‘n’ incorporates optimal solutions to sub-problems of size k, where k < n.
2. Greedy choice property: a problem is said to satisfy the greedy choice property if a globally optimal solution can be reached by making locally optimal choices.

Note – when a problem has optimal substructure but its sub-problems overlap, we use the Dynamic Programming strategy instead (to be covered in another blog).

# Using Greedy

In general, if you have an optimization problem at hand – that is, a problem that involves finding a minimum, maximum, shortest path, etc. – a greedy strategy could be used.

### Internet Routing !

I wanted to add here that this is also the most popular algorithmic paradigm in the design of routing protocols. Routing often requires us to find optimal paths to route a packet in a network [4] (a connected graph with routers as the nodes, and each edge assigned a weight that might represent delay or the number of hops between router nodes). Be it Prim’s / Kruskal’s algorithm [4] to construct a minimum spanning tree or Dijkstra’s algorithm to find the shortest path, they all rely on the greedy approach of picking a minimum-cost edge at each step and adding nodes to the spanning tree or the shortest-path set, in an attempt to find a globally optimal solution.
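To make the routing connection concrete, here is a minimal sketch of Prim’s algorithm in Python. The function name `prim_mst` and the edge-list input format are my own choices for illustration; the greedy step is exactly the one described above – always pop the cheapest edge that connects a new node to the growing tree.

```python
import heapq

def prim_mst(n, edges):
    """Prim's MST: greedily add the minimum-weight edge that
    connects a not-yet-visited node to the growing tree."""
    # Build an adjacency list: node -> list of (weight, neighbor).
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))

    visited = [False] * n
    heap = [(0, 0)]       # (edge weight, node); start the tree at node 0
    total_cost = 0
    while heap:
        w, u = heapq.heappop(heap)   # locally optimal: cheapest crossing edge
        if visited[u]:
            continue                 # skip stale heap entries
        visited[u] = True
        total_cost += w
        for edge in adj[u]:
            if not visited[edge[1]]:
                heapq.heappush(heap, edge)
    return total_cost
```

For a triangle with edge weights 1, 2 and 3, the two cheapest edges form the spanning tree, so `prim_mst(3, [(0, 1, 1), (1, 2, 2), (0, 2, 3)])` returns 3.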

Next, we will look at a few such greedy problems here on the blog and understand the greedy strategies used to solve them – we understand better when we solve problems. Let’s start with a popular and well-known one:

## Problem :

Input : a coin denomination set D with denominations d1 > d2 > … > dm, and a sum “S”.

Output : the minimum number of coins required to form the sum “S” (assume you have as many coins of each denomination as you would like); num[k] denotes the number of coins of denomination dk used.

## Algorithm :

• Repeat the steps below until the sum remaining, S, is 0:
1. Find the largest coin denomination (say dk) that is ≤ S.
2. Use ⌊S/dk⌋ coins of denomination dk; update num[k].
3. Update S = S mod dk.

### Pseudocode

```
// denominations d[1] > d[2] > ... > d[m], sorted in decreasing order
for k = 1 to m:
    num[k] = ⌊S / d[k]⌋   // count as many as you would need of each denomination
    S = S mod d[k]        // the remainder is the next sub-problem to be solved
```
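The pseudocode above translates almost line-for-line into Python. This is a sketch for illustration – the function name `greedy_coin_change` is my own – returning the num[k] counts as a dictionary keyed by denomination:

```python
def greedy_coin_change(denominations, S):
    """Greedy change-making: for each denomination (largest first),
    take as many coins as fit, then continue with the remainder."""
    num = {}
    for d in sorted(denominations, reverse=True):
        num[d], S = divmod(S, d)  # floor(S/d) coins of d; remainder is the new S
    return num
```

For the U.S. denominations and S = 30, this returns one 25-cent coin and one 5-cent coin – 2 coins in total.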

### Greedy choice :

• At each step we make the locally optimal choice: pick the largest denomination dk from set D such that dk ≤ S, and add it to the solution set (without worrying about the global optimum).
• The sum S is then reduced by dk, and the next locally optimal choice is made for the sum (S – dk), and so on at each step, until the remaining sum is zero.

### Optimal Substructure :

• Let “P” denote the original problem of making the sum S using the minimum number of coins from the denomination set d1, d2 … dm.
• Suppose the greedy solution starts with denomination dk, dk ≤ S. Then we are left with the sub-problem P’ of making change for (S – dk).
• Let “O” be an optimal solution to P and let “O’” be an optimal solution to P’. Then, clearly, cost(O) = cost(O’) + 1 (where cost = minimum number of coins), indicating that the globally optimal solution contains optimal solutions to sub-problems.
• Thus, the optimal substructure property clearly follows.

Example : consider the U.S. coin denominations: 1 cent, 5 cents, 10 cents, 25 cents, 50 cents. Find the minimum number of coins required to make a sum of 30.

```
Answer : Minimum number of coins needed to make 30 = 2 coins
(i.e., one 25-cent coin and one 5-cent coin)
```

### But, this heuristic fails …

Now, let’s consider a case where this fails. Suppose the only coin denominations supported are {20, 15, 5} and we have to find the minimum number of coins required to make a sum of 30.

The greedy algorithm described here returns 20, 5, 5 – 3 coins – however the actual minimum is 15, 15 – 2 coins. So, yes, it does not give you the optimal solution in all cases, but it works in many; for example, the algorithm outlined here works for the United States coin denominations.
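This failure case is easy to check in code. Below is a small self-contained sketch (the function name `min_coins_greedy` is my own) that counts the coins the greedy strategy uses:

```python
def min_coins_greedy(denoms, S):
    """Count coins used by the greedy strategy: always take the
    largest denomination that still fits into the remaining sum."""
    coins = 0
    for d in sorted(denoms, reverse=True):
        coins += S // d   # take as many coins of this denomination as possible
        S %= d            # remainder becomes the next sub-problem
    return coins

# For denominations {20, 15, 5} and S = 30, greedy takes one 20 and two 5s,
# using 3 coins, even though two 15s (2 coins) would be optimal.
```

Running it on the U.S. denominations for S = 30 gives 2 coins, while on {20, 15, 5} it gives 3 – one more than the true optimum.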

We will see how to handle such failure cases with another way of solving this problem, a little later on this website. So, do check back.

## Up next on the blog

In the next week’s blog, we will look at two more popular problems solved using Greedy approach :

1. Fractional Knapsack Problem
2. Activity Selection Problem

Until then, keep decoding ! Come back next week.

### Caveat :

Although induction [3] helps prove a greedy strategy correct, and this class of algorithms seems intuitive and simple to code, they may not be as straightforward to a beginner.

So, don’t beat yourself up if you are unable to devise a solution to a popular greedy problem in, say, 30 minutes or even a day. It takes time. Keep learning and applying – eventually we will get there. 🙂