Understanding Map-Reduce

Quite a few people seem to be intimidated by the concept of Map-Reduce. As it turns out, Map-Reduce is actually quite simple and straightforward once you understand the basic principle.

 

Basic principle

A basic Map-Reduce job consists of two steps. I guess you are not going to be very surprised when I tell you that these steps are called Map and Reduce.

The Map process gets the raw data as input and discards anything in the data we are not interested in. Each input element has a corresponding output element, so we end up with the same number of data elements, just in a simplified form. Simple, right?

The Reduce process then takes the output of the Map process and combines or discards data to produce the result. Again, pretty simple, right? There is one catch with the Reduce though: it should output exactly the same data structure as it took as its input. The point is that this lets you run the same Reduce process multiple times if you want to. I will come back to why you might want to do so.
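
To make that shape explicit, here is a minimal sketch (my own illustration, not code from the sample project) of the two steps as generic delegate signatures. Note how the Reduce signature uses one and the same element type for input and output, which is exactly what makes it repeatable.

using System.Collections.Generic;

// Hypothetical delegate signatures, only to illustrate the shape of the two steps.
// Map turns raw input elements into simplified output elements.
public delegate IEnumerable<TOut> MapStep<TIn, TOut>(IEnumerable<TIn> input);

// Reduce takes and returns the same element type, so it can be run again on its own output.
public delegate IEnumerable<T> ReduceStep<T>(IEnumerable<T> input);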

 

[Image: diagram of the Map and Reduce steps]

 

A simple example

The following code loads a few orders and runs a Map-Reduce job on them.

private static void MapReduce()
{
    var orders = LoadOrders();
    Print(orders);

    var mapped = Map(orders);
    Print(mapped);

    var reduced = Reduce(mapped);
    Print(reduced);
}

 

The Order data structure looks like this:

public class Order
{
    public Order()
    {
        Lines = new List<OrderLine>();
    }

    public int CustomerId { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public string Book { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

As you can see, we have an order with a customer and a number of order lines. Each order line contains the book's title, the quantity, and the amount it was sold for. During the Map process we just extract the data we are interested in: in this case the book's title and the price it was sold for. We store that in the following structure:

public class ReducedTo
{
    public string Book { get; set; }
    public decimal Amount { get; set; }
}
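
The driver method above also calls two helpers, LoadOrders and Print, that are not shown in this post. As a rough sketch of what they might look like (the book titles and numbers below are made up purely for illustration):

private static IEnumerable<Order> LoadOrders()
{
    // Hard-coded sample data; any source of orders would do here.
    return new List<Order>
    {
        new Order
        {
            CustomerId = 1,
            Lines =
            {
                new OrderLine { Book = "Book A", Quantity = 1, Price = 10m },
                new OrderLine { Book = "Book B", Quantity = 2, Price = 25m }
            }
        },
        new Order
        {
            CustomerId = 2,
            Lines =
            {
                new OrderLine { Book = "Book A", Quantity = 1, Price = 10m }
            }
        }
    };
}

private static void Print<T>(IEnumerable<T> items)
{
    // A trivial stand-in: the output in the post clearly formats each item,
    // but Console.WriteLine is enough to run the sample.
    foreach (var item in items)
        Console.WriteLine(item);
    Console.WriteLine();
}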

 

The Map process

So the Map function takes a collection of Order items as input and returns a collection of ReducedTo items.

private static IEnumerable<ReducedTo> Map(IEnumerable<Order> orders)
{
    var result = new List<ReducedTo>();

    foreach (var order in orders)
    {
        foreach (var line in order.Lines)
        {
            result.Add(new ReducedTo()
            {
                Book = line.Book,
                Amount = line.Price
            });
        }
    }

    return result;
}

In fact, you could easily rewrite this as a LINQ query:

private static IEnumerable<ReducedTo> Map(IEnumerable<Order> orders)
{
    return from order in orders
           from line in order.Lines
           select new ReducedTo
           {
               Book = line.Book,
               Amount = line.Price
           };
}

And in fact that is all there is to a Map process: it is just a LINQ select. When you look at it that way it becomes a lot simpler and easier to understand :-)
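
Strictly speaking, the two from clauses in the query above are equivalent to a SelectMany/Select combination, so if you prefer method syntax the same Map can be written like this (it needs a using System.Linq directive):

private static IEnumerable<ReducedTo> Map(IEnumerable<Order> orders)
{
    // Flatten every order into its lines, then project each line to the simplified shape.
    return orders
        .SelectMany(order => order.Lines)
        .Select(line => new ReducedTo { Book = line.Book, Amount = line.Price });
}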

 

The Reduce process

With the Map done we can focus on the Reduce part. The important thing is that the Reduce uses the output type of the Map function as both its input and its output type. As long as we take this into account the function is quite simple to write.

private static IEnumerable<ReducedTo> Reduce(IEnumerable<ReducedTo> input)
{
    var result = new List<ReducedTo>();

    foreach (var reduced in input)
    {
        var existing = result.FirstOrDefault(r => r.Book == reduced.Book);
        if (existing != null)
        {
            existing.Amount += reduced.Amount;
        }
        else
        {
            // Copy the item so that summing the amounts never mutates the caller's objects.
            result.Add(new ReducedTo { Book = reduced.Book, Amount = reduced.Amount });
        }
    }

    return result;
}

 

Pretty simple, and if we think about it, it is really just a grouping. So just as with the Map process, we could easily rewrite this as a LINQ group by query:

private static IEnumerable<ReducedTo> Reduce(IEnumerable<ReducedTo> input)
{
    return from sale in input
           group sale by sale.Book into grp
           select new ReducedTo()
           {
               Book = grp.Key,
               Amount = grp.Sum(item => item.Amount)
           };
}

 

So where the Map process is just a LINQ select, the Reduce process is just a LINQ group by, with the additional constraint that the input type is also the output type. Makes things a lot simpler, right? :-)
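
The same goes for method syntax: the group by query above becomes a GroupBy followed by a Select.

private static IEnumerable<ReducedTo> Reduce(IEnumerable<ReducedTo> input)
{
    // Group by book title and sum the amounts within each group.
    return input
        .GroupBy(sale => sale.Book)
        .Select(grp => new ReducedTo { Book = grp.Key, Amount = grp.Sum(item => item.Amount) });
}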

 

Running the sample code produces output like this.

[Image: console output of the sample Map-Reduce run]

 

Advantages of Map-Reduce

We have seen that Map-Reduce is just another way of doing a LINQ group by query, so what is the big deal? One of the biggest advantages of Map-Reduce is that you can parallelize it, either over multiple threads inside a single process or, if you want to scale out, over many machines. The key to this is being able to repeat the Reduce step multiple times.

 

Suppose we want to analyze a lot of sales data and compute the sales amount per article. We could partition the data into as many partitions as we want; let's assume we split the sales orders into three partitions. For each group of orders we map the sales into a collection of article/sales-amount pairs, where each article can appear multiple times. After this is done, the Reduce step groups the data so we have a single entry for each article with the total sales amount for that partition. When that has been done for each partition, we combine the three reduced sets into a single set. This combined set looks much like the set after the first Map process, in that it contains duplicate entries for each article, so all we need to do is run the same Reduce process over this much smaller set of data to get the final result. So in reality it is just divide and conquer, where we divide the original data into as many partitions as we want. In other words, a really easy system for scale-out scenarios :-)
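
To make that concrete with some made-up numbers: suppose one partition reduces to { Article A = 100, Article B = 40 } and another reduces to { Article A = 60, Article C = 25 }. Combining the two partial results gives a set in which Article A appears twice, and running Reduce once more over it yields { Article A = 160, Article B = 40, Article C = 25 }, exactly the totals we would have gotten from reducing all of the data in one go.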

 

[Image: diagram of Map-Reduce over multiple partitions]

 

This is quite easy to do with the same Map and Reduce functions we used before. The code below loads 25 different sets of orders and runs the Map and Reduce steps on each of them. Once that is done, it combines the different output sets into one new input and runs another Reduce over that to get the end result.

private static void ParallelMapReduce()
{
    var tasks = new List<Task<IEnumerable<ReducedTo>>>();
    for (int i = 0; i < 25; i++)
    {
        // Each task handles one partition: load, map and reduce its own set of orders.
        var task = Task<IEnumerable<ReducedTo>>.Factory.StartNew(() =>
        {
            var orders = LoadOrders();
            var mapped = Map(orders);
            var reduced = Reduce(mapped);
            return reduced;
        });
        tasks.Add(task);
    }

    Task.WaitAll(tasks.ToArray());

    foreach (var task in tasks)
    {
        Print(task.Result);
    }

    // Combine the 25 partial results and run Reduce once more for the end result.
    var result = Reduce(tasks.Select(t => t.Result));
    Print(result);
}

To combine the different results I created a simple overload of the Reduce function that just concatenates the multiple inputs and calls the original Reduce function, like this:

private static IEnumerable<ReducedTo> Reduce(IEnumerable<IEnumerable<ReducedTo>> args)
{
    var input = new List<ReducedTo>();

    foreach (var item in args)
    {
        input.AddRange(item);
    }

    return Reduce(input);
}
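
And just like the earlier functions, this overload shrinks to almost nothing with LINQ; flattening the partial results is exactly what SelectMany does:

private static IEnumerable<ReducedTo> Reduce(IEnumerable<IEnumerable<ReducedTo>> args)
{
    // Flatten all partial results into a single sequence and reduce that once more.
    return Reduce(args.SelectMany(partial => partial));
}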



 



Running this will produce output like below. The white Map/Reduce output shows the reduced sets from the 25 tasks, and the yellow output is the second Reduce done over the combined output of the first 25 Reduce calls.



[Image: console output of the parallel Map-Reduce sample]
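
As a side note, explicit tasks are not the only way to fan this out inside a single process. A hypothetical PLINQ variant of the same idea (my own sketch, reusing the Map, Reduce and LoadOrders functions above and letting AsParallel schedule the 25 partitions across threads) could look like this:

private static void PlinqMapReduce()
{
    // Map and reduce each of the 25 partitions in parallel.
    var partials = Enumerable.Range(0, 25)
        .AsParallel()
        .Select(_ => Reduce(Map(LoadOrders())))
        .ToList();

    // Combine the partial results and run the final Reduce.
    var result = Reduce(partials);
    Print(result);
}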



 



Conclusion



Map-Reduce is just another way of doing a LINQ group by, but one that is very scalable because of a few simple rules. Once you understand that, there is no longer a reason to be intimidated by Map-Reduce, and you can start embracing its power.



 



Enjoy!
