Hadoop - Reducer class in java
Asked by Haris H on 2017-01-13T00:03:42


I am developing a Hadoop project in Java. I want to find the customers with the maximum consumption on a certain day. I have managed to find the customers for the date I want, but I am facing a problem in my Reducer class. Here is the code:

Mapper Class

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class alicanteMapperC extends
        Mapper<LongWritable, Text, Text, IntWritable> {

    String Customer = new String();
    SimpleDateFormat ft = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    Date t = new Date();
    IntWritable Consumption = new IntWritable();
    int counter = 0;

    //new vars
    int max=0;

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        Date d2 = null;
        try {
             d2 = ft.parse("2013-07-01 01:00:00");
        } catch (ParseException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }

        if (counter > 0) {

            String line = value.toString();
            StringTokenizer itr = new StringTokenizer(line, ",");

            while (itr.hasMoreTokens()) {
                Customer = itr.nextToken();
                try {
                    t = ft.parse(itr.nextToken());
                } catch (ParseException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                Consumption.set(Integer.parseInt(itr.nextToken()));
            }

            if (t.compareTo(d2) == 0) {
                context.write(new Text(Customer), Consumption);
            }
        }
        counter++;
    }
}

Reducer Class

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class alicanteReducerC extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    IntWritable maximum = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {

        int max = 0;

        for (IntWritable val : values) {
            if (val.get() > max) {
                max = val.get();
            }
        }

        for (IntWritable val : values) {
            if (val.get() == max) {
                context.write(key, val);
            }
        }
    }
}

Do you have any idea why the reducer won't write to the output file? In other words, why doesn't the second for loop work?
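To illustrate the symptom outside Hadoop: as far as I can tell, the values Iterable passed to reduce() is single-pass (it streams records and reuses the same Writable instance), so a second for-each over it sees nothing. A plain-Java sketch of the same effect (the class and method names here are mine, for illustration only):

```java
import java.util.Arrays;
import java.util.Iterator;

public class SinglePassDemo {

    // First pass: consume the iterator to find the maximum value.
    static int firstPassMax(Iterator<Integer> it) {
        int max = 0;
        while (it.hasNext()) {
            int v = it.next();
            if (v > max) {
                max = v;
            }
        }
        return max;
    }

    // Second pass over the SAME iterator: counts whatever is left.
    static int secondPassCount(Iterator<Integer> it) {
        int n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        Iterator<Integer> it = Arrays.asList(5, 9, 3).iterator();
        System.out.println(firstPassMax(it));    // finds 9, exhausting the iterator
        System.out.println(secondPassCount(it)); // 0 -- nothing left to iterate
    }
}
```

The second loop in my reducer behaves like the second pass here: the iterator is already exhausted, so the loop body never runs.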

EDIT: In my mapper class I find the customers for a specific date along with their consumption, and I pass these values to the reducer class.

In the reducer class I want to find the maximum consumption and the customer associated with it.
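A plain-Java sketch of the pattern I am aiming for (hypothetical names, not the real Hadoop types): cache the values during the first pass while tracking the max, then emit from the cache. In actual Hadoop code the cached IntWritable objects would also have to be copied, since the framework reuses the same instance across iterations.

```java
import java.util.ArrayList;
import java.util.List;

public class CachedReduceSketch {

    // Cache every value while scanning for the maximum, then
    // select from the cache instead of re-iterating the source.
    // In Hadoop this would be: cache.add(new IntWritable(val.get()))
    static List<Integer> valuesEqualToMax(Iterable<Integer> values) {
        List<Integer> cache = new ArrayList<>();
        int max = 0;
        for (int v : values) {
            cache.add(v);
            if (v > max) {
                max = v;
            }
        }
        List<Integer> out = new ArrayList<>();
        for (int v : cache) {
            if (v == max) {
                out.add(v);
            }
        }
        return out;
    }
}
```

For example, valuesEqualToMax on the list 3, 7, 7, 2 returns both 7s, because the cache survives the first pass.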

Copyright notice: content by 「Haris H」, reproduced under the CC 4.0 BY-SA license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/41617770/hadoop-reducer-class-in-java
