A Brief Analysis of a MapReduce Example (2)

Published: 2015-07-10 · Source: uml.org.cn

  (3) Running the MapReduce job

  In MapReduce, a Job object manages and runs a computation, and the job's parameters are configured through the Job API. Here the job is set up to use TokenizerMapper for the Map phase and IntSumReducer for both the Combine and Reduce phases. The output types of the Map and Reduce phases are also declared: the key type is Text and the value type is IntWritable. The job's input and output paths are taken from the command-line arguments and registered via FileInputFormat and FileOutputFormat, respectively. Once the parameters are set, calling job.waitForCompletion() runs the job. The main function is implemented as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class wordCount {
  // TokenizerMapper and IntSumReducer are the classes defined earlier in this series.

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Split generic Hadoop options from the job's own arguments.
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(wordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // combine locally on each map task
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    // Block until the job finishes; the exit code reflects success or failure.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

  Running the job produces output like the following:

  14/12/17 05:53:26 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

  14/12/17 05:53:26 INFO input.FileInputFormat: Total input paths to process : 2

  14/12/17 05:53:26 INFO mapred.JobClient: Running job: job_local_0001

  14/12/17 05:53:26 INFO input.FileInputFormat: Total input paths to process : 2

  14/12/17 05:53:26 INFO mapred.MapTask: io.sort.mb = 100

  14/12/17 05:53:27 INFO mapred.MapTask: data buffer = 79691776/99614720

  14/12/17 05:53:27 INFO mapred.MapTask: record buffer = 262144/327680

  key = 0

  value = Hello World

  key = 12

  value = Bye World

  14/12/17 05:53:27 INFO mapred.MapTask: Starting flush of map output

  14/12/17 05:53:27 INFO mapred.MapTask: Finished spill 0

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting

  14/12/17 05:53:27 INFO mapred.LocalJobRunner:

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.

  14/12/17 05:53:27 INFO mapred.MapTask: io.sort.mb = 100

  14/12/17 05:53:27 INFO mapred.MapTask: data buffer = 79691776/99614720

  14/12/17 05:53:27 INFO mapred.MapTask: record buffer = 262144/327680

  14/12/17 05:53:27 INFO mapred.MapTask: Starting flush of map output

  key = 0

  value = Hello Hadoop

  key = 13

  value = Bye Hadoop

  14/12/17 05:53:27 INFO mapred.MapTask: Finished spill 0

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting

  14/12/17 05:53:27 INFO mapred.LocalJobRunner:

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.

  14/12/17 05:53:27 INFO mapred.LocalJobRunner:

  14/12/17 05:53:27 INFO mapred.Merger: Merging 2 sorted segments

  14/12/17 05:53:27 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 73 bytes

  14/12/17 05:53:27 INFO mapred.LocalJobRunner:

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting

  14/12/17 05:53:27 INFO mapred.LocalJobRunner:

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now

  14/12/17 05:53:27 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to out

  14/12/17 05:53:27 INFO mapred.LocalJobRunner: reduce > reduce

  14/12/17 05:53:27 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.

  14/12/17 05:53:27 INFO mapred.JobClient: map 100% reduce 100%

  14/12/17 05:53:27 INFO mapred.JobClient: Job complete: job_local_0001

  14/12/17 05:53:27 INFO mapred.JobClient: Counters: 14

  14/12/17 05:53:27 INFO mapred.JobClient: FileSystemCounters
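  The key/value pairs printed in the log reveal the two input splits: "Hello World" / "Bye World" and "Hello Hadoop" / "Bye Hadoop". A minimal plain-Java sketch (not from the article, and with no Hadoop dependency) of the map, combine, and reduce steps over those lines reproduces the counts the job writes to the output directory:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSim {
    // Map + reduce in miniature: emit (word, 1) per token, then sum per word.
    static TreeMap<String, Integer> countWords(List<String> lines) {
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            // Map step: tokenize on whitespace, as TokenizerMapper does.
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                // Combine/reduce step: accumulate the per-word sum.
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // The two splits visible in the log output above.
        List<String> lines = Arrays.asList(
            "Hello World", "Bye World", "Hello Hadoop", "Bye Hadoop");
        // TreeMap keeps keys sorted, mirroring the sorted input the reducer sees.
        countWords(lines).forEach((w, c) -> System.out.println(w + "\t" + c));
        // Each of Bye, Hadoop, Hello, World appears twice across the two files.
    }
}
```

  The TreeMap stands in for the shuffle's sort-by-key; in the real job that ordering is why the merged reducer input in the log ("Merging 2 sorted segments") arrives grouped per word.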

Original article: http://www.uml.org.cn/sjjm/201501201.asp