Hadoop error: ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

濑那

My program is as follows:

    // outer class restored for completeness; main() below passes a TopOS instance to ToolRunner
    public class TopOS extends Configured implements Tool {

        public static class MapClass extends Mapper<Text, Text, Text, LongWritable> {

            public void map(Text key, Text value, Context context) throws IOException, InterruptedException {
                // your map code goes here
                String[] fields = value.toString().split(",");

                for(String str : fields) {
                    context.write(new Text(str), new LongWritable(1L));
                }
            }
        }

        public int run(String args[]) throws Exception {
            Job job = new Job();
            job.setJarByClass(TopOS.class);

            job.setMapperClass(MapClass.class);

            FileInputFormat.setInputPaths(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            job.setJobName("TopOS");
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(LongWritable.class);
            job.setNumReduceTasks(0);
            boolean success = job.waitForCompletion(true);
            return success ? 0 : 1;
        }

        public static void main(String args[]) throws Exception {
            int ret = ToolRunner.run(new TopOS(), args);
            System.exit(ret);
        }
    }

My data looks like this:

123456,Windows,6.1,6394829384232,343534353,23432,23434343,12322
123456,OSX,10,6394829384232,23354353,23432,23434343,63635
123456,Windows,6.0,5396459384232,343534353,23432,23434343,23635
123456,Windows,6.0,6393459384232,343534353,23432,23434343,33635

Why do I get the following error, and how can I fix it?

Hadoop : java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

As I see it, there is only one small mistake in your code.

When you use a plain text file as input, the key class is fixed: it is LongWritable, the byte offset of each line (which you do not need/use here), and the value class is Text, the line itself.

Setting the key class in your Mapper to Object, to emphasize that you do not use this key, gets rid of the error.
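To make the type contract concrete, here is a minimal illustrative sketch (the class names are hypothetical) of which Mapper signatures line up with TextInputFormat:

    // TextInputFormat delivers (LongWritable byteOffset, Text line) pairs, so the
    // Mapper's first two type parameters must be able to receive those types:
    class OffsetKeyMapper  extends Mapper<LongWritable, Text, Text, LongWritable> { /* ... */ }
    class IgnoredKeyMapper extends Mapper<Object, Text, Text, LongWritable>       { /* ... */ }
    // Mapper<Text, Text, ...> is NOT compatible: at runtime the framework still
    // passes a LongWritable key, which is exactly the ClassCastException you see.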

Here is your code with that small modification:

package org.woopi.stackoverflow.q22853574;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.fs.Path;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class MapReduceJob {

    public static class MapClass extends Mapper<Object, Text, Text, LongWritable> {

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // your map code goes here
            String[] fields = value.toString().split(",");

            for(String str : fields) {
                context.write(new Text(str), new LongWritable(1L));
            }
        }
    }

    public int run(String args[]) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(MapReduceJob.class);

        job.setMapperClass(MapClass.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setJobName("MapReduceJob");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        job.setNumReduceTasks(0);
        job.setInputFormatClass(TextInputFormat.class);
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String args[]) throws Exception {
        MapReduceJob j = new MapReduceJob();
        int ret = j.run(args);
        System.exit(ret);
    }
}
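As an aside: if you really wanted Text keys, as your original Mapper<Text, Text, Text, LongWritable> suggests, you could keep that signature and switch the input format instead of changing the Mapper. A rough sketch, assuming you want each line split at the first comma (the separator property shown is the Hadoop 2 name):

    // Alternative sketch: KeyValueTextInputFormat produces (Text, Text) pairs by
    // splitting each line at the first separator character (a tab by default).
    job.getConfiguration().set(
            "mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
    job.setInputFormatClass(KeyValueTextInputFormat.class);
    // The first sample line would then reach the Mapper as
    // key = "123456", value = "Windows,6.1,6394829384232,343534353,23432,23434343,12322".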

I hope this helps.

Martin
