Problem Description
I am trying to tweak an existing problem to suit my needs. Basically, the input is simple text; I process it in the mapper and pass key/value pairs to the reducer, where I build a JSON object. So there is a key but no value. The types are:
Mapper input: Text/Text
Mapper output: Text/Text
Reducer input: Text/Text
Reducer output: Text/null
My signatures are as follows:
public class AdvanceCounter {

    /**
     * The map class of WordCount.
     */
    public static class TokenCounterMapper
            extends Mapper<Object, Text, Text, Text> { // <--- See this signature

        public void map(Object key, Text value, Context context) // <--- See this signature
                throws IOException, InterruptedException {
            context.write(key, value); // both are of type Text; OUTPUT TO REDUCER
        }
    }

    public static class TokenCounterReducer
            extends Reducer<Text, Text, Text, NullWritable> { // <--- See this signature, NullWritable here

        public void reduce(Text key, Iterable<Text> values, Context context) // <--- See this signature
                throws IOException, InterruptedException {
            for (Text value : values) {
                JSONObject jsn = new JSONObject();
                //String output = "";
                String[] vals = value.toString().split(" ");
                String[] targetNodes = vals[0].toString().split(",", -1);
                try {
                    jsn.put("source", vals[1]);
                    jsn.put("targets", targetNodes);
                    context.write(new Text(jsn.toString()), null); // no value
                } catch (JSONException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        Job job = new Job(conf, "Example Hadoop 0.20.1 WordCount");
        // ...
        //
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
But on execution I am getting this error:
13/06/04 13:08:26 INFO mapred.JobClient: Task Id : attempt_201305241622_0053_m_000008_0, Status : FAILED
java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.io.NullWritable, recieved org.apache.hadoop.io.Text
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.sogou.Stinger$TokenCounterMapper.map(Stinger.java:72)
at org.sogou.Stinger$TokenCounterMapper.map(Stinger.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Recommended Answer
You haven't specified your map output types, so Hadoop assumes they are the same as the ones you set for your reducer, namely Text and NullWritable, which is incorrect for your mapper. To avoid any confusion, it's better to specify all the types explicitly for both the mapper and the reducer:
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
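For reference, here is a minimal, self-contained driver sketch showing where those four calls fit. The class name AdvanceCounterDriver and the job name are made up for illustration, and the setJarByClass/setMapperClass/setReducerClass lines are assumptions about the part of main() the question elided with // ...; the mapper and reducer referenced are the TokenCounterMapper and TokenCounterReducer defined above.

// Hypothetical driver sketch: wires the question's mapper/reducer into a job
// and sets the intermediate (map output) types separately from the final
// (reduce output) types, which is what removes the type mismatch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class AdvanceCounterDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        Job job = new Job(conf, "text to json"); // same constructor style as the question
        job.setJarByClass(AdvanceCounterDriver.class);          // assumed, elided in the question
        job.setMapperClass(AdvanceCounter.TokenCounterMapper.class);   // assumed, elided in the question
        job.setReducerClass(AdvanceCounter.TokenCounterReducer.class); // assumed, elided in the question

        // Intermediate (map output) types: both Text, matching Mapper<Object, Text, Text, Text>
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        // Final (reduce output) types: Text key and no value, matching Reducer<Text, Text, Text, NullWritable>
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Keeping the map output types separate from the final output types is what lets the mapper emit Text values while the reducer emits NullWritable.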