
1. Hadoop not able to run program
Hi, this is my first Hadoop program. I have modified the WordCount example, but I am not able to execute it. I changed the map output and reducer input types to `<Text, Text>`. The input file contains records such as `email gender age 21`. Execution hangs, showing `Map 100% Reduce 100%`.

```java
// MAPPER
public class WordMapper extends MapReduceBase implements
        Mapper<LongWritable, Text, Text, Text> {

    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        String s = value.toString();
        String s1;
        Matcher m2;
        FileSplit filesplit = (FileSplit) reporter.getInputSplit();
        String fileName = filesplit.getPath().getName();
        Pattern p1 = Pattern.compile("\\s+email\\s+gender\\s+age\\s+(\\S+)$");
        m2 = p1.matcher(s);
        if (m2.find()) {
            s1 = m2.replaceFirst("omitted");
            output.collect(new Text(s1), new Text(fileName));
        }
    }
}

// REDUCER
public class SumReducer extends MapReduceBase implements
        Reducer<Text, Text, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int cliCount = 0;
        while (values.hasNext()) {
            cliCount += 1;
        }
        output.collect(key, new IntWritable(cliCount));
    }
}

// MAIN
public class WordCount extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.printf(
                    "Usage: %s [generic options] <input dir> <output dir>\n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.out);
            return -1;
        }
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName(this.getClass().getName());
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        conf.setMapperClass(WordMapper.class);
        conf.setReducerClass(SumReducer.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(Text.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new WordCount(), args);
        System.exit(exitCode);
    }
}
```

Update: only a `.xml` file is present inside the `_log` folder.

I left the job running and Hadoop eventually killed it:

```
13/06/19 15:02:47 INFO mapred.JobClient:     Total committed heap usage (bytes)=258875392
13/06/19 15:02:47 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
13/06/19 15:02:47 INFO mapred.JobClient:     BYTES_READ=26
13/06/19 15:02:47 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1322)
	at WordCount.run(WordCount.java:41)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at WordCount.main(WordCount.java:46)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
```
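The symptom (stuck at `Map 100% Reduce 100%` until the framework kills the task) is consistent with an infinite loop in the reducer: the `while (values.hasNext())` loop never calls `values.next()`, so the iterator never advances and `hasNext()` stays true forever. A minimal plain-Java sketch of the counting pattern, with no Hadoop dependencies (the class and method names here are illustrative, not part of the original program):

```java
import java.util.Arrays;
import java.util.Iterator;

public class IteratorCount {

    // Broken pattern, as in the reducer above: hasNext() only peeks,
    // it never consumes an element, so this loop never terminates
    // once the iterator is non-empty.
    //
    //     while (values.hasNext()) { count += 1; }   // spins forever
    //
    // Fixed pattern: next() consumes one element per iteration,
    // so hasNext() eventually returns false.
    static int count(Iterator<String> values) {
        int count = 0;
        while (values.hasNext()) {
            values.next();   // advance the iterator
            count += 1;
        }
        return count;
    }

    public static void main(String[] args) {
        Iterator<String> values = Arrays.asList("a", "b", "c").iterator();
        System.out.println(count(values));   // prints 3
    }
}
```

Applied to the reducer, the same one-line change (calling `values.next()` inside the loop body) should let each `reduce` call finish and the job complete.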