Compilation error: Java Hadoop program
<p>I wrote this Java Hadoop program, which performs parallel indexation of files. The file was created in Eclipse.</p>

<pre><code>package org.myorg;

import java.io.*;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class ParallelIndexation {
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable zero = new IntWritable(0);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            int CountComputers;
            //DataInputStream ConfigFile = new DataInputStream(new FileInputStream("countcomputers.txt"));
            FileInputStream fstream = new FileInputStream("/usr/countcomputers.txt"); // path to the file
            DataInputStream in = new DataInputStream(fstream);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String result = br.readLine(); // read it as a string
            CountComputers = Integer.parseInt(result); // convert the string to a number
            //CountComputers=ConfigFile.readInt();
            in.close();
            fstream.close();
            ArrayList<String> paths = new ArrayList<String>();
            StringTokenizer tokenizer = new StringTokenizer(line, "\n");
            while (tokenizer.hasMoreTokens()) {
                paths.add(tokenizer.nextToken());
            }
            String[] ConcatPaths = new String[CountComputers];
            int NumberOfElementConcatPaths = 0;
            if (paths.size() % CountComputers == 0) {
                for (int i = 0; i < CountComputers; i++) {
                    ConcatPaths[i] = paths.get(NumberOfElementConcatPaths);
                    NumberOfElementConcatPaths += paths.size() / CountComputers;
                    for (int j = 1; j < paths.size() / CountComputers; j++) {
                        ConcatPaths[i] += "\n" + paths.get(i * paths.size() / CountComputers + j);
                    }
                }
            } else {
                NumberOfElementConcatPaths = 0;
                for (int i = 0; i < paths.size() % CountComputers; i++) {
                    ConcatPaths[i] = paths.get(NumberOfElementConcatPaths);
                    NumberOfElementConcatPaths += paths.size() / CountComputers + 1;
                    for (int j = 1; j < paths.size() / CountComputers + 1; j++) {
                        ConcatPaths[i] += "\n" + paths.get(i * (paths.size() / CountComputers + 1) + j);
                    }
                }
                for (int k = paths.size() % CountComputers; k < CountComputers; k++) {
                    ConcatPaths[k] = paths.get(NumberOfElementConcatPaths);
                    NumberOfElementConcatPaths += paths.size() / CountComputers;
                    for (int j = 1; j < paths.size() / CountComputers; j++) {
                        ConcatPaths[k] += "\n" + paths.get(
                                (k - paths.size() % CountComputers) * paths.size() / CountComputers
                                + paths.size() % CountComputers * (paths.size() / CountComputers + 1) + j);
                    }
                }
            }
            //CountComputers=ConfigFile.readInt();
            for (int i = 0; i < ConcatPaths.length; i++) {
                word.set(ConcatPaths[i]);
                output.collect(word, zero);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, LongWritable> {
        public native long Traveser(String Path);
        public native void Configure(String Path);

        public void reduce(Text key, IntWritable value,
                OutputCollector<Text, LongWritable> output, Reporter reporter)
                throws IOException {
            long count;
            String line = key.toString();
            ArrayList<String> ProcessedPaths = new ArrayList<String>();
            StringTokenizer tokenizer = new StringTokenizer(line, "\n");
            while (tokenizer.hasMoreTokens()) {
                ProcessedPaths.add(tokenizer.nextToken());
            }
            Configure("/etc/nsindexer.conf");
            for (int i = 0; i < ProcessedPaths.size(); i++) {
                count = Traveser(ProcessedPaths.get(i));
            }
            output.collect(key, new LongWritable(count));
        }

        static {
            System.loadLibrary("nativelib");
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(ParallelIndexation.class);
        conf.setJobName("parallelindexation");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
</code></pre>

<p>When I compiled it on the Nexenta Illumos operating system (Solaris-based) with the command</p>

<pre><code>javac -classpath /export/hadoop-1.0.1/hadoop-core-1.0.1.jar -d folder/classes folder/src/ParallelIndexation.java
</code></pre>

<p>I got the following error:</p>

<pre><code>folder/src/ParallelIndexation.java:81: error: Reduce is not abstract and does not override abstract method reduce(Text,Iterator<IntWritable>,OutputCollector<Text,LongWritable>,Reporter) in Reducer
public static class Reduce extends MapReduceBase implements
                    ^
1 error
</code></pre>
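<p>The error message itself points at the cause: the old <code>org.apache.hadoop.mapred.Reducer</code> interface declares <code>reduce(Text, Iterator&lt;IntWritable&gt;, OutputCollector&lt;Text, LongWritable&gt;, Reporter)</code>, but the method above takes a single <code>IntWritable</code> instead of an <code>Iterator&lt;IntWritable&gt;</code>, so it never overrides the abstract method and <code>Reduce</code> is left abstract. Below is a minimal sketch of a corrected <code>Reduce</code> class; the logic is unchanged, only the method signature and the initialization of <code>count</code> are touched (an uninitialized <code>count</code> would be the next compile error, since the for loop is not guaranteed to run).</p>

<pre><code>public static class Reduce extends MapReduceBase implements
        Reducer<Text, IntWritable, Text, LongWritable> {
    public native long Traveser(String Path);
    public native void Configure(String Path);

    // The old mapred API passes all values for a key as an Iterator,
    // so the second parameter must be Iterator<IntWritable>.
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        long count = 0; // initialized so javac cannot complain it "might not have been initialized"
        String line = key.toString();
        ArrayList<String> ProcessedPaths = new ArrayList<String>();
        StringTokenizer tokenizer = new StringTokenizer(line, "\n");
        while (tokenizer.hasMoreTokens()) {
            ProcessedPaths.add(tokenizer.nextToken());
        }
        Configure("/etc/nsindexer.conf");
        for (int i = 0; i < ProcessedPaths.size(); i++) {
            count = Traveser(ProcessedPaths.get(i));
        }
        output.collect(key, new LongWritable(count));
    }

    static {
        System.loadLibrary("nativelib");
    }
}
</code></pre>

<p>One related caveat: <code>conf.setCombinerClass(Reduce.class)</code> is questionable here, because a combiner runs on map output and must emit the mapper's output types (<code>Text</code>/<code>IntWritable</code>), while this reducer emits <code>Text</code>/<code>LongWritable</code>. Dropping the <code>setCombinerClass</code> line is probably the simplest fix for that.</p>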