Using the Hadoop 2.6.2 Eclipse Plugin
Source: http://www.cnblogs.com/zdfjf/p/5178197.html
Reposting is welcome; please credit the source and place a clear link to the original article on the reposted page.
Link to this article:
First, here is the download link for the Eclipse plugin: http://download.csdn.net/download/zdfjf/9421244
1. Installing the plugin
After downloading the plugin, put it in the plugins folder under the Eclipse installation directory and restart Eclipse. You will then see a new DFS Locations entry in the Project Explorer window, which corresponds to the files stored in HDFS. No directory structure is shown there yet; don't worry, it will appear once the configuration in step 2 is done.
It occurred to me that there is already an article on cnblogs that covers this configuration very well, and I don't think I could write that part any better, so rather than waste time I will simply point you to the Xiapi studio post: http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html . Following it will get the plugin configured. What I want to cover below are the problems that can still keep the program from running successfully after the configuration is finished. After a good deal of debugging, here are the code and the corresponding settings that worked for me.
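As a quick sanity check after that configuration, it can help to confirm from plain Java code that the NameNode address the plugin's DFS Master field points to is actually reachable. This is only a sketch of mine, not part of the original post; the hdfs://192.168.0.1:9000 address is an assumption, so substitute the fs.defaultFS value from your own core-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; use the value from your cluster's core-site.xml.
        conf.set("fs.defaultFS", "hdfs://192.168.0.1:9000");
        FileSystem fs = FileSystem.get(conf);
        // List the HDFS root; this is the same view DFS Locations shows in Eclipse.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}

If this prints the HDFS root directories, the connection settings are fine and any remaining failures are on the job-submission side.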
2. The code
1 /**
2  * Licensed to the Apache Software Foundation (ASF) under one
3  * or more contributor license agreements.  See the NOTICE file
4  * distributed with this work for additional information
5  * regarding copyright ownership.  The ASF licenses this file
6  * to you under the Apache License, Version 2.0 (the
7  * "License"); you may not use this file except in compliance
8  * with the License.  You may obtain a copy of the License at
9  *
10  *     http://www.apache.org/licenses/LICENSE-2.0
11  *
12  * Unless required by applicable law or agreed to in writing, software
13  * distributed under the License is distributed on an "AS IS" BASIS,
14  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15  * See the License for the specific language governing permissions and
16  * limitations under the License.
17  */
18 package org.apache.hadoop.examples;
19
20 import java.io.IOException;
21 import java.util.StringTokenizer;
22
23 import org.apache.hadoop.conf.Configuration;
24 import org.apache.hadoop.fs.Path;
25 import org.apache.hadoop.io.IntWritable;
26 import org.apache.hadoop.io.Text;
27 import org.apache.hadoop.mapreduce.Job;
28 import org.apache.hadoop.mapreduce.Mapper;
29 import org.apache.hadoop.mapreduce.Reducer;
30 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
31 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
32 import org.apache.hadoop.util.GenericOptionsParser;
33
34 public class WordCount {
35
36   public static class TokenizerMapper
37        extends Mapper<Object, Text, Text, IntWritable>{
38
39     private final static IntWritable one = new IntWritable(1);
40     private Text word = new Text();
41
42     public void map(Object key, Text value, Context context
43                     ) throws IOException, InterruptedException {
44       StringTokenizer itr = new StringTokenizer(value.toString());
45       while (itr.hasMoreTokens()) {
46         word.set(itr.nextToken());
47         context.write(word, one);
48       }
49     }
50   }
51
52   public static class IntSumReducer
53        extends Reducer<Text,IntWritable,Text,IntWritable> {
54     private IntWritable result = new IntWritable();
55
56     public void reduce(Text key, Iterable<IntWritable> values,
57                        Context context
58                        ) throws IOException, InterruptedException {
59       int sum = 0;
60       for (IntWritable val : values) {
61         sum += val.get();
62       }
63       result.set(sum);
64       context.write(key, result);
65     }
66   }
67
68   public static void main(String[] args) throws Exception {
69     System.setProperty("HADOOP_USER_NAME", "hadoop");
70     Configuration conf = new Configuration();
71     conf.set("mapreduce.framework.name", "yarn");
72     conf.set("yarn.resourcemanager.address", "192.168.0.1:8032");
73     conf.set("mapreduce.app-submission.cross-platform", "true");
74     String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
75     if (otherArgs.length < 2) {
76       System.err.println("Usage: wordcount <in> [<in>...] <out>");
77       System.exit(2);
78     }
79     Job job = new Job(conf, "word count1");
80     job.setJarByClass(WordCount.class);
81     job.setMapperClass(TokenizerMapper.class);
82     job.setCombinerClass(IntSumReducer.class);
83     job.setReducerClass(IntSumReducer.class);
84     job.setOutputKeyClass(Text.class);
85     job.setOutputValueClass(IntWritable.class);
86     for (int i = 0; i < otherArgs.length - 1; ++i) {
87       FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
88     }
89     FileOutputFormat.setOutputPath(job,
90       new Path(otherArgs[otherArgs.length - 1]));
91     System.exit(job.waitForCompletion(true) ? 0 : 1);
92   }
93 }
About line 69: my username on Windows is frank, while the username on the cluster is hadoop, so this line sets HADOOP_USER_NAME to hadoop. Lines 71 and 72 are there because the cluster configuration files were not taking effect; without these two lines the job runs in local mode instead of being submitted to the cluster. Line 73 is needed because the submission is cross-platform (Windows -> Linux).
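If you are not sure whether these settings actually took effect (that is, whether the job really goes to YARN rather than running locally inside Eclipse), a simple check is to print the effective values back out of the Configuration. This snippet is just a diagnostic sketch of mine, not part of the original code; it goes right after the conf.set(...) calls in main():

// Diagnostic only: if mapreduce.framework.name prints as "local", the job
// would run inside Eclipse instead of being submitted to the cluster.
System.out.println("mapreduce.framework.name = " + conf.get("mapreduce.framework.name"));
System.out.println("yarn.resourcemanager.address = " + conf.get("yarn.resourcemanager.address"));
System.out.println("cross-platform = " + conf.get("mapreduce.app-submission.cross-platform"));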
Then comes the most important step. Pay attention, pay attention, pay attention; important things get said three times.
The plugin is supposed to package the project into a jar automatically, upload it, and run it, but this is broken and the automatic packaging no longer happens. So we have to export the project as a jar ourselves, add that jar to the build path as an external dependency of the project, and then right-click and choose Run As -> Run on Hadoop. The job will then run successfully.
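As an aside, another way people often work around the broken auto-packaging is to point the job configuration at the exported jar directly in code instead of using the build-path trick. This is an alternative of my own, not the method described above, and the jar path below is only a placeholder:

// Alternative workaround (an assumption, not the original post's method):
// export the project as a jar first, then tell the MapReduce client where
// that jar is so it can ship it to the cluster by itself.
// Add this in main() before the Job is created:
conf.set("mapreduce.job.jar", "C:/workspace/WordCount/wordcount.jar"); // placeholder path

Either way, the point is the same: the cluster needs a jar containing the TokenizerMapper and IntSumReducer classes, otherwise the tasks typically fail with ClassNotFoundException.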
PS: This is just the approach that worked for me. The problems you run into during configuration come in all shapes, and their causes differ too. So: search more, think more, and work the problems out.