All About Hadoop (Part 2) --- Setting Up a MapReduce Development Environment
The previous article covered installing a pseudo-distributed Hadoop environment on Ubuntu; this one walks through setting up a MapReduce development environment.
1. HDFS pseudo-distributed configuration
When using MapReduce, a little extra configuration is needed if you want to connect to HDFS and work with the files stored in it.
First, change into the Hadoop installation directory:
cd /usr/local/hadoop/hadoop2
Create a user directory in HDFS:
./bin/hdfs dfs -mkdir -p /user/hadoop
Create an input directory and copy the XML files under ./etc/hadoop into the distributed file system:
./bin/hdfs dfs -mkdir input
./bin/hdfs dfs -put ./etc/hadoop/*.xml input
Once the copy finishes, you can list the files with:
./bin/hdfs dfs -ls input
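To spot-check that the files really landed in HDFS, you can also print one of them (core-site.xml is one of the XML files copied above):
./bin/hdfs dfs -cat input/core-site.xml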
2. Setting up the development environment
2.1 Increase the virtual machine's memory to 2 GB or more
2.2 Download the Linux version of Eclipse
Download link: http://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/neon2
The file I downloaded is eclipse-jee-neon-2-linux-gtk-x86_64.tar.gz
2.3 Give the hadoop user permissions on the /opt directory
sudo chown hadoop /opt
sudo chmod -R 777 /opt
2.4 Copy the downloaded archive into /opt
2.5 Extract it (the extracted directory is named eclipse)
cd /opt
sudo tar -zxf eclipse-jee-neon-2-linux-gtk-x86_64.tar.gz
2.6 Download the Hadoop plugin for Eclipse (hadoop-eclipse-plugin-2.6.0.jar)
2.7 Give the hadoop user permissions on the eclipse directory
sudo chown hadoop /opt/eclipse
sudo chmod -R 777 /opt/eclipse
Then copy hadoop-eclipse-plugin-2.6.0.jar into Eclipse's plugins directory.
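For example, assuming the jar was saved to the hadoop user's Downloads directory (adjust the source path to wherever you downloaded it):
sudo cp ~/Downloads/hadoop-eclipse-plugin-2.6.0.jar /opt/eclipse/plugins/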
2.8 Launch Eclipse from the command line
cd /usr/local/bin
sudo ln -s /opt/eclipse/eclipse
After this is set up, typing eclipse on the command line will launch it:
eclipse
Note: be sure to choose a workspace in a directory you have write permission on, for example mine is /home/hadoop/workspace.
2.9 After Eclipse starts, Window - Show View - Other will contain a MapReduce Tools category
2.10 Open the Map/Reduce view and configure the file system connection; the settings must match core-site.xml under /usr/local/hadoop/hadoop2/etc/hadoop/
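For reference, a minimal pseudo-distributed core-site.xml from the previous article's setup contains roughly the following (a sketch; the key point is that the DFS Master port in the Eclipse dialog must match the port in fs.defaultFS, 9000 here):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>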
Once configured, the DFS tree in the left panel will show the HDFS directory structure (screenshot omitted).
2.11 Make sure all the Hadoop daemons are running
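A quick way to verify this is the jps command; after start-dfs.sh its output should include NameNode, DataNode and SecondaryNameNode (ResourceManager and NodeManager appear as well if you started YARN):
jps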
2.12 In Window - Preferences - Hadoop Map/Reduce, select the Hadoop installation directory, in my case /usr/local/hadoop/hadoop2. Once this is configured, newly created Hadoop projects will automatically import the jars they need.
2.13 File - New - Project, choose Map/Reduce Project to create a new Map/Reduce project, then create a package under src and add a WordCount test class:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1) per token.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Important: point the job at the job tracker address.
        // (mapred.job.tracker is the old Hadoop 1.x property name; Hadoop 2
        // still accepts it through its deprecated-key mapping.)
        conf.set("mapred.job.tracker", "localhost:9001");
        // Hard-coded HDFS input/output paths; the output directory must not exist yet.
        args = new String[]{"hdfs://localhost:9000/user/hadoop/input/count_in",
                "hdfs://localhost:9000/user/hadoop/output/count_out"};
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
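If you prefer to run outside Eclipse, you can also export the project as a jar and submit it with the hadoop command. A sketch, where the jar path and name are hypothetical (use the fully qualified class name if WordCount lives in a package):
cd /usr/local/hadoop/hadoop2
./bin/hadoop jar /home/hadoop/wordcount.jar WordCount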
Then copy the log4j.properties file from /usr/local/hadoop/hadoop2/etc/hadoop into the project's src directory (otherwise no log output will appear in the console).
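For example, assuming the workspace from above and a project named WordCountDemo (the project name is an assumption; substitute your own):
cp /usr/local/hadoop/hadoop2/etc/hadoop/log4j.properties /home/hadoop/workspace/WordCountDemo/src/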
2.14 Right-click the input folder and create a subfolder named count_in. On the desktop, create two files, word1.txt and word2.txt, and write some strings into them, for example:
aaaa
bbbb
cccc
aaaa
Then right-click the count_in folder, choose Upload file to DFS, select word1.txt and word2.txt, and import them into the DFS file system.
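The same upload can be done from the command line with hdfs dfs (assuming the two files sit on the hadoop user's desktop; adjust the paths if not):
cd /usr/local/hadoop/hadoop2
./bin/hdfs dfs -mkdir -p input/count_in
./bin/hdfs dfs -put ~/Desktop/word1.txt ~/Desktop/word2.txt input/count_in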
2.15 Right-click the code and choose Run As - Run on Hadoop to run the program. When it finishes, right-click the folder in DFS and choose Refresh; the output files will be generated.
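You can also print the result from the command line; part-r-00000 is the standard name of the first reducer's output file. If, say, both word1.txt and word2.txt contained exactly the four lines above, the output would be aaaa 4, bbbb 2, cccc 2:
./bin/hdfs dfs -cat output/count_out/part-r-00000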
If the output is correct, your MapReduce development environment is ready.