HBase Cluster Setup, Part 2 (Hadoop Setup)
Server preparation: add the following entries to /etc/hosts
10.110.110.10 master
10.110.110.11 slave1
10.110.110.12 slave2
Operating user: hbase
1. Download the source: fetch the mesos/hadoop source and extract it into a directory named hadoop
2. Enter the hadoop directory and edit pom.xml to confirm the Mesos version:
<mesos.version>1.0.0</mesos.version>
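This value should match the Mesos version actually installed on the cluster. Assuming the mesos-master binary is on the PATH, a quick way to check is:
# Prints the installed Mesos version; it should agree with <mesos.version> in pom.xml
mesos-master --version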
3. Install the JDK (download link; install as the root user)
[root@centos66-2 hadoop]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
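If a node does not yet have the JDK, a typical install as root might look like the following; the RPM file name is an assumption, so use the package actually downloaded:
# Hypothetical package name for JDK 8u101; adjust to the downloaded file
rpm -ivh jdk-8u101-linux-x64.rpm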
4. Install Apache Maven (download link)
Apache Maven is a project management and build automation tool.
wget http://apache.fayea.com/maven/maven-3/3.5.0/binaries/apache-maven-3.5.0-bin.tar.gz
tar -xvf apache-maven-3.5.0-bin.tar.gz
export PATH=/home/hbase/apache-maven-3.5.0/bin:/usr/java/jdk1.8.0_101/bin:$PATH
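With the PATH exported, a quick sanity check that both Maven and the JDK are picked up:
# Should report Maven 3.5.0 and Java 1.8.0_101
mvn -version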
5. mvn package
$ cd /home/hbase/hadoop
$ mvn package
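If the build succeeds, the scheduler jar referenced in step 7 should be present under target/:
# The jar that gets copied into the Hadoop distribution in step 7
ls -l /home/hbase/hadoop/target/hadoop-mesos-0.1.0.jar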
6. Download the Hadoop distribution
wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.5.0-cdh5.2.0.tar.gz
tar -xvf hadoop-2.5.0-cdh5.2.0.tar.gz
7. Copy the built jar into the Hadoop distribution
cp /home/hbase/hadoop/target/hadoop-mesos-0.1.0.jar /home/hbase/hadoop-2.5.0/share/hadoop/common/lib/
8. Run the following commands, which switch the distribution to the MapReduce v1 (MRv1) layout:
cd hadoop-2.5.0
mv bin bin-mapreduce2
mv examples examples-mapreduce2
ln -s bin-mapreduce1 bin
ln -s examples-mapreduce1 examples
pushd etc
mv hadoop hadoop-mapreduce2
ln -s hadoop-mapreduce1 hadoop
popd
pushd share/hadoop
rm mapreduce
ln -s mapreduce1 mapreduce
popd
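Before moving on it is worth confirming that the symlinks resolve to the MRv1 directories, for example:
# Each entry should point at its *-mapreduce1 counterpart
ls -ld bin examples etc/hadoop share/hadoop/mapreduce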
9. Hadoop configuration: core-site.xml (file location: hadoop-2.5.0/etc/hadoop/)
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/working/hadoop/ljl_hadoop_data</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.110.110.10:9000</value>
  </property>
  <!-- This value should match the value in fs.defaultFS -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.110.110.10:9000</value>
  </property>
</configuration>
10. hdfs-site.xml
<configuration>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
11. mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.110.110.10:9001</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.MesosScheduler</value>
  </property>
  <property>
    <name>mapred.mesos.taskScheduler</name>
    <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
  </property>
  <property>
    <name>mapred.mesos.master</name>
    <value>10.110.110.10:5050</value>
  </property>
  <!-- This URI must match the tarball uploaded to HDFS in step 19 -->
  <property>
    <name>mapred.mesos.executor.uri</name>
    <value>hdfs://10.110.110.10:9000/hadoop-2.5.0-cdh5.2.0.tar.gz</value>
  </property>
</configuration>
12. masters
master
13. slaves
slave1
slave2
14. hadoop-env.sh: at a minimum, JAVA_HOME must be set
export JAVA_HOME=/usr/java/jdk1.8.0_101
export MESOS_JAR="/usr/share/java/mesos-0.24.0.jar"
export PROTOBUF_JAR="/home/hbase/hadoop-2.5.0/share/hadoop/mapreduce1/lib/protobuf-java-2.5.0.jar"
export MESOS_NATIVE_JAVA_LIBRARY="/usr/local/lib/libmesos.so"
export MESOS_NATIVE_LIBRARY="/usr/local/lib/libmesos.so"
export HADOOP_CLASSPATH="/usr/share/java/mesos-0.24.0.jar:$HADOOP_CLASSPATH"
export HADOOP_HEAPSIZE=2000
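A quick check that the files referenced above actually exist on this node (the jar versions must match what is installed):
# Verify the Mesos native library and jars referenced in hadoop-env.sh
ls -l /usr/local/lib/libmesos.so /usr/share/java/mesos-0.24.0.jar
ls -l /home/hbase/hadoop-2.5.0/share/hadoop/mapreduce1/lib/protobuf-java-2.5.0.jar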
15. mesos-master-env.sh (file location: /usr/etc/mesos)
export PATH=/home/hbase/hadoop-2.5.0/bin:$PATH
export MESOS_log_dir=/var/log/mesos
16. mesos-slave-env.sh
export PATH=/home/hbase/hadoop-2.5.0/bin:$PATH
export MESOS_log_dir=/var/log/mesos
17. Restart Mesos
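How Mesos is restarted depends on how it was installed; with the common init-script packaging, something like the following (run as root) would apply, adjusted to whatever service manager is in use:
# On the master node (assumes an init service named mesos-master)
service mesos-master restart
# On each slave node (assumes an init service named mesos-slave)
service mesos-slave restart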
18. Start HDFS (steps 18-20 are performed on master only)
export PATH=/home/hbase/hadoop-2.5.0/bin:$PATH
hadoop namenode -format
hadoop namenode &
hadoop datanode &
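To confirm that HDFS is up before continuing:
# NameNode and DataNode should appear in the JVM process list
jps
# The HDFS root should be listable
hadoop fs -ls /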
19. Package hadoop-2.5.0 and upload it to HDFS
cd /home/hbase
tar zcf hadoop-2.5.0-cdh5.2.0.tar.gz hadoop-2.5.0
hadoop fs -put hadoop-2.5.0-cdh5.2.0.tar.gz /hadoop-2.5.0-cdh5.2.0.tar.gz
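This uploaded tarball is what mapred.mesos.executor.uri points at (step 11), so verify it is in place:
# Should show the tarball at the HDFS root
hadoop fs -ls /hadoop-2.5.0-cdh5.2.0.tar.gz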
20. Start the JobTracker
hadoop jobtracker &
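To check that the JobTracker came up (it registers with Mesos as a framework; its web UI also listens on port 50030 by default):
# The JobTracker JVM should be listed
jps | grep JobTracker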
21. Log in to slave1 and slave2 and check whether Hadoop is running; if it is not, format and start it manually (see step 22)
$ ssh hbase@slave1    # repeat for slave2
$ ps aux | grep hadoop
22. Format and start manually
$ cd /home/hbase/hadoop-2.5.0
$ bin/hadoop namenode -format
$ bin/start-mapred.sh
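As a final smoke test you could submit one of the bundled example jobs; the path and name of the examples jar below are assumptions based on the MRv1 layout from step 8, so adjust them to what the distribution actually contains:
# Hypothetical examples jar path; run from /home/hbase/hadoop-2.5.0
bin/hadoop jar examples/hadoop-examples-*.jar pi 2 10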