Installing hadoop-2.8.0
Parts of this post are taken from other blog articles.
0x01 Versions  0x02 HDFS installation  0x03 HBase installation  0x04 Hive installation  0x05 Spark installation  0x06 Startup errors  0x07 References

0x01 Versions

hadoop 2.8, hbase-1.2.5

0x02 HDFS installation

1. Initialization: create the directory and user, and set up the JDK environment. Ansible tasks:

- name: pro
  file: path=/home/hadoop state=directory
- name: add user
  action: user name=hadoop update_password=always shell=/bin/bash
- name: chpasswd
  shell: echo "xx" | passwd --stdin hadoop
- name: chown
  shell: chown -R hadoop.hadoop /home/hadoop
- name: copy profile
  copy: src=/opt/src/hprofile dest=/etc/profile force=yes owner=root group=root mode=0644
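These tasks only run inside a play, so for completeness here is a minimal sketch of driving and spot-checking them from the control machine; the playbook and inventory names (init.yml, hosts.ini) are my own placeholders, not from the original:

# Run the init tasks above (saved in a playbook, e.g. init.yml, under a
# play that targets the hadoop machines in your inventory)
ansible-playbook -i hosts.ini init.yml

# Spot-check one node: the hadoop user and home directory should now exist
ssh root@d17 'id hadoop && ls -ld /home/hadoop'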
2. Passwordless SSH. Push the public key to each remote host, enable key authentication in sshd, and record the hostname in the local /etc/hosts:

#!/bin/sh
read -p "Enter remote server IP: " ip
ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub root@$ip
ssh root@$ip 'sed -i "s/^#RSAAuthentication yes/RSAAuthentication yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'sed -i "s/^#PubkeyAuthentication yes/PubkeyAuthentication yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'sed -i "s/^#PermitRootLogin yes/PermitRootLogin yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'service sshd restart'
hostname=`ssh root@${ip} 'hostname'`
echo "Adding the hostname and IP to the local /etc/hosts file"
echo "$ip $hostname" >> /etc/hosts
echo "Remote hostname is $hostname; check /etc/hosts to make sure the hostname and IP were added to the host list"
echo "Public key copied"

2.3 Read the host list, then copy /etc/hosts to all hosts:

#!/bin/sh
cat /etc/hosts | while read LINE
do
    ip=`echo $LINE | awk '{print $1}' | grep -v "::" | grep -v "127.0.0.1"`
    [ -z "$ip" ] && continue    # skip the localhost and IPv6 entries filtered out above
    echo "Copying /etc/hosts to ${ip}"
    scp -o StrictHostKeyChecking=no /etc/hosts root@${ip}:/etc/
done

Or use my own exp.sh ip.

3. Configuration changes.

NameNode HA configuration, vim hdfs-site.xml (note: this property actually sets the SecondaryNameNode web address):

<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>d17:50090</value>
</property>

Test HA:

$ sbin/hadoop-daemon.sh stop namenode

Check the namenode on CentOS7-2 again: it has automatically switched to active.

vim slaves:

d17
d18

See my earlier post on installing hadoop-2.3.0-cdh5.1.2: http://szgb17.blog.51cto.com/340201/1691814

4. Initialize and start HDFS.

hadoop namenode -format    # initialization, run only once

Old-style start commands:

hadoop-daemon.sh start namenode
hadoop-daemons.sh start datanode
yarn-daemon.sh start resourcemanager
yarn-daemons.sh start nodemanager

New style:

start-dfs.sh

Startup prints a warning saying the native library should be recompiled for this platform; too much trouble, so I skipped it:

17/05/15 17:10:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

start-yarn.sh
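With HDFS and YARN up, a quick smoke test confirms the filesystem actually accepts writes; a minimal sketch run as the hadoop user, assuming $HADOOP_HOME/bin is on PATH as in the /etc/profile shown below (the /smoke paths are just examples):

# List the live datanodes; d17 and d18 should both appear
hdfs dfsadmin -report

# Round-trip a small file through HDFS
echo "hello hdfs" > /tmp/smoke.txt
hdfs dfs -mkdir -p /smoke
hdfs dfs -put /tmp/smoke.txt /smoke/
hdfs dfs -cat /smoke/smoke.txt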
5. Verify the services after startup.

jps output for the old version, found online:

[shirdrn@localhost bin]$ jps
8192 TaskTracker
7905 DataNode
7806 NameNode
8065 JobTracker
8002 SecondaryNameNode
8234 Jps

New version:

[hadoop@n16 conf]$ jps
9088 Jps
472 NameNode
2235 ResourceManager
1308 QuorumPeerMain
1901 HMaster

0x03 HBase installation

Start order: hadoop --> zookeeper --> hbase
Stop order: hbase --> zookeeper --> hadoop

1. Install zookeeper first, using ansible.

2. Startup error from start-hbase.sh:

Could not start ZK with 3 ZK servers in local mode deployment

This error mainly comes from getting the hbase.cluster.distributed property wrong; watch out for it. vim hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://n16:9000/hbase/data</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2182</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>n16,d17,d18</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/zookeeper/data</value>
    </property>
</configuration>

3. start-hbase. cat /etc/profile:

export JAVA_HOME=/usr/java/jdk
export JRE_HOME=/usr/java/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export HADOOP_HOME=/home/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:/home/hadoop/hbase/bin:$PATH

0x04 Hive installation

0x05 Spark installation

0x06 Startup errors

hbase: Could not start ZK with 3 ZK servers in local mode deployment

0x07 References

http://slaytanic.blog.51cto.com/2057708/1397396

Viewing the web UIs:

HDFS (namenode): http://ip:50070
Hadoop cluster status: http://ip:8088

vim /home/hadoop/hadoop-2.2.0/etc/hadoop/yarn-site.xml:

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>xuegod63.cn:8088</value>
</property>
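To double-check that HBase and the web endpoints are actually serving, a small shell check; a sketch assuming the hostnames above (n16 as the master) resolve through /etc/hosts:

# Ask HBase for cluster status: expect 1 active master plus the regionservers
echo "status" | hbase shell

# The web UIs should answer with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" http://n16:50070   # HDFS namenode UI
curl -s -o /dev/null -w "%{http_code}\n" http://n16:8088    # YARN resourcemanager UI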
This was a temporary setup; the goal was just to get it running. A real deployment would still need a lot of tuning.
This post is from the "python 运维" blog; please do not repost.