
The Beauty of Java [From Novice to Expert]: Installing Hadoop on a Single Node under Linux

Author: Er Qing

Email: xtfggef@gmail.com     Weibo: http://weibo.com/xtfggef

It's time to start learning Hadoop systematically. It may be a bit late, but I still want to study this popular technology properly, so let's begin with setting up the environment. (See also the official documentation.)

The software and versions used in this article are as follows:

  • Ubuntu 14.10 64 Bit Server Edition
  • Hadoop 2.6.0
  • JDK 1.7.0_71
  • ssh
  • rsync
1. Set up the basic environment and download the Hadoop and JDK packages
First prepare a machine running Linux; either a physical machine or a virtual machine will do, and Oracle VirtualBox is a convenient way to set up a VM. This article uses Windows 7 + VirtualBox + Ubuntu 14.10 64-bit Server Edition.
Download Hadoop from one of the Apache mirrors (Apache Hadoop Mirror), and download the JDK from the Oracle website (JDK download).
2. Log in to Ubuntu with Putty

Run the following two commands to install ssh and rsync:
$ sudo apt-get install ssh
$ sudo apt-get install rsync

3. Use WinSCP to transfer the downloaded Hadoop and JDK archives to Ubuntu

Use tar -zxvf xxx.tar.gz to extract each of the two archives, then copy them to the /opt directory.
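For example, assuming the downloaded archives are named hadoop-2.6.0.tar.gz and jdk-7u71-linux-x64.tar.gz (adjust the names to whatever you actually downloaded), the steps might look like this:

$ tar -zxvf hadoop-2.6.0.tar.gz
$ tar -zxvf jdk-7u71-linux-x64.tar.gz
$ sudo cp -r hadoop-2.6.0 /opt/
$ sudo cp -r jdk1.7.0_71 /opt/
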
4. Configure the Java environment
Open /etc/profile with root privileges and append the following at the end:
JAVA_HOME=/opt/jdk1.7.0_71
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export JAVA_HOME PATH CLASSPATH

Run .  /etc/profile so that the changes to profile take effect immediately. (Note the space after the dot.)
The purpose of this configuration is simply to set PATH and CLASSPATH, just like setting environment variables on Windows. Afterwards, test with javac or java -version to see whether it worked.
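
A quick check might look like this (the version line should match the JDK you installed; the remaining build lines are omitted here):

$ . /etc/profile
$ java -version
java version "1.7.0_71"
...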



5. Configure Hadoop

After copying the extracted packages to /opt, Hadoop needs some basic configuration.

Edit etc/hadoop/hadoop-env.sh and add the following settings:

# set to the root of your Java installation
export JAVA_HOME=/opt/jdk1.7.0_71
# Assuming your installation directory is /opt/hadoop-2.6.0
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/opt/hadoop-2.6.0/etc/hadoop"}
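
At this point you can run a quick sanity check: invoking the hadoop script with no arguments should print its usage documentation (a minimal check, assuming the install path used above):

$ cd /opt/hadoop-2.6.0
$ ./bin/hadoop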

With this step done, Hadoop has its basic configuration; we can now go a bit further and configure it for one of the following modes:

  • Standalone (local) mode
  • Pseudo-distributed mode
  • Fully distributed mode
Here we will configure pseudo-distributed mode: single-node pseudo-distributed means that each Hadoop daemon runs in its own Java process.
1. Edit the configuration files etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml

etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
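
Note that with only these properties set, HDFS keeps its data under /tmp/hadoop-<username> by default (as the format log below shows), which is usually cleared on reboot. If you want the data to persist, a hadoop.tmp.dir property can optionally be added to core-site.xml; the path below is only an example:

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.6.0/tmp</value>
    </property>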

2. Set up passwordless ssh to localhost

First check whether you can ssh to localhost without being asked for a password (for example by running ssh localhost). If that fails, use the following commands to generate a key and authorize it:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
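
After this, ssh localhost should log you in without prompting for a password. On some systems you may also need to tighten the permissions on the authorized_keys file first (an extra step not shown in the original commands):

$ chmod 0600 ~/.ssh/authorized_keys
$ ssh localhost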

3. Format the HDFS filesystem

In /opt/hadoop-2.6.0/bin, run ./hdfs namenode -format:

adam@ubuntu:/opt/hadoop-2.6.0/bin$ ./hdfs namenode -format
15/01/11 11:37:08 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /opt/hadoop-2.6.0/etc/hadoop:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-logre/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadtpcore-4.2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.0/share.jar:/opt//lib/commons-el-1.0.jar:/opt/hapacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.0/shaar:/opt/hadoop-
...
...
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
15/01/11 11:37:08 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/01/11 11:37:08 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-6645a7aa-b5c4-4b8c-a0b7-ece148452be5
15/01/11 11:37:10 INFO namenode.FSNamesystem: No KeyProvider found.
15/01/11 11:37:10 INFO namenode.FSNamesystem: fsLock is fair:true
15/01/11 11:37:10 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/01/11 11:37:10 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-chec
15/01/11 11:37:10 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set t
15/01/11 11:37:10 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jan 11 11:37
15/01/11 11:37:10 INFO util.GSet: Computing capacity for map BlocksMap
15/01/11 11:37:10 INFO util.GSet: VM type       = 64-bit
15/01/11 11:37:10 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/01/11 11:37:10 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/01/11 11:37:10 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/01/11 11:37:10 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/01/11 11:37:10 INFO blockmanagement.BlockManager: maxReplication             = 512
15/01/11 11:37:10 INFO blockmanagement.BlockManager: minReplication             = 1
15/01/11 11:37:10 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/01/11 11:37:10 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/01/11 11:37:10 INFO blockmanagement.BlockManager: replicationRecheck
15/01/11 11:37:10 INFO blockmanagement.BlockManager: encryptDataTransfe
15/01/11 11:37:10 INFO blockmanagement.BlockManager: maxNumBlocksToLog
15/01/11 11:37:10 INFO namenode.FSNamesystem: fsOwner             = ada
15/01/11 11:37:10 INFO namenode.FSNamesystem: supergroup          = sup
15/01/11 11:37:10 INFO namenode.FSNamesystem: isPermissionEnabled = tru
15/01/11 11:37:10 INFO namenode.FSNamesystem: HA Enabled: false
15/01/11 11:37:10 INFO namenode.FSNamesystem: Append Enabled: true
15/01/11 11:37:11 INFO util.GSet: Computing capacity for map INodeMap
15/01/11 11:37:11 INFO util.GSet: VM type       = 64-bit
15/01/11 11:37:11 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/01/11 11:37:11 INFO util.GSet: capacity      = 2^20 = 1048576 entrie
15/01/11 11:37:11 INFO namenode.NameNode: Caching file names occuring m
15/01/11 11:37:11 INFO util.GSet: Computing capacity for map cachedBloc
15/01/11 11:37:11 INFO util.GSet: VM type       = 64-bit
15/01/11 11:37:11 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/01/11 11:37:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/01/11 11:37:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.thr
15/01/11 11:37:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min
15/01/11 11:37:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.ext
15/01/11 11:37:11 INFO namenode.FSNamesystem: Retry cache on namenode i
15/01/11 11:37:11 INFO namenode.FSNamesystem: Retry cache will use 0.03
15/01/11 11:37:11 INFO util.GSet: Computing capacity for map NameNodeRe
15/01/11 11:37:11 INFO util.GSet: VM type       = 64-bit
15/01/11 11:37:11 INFO util.GSet: 0.029999999329447746% max memory 966.
15/01/11 11:37:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/01/11 11:37:11 INFO namenode.NNConf: ACLs enabled? false
15/01/11 11:37:11 INFO namenode.NNConf: XAttrs enabled? true
15/01/11 11:37:11 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/01/11 11:37:11 INFO namenode.FSImage: Allocated new BlockPoolId: BP-
15/01/11 11:37:11 INFO common.Storage: Storage directory /tmp/hadoop-ad
15/01/11 11:37:11 INFO namenode.NNStorageRetentionManager: Going to ret
15/01/11 11:37:11 INFO util.ExitUtil: Exiting with status 0
15/01/11 11:37:11 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
adam@ubuntu:/opt/hadoop-2.6.0/bin$

Start the NameNode and DataNode daemons

Change to the hadoop sbin directory and run ./start-dfs.sh:

adam@ubuntu:/opt/hadoop-2.6.0/sbin$ ./start-dfs.sh
Starting namenodes on [localhost]
adam@localhost's password:
localhost: starting namenode, logging to /opt/hadoop-2.6.0/logs/hadoop-adam-namenode-ubuntu.out
adam@localhost's password:
localhost: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-adam-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
adam@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.6.0/logs/hadoop-adam-secondarynamenode-ubuntu.out
adam@ubuntu:/opt/hadoop-2.6.0/sbin$

With that, we have installed a simple single-node, pseudo-distributed Hadoop environment.
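
To verify, jps should list a NameNode, DataNode and SecondaryNameNode process (the PIDs below are placeholders), and the NameNode web interface should be reachable at http://localhost:50070/, the default port for Hadoop 2.x. Note that if the passwordless ssh from step 2 is working, start-dfs.sh will not prompt for passwords as it did in the log above. When you are done, the daemons can be stopped from the same sbin directory:

$ jps
3072 NameNode
3201 DataNode
3392 SecondaryNameNode
3486 Jps

$ ./stop-dfs.sh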


