
Installing Hadoop on CentOS (Pseudo-Distributed)

Environment: a CentOS 5.5 virtual machine on the local host.

Software: JDK 1.6u26

Hadoop: hadoop-0.20.203.tar.gz

 

Check and configure SSH

 

[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a8:7a:3e:f6:92:85:b8:c7:be:d9:0e:45:9c:d1:36:3b root@localhost.localdomain
[root@localhost ~]#
[root@localhost ~]# cd ..
[root@localhost /]# cd root
[root@localhost ~]# ls
anaconda-ks.cfg  Desktop  install.log  install.log.syslog
[root@localhost ~]# cd .ssh
[root@localhost .ssh]# cat id_rsa.pub > authorized_keys
[root@localhost .ssh]#

[root@localhost .ssh]# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 41:c8:d4:e4:60:71:6f:6a:33:6a:25:27:62:9b:e3:90.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Tue Jun 21 22:40:31 2011
[root@localhost ~]#
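If ssh localhost still prompts for a password after this, overly loose permissions on /root/.ssh are a common cause: sshd (with its default StrictModes setting) ignores an authorized_keys file that is group- or world-writable. A minimal fix, assuming the default paths used above:

chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys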

 

Install the JDK

[root@localhost java]# chmod +x jdk-6u26-linux-i586.bin
[root@localhost java]# ./jdk-6u26-linux-i586.bin
......
......
......
For more information on what data Registration collects and
how it is managed and used, see:
http://java.sun.com/javase/registration/JDKRegistrationPrivacy.html

Press Enter to continue.....

Done.

After installation, the directory jdk1.6.0_26 is created.
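The prompt above suggests the installer was run from /usr/java, which matches the JAVA_HOME used in the next step. If the JDK was unpacked somewhere else, a sketch of moving it into place (target path taken from the profile settings below):

mkdir -p /usr/java
mv jdk1.6.0_26 /usr/java/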

 

Configure environment variables

 

[root@localhost java]# vi /etc/profile
# add the following:
# set java environment
export JAVA_HOME=/usr/java/jdk1.6.0_26
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/usr/local/hadoop/hadoop-0.20.203
export PATH=$PATH:$HADOOP_HOME/bin

[root@localhost java]# chmod +x /etc/profile
[root@localhost java]# source /etc/profile
[root@localhost java]#
[root@localhost java]# java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) Client VM (build 20.1-b02, mixed mode, sharing)
[root@localhost java]#
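Note that source /etc/profile only affects the current shell; new login shells pick up the file automatically. A quick sanity check that the variables landed:

echo $JAVA_HOME     # expect /usr/java/jdk1.6.0_26
echo $HADOOP_HOME   # expect /usr/local/hadoop/hadoop-0.20.203
which java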

Edit /etc/hosts

[root@localhost conf]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
127.0.0.1       namenode datanode01
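To confirm the new names resolve, a quick check (both should answer from 127.0.0.1):

ping -c 1 namenode
ping -c 1 datanode01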

 

 

Unpack and install Hadoop

[root@localhost hadoop]# tar zxvf hadoop-0.20.203.tar.gz
......
......
......
hadoop-0.20.203.0/src/contrib/ec2/bin/image/create-hadoop-image-remote
hadoop-0.20.203.0/src/contrib/ec2/bin/image/ec2-run-user-data
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-cluster
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-master
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-slaves
hadoop-0.20.203.0/src/contrib/ec2/bin/list-hadoop-clusters
hadoop-0.20.203.0/src/contrib/ec2/bin/terminate-hadoop-cluster
[root@localhost hadoop]#
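Note that the tarball extracts to hadoop-0.20.203.0, while the HADOOP_HOME set earlier points at /usr/local/hadoop/hadoop-0.20.203. Renaming the extracted directory (or adjusting the variable) keeps the two consistent; a sketch assuming the archive was unpacked under /usr/local/hadoop:

cd /usr/local/hadoop
mv hadoop-0.20.203.0 hadoop-0.20.203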

 

Edit the Hadoop configuration files in conf

 

####################################
[root@localhost conf]# vi hadoop-env.sh
# add the following:
# set java environment
  export JAVA_HOME=/usr/java/jdk1.6.0_26

#####################################
[root@localhost conf]# vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://namenode:9000/</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/local/hadoop/hadooptmp</value>
   </property>
</configuration>

#######################################
[root@localhost conf]# vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
     <name>dfs.name.dir</name>
     <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
     <name>dfs.data.dir</name>
     <value>/usr/local/hadoop/hdfs/data</value>
  </property>
  <property>
     <name>dfs.replication</name>
     <value>1</value>
  </property>
</configuration>

#########################################
[root@localhost conf]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
     <name>mapred.job.tracker</name>
     <value>namenode:9001</value>
  </property>
  <property>
     <name>mapred.local.dir</name>
     <value>/usr/local/hadoop/mapred/local</value>
  </property>
  <property>
     <name>mapred.system.dir</name>
     <value>/tmp/hadoop/mapred/system</value>
  </property>
</configuration>

#########################################
[root@localhost conf]# vi masters
#localhost
namenode

#########################################
[root@localhost conf]# vi slaves
#localhost
datanode01
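Hadoop creates most of these directories on first use, but pre-creating the ones named in the configuration avoids permission surprises later (paths taken from the XML above):

mkdir -p /usr/local/hadoop/hadooptmp
mkdir -p /usr/local/hadoop/hdfs/name
mkdir -p /usr/local/hadoop/hdfs/data
mkdir -p /usr/local/hadoop/mapred/local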

 

Start Hadoop

##################### Format the namenode ##############

[root@localhost bin]# hadoop namenode -format
11/06/23 00:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
11/06/23 00:43:55 INFO util.GSet: VM type       = 32-bit
11/06/23 00:43:55 INFO util.GSet: 2% max memory = 19.33375 MB
11/06/23 00:43:55 INFO util.GSet: capacity      = 2^22 = 4194304 entries
11/06/23 00:43:55 INFO util.GSet: recommended=4194304, actual=4194304
11/06/23 00:43:56 INFO namenode.FSNamesystem: fsOwner=root
11/06/23 00:43:56 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/23 00:43:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/23 00:43:56 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/23 00:43:56 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/23 00:43:56 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/06/23 00:43:57 INFO common.Storage: Image file of size 110 saved in 0 seconds.
11/06/23 00:43:57 INFO common.Storage: Storage directory /usr/local/hadoop/hdfs/name has been successfully formatted.
11/06/23 00:43:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost bin]#

###########################################
[root@localhost bin]# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-localhost.localdomain.out
datanode01: starting datanode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-localhost.localdomain.out
namenode: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-localhost.localdomain.out
datanode01: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-localhost.localdomain.out
[root@localhost bin]# jps
11971 TaskTracker
11807 SecondaryNameNode
11599 NameNode
12022 Jps
11710 DataNode
11877 JobTracker
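The counterpart script stops everything in one go; after it finishes, jps should show only the Jps process itself:

[root@localhost bin]# ./stop-all.sh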

 

 

Check cluster status

[root@localhost bin]# hadoop dfsadmin -report
Configured Capacity: 4055396352 (3.78 GB)
Present Capacity: 464142351 (442.64 MB)
DFS Remaining: 464089088 (442.59 MB)
DFS Used: 53263 (52.01 KB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 4055396352 (3.78 GB)
DFS Used: 53263 (52.01 KB)
Non DFS Used: 3591254001 (3.34 GB)
DFS Remaining: 464089088 (442.59 MB)
DFS Used%: 0%
DFS Remaining%: 11.44%
Last contact: Thu Jun 23 01:11:15 PDT 2011

[root@localhost bin]#
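Another quick health check is an HDFS filesystem audit; on a fresh cluster it should report Status: HEALTHY with no corrupt or missing blocks:

[root@localhost bin]# hadoop fsck /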

 

 

 

 

Other issues: 1. Startup error (the datanode fails to launch)

#################### Startup error ##########
[root@localhost bin]# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-localhost.localdomain.out
The authenticity of host 'datanode01 (127.0.0.1)' can't be established.
RSA key fingerprint is 41:c8:d4:e4:60:71:6f:6a:33:6a:25:27:62:9b:e3:90.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
datanode01: Warning: Permanently added 'datanode01' (RSA) to the list of known hosts.
datanode01: starting datanode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-localhost.localdomain.out
datanode01: Unrecognized option: -jvm
datanode01: Could not create the Java virtual machine.

namenode: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-localhost.localdomain.out
datanode01: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-localhost.localdomain.out
[root@localhost bin]# jps
10442 JobTracker
10533 TaskTracker
10386 SecondaryNameNode
10201 NameNode
10658 Jps

################################################
# Cause: when run as root, bin/hadoop adds an unsupported -jvm flag to the datanode command line.
[root@localhost bin]# vi hadoop
elif [ "$COMMAND" = "datanode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
  if [[ $EUID -eq 0 ]]; then
    HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
  else
    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
  fi

# http://javoft.net/2011/06/hadoop-unrecognized-option-jvm-could-not-create-the-java-virtual-machine/
# change the block to:
elif [ "$COMMAND" = "datanode" ] ; then
  CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
#  if [[ $EUID -eq 0 ]]; then
#    HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
#  else
    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
#  fi

# alternatively, start Hadoop as a non-root user
# after this, startup succeeds

2. Turn off the firewall before starting Hadoop (see the commands below).
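On CentOS 5 that means stopping iptables before running start-all.sh (or, alternatively, opening the Hadoop ports such as 9000, 9001, 50010, 50030, and 50070):

service iptables stop      # stop the firewall now
chkconfig iptables off     # keep it off across reboots (optional)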

 

Check the running status via the web UIs:

http://localhost:50070

NameNode 'localhost.localdomain:9000'
Started:    Thu Jun 23 01:07:18 PDT 2011
Version:    0.20.203.0, r1099333
Compiled:   Wed May 4 07:57:50 PDT 2011 by oom
Upgrades:   There are no upgrades in progress.

Browse the filesystem
Namenode Logs
Cluster Summary
6 files and directories, 1 blocks = 7 total. Heap Size is 31.38 MB / 966.69 MB (3%)
Configured Capacity :   3.78 GB
DFS Used    :   52.01 KB
Non DFS Used    :   3.34 GB
DFS Remaining   :   442.38 MB
DFS Used%   :   0 %
DFS Remaining%  :   11.44 %
Live Nodes  :   1
Dead Nodes  :   0
Decommissioning Nodes   :   0
Number of Under-Replicated Blocks   :   0

NameNode Storage:
Storage Directory   Type    State
/usr/local/hadoop/hdfs/name IMAGE_AND_EDITS Active

 

http://localhost:50030

namenode Hadoop Map/Reduce Administration
Quick Links

    * Scheduling Info
    * Running Jobs
    * Retired Jobs
    * Local Logs

State: RUNNING
Started: Thu Jun 23 01:07:30 PDT 2011
Version: 0.20.203.0, r1099333
Compiled: Wed May 4 07:57:50 PDT 2011 by oom
Identifier: 201106230107
Cluster Summary (Heap Size is 15.31 MB/966.69 MB)
Running Map Tasks   Running Reduce Tasks    Total Submissions   Nodes   Occupied Map Slots  Occupied Reduce Slots   Reserved Map Slots  Reserved Reduce Slots   Map Task Capacity   Reduce Task Capacity    Avg. Tasks/Node Blacklisted Nodes   Graylisted Nodes    Excluded Nodes
0   0   0   1   0   0   0   0   2   2   4.00    0   0   0

Scheduling Information
Queue Name  State   Scheduling Information
default     running     N/A
Filter (Jobid, Priority, User, Name)
Example: 'user:smith 3200' will filter by 'smith' only in the user field and '3200' in all fields
Running Jobs
none
Retired Jobs
none
Local Logs
Log directory, Job Tracker History This is Apache Hadoop release 0.20.203.0
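Both pages can also be probed from the shell, which is handy on a headless VM; a minimal sketch, assuming curl is installed:

curl -s http://localhost:50070 | head   # NameNode UI
curl -s http://localhost:50030 | head   # JobTracker UI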

 

Test:

########## Create a directory ##########
[root@localhost bin]# hadoop fs -mkdir testFolder

############### Copy a file into the directory
[root@localhost local]# ls
bin  etc  games  hadoop  include  lib  libexec  sbin  share  src  SSH_key_file
[root@localhost local]# hadoop fs -copyFromLocal SSH_key_file testFolder

# then view the result in the web UI
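The upload can also be verified from the command line; -ls should show SSH_key_file under the new directory:

[root@localhost local]# hadoop fs -ls testFolder
[root@localhost local]# hadoop fs -cat testFolder/SSH_key_file | head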

 

 

 

Reference: http://bxyzzy.blog.51cto.com/854497/352692

 

Appendix: set up FTP: yum install vsftpd (handy for transferring files; unrelated to Hadoop)

Stop the firewall: service iptables stop

Start FTP: service vsftpd start
