
Spark Study Notes - Hadoop Commands

Go into $HADOOP/bin

I. File Operations

File operations are essentially the usual Linux commands with the prefix "hdfs dfs -" added in front.

The prefix can also be written with hadoop instead of hdfs, but the terminal will then print:

Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

1. Create a directory (note: directories must be created one level at a time; see the -p note after the commands below)

hdfs dfs -mkdir /user

hdfs dfs -mkdir /user/com

hdfs dfs -mkdir /user/com/evor
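
Hadoop 2.x releases also accept a -p flag that creates all missing parent directories in one command (a small convenience example; check your version's help output if unsure):

hdfs dfs -mkdir -p /user/com/evor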

2. Delete files and directories

hdfs dfs -rm -r /user/com/evor   (the older -rmr form also works)  deletes everything under the directory; the recursive version of rm

hdfs dfs -rm /user/com/evor/hadoop.txt   deletes a single file
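
If the HDFS trash feature is enabled on the cluster, -rm only moves files into the trash directory; adding -skipTrash removes them immediately (a hedged example, assuming trash is configured):

hdfs dfs -rm -r -skipTrash /user/com/evor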

3. Upload files

1) hdfs dfs -put /local/path/spark.jar /user/com/evor

2) hdfs dfs -copyFromLocal /local/path/spark.jar /user/com/evor

Difference: copyFromLocal restricts the source path to a local path; otherwise it behaves the same as -put.
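
Both commands accept several local sources in one call as long as the destination is an existing directory; the extra file name below is made up purely for illustration:

hdfs dfs -put /local/path/spark.jar /local/path/wordcount.txt /user/com/evor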

4. Download files

Copy files to the local filesystem

1) hdfs dfs -get /user/com/evor/spark.jar /local/path

2) hdfs dfs -copyToLocal /user/com/evor/spark.jar /local/path

Difference: copyToLocal restricts the destination path to a local path; otherwise it behaves the same as -get.
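
A related download command is -getmerge, which concatenates all files under an HDFS directory into a single local file (the local output path below is just an example):

hdfs dfs -getmerge /user/com/evor /tmp/evor-merged.txt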

5. View files

Files can be viewed directly in HDFS; the command works like cat.

It prints the contents of the file at the given path to stdout.

hdfs dfs -cat /user/com/evor/hadoop.txt   

hadoop fs -cat hdfs://host1:port1/file1  hdfs://host2:port2/file2

hadoop fs -cat file:///file3   /user/hadoop/file4
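
For large files it is usually better not to cat everything; -tail prints only the last kilobyte, and -text decodes compressed or SequenceFile data before printing (the file names below are illustrative):

hdfs dfs -tail /user/com/evor/hadoop.txt

hdfs dfs -text /user/com/evor/part-00000.gz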

6. Change permissions

hdfs dfs -chmod 777 /user/com/evor/WordCount.sh 
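
The -R flag applies the change recursively, and -chown changes the owner in the same way (the user and group names below are examples):

hdfs dfs -chmod -R 755 /user/com/evor

hdfs dfs -chown -R hadoop:hadoop /user/com/evor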

II. MapReduce Job Operations

Submitting a MapReduce job

Run a jar file. Users bundle their MapReduce code into a jar file; in principle, every Hadoop MapReduce job is a jar package.

To run a MapReduce job packaged as /home/admin/hadoop/job.jar:

Run: hadoop jar /home/admin/hadoop/job.jar [jobMainClass] [jobArgs]    (note: the command is hadoop, not hdfs)
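
As a concrete illustration, the examples jar shipped with Hadoop can be submitted the same way; the jar path below assumes a 2.4.1 installation under /usr/local/hadoop, and the input/output directories are made up:

hadoop jar /usr/local/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /user/com/evor/input /user/com/evor/output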

Killing a running job

Suppose the job ID is job_201005310937_0053

Run: hadoop job -kill job_201005310937_0053
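
On YARN-based clusters (Hadoop 2.x) the same job can also be killed through the mapred or yarn clients; the application ID below is only illustrative:

mapred job -kill job_201005310937_0053

yarn application -kill application_1400000000000_0001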

 

Related link -> http://www.cnblogs.com/xd502djj/p/3625799.html

 

More command hints:

Type hdfs with no arguments:

hadoop@Node4:/$ hdfs
Usage: hdfs [--config confdir] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  oiv                  apply the offline fsimage viewer to an fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache

Most commands print help when invoked w/o parameters.

After formatting Hadoop and restarting the platform, running jps sometimes shows that the namenode process is missing.

Check the namenode log files (the namenode-related files under /usr/local/hadoop/hadoop-2.4.1/logs); the error turns out to be caused by the namenode's clusterID differing from the datanode's.

 

Check the two files separately:

/usr/local/hadoop/hadoop-2.4.1/hdfs/data/current/VERSION

/usr/local/hadoop/hadoop-2.4.1/hdfs/name/current/VERSION

Making the two clusterID values identical fixes the problem.
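
A minimal sketch of the fix, assuming the data/name directories above (back the files up first; the last command simply copies the namenode's clusterID into the datanode's VERSION file, after which HDFS can be restarted):

grep clusterID /usr/local/hadoop/hadoop-2.4.1/hdfs/name/current/VERSION

grep clusterID /usr/local/hadoop/hadoop-2.4.1/hdfs/data/current/VERSION

NN_ID=$(grep clusterID /usr/local/hadoop/hadoop-2.4.1/hdfs/name/current/VERSION | cut -d= -f2)

sed -i "s/^clusterID=.*/clusterID=${NN_ID}/" /usr/local/hadoop/hadoop-2.4.1/hdfs/data/current/VERSION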
