Hadoop Study Notes 4: Common HDFS Commands
1. The help text printed by the ${HADOOP_HOME}/bin/hadoop script lists the available hadoop commands:
hadoop version      //print the version
hadoop fs           //filesystem client
hadoop jar          //run a jar file
hadoop classpath    //print the classpath
hadoop checknative  //check availability of the native Hadoop and compression libraries
hadoop distcp       //recursively copy files between clusters (distributed copy)
hadoop credential   //interact with credential providers
hadoop trace        //view and modify Hadoop tracing settings
Each command maps to a different Java class.
2. Commands available through bin/hdfs:
dfs                  //equivalent to the hadoop fs command
classpath            prints the classpath
namenode -format     format the DFS filesystem
secondarynamenode    run the DFS secondary namenode
namenode             run the DFS namenode
journalnode          run the DFS journalnode
zkfc                 run the ZK Failover Controller daemon
datanode             run a DFS datanode
dfsadmin             run a DFS admin client
haadmin              run a DFS HA admin client
fsck                 run a DFS filesystem checking utility
balancer             run a cluster balancing utility
jmxget               get JMX exported values from NameNode or DataNode
mover                run a utility to move block replicas across storage types
oiv                  apply the offline fsimage viewer to an fsimage
oiv_legacy           apply the offline fsimage viewer to a legacy fsimage
oev                  apply the offline edits viewer to an edits file
fetchdt              fetch a delegation token from the NameNode
getconf              get config values from configuration
groups               get the groups which users belong to
snapshotDiff         diff two snapshots of a directory, or diff the current directory contents with a snapshot
lsSnapshottableDir   list all snapshottable dirs owned by the current user (use -help to see options)
portmap              run a portmap service
nfs3                 run an NFS version 3 gateway
cacheadmin           configure the HDFS cache
crypto               configure HDFS encryption zones
storagepolicies      list/get/set block storage policies
version              print the version
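Of these, fsck is one of the most frequently scripted: it prints a plain-text report rather than machine-readable output, so scripts typically search its summary line. A minimal Python sketch of that idea (the sample_report string is a hypothetical excerpt of a report, and fsck_status is an illustrative helper, not part of any Hadoop API):

```python
import re

# Hypothetical excerpt of `hdfs fsck /` output; a real report has many
# more lines, but ends with an overall status line like the one below.
sample_report = """\
 Total size:    1857 B
 Total files:   1
 Total blocks (validated):  1 (avg. block size 1857 B)
The filesystem under path '/' is HEALTHY
"""

def fsck_status(report: str) -> str:
    """Extract HEALTHY/CORRUPT from an fsck report, or UNKNOWN."""
    m = re.search(r"is (HEALTHY|CORRUPT)", report)
    return m.group(1) if m else "UNKNOWN"

print(fsck_status(sample_report))
```

On a live cluster the report text would come from running the command itself, e.g. `subprocess.run(["hdfs", "fsck", "/"], capture_output=True, text=True).stdout`.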
3. The hdfs dfs command:
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
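Note that -chmod accepts either a symbolic MODE or an OCTALMODE, while -ls prints permissions in rwx form, so converting between the two comes up often. A small sketch of the conversion (perm_to_octal is an illustrative helper, not a Hadoop API):

```python
def perm_to_octal(perm: str) -> str:
    """Convert a 9-character rwx permission string, as printed by
    `hdfs dfs -ls` (e.g. 'rwxr-xr--'), into the OCTALMODE accepted
    by `hdfs dfs -chmod` (e.g. '754')."""
    if len(perm) != 9:
        raise ValueError("expected 9 characters, e.g. 'rwxr-xr--'")
    digits = []
    for i in range(0, 9, 3):       # user, group, other triples
        value = 0
        for ch, bit in zip(perm[i:i + 3], (4, 2, 1)):
            if ch != "-":          # any non-dash marks the bit as set
                value += bit
        digits.append(str(value))
    return "".join(digits)

print(perm_to_octal("rwxr-xr--"))  # → 754
```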
4. Examples:
$ hdfs dfs -mkdir -p /user/ubuntu/            //create a directory on HDFS
$ hdfs dfs -put hdfs.cmd /user/ubuntu/        //upload a local file to HDFS
$ hdfs dfs -get /user/ubuntu/hdfs.cmd a.cmd   //download a file from HDFS to the local filesystem
$ hdfs dfs -rm -r -f /user/ubuntu/            //delete recursively
$ hdfs dfs -ls -R /                           //recursively list the HDFS filesystem
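The -ls -R output is line-oriented text (permissions, replication factor, owner, group, size, modification time, path), which is easy to post-process. A minimal Python sketch parsing one such line (the sample line and the HdfsEntry type are hypothetical, modeled on typical Hadoop 2.x output):

```python
from typing import NamedTuple

class HdfsEntry(NamedTuple):
    perms: str         # e.g. '-rw-r--r--' (leading 'd' for directories)
    replication: str   # replication factor; '-' for directories
    owner: str
    group: str
    size: int          # bytes
    mtime: str         # 'YYYY-MM-DD HH:MM'
    path: str

def parse_ls_line(line: str) -> HdfsEntry:
    # Split into exactly 8 fields; maxsplit=7 keeps paths containing
    # spaces intact in the final field.
    perms, repl, owner, group, size, date, time, path = line.split(None, 7)
    return HdfsEntry(perms, repl, owner, group, int(size), f"{date} {time}", path)

# Hypothetical sample line from `hdfs dfs -ls -R /`:
sample = "-rw-r--r--   3 ubuntu supergroup       1857 2017-01-01 12:00 /user/ubuntu/hdfs.cmd"
entry = parse_ls_line(sample)
print(entry.path, entry.size)
```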