Hadoop Standalone Testing and Cluster Node Setup

I. How Hadoop Works

HDFS main daemons: NameNode (stores the filesystem metadata) and DataNode (stores the actual data blocks).
YARN main daemons: ResourceManager (cluster-wide resource scheduling) and NodeManager (per-node container execution).
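Each of these modules runs as its own JVM daemon, so `jps` shows at a glance which of them are alive on a node. As a rough sketch, a node running the full HDFS stack would report something like the following (PIDs are illustrative):

$ jps
13964 NameNode
13435 DataNode
13617 SecondaryNameNode
14329 Jps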

II. Hadoop Standalone Test

1. Install Hadoop and create the hadoop user

tar zxf jdk-8u181-linux-x64.tar.gz
tar zxf hadoop-3.0.3.tar.gz
ln -s jdk1.8.0_181/ java					## create symlinks so paths stay version-independent
ln -s hadoop-3.0.3 hadoop
[hadoop@server1 ~]$ ls
hadoop-3.0.3  hadoop-3.0.3.tar.gz  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
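Before pointing Hadoop at this JDK, it is worth checking that the java symlink resolves to a working binary; for 8u181 the first line of the version banner should read roughly as below:

[hadoop@server1 ~]$ ~/java/bin/java -version
java version "1.8.0_181"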


2. Configure the environment variables

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hadoop-env.sh
 54 export JAVA_HOME=/home/hadoop/java    ## line 54 of hadoop-env.sh

[hadoop@server1 ~]$ vim .bash_profile
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
[hadoop@server1 ~]$ source .bash_profile
[hadoop@server1 ~]$ jps    ## if the configuration succeeded, jps can now be invoked
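With no Hadoop daemons started yet, jps only reports itself; seeing that one line is enough to confirm the PATH change took effect (the PID is illustrative):

[hadoop@server1 ~]$ jps
12345 Jps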


3. Test

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ ls input/
capacity-scheduler.xml  hadoop-policy.xml  httpfs-site.xml  kms-site.xml     yarn-site.xml
core-site.xml           hdfs-site.xml      kms-acls.xml     mapred-site.xml
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
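The grep job writes its matches into part-r-00000. For this input the pattern usually matches a single property name, dfsadmin (from hadoop-policy.xml), so the result should look roughly like:

[hadoop@server1 output]$ cat part-r-00000
1	dfsadmin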


III. Pseudo-Distributed Mode

1. Edit the configuration files

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.61.1:9000</value>
    </property>
</configuration>
[hadoop@server1 hadoop]$ vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>     ## this single node serves as the only replica
    </property>
</configuration>
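One caveat worth knowing: with no data directories configured, HDFS keeps all of its state under hadoop.tmp.dir, which defaults to /tmp/hadoop-${user.name}; this is why wiping /tmp later in this walkthrough resets the cluster. A sketch of pinning it somewhere persistent (the path here is only an example) would be one more property in core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>    <!-- example path; the default is /tmp/hadoop-${user.name} -->
</property>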

2. Set up passwordless SSH

[hadoop@server1 hadoop]$ ssh-keygen
[hadoop@server1 hadoop]$ logout
[root@server1 ~]# passwd hadoop
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ssh-copy-id 172.25.61.1
[hadoop@server1 ~]$ ssh-copy-id localhost
[hadoop@server1 ~]$ ssh-copy-id server1
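To verify the keys took, an SSH to any of those targets should land in a shell without a password prompt, e.g.:

[hadoop@server1 ~]$ ssh localhost hostname    ## no password prompt expected
server1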

3. Format the NameNode and start the services

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ cd sbin/
[hadoop@server1 sbin]$ ./start-dfs.sh
Starting namenodes on [server1]
Starting datanodes
localhost: datanode is running as process 13435.  Stop it first.
Starting secondary namenodes [server1]
server1: secondarynamenode is running as process 13617.  Stop it first.
[hadoop@server1 sbin]$ jps
13617 SecondaryNameNode
14329 Jps
13435 DataNode
13964 NameNode


In a browser, open http://172.25.61.1:9870 (the HDFS NameNode web UI).
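(In Hadoop 3.x the NameNode web UI moved from port 50070 to 9870.) A quick headless check that the UI is serving, assuming curl is available:

[hadoop@server1 ~]$ curl -s -o /dev/null -w '%{http_code}\n' http://172.25.61.1:9870
200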


4. Test: create directories and upload

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-05-23 03:11 input
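Relative HDFS paths resolve against the user's home directory, /user/hadoop, which is why /user and /user/hadoop had to exist before the bare -ls and -put would work. The listing above is therefore equivalent to:

[hadoop@server1 hadoop]$ bin/hdfs dfs -ls /user/hadoop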

The upload is also visible in the web UI.


[hadoop@server1 hadoop]$ rm -rf input/    ## remove the local copies; the job now reads input from HDFS
[hadoop@server1 hadoop]$ rm -rf output
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output
[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output		## the files can also be fetched locally for inspection
[hadoop@server1 hadoop]$ ls
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
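part-r-00000 holds the reducer output as tab-separated word/count pairs; a quick local sample:

[hadoop@server1 output]$ head -5 part-r-00000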

IV. Fully Distributed Mode

1. Reset the environment

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ sbin/stop-dfs.sh
Stopping namenodes on [server1]
Stopping datanodes
Stopping secondary namenodes [server1]
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop  hadoop-hadoop  hsperfdata_hadoop
[hadoop@server1 tmp]$ rm -rf *    ## wipe the old HDFS state (hadoop.tmp.dir defaults to /tmp)

2. Bring up two new virtual machines, server2 and server3, to act as worker nodes

Create the user (the UID must match the hadoop UID on server1 so that file ownership stays consistent over NFS):
[root@server2 ~]# useradd -u 1000 hadoop
[root@server3 ~]# useradd -u 1000 hadoop

Install nfs-utils:
[root@server1 ~]# yum install -y nfs-utils
[root@server2 ~]# yum install -y nfs-utils
[root@server3 ~]# yum install -y nfs-utils

[root@server1 ~]# systemctl start rpcbind
[root@server2 ~]# systemctl start rpcbind
[root@server3 ~]# systemctl start rpcbind

3. Start and configure the NFS service on server1

[root@server1 ~]# systemctl start nfs-server
[root@server1 ~]# vim /etc/exports
/home/hadoop   *(rw,anonuid=1000,anongid=1000)
[root@server1 ~]# exportfs -rv
exporting *:/home/hadoop
[root@server1 ~]# showmount -e
Export list for server1:
/home/hadoop *
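The mounts below are one-off and will not survive a reboot. To make them persistent on the workers, an /etc/fstab entry along these lines (the options are a plain-vanilla assumption) would do:

172.25.61.1:/home/hadoop  /home/hadoop  nfs  defaults  0  0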

4. Mount the share on server2 and server3

[root@server2 ~]# mount 172.25.61.1:/home/hadoop /home/hadoop
[root@server2 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root     17811456 1095856  16715600   7% /
devtmpfs                    497292       0    497292   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264   13072    495192   3% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                  1038336  123376    914960  12% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.61.1:/home/hadoop  17811456 2794496  15016960  16% /home/hadoop

[root@server3 ~]# mount 172.25.61.1:/home/hadoop /home/hadoop
[root@server3 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root     17811456 1095808  16715648   7% /
devtmpfs                    497292       0    497292   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264   13060    495204   3% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                  1038336  123376    914960  12% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.61.1:/home/hadoop  17811456 2794496  15016960  16% /home/hadoop
Once the share is mounted, server1, 2 and 3 can all SSH to one another without passwords, because the hadoop home directory (including ~/.ssh and its keys) is now shared.

5. Update the configuration files

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.61.1:9000</value>
    </property>
</configuration>

[hadoop@server1 hadoop]$ vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>     ## two DataNodes now, so two replicas
    </property>
</configuration>

[hadoop@server1 hadoop]$ vim workers
[hadoop@server1 hadoop]$ cat workers
172.25.61.2
172.25.61.3
## edit the file once on server1; thanks to the NFS share every node sees the change
[root@server2 ~]# su - hadoop
[hadoop@server2 ~]$ cd hadoop/etc/hadoop/
[hadoop@server2 hadoop]$ cat workers
172.25.61.2
172.25.61.3
[root@server3 ~]# su - hadoop
[hadoop@server3 ~]$ cd hadoop/etc/hadoop/
[hadoop@server3 hadoop]$ cat workers
172.25.61.2
172.25.61.3

6. Format the NameNode and start the services

[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]
## each worker node now runs a DataNode
[hadoop@server2 ~]$ jps
11959 DataNode
12046 Jps
[hadoop@server3 ~]$ jps
10774 Jps
10713 DataNode
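The same can be confirmed from the NameNode with a cluster report, which should now list two live DataNodes:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep 'Live datanodes'
Live datanodes (2):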

7. Test

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input

The uploaded data and the node information can be viewed in the web UI.


8. Upload a large file

[hadoop@server1 ~]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 16.1338 s, 32.5 MB/s
[hadoop@server1 hadoop]$ bin/hdfs dfs -put bigfile
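With the default 128 MB block size, the 500 MB file is split into four blocks, each replicated onto both DataNodes (dfs.replication=2). One way to inspect the actual block placement:

[hadoop@server1 hadoop]$ bin/hdfs fsck /user/hadoop/bigfile -files -blocks -locations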