Building a multi-node Hadoop cluster and separating the NameNode and SecondaryNameNode processes


Host environment:

CentOS 7.3 (any number of hosts; this example uses 3)

Prerequisites:

JDK:

jdk-8u144-linux-x64.tar.gz

Hadoop:

http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz

In single-node setups and small Hadoop clusters, the NameNode and SecondaryNameNode processes usually run on the same host. In a production environment, if that host goes down, the entire Hadoop cluster is paralyzed, which is a serious loss for the whole big-data platform. Once the cluster has two or more nodes, you can therefore separate the two processes and move the SecondaryNameNode onto a worker node, reducing the risk of a single point of failure.

Installation steps:

The setup involves the usual Linux groundwork: a hosts mapping, disabling the firewall, and passwordless ssh between nodes, all done as root on each node. Those system-level steps are covered in the earlier post "Installing Cloudera Manager 5.7.0 on CentOS 7" and are skipped here. Add master (the master node), slave1, and slave2 to the hosts mapping, then configure Hadoop directly:

1. Download and extract:

hadoop-2.6.0-cdh5.7.0.tar.gz

[root@master ~]# tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C /opt

2. Set the Hadoop environment variables and make them take effect:

[root@master ~]# vim /etc/profile

export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.7.0

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source /etc/profile
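As a quick sanity check after reloading the profile, you can confirm that the Hadoop bin directory actually made it onto the PATH. A minimal sketch, assuming the /opt install path used above:

```shell
# Re-create the profile additions and verify the PATH entry.
export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
# If the entry is present, print a confirmation.
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "PATH OK"
```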

3. Edit the Hadoop configuration files:

[root@master hadoop]# cd /opt/hadoop-2.6.0-cdh5.7.0/etc/hadoop

[root@master hadoop]# vim core-site.xml

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.6.0-cdh5.7.0/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>8192</value>
    </property>
    <property>
        <name>fs.checkpoint.period</name>
        <value>3600</value>
    </property>
    <property>
        <name>fs.checkpoint.size</name>
        <value>67108864</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
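The two fs.checkpoint.* settings control how often the SecondaryNameNode merges the edit log into the fsimage: every 3600 seconds, or sooner if the edits reach fs.checkpoint.size bytes. That byte value is simply 64 MB written out:

```shell
# fs.checkpoint.size is given in bytes; 64 MB = 64 * 1024 * 1024
echo $((64 * 1024 * 1024))   # prints 67108864
```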


[root@master hadoop]# vim hadoop-env.sh

export JAVA_HOME=/usr/jvm/jdk1.8.0

export HADOOP_LOG_DIR=/opt/hadoop-2.6.0-cdh5.7.0/logs


[root@master hadoop]# vim hdfs-site.xml

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.6.0-cdh5.7.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.6.0-cdh5.7.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128m</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:50090</value>
    </property>


After the usual settings, add the SecondaryNameNode http address and point it at the node that should host the process, e.g. slave1.
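To double-check which host will run the SecondaryNameNode, you can grep the property straight out of hdfs-site.xml. The sketch below feeds the snippet from a heredoc so it is self-contained; in practice, point the pipeline at $HADOOP_HOME/etc/hadoop/hdfs-site.xml instead:

```shell
# Self-contained illustration; replace the heredoc with the real file.
conf=$(cat <<'EOF'
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:50090</value>
</property>
EOF
)
# Print the line after the property name, keeping only the value element.
echo "$conf" | grep -A1 'secondary.http-address' | grep -o '<value>.*</value>'
```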

[root@master hadoop]# vim mapred-site.xml

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>


[root@master hadoop]# vim slaves

master

slave1

slave2
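Once the configuration is finished on master, the same directory tree has to exist on every node listed in slaves. A loop like the following can push it out over the ssh trust set up earlier; the scp line is commented out here so the sketch runs without a live cluster:

```shell
# Distribute the configured Hadoop directory to the worker nodes.
# Hostnames assume the /etc/hosts mapping created earlier.
for host in slave1 slave2; do
  echo "syncing to $host"
  # scp -r /opt/hadoop-2.6.0-cdh5.7.0 root@$host:/opt/   # uncomment to run
done
```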


[root@master hadoop]# vim yarn-site.xml

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <!-- <value>mapreduce_shuffle</value> -->
        <value>mapreduce_shuffle,spark_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
        <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>
    <property>
        <name>spark.shuffle.service.port</name>
        <value>7337</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/opt/hadoop-2.6.0-cdh5.7.0/logs</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <!-- The following two properties must be set when JDK 8 is used, or errors occur -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
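The reason for the last two switches: by default YARN kills any container whose virtual memory exceeds yarn.nodemanager.vmem-pmem-ratio (2.1 by default) times its physical allocation, and JDK 8 reserves enough virtual memory to trip that check. For example, a 1024 MB container would only be allowed:

```shell
# Default vmem-pmem ratio is 2.1; integer arithmetic in tenths.
echo "$((1024 * 21 / 10)) MB of virtual memory"   # prints "2150 MB of virtual memory"
```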


That completes the standard configuration, but one more file, masters, must be added to name the node that hosts the SecondaryNameNode, here slave1:

[root@master hadoop]# vim masters

slave1 


At this point all configuration is done. Format HDFS, start Hadoop, then connect to slave1 and check its processes:

[root@master hadoop-2.6.0-cdh5.7.0]# bin/hadoop namenode -format

[root@master hadoop-2.6.0-cdh5.7.0]# sbin/start-all.sh

[root@master hadoop-2.6.0-cdh5.7.0]# ssh slave1

[root@slave1 ~]# jps


Configuration complete.
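The role split can also be confirmed from master in one pass. A sketch with the remote calls commented out, since they need the running cluster and the ssh keys configured earlier:

```shell
# Expect NameNode in master's jps output and SecondaryNameNode in slave1's.
for host in master slave1; do
  echo "checking $host"
  # ssh $host jps   # uncomment on the live cluster
done
```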