Hadoop HA Environment Setup
Three JournalNodes
Two NameNodes
N DataNodes
1. ZooKeeper cluster setup
Upload the ZooKeeper package to node02 and extract it to /opt/hx:
tar xf zookeeper-3.4.6.tar.gz -C /opt/hx/
Configure the ZooKeeper environment variables in /etc/profile:
JAVA_HOME=/usr/java/jdk1.7.0_80
JRE_HOME=/usr/java/jdk1.7.0_80/jre
HADOOP_HOME=/opt/hx/hadoop-2.6.5
ZOOKEEPER_HOME=/opt/hx/zookeeper-3.4.6
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
export JAVA_HOME JRE_HOME HADOOP_HOME CLASS_PATH PATH
Then reload the profile:
. /etc/profile
Adjust the dataDir in zoo.cfg:
cd /opt/hx/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
In zoo.cfg, change the dataDir to dataDir=/var/hx/zk and append the ensemble members at the end, starting with this machine (a full sketch of the file follows below):
server.1=192.168.220.12:2888:3888
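For a three-node ensemble, zoo.cfg must list every member, not just the local one, and the same file is used on node02, node03 and node04. A minimal sketch of the resulting zoo.cfg, keeping the zoo_sample.cfg defaults and assuming node03/node04 are reachable by those hostnames (only the server.1 entry comes from the original notes):

# zoo.cfg - identical on all three ZooKeeper nodes
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/var/hx/zk
# one server.<myid> entry per ensemble member: host:peer-port:election-port
server.1=192.168.220.12:2888:3888
server.2=node03:2888:3888
server.3=node04:2888:3888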
Create the dataDir directory defined above:
mkdir -p /var/hx/zk
After creating it, write the number following server. for this machine into the myid file:
echo 1 > /var/hx/zk/myid
Distribute ZooKeeper to node03 and node04:
cd /opt/hx/
scp -r ./zookeeper-3.4.6/ node03:`pwd`
scp -r ./zookeeper-3.4.6/ node04:`pwd`
On node03 and node04, configure the environment variables (same as on node02) and write each machine's server number into its myid file.
On node03:
mkdir -p /var/hx/zk
echo 2 > /var/hx/zk/myid
On node04:
mkdir -p /var/hx/zk
echo 3 > /var/hx/zk/myid
Once configuration is done, start ZooKeeper on all three nodes. After a majority of members is up, checking the status shows that one of them (typically the second member started) has become the leader.
zkServer.sh start
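Once a majority of members is running, each node reports its role; a quick check on any of the three ZooKeeper hosts:

# prints Mode: follower or Mode: leader for this member
zkServer.sh status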
2. Passwordless SSH setup
node01 was already configured for passwordless SSH during the fully distributed setup, so only node02 needs to be done now.
Generate node02's key pair:
cd .ssh/
ssh-keygen -t dsa -P '' -f ./id_dsa
Append the public key to the local authentication file:
cat id_dsa.pub >> authorized_keys
Copy the public key to node01 and append it to node01's authentication file:
scp ./id_dsa.pub root@node01:`pwd`/node02.pub
Log in to node01:
cat node02.pub >> authorized_keys
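A quick way to confirm the key exchange worked, run from node02 (both logins should succeed without a password prompt):

ssh node01 hostname
ssh node02 hostname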
3. Edit etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node02:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/var/hx/hadoop/ha/jn</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
4. Edit etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hx/hadoop/ha</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node02:2181,node03:2181,node04:2181</value>
  </property>
</configuration>
5. Edit etc/hadoop/slaves
node02
node03
node04
6. Distribute the modified configuration files to the other servers
scp hdfs-site.xml core-site.xml node02:`pwd`
scp hdfs-site.xml core-site.xml node03:`pwd`
scp hdfs-site.xml core-site.xml node04:`pwd`
7. With configuration done, start the JournalNodes first, on node01, node02 and node03
hadoop-daemon.sh start journalnode
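A lightweight sanity check on each of node01, node02 and node03:

# the JournalNode JVM should appear in the process list
jps | grep JournalNode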
8. Format the NameNode on node01
hdfs namenode -format
9. Start the NameNode on node01
hadoop-daemon.sh start namenode
10. On node02, bootstrap the standby NameNode so it synchronizes the metadata from node01
hdfs namenode -bootstrapStandby
11. Format the HA state in ZooKeeper with ZKFC
hdfs zkfc -formatZK
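If you want to verify the formatZK step, the parent znode that HDFS HA creates (/hadoop-ha by default) should now exist in ZooKeeper; a quick check from any node:

# should list the mycluster child znode
zkCli.sh -server node02:2181 ls /hadoop-ha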
12. Start HDFS
start-dfs.sh
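After start-dfs.sh, one NameNode should report active and the other standby; a quick check using the nn1/nn2 IDs defined in hdfs-site.xml (the web UIs at node01:50070 and node02:50070 show the same information):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2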
13. MapReduce on YARN cluster setup
Edit etc/hadoop/mapred-site.xml so that MapReduce runs on YARN:
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Edit etc/hadoop/yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node03</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node04</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node02:2181,node03:2181,node04:2181</value>
  </property>
</configuration>
Distribute the two configuration files above:
scp mapred-site.xml yarn-site.xml node02:`pwd`
scp mapred-site.xml yarn-site.xml node03:`pwd`
scp mapred-site.xml yarn-site.xml node04:`pwd`
Start the NodeManagers
start-yarn.sh
Start the ResourceManager on node03 and node04
yarn-daemon.sh start resourcemanager
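To confirm ResourceManager HA, query each RM's state with the rm1/rm2 IDs from yarn-site.xml; one should report active and the other standby:

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2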