Before you begin, make sure you have already set up and verified a single-node Hadoop environment; this article extends that environment.
Building on the environment from the previous article, apply the following configuration:
Edit the hadoop-env.sh file and add the following environment variables:

export JAVA_HOME=/opt/local/jdk1.8.0_261
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_ZKFC_USER=root
export HDFS_JOURNALNODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
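In Hadoop 3.x the HDFS_*_USER / YARN_*_USER variables are what allow the start scripts to launch the daemons as root; without them start-dfs.sh refuses to start. As a quick sanity check (a sketch, assuming the install path above):

source /opt/local/hadoop-3.3.1/etc/hadoop/hadoop-env.sh
echo "$JAVA_HOME"    # should print /opt/local/jdk1.8.0_261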
Edit core-site.xml:

<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://hdfs-dsj</value></property>
  <property><name>hadoop.tmp.dir</name><value>/opt/local/hadoop-3.3.1/ha</value></property>
  <property><name>hadoop.http.staticuser.user</name><value>root</value></property>
  <property><name>ha.zookeeper.quorum</name><value>node01:2181,node02:2181,node03:2181</value></property>
</configuration>
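Note that hdfs://hdfs-dsj is a logical nameservice name resolved through hdfs-site.xml below, not a physical hostname. After editing, you can read a key back to confirm the file is being picked up (a quick check):

hdfs getconf -confKey fs.defaultFS    # should print hdfs://hdfs-dsj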
Edit hdfs-site.xml:

<configuration>
  <property><name>dfs.nameservices</name><value>hdfs-dsj</value></property>
  <property><name>dfs.ha.namenodes.hdfs-dsj</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.hdfs-dsj.nn1</name><value>node01:8020</value></property>
  <property><name>dfs.namenode.rpc-address.hdfs-dsj.nn2</name><value>node02:8020</value></property>
  <property><name>dfs.namenode.http-address.hdfs-dsj.nn1</name><value>node01:9870</value></property>
  <property><name>dfs.namenode.http-address.hdfs-dsj.nn2</name><value>node02:9870</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://node01:8485;node02:8485;node03:8485/hdfs-dsj</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/opt/local/hadoop-3.3.1/ha/qjm</value></property>
  <property><name>dfs.client.failover.proxy.provider.hdfs-dsj</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.replication</name><value>2</value></property>
</configuration>
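The sshfence fencing method only works if each NameNode host can SSH to the other as root using the private key configured above. A minimal sketch, assuming the key pair still needs to be created (run on both node01 and node02):

ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""    # skip if the key already exists
ssh-copy-id root@node01
ssh-copy-id root@node02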
Before distributing Hadoop to the other nodes, delete the full and logs directories (data and logs left over from the single-node setup), then copy the installation to node02 and node03:

scp -r /opt/local/hadoop-3.3.1 root@node02:/opt/local/
scp -r /opt/local/hadoop-3.3.1 root@node03:/opt/local/
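Optionally, confirm the copy landed on each node before continuing (a quick check, using one of the configuration files as a marker):

ssh root@node02 "ls /opt/local/hadoop-3.3.1/etc/hadoop/hdfs-site.xml"
ssh root@node03 "ls /opt/local/hadoop-3.3.1/etc/hadoop/hdfs-site.xml"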
Start ZooKeeper on node01, node02, and node03, and check that the quorum is up:

zkServer.sh start
zkServer.sh status
Start the JournalNode daemon on node01, node02, and node03:

hdfs --daemon start journalnode
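At this point jps on each node should show both daemons (typical process names; exact output varies by environment):

jps
# expected, among others:
# QuorumPeerMain    (ZooKeeper)
# JournalNode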
On node01 only, format the NameNode (run this exactly once; reformatting generates a new cluster ID and orphans existing data):

hdfs namenode -format
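If the format succeeded, the NameNode metadata directory should now exist; by default it lives at ${hadoop.tmp.dir}/dfs/name, so with the configuration above:

cat /opt/local/hadoop-3.3.1/ha/dfs/name/current/VERSION    # shows the new clusterID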
Still on node01, start the freshly formatted NameNode:

hdfs --daemon start namenode
On node02, pull the metadata from node01 and initialize the second NameNode as standby (the NameNode on node01 must be running):

hdfs namenode -bootstrapStandby
On one of the NameNode hosts, initialize the HA state znode in ZooKeeper:

hdfs zkfc -formatZK
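You can confirm the znode was created with the ZooKeeper CLI (assuming zkCli.sh is on the PATH):

zkCli.sh -server node01:2181
# then, inside the CLI:
ls /hadoop-ha    # should list [hdfs-dsj]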
Finally, start the rest of the HDFS cluster:

start-dfs.sh
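Once everything is up, one NameNode should report active and the other standby; you can query the state directly using the IDs defined in hdfs-site.xml:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2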
Verify the cluster by visiting the following addresses:
http://node01:9870
http://node02:9870
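To exercise automatic failover, stop the active NameNode and watch the standby take over (a sketch, assuming nn1 on node01 is currently active):

hdfs --daemon stop namenode          # run on node01
hdfs haadmin -getServiceState nn2    # should now report active
hdfs --daemon start namenode         # bring node01 back; it rejoins as standby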
To shut down the cluster, stop HDFS first and then ZooKeeper on each node:

stop-dfs.sh
zkServer.sh stop
The steps above will give you a working, highly available Hadoop cluster.