I. Overview
1. The environment is based on the Hadoop HA cluster built in an earlier post.
2. The ZooKeeper setup that Spark HA depends on was also covered there and is not repeated here.
3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz.
4. Host layout:
bd1, bd2, bd3: Worker
bd4, bd5: Master, Worker
II. Configure Scala
1. Extract and copy
[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
3. Verify
[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
III. Configure Spark
1. Extract and copy
[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
3. Edit spark-env.sh (the file does not exist by default; copy it from the template first)
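The template ships in the conf directory of the distribution, so the copy looks like this:
[root@bd1 ~]# cd /usr/local/spark/conf
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh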
[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
4. Edit spark-defaults.conf (the file does not exist by default; copy it from the template first)
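Same pattern as above:
[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf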
[root@bd1 conf]# vim spark-defaults.conf
spark.master              spark://master:7077
spark.eventLog.enabled    true
spark.eventLog.dir        hdfs://master/user/spark/history
spark.serializer          org.apache.spark.serializer.KryoSerializer
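One note on spark.master: this cluster runs two Masters, on bd4 and bd5, and Spark's standalone HA mode accepts both in a single URL. If "master" is not a resolvable alias for them in this environment, the line would instead read:
spark.master              spark://bd4:7077,bd5:7077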
5. Create the log directory in HDFS
hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
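A quick check that the directory and its permissions are in place before relying on it for event logs:
hdfs dfs -ls -d /user/spark/history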
6. Edit slaves (also copied from slaves.template if it does not exist)
[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5
IV. Sync to the Other Hosts
1. Use scp to sync Scala to bd2-bd5
scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/
2. Sync Spark to bd2-bd5
scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
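The /etc/profile changes from sections II and III also have to land on bd2-bd5, or scala and the Spark scripts will not resolve there. A sketch, assuming every host uses the same paths:
for h in bd2 bd3 bd4 bd5; do scp /etc/profile root@$h:/etc/profile; done
Then run source /etc/profile on each host (or simply log in again).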
V. Start the Cluster and Test HA
1. Start order: zookeeper --> hadoop --> spark
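For reference, that order with the stock scripts looks roughly like this (script locations assumed to be on PATH):
zkServer.sh start      # on each ZooKeeper quorum node
start-dfs.sh           # HDFS, from a NameNode host
start-yarn.sh          # YARN, from the ResourceManager host
Each layer should be fully up (ZooKeeper leader elected, NameNodes active/standby) before starting the next.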
2. Start Spark
bd4:
[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out
[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
bd5:
[root@bd5 sbin]# ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode
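At this point bd4's Master should be ALIVE and bd5's STANDBY. One quick way to confirm, assuming the default Master web UI port of 8080 on both hosts:
curl -s http://bd4:8080 | grep -Eo 'ALIVE|STANDBY'
curl -s http://bd5:8080 | grep -Eo 'ALIVE|STANDBY'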
3. Kill the Master process on bd4
[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
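Within the ZooKeeper session timeout, bd5's Master should take over; it typically logs a line like "I have been elected leader! New state: ALIVE". A check, using the log path printed when it started:
grep "elected leader" /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
curl -s http://bd5:8080 | grep -Eo 'ALIVE|STANDBY'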
VI. Summary
At first I put the Masters on bd1 and bd2, but after starting Spark both showed up as Standby. After moving them to bd4 and bd5 in the configs, everything ran correctly. bd4 and bd5 are the nodes running ZooKeeper here (the QuorumPeerMain process; JournalNode belongs to Hadoop HA, not ZooKeeper). Strictly speaking, the Masters do not have to be colocated with ZooKeeper; they only need to reach the quorum listed in spark.deploy.zookeeper.url, so the earlier failure most likely came down to the Masters on bd1 and bd2 being unable to connect to it.