Spark submit deployment modes

1. bin/spark-submit --master spark://123.321.123.321:7077 --deploy-mode client jars/sparkApp.jar
2. bin/spark-submit --master spark://123.321.123.321:7077 jars/sparkApp.jar
3. bin/spark-submit --master spark://123.321.123.321:7077 --deploy-mode cluster jars/sparkApp.jar

Commands 1 and 2 are equivalent: client mode is the default deploy mode, so in both cases the driver runs locally on the machine that submits the job. Command 3 uses cluster mode, which launches the driver on one of the cluster's workers.
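For reference, the two modes might be submitted like this. The master URL and jar path are the placeholders from the commands above; --executor-memory and --total-executor-cores are optional standard spark-submit flags shown here only to illustrate resource requests:

```shell
# Client mode (the default): the driver runs in this shell, so
# application output and logs appear on the submitting machine.
bin/spark-submit \
  --master spark://123.321.123.321:7077 \
  --deploy-mode client \
  --executor-memory 512m \
  --total-executor-cores 2 \
  jars/sparkApp.jar

# Cluster mode: the driver is launched on a worker; check the
# master web UI (port 8080 by default) for driver status and logs.
bin/spark-submit \
  --master spark://123.321.123.321:7077 \
  --deploy-mode cluster \
  jars/sparkApp.jar
```

These commands require a running standalone cluster, so they are shown for reference rather than as a runnable snippet.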

A successful run of command 1 or 2 looks like this:

[screenshot: client-mode job completed]

A successful run of command 3 looks like this:

[screenshot: cluster-mode job completed]

Possible errors

Error 1:
19/10/18 11:06:54 INFO MemoryStore: ensureFreeSpace(2245) called with curMem=268125, maxMem=280248975
19/10/18 11:06:54 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 267.0 MB)
19/10/18 11:06:54 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on izbp1aiqq9qrjpvel26rx0z:44205 (size: 2.2 KB, free: 267.2 MB)
19/10/18 11:06:54 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
19/10/18 11:06:54 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
19/10/18 11:06:54 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[3] at map at SparkDemo.scala:18)
19/10/18 11:06:54 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
19/10/18 11:07:09 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/10/18 11:07:24 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Solution

In spark-env.sh, set SPARK_WORKER_MEMORY=1g, and make sure SPARK_MASTER_IP=izbp1aiqq9qrjpvel26rx0z matches the master address used in the submit command. After changing the configuration, restart the master and the workers.
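The relevant spark-env.sh fragment might look like this (the hostname and memory value are taken from this example; adjust them to your cluster, and note that newer Spark versions use SPARK_MASTER_HOST instead of the legacy SPARK_MASTER_IP):

```
# conf/spark-env.sh
# Host the master binds to; must match the host in the
# spark://host:7077 URL passed to spark-submit.
SPARK_MASTER_IP=izbp1aiqq9qrjpvel26rx0z
# Total memory each worker may hand out to executors.
SPARK_WORKER_MEMORY=1g
```

After editing, restart the standalone daemons (e.g. sbin/stop-all.sh followed by sbin/start-all.sh) so the new values take effect.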

Error 2

Connection refused on port 7077

The master is not running.

Fix: start the master with sbin/start-master.sh
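To bring the standalone cluster back up, run the following on the master node. This is a sketch assuming the standard Spark layout, where start-slaves.sh reads the worker list from conf/slaves; the port check with ss is only a convenience and assumes the tool is installed:

```shell
# Start the standalone master (listens on port 7077 by default)
sbin/start-master.sh

# Start the workers listed in conf/slaves
sbin/start-slaves.sh

# Verify the master is actually listening on 7077
ss -tln | grep 7077
```

If the grep shows nothing, check the master log under logs/ for the reason it failed to start.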