Advanced Spark Core Knowledge Sharing, Part 1 (Glossary + Components)


1. Glossary

| Term | Meaning | Note |
| --- | --- | --- |
| Application | User program built on Spark. Consists of a driver program and executors on the cluster. | A driver program plus executors. |
| Application jar | A jar containing the user's Spark application. In some cases users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries; however, these will be added at runtime. | The jar of the user's Spark application. |
| Driver program | The process running the main() function of the application and creating the SparkContext. | The entry point of the program; it creates the SparkContext. |
| Cluster manager | An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN). | An external service used to request resources for applications. |
| Deploy mode | Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside of the cluster. In "client" mode, the submitter launches the driver outside of the cluster. | Whether the driver process runs on the cluster side or the client side; the client is wherever the application is submitted. |
| Worker node | Any node that can run application code in the cluster. | A machine that runs Spark application code. |
| Executor | A process launched for an application on a worker node, that runs tasks and keeps data in memory or disk storage across them. Each application has its own executors. | A process that runs tasks and keeps data in memory or on disk; each application has its own executors, similar to a Container in MapReduce. |
| Task | A unit of work that will be sent to one executor. | The smallest unit of work sent to an executor. |
| Job | A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs. | Each action operator triggers one job. |
| Stage | Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs. | A stage boundary is drawn at each shuffle; a job consists of multiple stages and a stage of multiple tasks. Stages within the same job depend on each other: a stage's dependencies must finish before it runs, much like Reduce depending on Map in MapReduce. |
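
To make these terms concrete, here is a minimal sketch of a driver program (the app name, master URL, and data below are illustrative assumptions, not from the original article). The single collect action triggers one job, and the shuffle introduced by reduceByKey splits that job into two stages:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// The driver program: it runs main() and creates the SparkContext.
object GlossaryDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("GlossaryDemo").setMaster("local[2]")
    val sc   = new SparkContext(conf)

    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

    // Transformations are lazy: nothing runs yet.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _) // shuffle => stage boundary

    // collect is an action: it triggers exactly one job, which the
    // shuffle above splits into two stages (map side, then reduce side).
    counts.collect().foreach(println)

    sc.stop()
  }
}
```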

Extension 1: The number of tasks in a stage is determined by the number of partitions of its RDD, and each task executes that stage's entire pipeline from start to finish on a single partition.
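
A quick way to check this (a sketch reusing the `sc` from the previous example; the partition count of 4 is an arbitrary choice): the partition count passed to parallelize fixes the number of tasks per stage, and with no shuffle the whole pipeline stays in one stage.

```scala
// Inside the same driver program as above:
val rdd = sc.parallelize(1 to 100, 4)  // 4 partitions => 4 tasks per stage
println(rdd.getNumPartitions)          // prints 4

// No shuffle here, so count() triggers one job with a single stage;
// each of the 4 tasks runs the full map -> filter -> count pipeline
// over its own partition, from start to finish.
val n = rdd.map(_ * 2).filter(_ % 3 == 0).count()
```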

2. Components

Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).

Specifically, to run on a cluster, the SparkContext can connect to several types of cluster managers (either Spark's own standalone cluster manager, Mesos or YARN), which allocate resources across applications. Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application. Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors. Finally, SparkContext sends tasks to the executors to run.
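
As an illustrative sketch (the master URL, host name, and resource sizes below are placeholders, not values from the article), choosing the cluster manager is just a matter of the master URL given to SparkConf; the driver then asks that manager for executors with the requested resources:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ComponentsDemo")
  .setMaster("spark://master-host:7077")  // standalone manager; "yarn" for YARN
  .set("spark.executor.memory", "2g")     // resources requested per executor
  .set("spark.executor.cores", "2")

// Creating the SparkContext connects to the cluster manager, acquires
// executors on worker nodes, and ships the application code to them;
// tasks are then scheduled onto those executors.
val sc = new SparkContext(conf)
```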

There are several useful things to note about this architecture:

  • Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This has the benefit of isolating applications from each other, on both the scheduling side (each driver schedules its own tasks) and the executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system (see the sketch after this list).
  • Spark is agnostic to the underlying cluster manager. As long as it can acquire executor processes, and these communicate with each other, it is relatively easy to run it even on a cluster manager that also supports other applications (e.g. Mesos/YARN).
  • The driver program must listen for and accept incoming connections from its executors throughout its lifetime (e.g., see spark.driver.port in the network config section). As such, the driver program must be network addressable from the worker nodes.
  • Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, **preferably on the same local area network**. If you'd like to send requests to the cluster remotely, it's better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes.
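
As mentioned in the first point above, two applications (two SparkContext instances) cannot share RDDs directly; data has to go through an external storage system. A minimal sketch (the HDFS path and host are placeholders, and the two "applications" are written sequentially in one script only for brevity):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Application A writes its result to external storage, then exits.
val scA = new SparkContext(new SparkConf().setAppName("AppA").setMaster("local[2]"))
scA.parallelize(Seq(("a", 1), ("b", 2)))
   .saveAsObjectFile("hdfs://namenode:8020/tmp/shared-pairs")
scA.stop()

// Application B, a separate SparkContext, can only see A's data
// through that external store.
val scB = new SparkContext(new SparkConf().setAppName("AppB").setMaster("local[2]"))
val shared = scB.objectFile[(String, Int)]("hdfs://namenode:8020/tmp/shared-pairs")
println(shared.count())
scB.stop()
```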