Functions for working with and creating Spark contexts.
(add-file! spark-context path)
(add-file! spark-context path recursive?)

Add a file to be downloaded with this Spark job on every node.
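A minimal sketch of distributing a data file to every node. The examples on this page assume this namespace is required under the alias `ctx` and that `sc` names a running Spark context; both conventions, and all file paths, are illustrative rather than part of the API.

```clojure
;; Assumes this namespace is aliased as `ctx` and `sc` is a running context.
(ctx/add-file! sc "/opt/data/lookup-table.csv")

;; With the two-argument arity a directory can be added; `recursive?`
;; controls whether its contents are included recursively.
(ctx/add-file! sc "/opt/data/reference-tables" true)
```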
(add-jar! spark-context path)

Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
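In the same vein, a sketch of shipping a JAR to the executors; the path is illustrative. In Spark generally, such paths may also be HDFS or HTTP URIs rather than local files.

```clojure
;; Make a library available to tasks running on the executors.
(ctx/add-jar! sc "/opt/jars/custom-udfs.jar")
```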
(cancel-all-jobs! spark-context)

Cancel all jobs that have been scheduled or are running.
(cancel-job-group! spark-context group-id)

Cancel active jobs for the specified group. See `set-job-group!` for more information.
(clear-job-group! spark-context)

Clear the current thread's job group ID and its description.
(config spark-context)

Return the Spark configuration used for the given context.
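A quick sketch of inspecting the configuration. The docstring only says "configuration object"; the interop below assumes it is Spark's `SparkConf`.

```clojure
;; Read a setting back from the context's configuration.
;; Assumes the returned object is an org.apache.spark.SparkConf.
(let [conf (ctx/config sc)]
  (.get conf "spark.app.name"))
```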
(get-local-property spark-context k)

Get a local property set for this thread, or nil if it is not set.
(info spark-context)

Build a map of information about the Spark context.
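A small sketch; the exact keys of the returned map are not documented on this page, so the example only inspects them rather than assuming any.

```clojure
;; See what the context reports about itself.
;; The specific keys depend on the library version, so none are assumed here.
(keys (ctx/info sc))
```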
(persistent-rdds spark-context)

Return a Java map of JavaRDDs that have marked themselves as persistent via a `cache!` call.
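A sketch of checking which RDDs are cached. Since this returns a Java map, it is converted before use; the `cache!` call referenced above lives elsewhere in the library and is not shown here.

```clojure
;; List the IDs of RDDs currently marked persistent.
;; `persistent-rdds` returns a java.util.Map, so convert it for Clojure use.
(-> (ctx/persistent-rdds sc)
    (into {})
    keys)
```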
(set-checkpoint-dir! spark-context path)

Set the directory under which RDDs are going to be checkpointed.
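A minimal sketch; both paths are illustrative. When running on a cluster, Spark expects the checkpoint directory to live on a fault-tolerant filesystem such as HDFS.

```clojure
;; Local development: a temp directory is fine.
(ctx/set-checkpoint-dir! sc "/tmp/spark-checkpoints")

;; On a cluster, point at a shared, fault-tolerant filesystem instead, e.g.
;; (ctx/set-checkpoint-dir! sc "hdfs://namenode:8020/checkpoints")
```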
(set-job-description! spark-context description)

Set a human-readable description of the current job.
(set-job-group! spark-context group-id description)
(set-job-group! spark-context group-id description interrupt?)

Assign a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.

The application can later use `cancel-job-group!` to cancel all running jobs in this group. If `interrupt?` is set to true for the job group, then job cancellation will result in the job's executor threads being interrupted.
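A sketch tying the grouping functions together: tag the jobs submitted by one thread, then cancel the whole group from elsewhere. The group ID, descriptions, and the work itself are all illustrative.

```clojure
;; Tag everything this thread submits under one group, allowing interruption.
(ctx/set-job-group! sc "nightly-rollup" "Aggregate yesterday's events" true)

;; ... run Spark actions on this thread ...

;; Optionally refine the description shown in the web UI for later jobs.
(ctx/set-job-description! sc "Rollup: phase 2")

;; From a monitoring thread, abort the whole group if it runs too long:
(ctx/cancel-job-group! sc "nightly-rollup")

;; Afterwards, the submitting thread can detach from the group:
(ctx/clear-job-group! sc)
```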
(set-local-property! spark-context k v)

Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.
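A sketch using the fair-scheduler pool property mentioned above; the pool name is illustrative, and `get-local-property` (documented earlier on this page) reads the value back.

```clojure
;; Route this thread's jobs to a specific fair-scheduler pool.
(ctx/set-local-property! sc "spark.scheduler.pool" "low-priority")

;; Read it back; returns nil on threads where it was never set.
(ctx/get-local-property sc "spark.scheduler.pool")
;; => "low-priority"
```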
(set-log-level! spark-context level)

Control the Spark application's logging level.
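A one-line sketch; Spark's valid levels are ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, and WARN.

```clojure
;; Quiet the very chatty INFO-level output during local development.
(ctx/set-log-level! sc "WARN")
```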
(spark-context conf)
(spark-context master app-name)

Create a new Spark context, taking its settings either from the given configuration object or from a master URL and application name.
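A sketch of both arities, assuming the single-argument form accepts a `SparkConf` (the docstring says only "configuration object"); the master URL and app name are illustrative.

```clojure
;; Arity 2: master URL plus application name.
(def sc (ctx/spark-context "local[2]" "context-demo"))

;; Arity 1: from a configuration object, assumed here to be a SparkConf.
(import 'org.apache.spark.SparkConf)
(def sc2 (ctx/spark-context (-> (SparkConf.)
                                (.setMaster "local[2]")
                                (.setAppName "context-demo"))))
```

Note that Spark allows only one active context per JVM, so in practice you would create one or the other, not both.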
(stop! spark-context)

Shut down the Spark context.
(with-context binding-vec & body)

Evaluate `body` within a new Spark context by constructing one from the given expression. The context is stopped after evaluation is complete.
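A sketch assuming `binding-vec` pairs a symbol with a context-producing expression, in the style of `with-open`; the work inside the body is illustrative.

```clojure
;; The context bound to `sc` is stopped automatically when the body exits,
;; even if an exception is thrown.
(ctx/with-context [sc (ctx/spark-context "local[2]" "scoped-demo")]
  (ctx/set-log-level! sc "WARN")
  ;; ... run Spark jobs with sc ...
  (ctx/info sc))
```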