Blocks until job-id has had all of its tasks completed or the job is killed. Returns true if the job completed successfully, false if the job was killed.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
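As a sketch of the blocking form, assuming a placeholder peer-config map and a job-id bound from an earlier submit-job call:

```clojure
;; Placeholder peer configuration; the values depend on your cluster.
(def peer-config
  {:zookeeper/address "127.0.0.1:2188"
   :onyx/tenancy-id "my-tenancy"})

;; Blocks until the job finishes or is killed.
(if (onyx.api/await-job-completion peer-config job-id)
  (println "Job completed successfully")
  (println "Job was killed"))
```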
(build-resume-point {:keys [workflow catalog windows] :as new-job} coordinates)
Inputs: [{:keys [workflow catalog windows], :as new-job} :- os/Job coordinates :- os/ResumeCoordinate] Returns: os/ResumePoint
Builds a resume point for use in the :resume-point key of job data. This resume point will assume a direct mapping between the job resuming and the job it is resuming from. All tasks and windows should have the same name. Note that it is safe to manipulate this data to allow resumption from jobs that are not identical, as long as you correctly map between task names, windows, etc.
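A minimal sketch of the intended flow, assuming peer-config, tenancy-id, old-job-id, and new-job are already bound, and that the new job's tasks and windows mirror the old job's:

```clojure
;; Fetch the latest checkpoint coordinates of the previous job,
;; then build a resume point for the structurally identical new job.
(let [coordinates  (onyx.api/job-snapshot-coordinates peer-config tenancy-id old-job-id)
      resume-point (onyx.api/build-resume-point new-job coordinates)]
  ;; The resume point rides along in the job data under :resume-point.
  (onyx.api/submit-job peer-config (assoc new-job :resume-point resume-point)))
```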
Deletes all checkpoints for a given job.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
Takes a peer-config, zookeeper-address string, or a curator connection, and deletes all job chunks from ZooKeeper. This should be performed when the job is no longer running and its immutable definition is no longer required. Note that this will also clear the latest checkpoint coordinates, so it should not be called if a resume point will later be built that resumes state from this job.
Takes a zookeeper-address string or a curator connection, and deletes all data for a given tenancy from ZooKeeper, including job data and cluster logs. Must not be performed while a cluster tenancy has live peers.
Invokes the garbage collector on Onyx. Compresses all local replicas for peers, decreasing memory usage. Also deletes old log entries from ZooKeeper, freeing up disk space.
Local replicas clear out all data about completed and killed jobs, as if they never existed.
Does not clear out old checkpoints. Use gc-checkpoints to clear those away.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
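For the peer-configuration form, the call is a one-liner; this sketch assumes a peer-config map appropriate for your cluster:

```clojure
;; Passing a peer configuration: a client is constructed for this call
;; and closed when the operation completes.
(onyx.api/gc peer-config)
```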
Invokes the garbage collector on Onyx's checkpoints for a given job. Deletes all checkpoints in non-current replica versions, and all except the active checkpoint in the current replica version.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
Resolves the history of job-id and tenancy-id that correspond to a given job-name, specified under the :job-name key of job-data. This information can then be used to play back a log and get the current job state, and to resolve to resume point coordinates via onyx.api/job-snapshot-coordinates. Connector can take either a ZooKeeper address as a string, or a ZooKeeper log component. History is ordered from earliest to latest.
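A hedged sketch using the ZooKeeper-address-string connector form; the address, job name, and the keys of each history entry are placeholders to check against your Onyx version:

```clojure
;; History runs from earliest to latest, so the last entry is the most
;; recent submission under this job name.
(let [history (onyx.api/job-ids-history "127.0.0.1:2188" "my-job-name")
      {:keys [job-id tenancy-id]} (last history)]
  ;; These coordinates feed onyx.api/job-snapshot-coordinates.
  [job-id tenancy-id])
```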
Reads the latest full snapshot coordinate stored for a given job-id and tenancy-id. This snapshot coordinate can be supplied to build-resume-point to build a full resume point.
Takes a zookeeper address, a peer-config, or an already started Onyx client.
Plays back the replica log and returns a map describing the current status of this job on the cluster.
Kills a currently executing job, given its job ID. All peers executing tasks for this job cleanly stop executing and volunteer to work on other jobs. Task lifecycle APIs for closing tasks are invoked. This job is never again scheduled for execution.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
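A sketch of the peer-configuration form, assuming peer-config and job-id are bound:

```clojure
;; Stops the job permanently; it will never be rescheduled.
(onyx.api/kill-job peer-config job-id)
;; Any call blocked in await-job-completion for this job returns false.
```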
(map-set-workflow->workflow workflow)
Converts a workflow in format: {:a #{:b :c} :b #{:d}} to format: [[:a :b] [:a :c] [:b :d]]
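The conversion from the docstring, written out as a call; note that because the values are sets, the relative order of the emitted edges may vary:

```clojure
;; The adjacency-map form expands to the edge-vector form that
;; a job's :workflow key expects.
(onyx.api/map-set-workflow->workflow
 {:a #{:b :c}
  :b #{:d}})
;; => [[:a :b] [:a :c] [:b :d]] (edge order may vary with set ordering)
```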
Reads the latest full snapshot coordinate stored for a given job-name and optional tenancy-id. This snapshot coordinate can be supplied to build-resume-point to build a full resume point.
(shutdown-env env)
Shuts down the given development environment.
(shutdown-peer peer)
Shuts down the virtual peer, which releases all of its resources and removes it from the execution of any tasks. This peer will no longer volunteer for tasks. Returns nil.
(shutdown-peer-group peer-group)
Shuts down the given peer-group.
(shutdown-peers peers)
Like shutdown-peer, but takes a sequence of peers as an argument, shutting each down in order. Returns nil.
(start-env env-config)
Starts a development environment using an in-memory implementation of ZooKeeper.
(start-peer-group peer-config)
Starts a set of shared resources that are used across all virtual peers on this machine.
(start-peers n peer-group)
Launches n virtual peers. Each peer may be stopped by passing it to the shutdown-peer function.
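The start/shutdown functions above pair up into a typical development lifecycle. This sketch assumes env-config and peer-config maps appropriate for your setup:

```clojure
;; Bring the development environment up, then the shared peer-group,
;; then the virtual peers.
(def env        (onyx.api/start-env env-config))
(def peer-group (onyx.api/start-peer-group peer-config))
(def peers      (onyx.api/start-peers 3 peer-group))

;; ... submit and run jobs ...

;; Tear down in the reverse order of startup.
(onyx.api/shutdown-peers peers)
(onyx.api/shutdown-peer-group peer-group)
(onyx.api/shutdown-env env)
```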
Takes a connector and a job map, sending the job to the cluster for eventual execution. Returns a map with :success? indicating whether the job was submitted to ZooKeeper. The job map may contain a :metadata key, among other keys described in the user guide. The :metadata key may optionally supply a :job-id value; repeated submissions of a job with the same :job-id are treated as idempotent. If a job has been submitted more than once, the original task IDs associated with the catalog are returned, and the job will not run again, even if it has been killed or completed. If two or more jobs with the same :job-id are submitted, each races to write a content-addressable hash value to ZooKeeper; all subsequently submitted jobs must match that hash value exactly, or the submission is rejected. This forces all jobs under the same :job-id to have exactly the same content.
Takes either a peer configuration and constructs a client once for the operation (closing it on completion) or an already started client.
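A hedged sketch of an idempotent submission; the workflow, scheduler, and UUID are placeholders, and the catalog is elided:

```clojure
(def job
  {:workflow [[:in :inc] [:inc :out]]
   :catalog []               ; catalog entries elided for brevity
   :lifecycles []
   :task-scheduler :onyx.task-scheduler/balanced
   ;; Fixing :job-id makes repeated submissions of this exact job map
   ;; idempotent; differing content under the same id is rejected.
   :metadata {:job-id #uuid "c8a1b2c3-0000-0000-0000-000000000000"}})

(let [{:keys [success? job-id]} (onyx.api/submit-job peer-config job)]
  (when success?
    (println "Submitted as" job-id)))
```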
Sends all events from the log to the provided core.async channel. Starts at the origin of the log and plays forward monotonically.
Returns a map with keys :replica and :env. :replica contains the origin replica. :env contains a Component with a :log connection to ZooKeeper, convenient for directly querying the znodes. :env can be shut down with the onyx.api/shutdown-env function.
Takes either a peer configuration and constructs a client once for the operation or an already started client.
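A minimal subscription sketch, assuming a peer-config map; here only the first log entry is read before shutting the environment down:

```clojure
(require '[clojure.core.async :as a])

(let [ch (a/chan 100)
      {:keys [replica env]} (onyx.api/subscribe-to-log peer-config ch)]
  ;; Entries arrive on ch from the origin forward; each can be applied
  ;; to the replica in turn to play the log ahead.
  (let [entry (a/<!! ch)]
    (println "First log entry:" entry))
  (onyx.api/shutdown-env env))
```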