monkey.ci.dispatcher.core

The container dispatcher is responsible for picking pending container jobs or builds from a table and assigning them to one of the available runners (either build or job). Since builds and container jobs both require containers, and as such compete for the same resources, they are merged into a single task queue by this dispatcher. These pending tasks (as we could call them) are saved in a table, which the dispatcher checks whenever a `job/pending`, `build/pending`, `job/end` or `build/end` event is received. Depending on its strategy, it picks the next task (or tasks) to start and dispatches them to the registered runners, according to the available resources and requirements.

For example, some runners only support certain architectures, while others only have limited resources available. Some runners can take priority over others, typically to reduce costs (e.g. we would prefer the k8s runner over an oci runner).
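
A runner entry could look like the maps below. The `:archs`, `:cpus`, `:memory` and `:count` keys are inferred from the destructuring used by the matchers in this namespace; the `:id` values and the use of ordering to express priority are illustrative assumptions.

```clojure
;; Hypothetical runner configuration; ordering expresses priority (k8s first).
(def runners
  [{:id :k8s :archs [:amd :arm] :cpus 8 :memory 32} ; preferred, cheaper
   {:id :oci :archs [:amd] :count 5}])              ; limited instance pool
```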

A table is required in order to pick from multiple waiting tasks, as opposed to an event queue, which only allows processing them in sequence. Storing this in memory is not an option, since multiple replicas also mean multiple dispatchers, so the table is the way for them to sync up. It also keeps the data across restarts.

Both builds and container jobs require infra resources to run, so it's logical that there is some sort of coordination. Various approaches are possible, but for now we have chosen the simplest one: a single process is responsible for allocating the resources. It polls from the job and build topics as long as resources are available. If no more resources are available, it stops polling; normally resources become available again after a while, at which point polling resumes. Any containers that are unassignable because their requirements can never be met (e.g. too many CPUs) are immediately marked as failed.
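
A minimal sketch of these two decisions, assuming cpu/memory-style runners; `:max-cpus` is a hypothetical field for a runner's total (rather than currently free) capacity and is not part of this namespace.

```clojure
(defn keep-polling?
  "True while at least one runner still has free resources."
  [runners]
  (boolean (some #(and (pos? (:cpus % 0)) (pos? (:memory % 0))) runners)))

(defn ever-assignable?
  "True if the task would fit on some runner at full, unused capacity.
  Tasks failing this check can be failed immediately."
  [task runners]
  (boolean (some #(<= (get-in task [:resources :cpus] 0) (:max-cpus % 0))
                 runners)))
```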

The available resources can be requested by the dispatcher, but for efficiency's sake it is better to keep this information locally and update it from received events (`build/start`, `job/start`, `build/end` and `job/end`).
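
A sketch of that bookkeeping, assuming the runner list is a vector of maps with an `:id` and that start/end events carry the runner id and the task. Only the event types and the `use-runner-resources`/`release-runner-resources` fns below come from this namespace; the event shape is an assumption.

```clojure
(defn handle-event [runners {:keys [type runner-id task]}]
  (let [update-runner (fn [f]
                        (mapv #(if (= runner-id (:id %)) (f % task) %)
                              runners))]
    (case type
      (:build/start :job/start) (update-runner use-runner-resources)
      (:build/end :job/end)     (update-runner release-runner-resources)
      runners))) ; unrelated events leave the cached view untouched
```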


arch-filter

(arch-filter runner)

assign-runner

(assign-runner task runners)

Given a task (either a build or a container job), determines the runner to use. The task contains cpu and memory requirements, and optionally an architecture (amd or arm). The runners provide available resources and supported architectures.

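A usage sketch: the task shape follows the matcher destructuring (`:arch`, `:resources` with `:cpus` and `:memory`); the runner maps and the exact return value (here assumed to be the selected runner) are illustrative.

```clojure
(assign-runner {:arch :arm :resources {:cpus 2 :memory 4}}
               [{:id :k8s :archs [:amd :arm] :cpus 8 :memory 32}
                {:id :oci :archs [:amd] :count 5}])
;; => presumably the :k8s runner, since :oci does not support :arm
```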

consume-k8s

(consume-k8s r {:keys [cpus memory] :as res})

consume-oci

(consume-oci r _)
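
From the signatures, `consume-k8s` plausibly subtracts the task's cpu and memory from the runner, while `consume-oci` ignores the resources and takes one instance from a fixed pool. The return values below are assumptions, not confirmed by the source.

```clojure
(consume-k8s {:cpus 8 :memory 32} {:cpus 2 :memory 4})
;; => {:cpus 6 :memory 28}  (assumed)

(consume-oci {:count 5} {:cpus 2 :memory 4})
;; => {:count 4}  (assumed: one instance taken from the pool)
```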

consumers


exclusivity-filter

(exclusivity-filter runners runner)

Creates a filter fn that removes all tasks that can be run by other runners, if at least one task is available that can only be run by this runner.

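A sketch of the intended effect, assuming the returned fn takes the full seq of pending tasks and returns the subset this runner should consider:

```clojure
(let [k8s {:id :k8s :archs [:amd :arm] :cpus 8 :memory 32}
      oci {:id :oci :archs [:amd] :count 5}
      f   (exclusivity-filter [k8s oci] k8s)]
  (f [{:arch :arm :resources {:cpus 2 :memory 4}}     ; only k8s can run this
      {:arch :amd :resources {:cpus 1 :memory 2}}]))  ; oci could take this one
;; => presumably only the :arm task, leaving the :amd task to the oci runner
```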

get-next-queued-task

(get-next-queued-task qt runners runner-id)

Finds the next queued task to schedule for the runner. Tasks are filtered like so:

  1. Drop all tasks that have mismatching architectures.
  2. If there are tasks that can only be run on this runner, keep only those.
  3. Drop tasks that require too many resources.
  4. Sort oldest first.
  5. Take the first in the list.

Rule (2) is required to avoid large tasks constantly getting pushed back because smaller ones keep getting selected. If a task that requires many resources can only be run on this specific runner, we should wait until that runner has sufficient resources available.

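A usage sketch illustrating rule (2); apart from the fields implied by the matchers, the queued-task shape (including the `:start-time` used for the oldest-first sort) and the runner `:id` lookup are assumptions.

```clojure
(get-next-queued-task
 [{:arch :arm :resources {:cpus 4 :memory 16} :start-time 100}
  {:arch :amd :resources {:cpus 1 :memory 2}  :start-time 200}]
 [{:id :k8s :archs [:amd :arm] :cpus 8 :memory 32}
  {:id :oci :archs [:amd] :count 5}]
 :k8s)
;; => presumably the :arm task: only :k8s can run it, so rule (2) drops the
;;    smaller :amd task even though both would fit
```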

has-capacity?

(has-capacity? [_ {:keys [count]}])

matchers


matches-arch?

(matches-arch? [{:keys [arch]} {:keys [archs] :as r}])

matches-cpus?

(matches-cpus? [{{:keys [cpus]} :resources} {avail :cpus}])

matches-k8s?

Checks if the given k8s runner can run the given task


matches-mem?

(matches-mem? [{{:keys [memory]} :resources} {avail :memory}])
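
As their signatures show, the matchers each take a single `[task runner]` pair:

```clojure
(matches-arch? [{:arch :arm} {:archs [:amd :arm]}])      ;; => truthy
(matches-cpus? [{:resources {:cpus 2}} {:cpus 8}])       ;; => truthy
(matches-mem?  [{:resources {:memory 64}} {:memory 32}]) ;; => falsy: too big
```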

matches-oci?

Checks if the given oci runner can run the task


oldest-first


release-k8s

(release-k8s r {:keys [cpus memory] :as res})

release-oci

(release-oci r _)

release-runner-resources

(release-runner-resources r task)

Updates runner by increasing available resources


releasers


resource-filter

(resource-filter runner)

use-runner-resources

(use-runner-resources r task)

Updates runner by decreasing available resources

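Taken together, consuming and then releasing should round-trip a runner's resources; the intermediate value is an assumption based on the two docstrings.

```clojure
(let [runner {:id :k8s :archs [:amd :arm] :cpus 8 :memory 32}
      task   {:resources {:cpus 2 :memory 4}}]
  (-> runner
      (use-runner-resources task)        ; presumably {:cpus 6 :memory 28 ...}
      (release-runner-resources task)))  ; back to the original runner map
```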
