The container dispatcher is responsible for picking pending container jobs or builds from a table and assigning them to one of the available runners (either build or job). Since builds and container jobs both require containers, and as such compete for the same resources, they are merged into a single stream of work by this dispatcher. Pending tasks (as we could call them) are saved in a table. The dispatcher checks this table whenever a `job/pending`, `build/pending`, `job/end` or `build/end` event is received. Depending on its strategy, it picks the next task (or tasks) to start and dispatches them to the registered runners, according to the available resources and requirements.
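A minimal sketch of that event-driven loop, assuming hypothetical helpers (`load-pending`, `strategy`, `free-resources`, `dispatch!`) rather than the library's actual API:

```clojure
(ns example.dispatcher)

(def dispatch-events
  "Event types that trigger a check of the pending-task table."
  #{:job/pending :build/pending :job/end :build/end})

(defn handle-event
  "On a relevant event, re-read the pending-task table and dispatch
   whatever tasks the strategy selects given the free resources."
  [{:keys [load-pending strategy free-resources dispatch!]} evt]
  (when (contains? dispatch-events (:type evt))
    (doseq [task (strategy (load-pending) (free-resources))]
      (dispatch! task))))
```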
For example, some runners only support certain architectures, while others have limited resources available. Runners can also have priority over one another, typically to reduce costs (e.g. we would prefer the k8s runner over an oci runner).
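Runner selection could then look something like the sketch below; the map keys (`:archs`, `:avail-cpus`, `:avail-memory`, `:priority`) are illustrative assumptions, not the actual data model:

```clojure
(defn eligible?
  "Does the runner support the task's architecture and have enough
   capacity left for its resource requirements?"
  [task runner]
  (and (contains? (:archs runner) (:arch task))
       (>= (:avail-cpus runner) (:cpus task))
       (>= (:avail-memory runner) (:memory task))))

(defn select-runner
  "Pick the highest-priority eligible runner (lowest :priority value,
   e.g. k8s before oci)."
  [task runners]
  (->> runners
       (filter (partial eligible? task))
       (sort-by :priority)
       first))

(select-runner {:arch :amd64 :cpus 2 :memory 4}
               [{:id :oci :archs #{:amd64} :avail-cpus 4 :avail-memory 16 :priority 2}
                {:id :k8s :archs #{:amd64 :arm64} :avail-cpus 8 :avail-memory 32 :priority 1}])
;; => the k8s runner map: both are eligible, but k8s has priority
```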
The table is required to be able to pick from multiple waiting tasks, as opposed to an event queue, which only allows processing them in sequence. Storing this state in memory is not an option, since multiple replicas also mean multiple dispatchers; the table is how they stay in sync. It also preserves data across restarts.
Both builds and container jobs require infra resources to run, so some form of coordination is needed. Various approaches are possible, but for now we have chosen the simplest one: a single process is responsible for allocating the resources. It polls the job and build topics as long as resources are available; when none are left, it stops polling. Once resources free up again, polling resumes. Any containers that are unassignable because their requirements can never be met (e.g. more CPUs than any runner has) are immediately marked as failed.
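One iteration of that allocator could be sketched as follows, again with hypothetical names (`resources-free?`, `poll!`, `assign!`, `fail!`) standing in for the real plumbing:

```clojure
(defn impossible?
  "True when no runner could ever satisfy the task, even when fully
   idle (compares against total capacity, not current availability)."
  [task runners]
  (not-any? (fn [{:keys [total-cpus total-memory]}]
              (and (<= (:cpus task) total-cpus)
                   (<= (:memory task) total-memory)))
            runners))

(defn allocator-step
  "One step of the single allocator process: poll the job/build topics
   only while resources are free; tasks that can never be placed are
   failed immediately instead of waiting forever."
  [{:keys [resources-free? poll! assign! fail! runners]}]
  (when (resources-free?)
    (when-some [task (poll!)]
      (if (impossible? task runners)
        (fail! task)
        (assign! task)))))
```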
The dispatcher could request the available resources on demand, but for efficiency's sake it is better to keep this information locally and update it from received events (`build/start`, `job/start`, `build/end` and `job/end`).
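That local bookkeeping might look like the sketch below; the event shape (`:type`, `:task` with `:cpus`/`:memory`) is an assumption for illustration:

```clojure
(def in-use
  "Locally tracked resource usage, adjusted by events instead of
   being re-requested from the runners each time."
  (atom {:cpus 0 :memory 0}))

(defn update-resources!
  "Apply a start/end event to the local resource bookkeeping:
   start events add the task's resources, end events release them."
  [{:keys [type task]}]
  (when-let [op (case type
                  (:build/start :job/start) +
                  (:build/end :job/end)     -
                  nil)]
    (swap! in-use (fn [m]
                    (-> m
                        (update :cpus op (:cpus task))
                        (update :memory op (:memory task)))))))
```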
HTTP endpoints for the dispatcher. Mainly for monitoring.
Main class for the dispatcher, used when running as a microservice
Runtime components for executing the dispatcher
Event state management functions