`(provider-openai-compatible {:keys [name endpoint model] :or {name "openai-compatible"} :as options})`
Builds a provider that talks to an OpenAI-compatible chat completions endpoint.
`options` accepts:
- required `:endpoint` - chat completions URL
- required `:model` - default model name
- optional `:name` - provider display name, defaults to
`"openai-compatible"`
- optional `:api-key` - raw API key string
- optional `:api-key-env` - environment variable name containing the API key
- optional `:api-key-fn` - zero-arg function that returns the API key string
- optional `:timeout-ms` - request timeout in milliseconds
- optional `:max-retries` - retry count for retryable HTTP responses such as
`429`, `502`, `503`, and `504`; defaults to `3`
- optional `:retry-delay-ms` - base delay for retryable responses when the
server does not send `Retry-After`; defaults to `20000`
- optional `:pricing` - either a direct pricing map or a map of model IDs to
pricing maps. Pricing maps use `:input-per-1m-usd`,
`:output-per-1m-usd`, and optional `:cached-input-per-1m-usd`
- optional `:headers` - extra HTTP headers map
- optional `:default-body` - extra request body fields merged into every call
- optional `:http-request-modifier` - function, var, or symbol called with
`{:provider ... :request ... :endpoint ... :timeout-ms ... :headers ...
:payload ...}` and expected to return the updated context map before the
HTTP request is created
- optional `:http-response-modifier` - function, var, or symbol called with
`{:provider ... :request ... :endpoint ... :timeout-ms ... :payload ...
:status ... :headers ... :body ... :response ...}` and expected to return
the updated context map after the HTTP response arrives and before it is
normalized
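Putting the options above together, a provider built from this constructor might look like the following sketch. The endpoint URL, model name, pricing figures, header, and the modifier function are all illustrative values, not anything the library ships:

```clojure
;; Illustrative only: endpoint, model, pricing, and header values are made up.
(def provider
  (provider-openai-compatible
   {:name           "my-gateway"
    :endpoint       "https://example.com/v1/chat/completions"
    :model          "gpt-4o-mini"
    :api-key-env    "MY_GATEWAY_API_KEY"
    :timeout-ms     60000
    :max-retries    3
    :retry-delay-ms 20000
    :pricing        {"gpt-4o-mini" {:input-per-1m-usd  0.15
                                    :output-per-1m-usd 0.60}}
    :headers        {"X-Org" "acme"}
    ;; Hypothetical modifier: receives the context map described above,
    ;; tags the outgoing payload, and returns the updated context map
    ;; before the HTTP request is created.
    :http-request-modifier
    (fn [{:keys [payload] :as ctx}]
      (assoc ctx :payload (assoc payload :user "session-123")))}))
```

Because `:pricing` here is keyed by model ID, the same map can also be passed directly as a single pricing map when the provider only ever serves one model.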
Request maps sent to the returned provider may include `:prompt` or
`:messages`, plus chat-completions fields like `:temperature`, `:top-p`,
`:max-tokens`, `:presence-penalty`, `:frequency-penalty`, `:stop`, `:n`,
`:tools`, `:tool-choice`, `:response-format`, `:stream`, `:metadata`,
optional `:http-request-modifier`, and optional `:http-response-modifier`.
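A request to the returned provider is just a map carrying those fields; a minimal sketch (all field values are illustrative):

```clojure
;; Illustrative request map using the chat-completions fields listed above.
(def request
  {:messages    [{:role "system" :content "You are terse."}
                 {:role "user"   :content "Summarize this repo."}]
   :temperature 0.2
   :max-tokens  512
   :stop        ["\n\n"]
   ;; Per-request modifier, called with the response context map and
   ;; expected to return it (possibly updated) before normalization.
   :http-response-modifier
   (fn [{:keys [status] :as ctx}]
     (println "HTTP status:" status)
     ctx)})
```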
When `:model-tools` is present in the request, those model-facing tools are
exposed as OpenAI/GitHub function tools and `tool_calls` responses are
normalized back into `:tool-requests` for the session loop. Wire tool names
are normalized to the Bedrock-safe character set `[a-zA-Z0-9_-]+` and kept
within 64 characters, while still mapping back to the original tool names
internally. Malformed tool-call arguments are surfaced back through the
session loop as tool errors so the model can retry.
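The wire-name normalization described above can be sketched as follows. This is a hypothetical helper illustrating the documented rule, not the library's actual function:

```clojure
(require '[clojure.string :as str])

;; Hypothetical sketch of the documented normalization: replace any
;; character outside [a-zA-Z0-9_-] with an underscore, then truncate
;; the result to at most 64 characters.
(defn normalize-wire-name [tool-name]
  (let [cleaned (str/replace tool-name #"[^a-zA-Z0-9_-]" "_")]
    (subs cleaned 0 (min 64 (count cleaned)))))

(normalize-wire-name "fs/read-file!")
;; => "fs_read-file_"
```

The provider keeps an internal mapping from these wire names back to the original tool names, so callers never see the sanitized form.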
Successful responses include `:usage` when the provider returns token usage
data, plus local-only `:token-counts` and `:cost` when usage data is
available; `:cost` additionally requires pricing information. This local
bookkeeping is never sent back to the model in later turns.
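Assuming the `-per-1m-usd` pricing keys mean US dollars per million tokens, the local `:cost` bookkeeping presumably reduces to arithmetic like this sketch. The formula and the helper name are inferred from the key names, not confirmed by the library:

```clojure
;; Assumed cost formula, inferred from the :*-per-1m-usd key names:
;; token counts are priced per million tokens.
(defn estimate-cost-usd
  [{:keys [input-tokens output-tokens]}
   {:keys [input-per-1m-usd output-per-1m-usd]}]
  (+ (* input-tokens  (/ input-per-1m-usd 1e6))
     (* output-tokens (/ output-per-1m-usd 1e6))))

(estimate-cost-usd {:input-tokens 1000 :output-tokens 500}
                   {:input-per-1m-usd 0.15 :output-per-1m-usd 0.60})
;; => 4.5E-4  (i.e. $0.00045)
```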