The litellm.core namespace provides direct access to LLM providers without configuration management.
(require '[litellm.core :as core])
Best for direct calls where you pass provider credentials explicitly with each request, rather than relying on configuration management.
The primary function for making LLM requests.
(core/completion provider model request-map)
(core/completion provider model request-map config)
Parameters:
provider - Provider keyword (:openai, :anthropic, :gemini, :mistral, :ollama, :openrouter)
model - Model name string (e.g., "gpt-4", "claude-3-opus-20240229")
request-map - Request parameters including :messages, :temperature, etc.
config - Optional config map with :api-key, :api-base, :timeout
Returns:
A response map containing :choices, :usage, etc. When :stream is true, a core.async channel is returned instead.
Examples:
;; Basic completion
(def response
  (core/completion :openai "gpt-4o-mini"
    {:messages [{:role :user :content "Hello!"}]}
    {:api-key "sk-..."}))
;; With temperature
(def response
  (core/completion :anthropic "claude-3-sonnet-20240229"
    {:messages [{:role :user :content "Write a poem"}]
     :temperature 0.9}
    {:api-key "sk-ant-..."}))
;; Streaming
(def ch
  (core/completion :openai "gpt-4"
    {:messages [{:role :user :content "Tell me a story"}]
     :stream true}
    {:api-key "sk-..."}))
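With :stream true the call returns a core.async channel rather than a response map. A minimal consumption sketch, assuming each value taken from the channel is a partial response chunk that core/extract-content can read, and that the channel closes when the stream ends (the exact chunk shape depends on the provider):

```clojure
(require '[clojure.core.async :as async])

;; Drain the streaming channel until it closes.
;; Assumes each chunk is a partial response map; adjust the
;; extraction to the actual chunk shape your provider returns.
(loop []
  (when-let [chunk (async/<!! ch)]
    (print (core/extract-content chunk))
    (flush)
    (recur)))
```

Use the non-blocking `async/<!` inside a `go` block instead of `async/<!!` when consuming the stream from asynchronous code.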
Simplified chat interface for single messages.
(core/chat provider model message & {:keys [system-prompt] :as config})
Examples:
;; Simple question
(core/chat :openai "gpt-4o-mini" "What is 2+2?"
  :api-key "sk-...")
;; With system prompt
(core/chat :anthropic "claude-3-sonnet-20240229"
  "Explain quantum physics"
  :system-prompt "You are a physics professor"
  :api-key "sk-ant-...")
Convenience functions for each provider:
;; OpenAI
(core/openai-completion "gpt-4" {...} :api-key "sk-...")
;; Anthropic
(core/anthropic-completion "claude-3-opus-20240229" {...} :api-key "sk-ant-...")
;; Gemini
(core/gemini-completion "gemini-pro" {...} :api-key "...")
;; Mistral
(core/mistral-completion "mistral-medium" {...} :api-key "...")
;; Ollama
(core/ollama-completion "llama3" {...} :api-base "http://localhost:11434")
;; OpenRouter
(core/openrouter-completion "openai/gpt-4" {...} :api-key "sk-or-...")
Extract text content from response.
(core/extract-content response)
;; => "The content of the response..."
Extract the full message object.
(core/extract-message response)
;; => {:role :assistant :content "..." :tool-calls [...]}
Get token usage information.
(core/extract-usage response)
;; => {:prompt-tokens 10 :completion-tokens 20 :total-tokens 30}
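Together the extraction helpers cover typical post-processing. A sketch that pulls both the text and the token count out of one response:

```clojure
;; Make one request, then read out the reply text and total tokens.
(let [response (core/completion :openai "gpt-4o-mini"
                 {:messages [{:role :user :content "Hello!"}]}
                 {:api-key "sk-..."})]
  {:text   (core/extract-content response)
   :tokens (:total-tokens (core/extract-usage response))})
```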
List all available providers.
(core/list-providers)
;; => [:openai :anthropic :gemini :mistral :ollama :openrouter]
Check if a provider is registered.
(core/provider-available? :openai)
;; => true
Get provider capabilities and status.
(core/provider-info :openai)
;; => {:name :openai
;;     :streaming true
;;     :function-calling true
;;     :vision false}
Check streaming support.
(core/supports-streaming? :anthropic)
;; => true
Check function calling support.
(core/supports-function-calling? :gemini)
;; => false
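The capability predicates can gate request options at runtime. A small sketch (the helper name request-for is illustrative, not part of the library) that only enables streaming when the provider supports it:

```clojure
(defn request-for
  "Build a request map, asking for a stream only when the
  provider can deliver one."
  [provider]
  (cond-> {:messages [{:role :user :content "Hi"}]}
    (core/supports-streaming? provider) (assoc :stream true)))
```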
Validate a request before sending.
(core/validate-request :openai {:messages [...]})
;; Throws exception if invalid
Estimate token count for text.
(core/estimate-tokens "Hello, world!")
;; => 4
Estimate tokens for a full request.
(core/estimate-request-tokens {:messages [{:role :user :content "Hi"}]})
;; => {:prompt-tokens 5 :estimated-completion-tokens 100}
Calculate estimated cost.
(core/calculate-cost :openai "gpt-4" 1000 500)
;; => {:prompt-cost 0.03 :completion-cost 0.06 :total-cost 0.09}
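The estimation helpers compose into a pre-flight budget check. A sketch, assuming estimate-request-tokens and calculate-cost return the shapes shown above (within-budget? is an illustrative helper, not part of the library):

```clojure
(defn within-budget?
  "Estimate tokens, price them, and compare against a cost
  ceiling before making the actual call."
  [provider model request max-cost]
  (let [{:keys [prompt-tokens estimated-completion-tokens]}
        (core/estimate-request-tokens request)
        {:keys [total-cost]}
        (core/calculate-cost provider model
                             prompt-tokens estimated-completion-tokens)]
    (<= total-cost max-cost)))
```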
(def conversation
  [{:role :system :content "You are a helpful assistant"}
   {:role :user :content "What's the capital of France?"}
   {:role :assistant :content "Paris"}
   {:role :user :content "What's its population?"}])
(def response
  (core/completion :openai "gpt-4"
    {:messages conversation}
    {:api-key "sk-..."}))
(def response
  (core/completion :openai "gpt-4"
    {:messages [{:role :user :content "What's the weather in Boston?"}]
     :tools [{:type "function"
              :function {:name "get_weather"
                         :description "Get current weather"
                         :parameters {:type "object"
                                      :properties {:location {:type "string"}}
                                      :required ["location"]}}}]}
    {:api-key "sk-..."}))
;; Check for tool calls
(when-let [tool-calls (-> response core/extract-message :tool-calls)]
  (doseq [call tool-calls]
    (println "Function:" (get-in call [:function :name]))
    (println "Args:" (get-in call [:function :arguments]))))
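After running a tool locally, its result goes back to the model as a :tool message so it can finish the turn. A sketch using the standard OpenAI-style message shape, reusing the response from the request above (answer-with-tool-result is an illustrative helper; the exact keys your provider expects may differ):

```clojure
(defn answer-with-tool-result
  "Send the tool's local result back, keyed to the call id,
  so the model can produce a final answer."
  [tool-call result]
  (core/completion :openai "gpt-4"
    {:messages [{:role :user :content "What's the weather in Boston?"}
                ;; Echo the assistant's tool-call message into history
                (core/extract-message response)
                ;; Attach the locally computed result
                {:role :tool
                 :tool-call-id (:id tool-call)
                 :content (pr-str result)}]}
    {:api-key "sk-..."}))
```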
(require '[litellm.errors :as errors])
(try
  (core/completion :openai "gpt-4"
    {:messages [{:role :user :content "Hello"}]}
    {:api-key "invalid"})
  (catch Exception e
    (if (errors/litellm-error? e)
      (do
        (println "Category:" (errors/get-error-category e))
        (println "Summary:" (errors/error-summary e)))
      (throw e))))