# Configure an LLM
Databao supports both local and cloud LLMs. You can use it with Anthropic Claude and OpenAI models, local models running in Ollama, or any other model of your choice through an OpenAI-compatible server, such as LM Studio or llama.cpp.
## Anthropic Claude models (cloud)
### Get an API key

Get an API key from the [Claude Console](https://console.anthropic.com/).
### Configure the LLM

- Add the API key as an environment variable:

  ```
  %env ANTHROPIC_API_KEY=your_api_key
  ```

- Add the LLM config to your code as follows:

  ```python
  import databao
  from databao import LLMConfig

  ...

  llm_config = LLMConfig(name="claude-sonnet-4-20250514", temperature=0)
  agent = databao.new_agent(llm_config=llm_config)

  ...
  ```
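The `%env` line is an IPython magic and works in notebooks. In a plain Python script, you can set the key in-process instead; a minimal sketch (any mechanism that exports `ANTHROPIC_API_KEY` before the agent is created works):

```python
import os

# Set the key before creating the agent. Prefer loading it from a secret
# store or a .env file rather than hardcoding it in the script.
os.environ["ANTHROPIC_API_KEY"] = "your_api_key"
```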
## OpenAI models (cloud)
### Get an API key

Get an API key from the [OpenAI API keys page](https://platform.openai.com/api-keys).
### Configure the LLM

- Add the API key as an environment variable:

  ```
  %env OPENAI_API_KEY=your_api_key
  ```

- Add the LLM config to your code as follows:

  ```python
  import databao
  from databao import LLMConfig

  ...

  llm_config = LLMConfig(name="gpt-4o-mini", temperature=0)
  agent = databao.new_agent(llm_config=llm_config)

  ...
  ```
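If you keep keys in a `.env` file, load them before creating the agent. A sketch using the third-party `python-dotenv` package (an assumption; any mechanism that exports `OPENAI_API_KEY` works):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads OPENAI_API_KEY (and other variables) from ./.env
assert "OPENAI_API_KEY" in os.environ, "OPENAI_API_KEY is not set"
```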
## Ollama (local)
### Install Ollama and download the model

- Download and install [Ollama](https://ollama.com/) on your local machine.

- (Optional) Download the model:

  ```
  ollama pull qwen3:8b
  ```

  If you don't download the model at this step, Databao will download it automatically when you use it. We nevertheless recommend downloading the model before you start using Databao: some models are large, often over 10 GB, and downloading them can take some time.

- Open Ollama.
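To confirm that Ollama is up and the model is available before you create the agent, you can check from a terminal (the root endpoint on Ollama's default port answers with a short status message):

```
curl http://localhost:11434   # prints "Ollama is running"
ollama list                   # lists the models you have downloaded
```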
### Configure the LLM

- Add the LLM config to your code as follows. Note that Ollama model names take the `ollama:` prefix:

  ```python
  import databao
  from databao import LLMConfig

  ...

  llm_config = LLMConfig(name="ollama:qwen3:8b", temperature=0)
  agent = databao.new_agent(llm_config=llm_config)

  ...
  ```
## OpenAI-compatible server (local)
### Install an LLM server

- Download and install an LLM server of your choice, such as LM Studio or llama.cpp.
### Configure the LLM

- Create an LLM config file with the following parameters. If needed, replace the model name and modify other parameters.

  `qwen3-8b-oai.yaml`:

  ```yaml
  # Match the name used by the OAI server. This example is for LM Studio
  # (when running with `lms server start`):
  name: qwen/qwen3-8b
  # For Ollama running with `ollama serve`, use: `name: qwen3:8b`
  # N.B. If using Ollama, we recommend using Ollama directly, as in qwen3-8b-ollama.yaml.
  api_base_url: http://localhost:8080/v1
  max_tokens: 32768
  temperature: 0.6
  use_responses_api: false
  timeout: 600
  ```
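For example, with llama.cpp you can serve a model on the port used in the config above (a sketch; the model path is a placeholder, and flags depend on your build):

```
llama-server -m /path/to/qwen3-8b.gguf --port 8080
```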
### Connect the LLM config in Databao

- In your code, override the default LLM config with the one you created (here, `qwen3-8b-oai` refers to the `qwen3-8b-oai.yaml` file from the previous step):

  ```python
  import databao
  from databao import LLMConfig

  ...

  llm_config = LLMConfig(name="qwen3-8b-oai")
  ...
  ```
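Agent creation then follows the same pattern as in the sections above (a sketch reusing `databao.new_agent` from the earlier examples):

```python
import databao
from databao import LLMConfig

# "qwen3-8b-oai" resolves to the qwen3-8b-oai.yaml file created above.
llm_config = LLMConfig(name="qwen3-8b-oai")
agent = databao.new_agent(llm_config=llm_config)
```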