# Local LLM
Run OpeniBank with local or cloud LLMs using simple environment variables.
OpeniBank is local-first by default, but you can mix local and cloud models depending on the workload.
## Run with Ollama

- Install Ollama: https://ollama.com
- Pull a model:

  ```sh
  ollama pull llama3.1
  ```

- Point OpeniBank to the local provider:

  ```sh
  export OPENIBANK_LLM_PROVIDER=ollama
  export OPENIBANK_LLM_MODEL=llama3.1
  export OPENIBANK_LLM_BASE_URL=http://localhost:11434
  ```
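To confirm the local endpoint is reachable before starting OpeniBank, you can query Ollama's HTTP API directly. This is a minimal sketch using Ollama's standard `/api/tags` and `/api/generate` endpoints; nothing here is OpeniBank-specific:

```sh
# List locally available models; the output should include llama3.1.
curl http://localhost:11434/api/tags

# One-off generation request to confirm the model responds.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "Say hello", "stream": false}'
```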
## Run with cloud APIs

Anthropic:

```sh
export OPENIBANK_LLM_PROVIDER=anthropic
export OPENIBANK_LLM_MODEL=claude-3-5-sonnet
export OPENIBANK_LLM_API_KEY=your_api_key
```
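If OpeniBank reports authentication errors, you can test the key against Anthropic's Messages API directly, independent of OpeniBank. The model id below is an illustrative alias, not a value OpeniBank requires:

```sh
# Direct Anthropic API call to verify the key works at all.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $OPENIBANK_LLM_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-latest", "max_tokens": 16,
       "messages": [{"role": "user", "content": "ping"}]}'
```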
OpenAI:

```sh
export OPENIBANK_LLM_PROVIDER=openai
export OPENIBANK_LLM_MODEL=gpt-4o-mini
export OPENIBANK_LLM_API_KEY=your_api_key
```
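A similar sanity check works for OpenAI: listing models is a cheap way to verify credentials. Again, this is a direct API call, not part of OpeniBank:

```sh
# Lists available models; a 200 response confirms the key is valid.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENIBANK_LLM_API_KEY"
```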
Whichever provider you choose, resonators still enforce commitments, permits, and receipts.