Go where the models are

Run OpenAI, Claude, Mistral, llamafile, and more from one interface, now in Go!

When we released any-llm v1.0 last year, the goal was simple: one interface to use any model, cloud or local, without rewriting your code every time a new provider ships. That goal resonated. Thousands of Python developers adopted any-llm to decouple their product logic from their model provider. In production systems, that decoupling is often the difference between iterating quickly and being locked into a single vendor’s API quirks.

But the LLM ecosystem doesn’t live in one language. Go powers a significant share of production infrastructure, from API servers to CLI tools to agent frameworks. Go developers deserve the same flexibility.

Today we’re releasing any-llm-go, the official Go port of any-llm.

What you get

Every provider differs slightly in streaming behavior, error semantics, and feature support. any-llm-go normalizes those differences behind a single, predictable interface that follows the OpenAI API standard.

any-llm-go ships with support for eight providers out of the box, each tracked across five capabilities (completion, streaming, tools, reasoning, and embeddings):

Anthropic
DeepSeek
Gemini
Groq
Llamafile
Mistral
Ollama
OpenAI

Every provider normalizes to the same response format. Write your logic once, swap providers by changing a single import.

Go
package main

import (
    "context"
    "fmt"
    "log"

    anyllm "github.com/mozilla-ai/any-llm-go"
    "github.com/mozilla-ai/any-llm-go/providers/openai"
)

func main() {
    ctx := context.Background()
    // Reads the API key from the environment (OPENAI_API_KEY) by default.
    provider, err := openai.New()
    if err != nil {
        log.Fatal(err)
    }

    response, err := provider.Completion(ctx, anyllm.CompletionParams{
        Model: "gpt-4o-mini",
        Messages: []anyllm.Message{
            {Role: anyllm.RoleUser, Content: "Hello!"},
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    // Responses follow the OpenAI format regardless of provider.
    fmt.Println(response.Choices[0].Message.Content)
}

Want to switch to Anthropic? Change openai to anthropic and gpt-4o-mini to claude-sonnet-4-20250514. Everything else stays the same: request shape, streaming logic, and error handling.
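
Assuming the Anthropic provider mirrors the openai package's constructor (a sketch, not confirmed against the released API), the swap looks like this:

Go
import (
    anyllm "github.com/mozilla-ai/any-llm-go"
    "github.com/mozilla-ai/any-llm-go/providers/anthropic"
)

// Only the provider constructor and the model name change.
provider, err := anthropic.New()
if err != nil {
    log.Fatal(err)
}

response, err := provider.Completion(ctx, anyllm.CompletionParams{
    Model: "claude-sonnet-4-20250514",
    Messages: []anyllm.Message{
        {Role: anyllm.RoleUser, Content: "Hello!"},
    },
})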

Built for Go, not ported from Python

This isn’t a line-for-line translation. any-llm-go is designed around Go’s strengths:

– Streaming uses channels, not iterators. CompletionStream returns a <-chan ChatCompletionChunk that works naturally with range and select (see the sketch after this list).
– Errors are values, not exceptions. Every provider’s SDK errors are normalized into typed sentinel errors (ErrRateLimit, ErrAuthentication, ErrContextLength) that work with errors.Is and errors.As.
– Configuration uses functional options. openai.New(anyllm.WithAPIKey("..."), anyllm.WithTimeout(30*time.Second)) gives you type-safe, composable setup.
– Context flows everywhere. Every call takes a context.Context for cancellation, timeouts, and tracing.
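
Here is a minimal sketch that puts those pieces together. The channel type, option functions, and sentinel errors come from this post; the exact return shape of CompletionStream and the chunk's Choices[0].Delta.Content field are assumptions based on the OpenAI-style format.

Go
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()

provider, err := openai.New(anyllm.WithAPIKey(os.Getenv("OPENAI_API_KEY")))
if err != nil {
    log.Fatal(err)
}

// Assumed to return (<-chan anyllm.ChatCompletionChunk, error).
stream, err := provider.CompletionStream(ctx, anyllm.CompletionParams{
    Model:    "gpt-4o-mini",
    Messages: []anyllm.Message{{Role: anyllm.RoleUser, Content: "Tell me a story."}},
})
switch {
case errors.Is(err, anyllm.ErrRateLimit):
    log.Fatal("rate limited: back off and retry")
case err != nil:
    log.Fatal(err)
}

// Channels compose naturally with range; the loop ends when the stream closes.
for chunk := range stream {
    fmt.Print(chunk.Choices[0].Delta.Content)
}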

The result is a library that feels like Go, not like Go wearing Python’s clothes.

The OpenAI-compatible base: add a provider in 50 lines

Not every provider has a dedicated Go SDK. Many (Groq, DeepSeek, Mistral, Llamafile) expose OpenAI-compatible APIs instead. Rather than writing a full implementation for each of these, any-llm-go includes a shared OpenAI-compatible base provider.

Adding a new compatible provider is straightforward. Define a config, call openai.NewCompatible(), and you’re done. The Groq provider, for example, is essentially a thin wrapper:

Go
provider, err := openai.NewCompatible(openai.CompatibleConfig{
    APIKeyEnvVar:   "GROQ_API_KEY",                    // env var the key is read from
    BaseURLEnvVar:  "",                                // no env override for the base URL
    Capabilities:   groqCapabilities(),                // which features Groq supports
    DefaultAPIKey:  "",                                // no fallback key
    DefaultBaseURL: "https://api.groq.com/openai/v1",  // Groq's OpenAI-compatible endpoint
    Name:           "groq",
    RequireAPIKey:  true,                              // fail fast if no key is found
}, opts...)

The base handles completions, streaming, tool calls, embeddings, error conversion, and model listing. Your wrapper just needs to specify the API endpoint, the environment variable for the key, and which capabilities are supported.
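
As an illustration, a wrapper for a hypothetical new OpenAI-compatible provider would follow the same shape. Everything below (the package name, base URL, and the anyllm.Option and anyllm.Provider types) is an assumption for the sketch, not the released API:

Go
// Package together sketches a hypothetical wrapper around the
// OpenAI-compatible base. Capabilities omitted for brevity.
package together

import (
    anyllm "github.com/mozilla-ai/any-llm-go"
    "github.com/mozilla-ai/any-llm-go/providers/openai"
)

func New(opts ...anyllm.Option) (anyllm.Provider, error) {
    return openai.NewCompatible(openai.CompatibleConfig{
        APIKeyEnvVar:   "TOGETHER_API_KEY",
        DefaultBaseURL: "https://api.together.xyz/v1",
        Name:           "together",
        RequireAPIKey:  true,
    }, opts...)
}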

This is by design. We want adding providers to be easy, because we want *you* to add them, and because the provider landscape changes faster than any single team can keep up with.

How to contribute

We built any-llm-go with contribution in mind. The Contributing Guide walks through the full process, but the short version is:

1. Pick a provider from the planned list (Cohere, Together AI, AWS Bedrock, Azure OpenAI) or propose a new one.
2. Check if it’s OpenAI-compatible. If so, you can use the compatible base and keep your implementation minimal.
3. If it has a native Go SDK, use it. Wrap the SDK, normalize the responses, convert the errors (see the sketch after this list).
4. Write tests and docs. We use the Anthropic provider as the reference implementation.
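
For step 3 in particular, the error-conversion half usually amounts to mapping the SDK's error type onto the sentinel errors above. A minimal sketch, with a hypothetical sdkError type standing in for a real provider SDK:

Go
// sdkError is a hypothetical stand-in for a provider SDK's error type.
type sdkError struct {
    StatusCode int
    Message    string
}

func (e *sdkError) Error() string { return e.Message }

// normalizeError maps SDK errors onto any-llm-go's sentinel errors so that
// callers can rely on errors.Is regardless of provider.
func normalizeError(err error) error {
    var apiErr *sdkError
    if !errors.As(err, &apiErr) {
        return err
    }
    switch apiErr.StatusCode {
    case http.StatusTooManyRequests:
        return fmt.Errorf("%w: %s", anyllm.ErrRateLimit, apiErr.Message)
    case http.StatusUnauthorized:
        return fmt.Errorf("%w: %s", anyllm.ErrAuthentication, apiErr.Message)
    default:
        return err
    }
}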

We’ve tried to make the codebase approachable. Every provider follows the same file organization, the same patterns, the same test structure. Once you’ve read one, you can write another.

This is an open library by design: extensible, inspectable, and shaped by its users.

What’s next

This initial release focuses on getting the core right: a stable interface, solid error handling, and broad provider coverage. On the roadmap:

– More providers (Cohere, Together AI, AWS Bedrock, Azure OpenAI)
– Batch completion support
– Continued parity with the Python any-llm as both libraries evolve

any-llm-go also works with the any-llm managed platform, now in beta. It provides a vault to manage your API keys, an observability stack to monitor your LLMs’ performance, and per-project budget controls. If you’re managing LLM keys and costs across multiple providers and teams, take a look.

Get started

Shell
go get github.com/mozilla-ai/any-llm-go

Check out the documentation, explore the examples, or jump straight into the provider list.

Found a bug? Want a new provider? Open an issue or start a discussion. We’d love to hear from you.
