Owning Code in the Age of AI
Software engineering is going through a shift that feels small on the surface but changes something fundamental: code is no longer scarce. For decades, writing…
Every AI model has blind spots. It might overlook context, lean toward certain patterns, or fill gaps with confident guesses. When you’re using an AI…
A core part of building any-llm is making sure it is present where developers already are. Over the past few months, we’ve integrated any-llm into…
Go where the models are
When we released any-llm v1.0 last year, the goal was simple: one interface to use any model, cloud or local,…
Introduction
In Evaluating Multilingual, Context-Aware Guardrails: Evidence from a Humanitarian LLM Use Case, we explored how guardrails responded to the same policies and prompts in…
Effective large language model (LLM) evaluation needs to be context-, language-, task-, and domain-specific. As developers gravitate towards custom performance benchmarks, they are also increasingly…
The recent State of AI report by OpenRouter and Andreessen Horowitz (a16z) offers compelling insights into the growing adoption of open-weight LLMs. It categorizes models…
Most teams don’t wake up asking for “more AI.” They just want less busywork and fewer tabs open. In practice, that usually means one thing:…
Without an AI coding policy that promotes transparency alongside innovation, open source codebases are going to struggle. Remember early 2025? “Vibe coding” was a meme…
Over the last few weeks, we’ve been running a small, gated alpha of any-llm managed platform: our client-side encrypted API key vault and usage tracking service…
In October, we shipped mcpd as a “requirements.txt for agentic systems”, a way to declaratively manage your MCP servers across environments. A few weeks later,…
A Year of Building Momentum
The year 2025 has been a busy one at Mozilla.ai. From hosting live demos and speaking at conferences, to releasing…
Following up on Baris Guler’s excellent exploration of browser-native AI agents using WebLLM + WASM + WebWorkers, we’re excited to present a complementary approach that…
We recently released any-llm v1.0, our SDK for building with multiple providers using a single unified interface, as well as any-llm-gateway. Today we’re expanding the…
Encoders power the parts of a system where latency, repeatability, and stable outputs aren’t optional. Yet many teams still default to autoregressive models because they’re…