
The Case for Ultra-Small Models

March 3, 2026

The Scale Obsession

The AI industry has been captivated by scale. More parameters, more data, more compute — the assumption being that bigger is always better. But is it?

Privacy by Default

When your AI assistant runs entirely on your device, your conversations never leave your hardware. No telemetry, no training on your data, no server breach risks. Privacy isn't a feature — it's an architecture decision.

Latency That Feels Native

Cloud round-trips add 100–500ms per request. For a conversational AI, that's the difference between feeling like you're talking to someone and feeling like you're submitting a form. Lambert responds in milliseconds.

What You Lose (And Don't)

Small models can't write a novel or solve math olympiad problems. But for the vast majority of daily tasks — summarizing, explaining, drafting, answering questions — a well-trained small model is surprisingly competent for its size. Don't take our word for it: try it.

Lambert Axiom 0.1

Our Axiom series targets pure speed: very experimental, ultra-small. Lambert Axiom 0.1 is a 5M-parameter model; with int4 quantization it fits in 10MB of RAM. A proof of concept — fast, minimal, raw.
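To see why a 5M-parameter model fits comfortably in 10MB, here is a back-of-envelope sketch of the int4 weight footprint. The `int4_weight_mb` helper and the 0.5-bytes-per-parameter figure are illustrative assumptions (int4 packs two weights per byte, ignoring quantization scales, activations, and runtime overhead, which account for the rest of the budget):

```python
# Rough int4 weight-memory estimate: two 4-bit weights per byte.
# This is a sketch, not an official measurement; real footprints
# also include quantization scales, activations, and the KV cache.

def int4_weight_mb(params: int) -> float:
    """Approximate size of int4-packed weights in MiB."""
    return params * 0.5 / (1024 ** 2)

for name, params, budget in [
    ("Axiom 0.1",   5_000_000, 10),
    ("Lemma 0.2",  13_000_000, 25),
    ("Theorem 0.2", 50_000_000, 75),
]:
    print(f"{name}: ~{int4_weight_mb(params):.1f} MB of weights "
          f"(stated budget: {budget} MB)")
```

In each case the packed weights use well under half the stated RAM budget, leaving headroom for the rest of the runtime.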

Lambert Lemma 0.2

Our Lemma series targets the sweet spot: smart enough to be genuinely useful, small enough to run on a MacBook Air. Lambert Lemma 0.2 is a 13M-parameter model; with int4 quantization it fits in 25MB of RAM. It handles everyday conversations and simple tasks.

Lambert Theorem 0.2

Our Theorem series targets capability: it can reason and understands tool calling. Lambert Theorem 0.2 is a 50M-parameter model; with int4 quantization it fits in under 75MB of RAM. It can follow instructions and use tools, though it remains experimental on long conversations.