
Towards Neuromorphic Silicon: Our 2030 Vision

March 3, 2026

Custom silicon designed natively for SNN workloads. Here's what we're building, why conventional GPUs are inadequate for brain-inspired AI, and the technical roadmap ahead.

Why Custom Silicon?

GPUs were designed for dense matrix multiplication. They're efficient at what they do, but SNNs are fundamentally different: sparse, asynchronous, and temporal. Running SNNs on GPUs is like pushing sand through a pipe designed for water: the machinery works, but most of its capacity is wasted.
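The mismatch can be made concrete with a back-of-the-envelope operation count. The layer sizes and spike rate below are illustrative assumptions, not measurements of any Lambert model:

```python
# Hypothetical sizes for illustration: a layer of 4096 neurons feeding
# 4096 neurons, with 2% of inputs spiking in a given timestep.
n_in, n_out = 4096, 4096
spike_rate = 0.02

# A dense matmul touches every weight every timestep, spikes or not.
dense_macs = n_in * n_out

# Event-driven routing only touches the fan-out of neurons that spiked.
event_macs = int(n_in * spike_rate) * n_out

print(f"dense MACs per step:        {dense_macs:,}")
print(f"event-driven MACs per step: {event_macs:,}")
print(f"theoretical reduction:      {dense_macs / event_macs:.0f}x")
```

At 2% activity the event-driven path does roughly 50x less work; real hardware won't capture all of that, but it shows why dense-matmul machines are a poor fit for sparse, asynchronous workloads.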

We're Already Building for This Future

Because we believe neuromorphic silicon is inevitable, Lambert's architecture is designed from day one to be SNN-compatible. Our models can already be converted to spiking neural networks today — running on conventional hardware for now, but ready for dedicated silicon when it arrives. We're not waiting for the chip to design the software. We're designing the software so the chip has something worth running.
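One common route from a conventional model to a spiking one is rate coding: an integrate-and-fire neuron driven by a constant input fires at a rate proportional to a ReLU activation. The sketch below is a minimal illustration of that idea; the function names and parameters are ours, not Lambert's conversion pipeline:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_spike_counts(currents, timesteps=100, threshold=1.0):
    """Simulate integrate-and-fire neurons for `timesteps` steps."""
    v = np.zeros_like(currents)      # membrane potentials
    counts = np.zeros_like(currents) # spikes fired so far
    for _ in range(timesteps):
        v += relu(currents)          # integrate the input current
        fired = v >= threshold
        counts += fired              # record which neurons spiked
        v[fired] -= threshold        # soft reset preserves firing rate
    return counts

acts = np.array([0.0, 0.25, 0.5, 0.9])   # ANN activations in [0, 1)
rates = if_spike_counts(acts, timesteps=200) / 200
print(rates)  # spike rates approach the activations as timesteps grows
```

The spike rates converge to the original activations, which is what lets a trained conventional network run as an SNN without retraining, on GPUs today and on dedicated silicon later.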

The Architecture

Neuromorphic chips mirror the brain's structure:

  • Cores correspond to populations of neurons.

  • Synaptic connections are implemented as sparse weight tables.

  • Spike routing replaces matrix multiply as the primary operation.

  • Temporal dynamics are handled in hardware, not software loops.
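A toy software model makes the data flow concrete: synapses stored as a sparse per-neuron fan-out table, and spike delivery as the core operation. The structure, thresholds, and weights below are illustrative only, not the chip's actual design:

```python
from collections import defaultdict

# Sparse weight table: presynaptic neuron -> [(target, weight), ...]
synapses = {
    0: [(2, 0.6), (3, 0.5)],
    1: [(2, 0.5)],
    2: [(3, 0.4)],
}

potentials = defaultdict(float)
THRESHOLD = 1.0

def route_spikes(spiking):
    """Deliver spikes along sparse fan-out lists; return new spikes."""
    fired = []
    for src in spiking:
        for tgt, w in synapses.get(src, []):
            potentials[tgt] += w
            if potentials[tgt] >= THRESHOLD:
                potentials[tgt] = 0.0   # reset after firing
                fired.append(tgt)
    return fired

# Inject spikes from neurons 0 and 1; neuron 2 crosses threshold
# (0.6 + 0.5 = 1.1) and its spike is routed onward toward neuron 3.
step1 = route_spikes([0, 1])
print(step1)            # [2]
step2 = route_spikes(step1)
print(step2)            # [] -- neuron 3 stays below threshold
```

Note that no work happens for silent neurons: only spikes traverse the table, which is the property the hardware exploits.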

Power Targets

Our 2030 target: sub-100 mW inference for Lambert-class models. For reference, a human brain runs on approximately 20 W total — and most of that is baseline metabolism, not computation. Dedicated neuromorphic silicon could run local AI inference on less than 1% of a smartphone battery per day.
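The battery figure can be sanity-checked with simple arithmetic. The assumptions below are ours, not from the roadmap: a roughly 15 Wh smartphone battery (about 4000 mAh at 3.85 V) and an assistant actively inferring for about one hour per day at the 100 mW target:

```python
battery_wh = 4.0 * 3.85            # 4000 mAh * 3.85 V = 15.4 Wh
inference_w = 0.100                # 100 mW power target
active_hours_per_day = 1.0         # assumed duty cycle

daily_wh = inference_w * active_hours_per_day   # 0.1 Wh per day
fraction = daily_wh / battery_wh

print(f"{fraction:.2%} of battery per day")
```

Under those assumptions daily inference costs well under 1% of the battery; even several hours of active use stays in the low single digits.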

Timeline

  • 2025: SNN research program launch, architecture exploration

  • 2027: First FPGA prototypes of neuromorphic cores

  • 2028: ASIC tape-out of first neuromorphic test chip

  • 2030: Production neuromorphic chip with Lambert integration

Why This Matters

Every device — phones, watches, earbuds, IoT sensors — could run a capable AI assistant locally. No internet required. The intelligence comes with the hardware.