Literal Labs Raises $6.2M to Build Ultra-Efficient AI Chips Based on the Human Brain

Literal Labs, a UK-based semiconductor startup developing brain-inspired AI chips, has raised $6,223,938 in pre-seed funding to build ultra-efficient neuromorphic processors for the next generation of edge and autonomous systems. The round was backed by Northern Gritstone, Mercuri, Sure Valley Ventures, Cambridge Future Tech SPV, and several angel investors.

Founded by a team with deep experience at ARM, Graphcore, and top academic labs, Literal Labs is targeting the next frontier in AI hardware: drastically reducing power consumption by mimicking the way neurons fire in the human brain.


What Literal Labs Is Building

Literal Labs is designing neuromorphic processors - AI chips that don’t just run deep learning models faster, but run them more like the brain does: computing only when events occur, rather than crunching continuously.

These chips are not just about speed - they’re about efficiency. Literal Labs believes the future of AI won’t be won by brute-force compute, but by smarter, leaner silicon that works in real-world environments, not just data centers.
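To make the idea concrete, here is a minimal sketch of the event-driven behaviour described above: a leaky integrate-and-fire neuron that only "fires" when its accumulated input crosses a threshold. It is purely illustrative - the function name, parameters, and numbers are assumptions for the example, not details of Literal Labs' actual silicon.

```python
# Illustrative sketch only: a minimal leaky integrate-and-fire (LIF) neuron,
# a common way to model event-driven, "fire only when needed" computation.
# All names and parameters here are hypothetical, not Literal Labs' design.

def lif_step(potential, input_current, leak=0.9, threshold=1.0):
    """Advance one neuron by a single timestep.

    Returns the updated membrane potential and whether a spike fired.
    """
    potential = potential * leak + input_current  # integrate input, leak charge
    if potential >= threshold:                    # fire only on threshold crossing
        return 0.0, True                          # reset after the spike
    return potential, False


# Drive the neuron with a sparse input stream: most timesteps carry no signal,
# so most timesteps trigger no downstream work - that sparsity is the efficiency win.
inputs = [0.0, 0.0, 0.6, 0.0, 0.7, 0.0, 0.0, 0.2, 0.9, 0.0]
potential, spikes = 0.0, []
for t, current in enumerate(inputs):
    potential, fired = lif_step(potential, current)
    if fired:
        spikes.append(t)  # downstream neurons would only be touched here

print("spike times:", spikes)
```

The design point the example makes is simple: when nothing interesting is happening at the input, nothing happens downstream either, which is where the power savings come from.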


Why Neuromorphic Computing Matters

AI workloads are exploding - but so is energy use. Training and running modern models can burn massive amounts of power, and traditional GPUs are increasingly hitting their limits in cost, size, and sustainability.

That’s why neuromorphic computing is gaining ground: event-driven chips promise far lower power draw, smaller footprints, and the ability to run where GPUs can’t - on-device and at the edge, rather than tethered to a data center.

Literal Labs is positioning itself at the intersection of these trends - offering AI compute that doesn’t compromise on power or portability.


Why This Round Stands Out

$6.2 million in pre-seed funding is a statement - especially in semiconductor deep tech, where investors often wait for tape-out before they commit.

But Literal Labs isn’t just pitching a better chip. They’re offering a better worldview on what intelligence at the edge should actually look like.

And that’s where the real founder insight kicks in: most startups chase the dominant curve. Literal Labs questioned the shape of the curve itself.

They didn’t ask how to shrink GPUs. They asked whether neural compute needs to be continuous at all, or whether spiking, event-based architectures could unlock orders of magnitude better efficiency by simply aligning with how biology already solved this problem.
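As a rough, hypothetical illustration of that argument, the sketch below compares the work done per step by a dense layer against an event-driven one where only a small fraction of inputs are active. The layer sizes and activity rate are made-up numbers chosen only to show the shape of the comparison, not a claim about any specific chip.

```python
# Back-of-the-envelope sketch of why event-driven compute can save work.
# Assumed numbers are hypothetical; only the shape of the comparison matters.

import random

n_inputs, n_outputs = 1024, 1024
spike_rate = 0.02  # assume only 2% of inputs are active in a given step

# Dense (GPU-style) layer: every input multiplies every weight, every step.
dense_ops = n_inputs * n_outputs

# Event-driven layer: work is only done for inputs that actually spiked.
active_inputs = [i for i in range(n_inputs) if random.random() < spike_rate]
event_ops = len(active_inputs) * n_outputs  # one accumulate per active input per output

print(f"dense ops per step:        {dense_ops:,}")
print(f"event-driven ops per step: {event_ops:,}")
print(f"reduction: ~{dense_ops / max(event_ops, 1):.0f}x")
```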

Here’s the takeaway: If you’re building foundational tech, your edge won’t come from being slightly better. It’ll come from being structurally different - because that’s what breaks cost curves, energy ceilings, and architectural inertia.

Literal Labs didn’t just design new silicon. They changed the mental model of how compute should behave. And when your product forces a reframe, adoption becomes inevitable - not because you convince people, but because the old assumptions stop making sense.


Market Outlook: A New Era for AI Hardware Is Taking Shape

The exponential growth of AI has revealed a fundamental bottleneck: current compute architectures weren’t designed for intelligence - they were designed for throughput. As a result, power-hungry GPUs and server farms are quickly becoming unsustainable for both edge and planetary-scale workloads.

This is where neuromorphic and event-driven chips step in - not as successors to GPUs, but as a new class of compute entirely.

The key signals in the market all point the same way: AI energy costs keep climbing, demand for on-device intelligence keeps growing, and GPU scaling is running into the cost, size, and sustainability limits described above.

With these forces converging, Literal Labs is positioned to define a new tier in the AI stack: not brute-force compute, not cloud-bound AI - but biologically aligned silicon that can live anywhere intelligence is needed.
