Letta: UC Berkeley GenAI Spin Out Raises $10 Million (Seed)


Letta, a new GenAI startup spun out of UC Berkeley’s AI research lab, has emerged from stealth with a $10 million seed round led by Felicis, with participation from Sunflower Capital and Essence VC. Notable angel investors include Jeff Dean (Chief Scientist at Google DeepMind), Clem Delangue (CEO of Hugging Face), Cristobal Valenzuela (CEO of Runway), Jordan Tigani (CEO of MotherDuck), Tristan Handy (CEO of dbt Labs), Robert Nishihara (co-founder of Anyscale), and Barry McCardel (CEO of Hex).

Letta focuses on driving the next generation of AI through advanced memory systems and is led by the research team behind the popular open-source MemGPT project. Before MemGPT, most AI agents were built on a stateless architecture, meaning they could not retain their memory (state) between user sessions.

The MemGPT research paper first introduced the concept of self-editing memory for LLMs, enabling an LLM to update its own memory and learn over time as it interacts with human users. If an agent can add relevant context to its understanding of the scenario and the person it is interacting with, and keep track of changes, updates, and new information over time, it becomes far more useful and practically applicable in real-world scenarios. Letta believes that truly useful AI can only be built with stateful APIs.
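To make the self-editing memory idea concrete, here is a minimal sketch of an agent that keeps an editable memory block in its prompt and lets the model rewrite that block via a structured reply. This is not MemGPT's or Letta's actual code; the `llm` callable, the JSON protocol, and the `memory_replace` action name are assumptions for illustration.

```python
# Minimal sketch of self-editing memory; not MemGPT's or Letta's implementation.
# The "llm" callable and the JSON action protocol are illustrative assumptions.
import json

class StatefulAgent:
    def __init__(self, llm, persona: str, human: str):
        self.llm = llm                      # any chat-completion-style callable
        self.memory = {"persona": persona,  # editable state, always in context
                       "human": human}
        self.history = []                   # recent conversation turns

    def _system_prompt(self) -> str:
        # The agent's memory is compiled into the prompt on every step,
        # so the LLM always "sees" its current state.
        return ("You can update your memory by replying with JSON:\n"
                '{"action": "memory_replace", "section": ..., "text": ...}\n'
                "Current memory:\n" + json.dumps(self.memory, indent=2))

    def step(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        reply = self.llm(self._system_prompt(), self.history)
        try:
            call = json.loads(reply)
            if call.get("action") == "memory_replace":
                # The model edits its own memory; if self.memory is persisted,
                # the change carries over to future sessions.
                self.memory[call["section"]] = call["text"]
                return "(memory updated)"
        except (ValueError, KeyError):
            pass                            # not a memory edit; treat as a normal reply
        self.history.append({"role": "assistant", "content": reply})
        return reply
```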

Letta co-founders Charles Packer and Sarah Wooders met during PhD research at the Sky Lab at UC Berkeley under the same advisors, Joseph Gonzalez and Ion Stoica. Both professors are also joining Letta’s founding team in an advisory capacity.

Letta plans to use the funding to continue building a new hosted product, Letta Cloud, for developers to build and deploy agents with advanced memory systems. Letta Cloud includes a hosted agent service, which allows developers to deploy and run stateful agents in the cloud, accessible via REST APIs. Letta Cloud is model agnostic, meaning developers can easily swap model endpoints and bring their agents to any LLM provider (even enabling a single agent to run on multiple models).
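As an illustration of what a hosted, stateful agent service can look like from the developer's side, the sketch below creates an agent once and then messages it by id, so no conversation history has to be resent. The base URL, endpoint paths, and payload fields are hypothetical placeholders, not Letta Cloud's documented API.

```python
# Illustrative only: the base URL, endpoints, and fields below are assumptions,
# not Letta Cloud's documented REST API.
import requests

BASE = "https://api.example-agent-host.com/v1"   # hypothetical host
HEADERS = {"Authorization": "Bearer <API_KEY>"}

# Because the agent is stateful, it is created once and keeps its memory
# server-side; later calls only reference it by id.
agent = requests.post(
    f"{BASE}/agents",
    headers=HEADERS,
    json={"name": "support-bot",
          "model": "gpt-4o-mini"},               # model endpoint is swappable
).json()

# Later sessions send messages to the same agent id; no history is resent,
# because state lives with the agent rather than with the client.
reply = requests.post(
    f"{BASE}/agents/{agent['id']}/messages",
    headers=HEADERS,
    json={"content": "Remember that my billing cycle starts on the 3rd."},
).json()
print(reply)
```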

Plus, Letta Cloud offers an “Agent Development Environment” (ADE) that lets agent builders develop and debug agents by directly viewing and editing both the agent’s prompts and its memory. This is enabled by Letta’s white-box approach to memory: unlike many existing agent frameworks, it makes the exact prompts and memories passed to the LLM on each reasoning step transparent to the developer.
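The white-box idea can be sketched as a prompt compiler whose output the developer can inspect and edit before every LLM call. The function and field names below are illustrative assumptions, not Letta's implementation.

```python
# Sketch of "white-box" memory: the exact prompt sent to the LLM on each
# reasoning step is assembled in plain sight. Names here are illustrative.

def compile_prompt(memory: dict, history: list[dict]) -> str:
    """Deterministically build the context window from agent state."""
    memory_block = "\n".join(f"[{k}] {v}" for k, v in memory.items())
    turns = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return f"=== MEMORY ===\n{memory_block}\n=== HISTORY ===\n{turns}"

memory = {"persona": "Concise support agent", "human": "Name: Ada"}
history = [{"role": "user", "content": "What's my name?"}]

prompt = compile_prompt(memory, history)
print(prompt)   # the developer sees exactly what the model will see
# The prompt (and the memory behind it) can be inspected or edited here
# before it is passed to whichever LLM endpoint the developer has configured.
```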

Agents have attracted interest from startup founders and investors alike – 42 of the 260 companies (16%) in YC’s W24 batch explicitly mention “agent” in their company description. But agents today face numerous issues in production – they are unreliable, hard to control, degrade over time (derailment), and are generally unable to perform complex tasks involving executing many actions over an extended period. Letta believes these problems all stem from a lack of proper memory management.

The company is also releasing its Agent Development Environment (ADE) and API platform for building and deploying AI agents for free today, and is onboarding early developers to the beta of its hosted Letta Cloud platform.

KEY QUOTES:

“We are just starting to understand how to build compound AI systems around large foundation models. Charles and Sarah’s PhD research at Berkeley laid the groundwork for how to build these stateful AI systems, and I’m excited to see them continue this research agenda at Letta.”

-Ion Stoica, Professor at UC Berkeley and co-founder of Databricks

Letta believes the most important unsolved problem in AI today is memory.

“The most powerful characteristics of a useful AI agent – personalization, self-improvement, tool use, reasoning and planning – are all fundamentally memory management problems,” said Packer, Letta’s CEO. “The key challenge with agents is understanding how to construct the context window of the LLM, which forms the AI’s ‘memory’, and developing an agentic loop around the LLM to manage the context window over time. Similar to how a microprocessor or CPU is just one part of a computer, LLMs are just one part of larger AI systems. At Letta, our vision is to build the complete AI computer around the LLM.”

“AI memory shouldn’t be a black box. We want to make sure developers have full visibility into the memory and state of their agents, and have full control over what LLMs they want to use. Developers shouldn’t have to choose between performance and model lock-in.”

-Sarah Wooders, Letta’s CTO

“We are thrilled to support Letta in their groundbreaking journey to advance multi-agent AI by building and enabling innovative memory systems. Letta’s work addresses one of the most pressing challenges in AI today — effective memory management. By building on the transformative MemGPT research, Letta is poised to unlock a new generation of AI applications. At Felicis, we believe in investing in visionary teams that tackle fundamental problems and drive the future of technology. Charles, Sarah, and their team are exactly that, and we are excited to be part of their mission to redefine what AI can achieve.”

-Astasia Myers, General Partner at Felicis
