Show HN: Mem0 – open-source Memory Layer for AI apps

https://github.com/mem0ai/mem0
Hey HN! We're Taranjeet and Deshraj, the founders of Mem0 (https://mem0.ai). Mem0 adds a stateful memory layer to AI applications, allowing them to remember user interactions, preferences, and context over time. This enables AI apps to deliver increasingly personalized and intelligent experiences that evolve with every interaction. There’s a demo video at https://youtu.be/VtRuBCTZL1o and a playground to try out at https://app.mem0.ai/playground. You'll need to sign up to use the playground – this helps ensure responses are more tailored to you by associating interactions with an individual profile.

Current LLMs are stateless—they forget everything between sessions. This limitation leads to repetitive interactions, a lack of personalization, and increased computational costs because developers must repeatedly include extensive context in every prompt.

When we were building Embedchain (an open-source RAG framework with over 2M downloads), users constantly shared their frustration with LLMs’ inability to remember anything between sessions. They had to repeatedly input the same context, which was costly and inefficient. We realized that for AI to deliver more useful and intelligent responses, it needed memory. That’s when we started building Mem0.

Mem0 employs a hybrid datastore architecture that combines graph, vector, and key-value stores to manage memories effectively. Here is how it works:

Adding memories: When you use Mem0 in your AI app, it takes in messages or interactions and automatically detects the important parts to remember.

Organizing information: Mem0 sorts this information into different categories:

- Facts and structured data go into a key-value store for quick access.
- Connections between things (like people, places, or objects) are saved in a graph store that understands relationships between different entities.
- The overall meaning and context of conversations are stored in a vector store that allows for finding similar memories later.
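As a rough illustration of the three-store split (this is not Mem0's actual code; the names and structures are invented for clarity), routing one extracted memory might look like:

```python
# Toy sketch of routing an extracted memory into key-value, graph,
# and vector stores. All names here are hypothetical, not Mem0's API.
from collections import Counter

kv_store = {}      # facts and structured data: fast exact lookups
graph_store = {}   # entity -> list of (relation, entity) edges
vector_store = []  # (embedding, text) pairs for similarity search

def toy_embed(text):
    # Placeholder embedding: bag-of-words counts. A real system would
    # use a learned embedding model here.
    return Counter(text.lower().split())

def add_memory(fact_key=None, fact_value=None, edge=None, text=None):
    """Route one extracted memory into the appropriate store(s)."""
    if fact_key is not None:
        kv_store[fact_key] = fact_value
    if edge is not None:
        subject, relation, obj = edge
        graph_store.setdefault(subject, []).append((relation, obj))
    if text is not None:
        vector_store.append((toy_embed(text), text))

# Example: "Alice lives in Paris and prefers vegetarian food."
add_memory(fact_key=("alice", "diet"), fact_value="vegetarian")
add_memory(edge=("alice", "lives_in", "paris"))
add_memory(text="Alice lives in Paris and prefers vegetarian food.")
```

The point of the split is that each store answers a different kind of question: exact facts come from the key-value store, relationships from the graph, and fuzzy "what was this conversation about?" queries from the vector store.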

Retrieving memories: Given an input query, Mem0 searches for and retrieves related stored information using a combination of graph traversal, vector similarity, and key-value lookups. It prioritizes the most important, relevant, and recent information, making sure the AI always has the right context, no matter how much memory is stored.
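To make the relevance-plus-recency idea concrete, here is a toy scoring sketch (again invented for illustration, not Mem0's implementation): memories are ranked by word-overlap "similarity" with a recency boost.

```python
# Toy retrieval sketch: rank stored memories by a crude similarity
# score plus a recency boost. Not Mem0's code; for illustration only.
from collections import Counter

memories = [
    # (timestamp, text) -- higher timestamp means more recent
    (1, "Alice lives in Paris"),
    (2, "Alice prefers vegetarian restaurants"),
    (3, "Alice is planning a trip to Tokyo"),
]

def score(query, timestamp, text, newest):
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    overlap = sum((q & t).values())  # crude stand-in for vector similarity
    recency = timestamp / newest     # newer memories score higher
    return overlap + 0.5 * recency

def retrieve(query, k=2):
    newest = max(ts for ts, _ in memories)
    ranked = sorted(memories,
                    key=lambda m: score(query, m[0], m[1], newest),
                    reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve("vegetarian dinner for Alice"))
```

A real system would replace the word-overlap score with embedding similarity and fold in graph neighbors of the entities mentioned in the query, but the ranking structure is the same.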

Unlike traditional AI applications that operate without memory, Mem0 introduces a continuously learning memory layer. This reduces the need to repeatedly include long blocks of context in every prompt, which lowers computational costs and speeds up response times. As Mem0 learns and retains information over time, AI applications become more adaptive and provide more relevant responses without relying on large context windows in each interaction.
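The cost argument is easy to see with a back-of-the-envelope comparison (hypothetical numbers): replaying a full conversation history in every prompt grows linearly with the number of turns, while injecting a few retrieved memories stays roughly constant.

```python
# Hypothetical token-cost comparison: full-history replay vs.
# injecting a handful of retrieved memories into each prompt.
tokens_per_turn = 50
turns = 100

full_history_tokens = tokens_per_turn * turns  # resend everything, every prompt
memory_tokens = 5 * 20                         # e.g. top-5 memories, ~20 tokens each

print(full_history_tokens, memory_tokens)  # 5000 vs 100
```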

We’ve open-sourced the core technology that powers Mem0—specifically the memory management functionality in the vector and graph databases, as well as the stateful memory layer—under the Apache 2.0 license. This includes the ability to add, organize, and retrieve memories within your AI applications.

However, certain features optimized for production use, such as low-latency inference and a scalable graph-and-vector datastore for real-time memory updates, are part of our paid platform. These advanced capabilities are not in the open-source package but are available for those who need to scale memory management in production environments.

We’ve made both our open-source version and platform available for HN users. You can check out our GitHub repo (https://github.com/mem0ai/mem0) or explore the platform directly at https://app.mem0.ai/playground.

We’d love to hear what you think! Please feel free to dive into the playground, check out the code, and share any thoughts or suggestions with us. Your feedback will help shape where we take Mem0 from here!
