Everyone's trying vectors and graphs for AI memory. We went back to SQL


When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on.

You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic; it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending the conversation history to every prompt (see the sketch after this list). Works for short chats, but tokens and cost explode fast.

2. Vector databases (RAG) – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure.

3. Graph databases – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.

4. Hybrid systems – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.
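To make the first point concrete, here is a minimal sketch of prompt stuffing. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API you use; the point is just that the whole history gets re-sent on every turn, so the prompt (and the bill) grows with each exchange, even for unrelated questions.

```python
# Minimal sketch of prompt stuffing: the full history is re-sent every turn.
history: list[str] = []

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-completion call.
    return f"(reply to a {len(prompt)}-character prompt)"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Every turn prepends everything said so far.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = call_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

for turn in ["I don't like coffee.", "Plan my morning.", "What should I drink?"]:
    chat(turn)
    # The prompt keeps growing, even when the new question is unrelated.
    print(f"turn {len(history) // 2}: prompt is now ~{sum(len(h) for h in history)} chars")
```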

And then there’s the twist: Relational databases! Yes, the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory.

Instead of exotic stores, you can:

- Keep short-term vs long-term memory in SQL tables

- Store entities, rules, and preferences as structured records

- Promote important facts into permanent memory

- Use joins and indexes for retrieval (a rough schema sketch follows below)
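Here is a minimal sketch of that layout using SQLite from Python. The table and column names are our own illustration for this post, not Memori’s actual schema: short-term memory is a rolling conversation log, long-term memory holds promoted, structured facts, and retrieval is an ordinary indexed query (with a join back to the source message).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Short-term memory: the raw conversation log, one row per message.
CREATE TABLE short_term (
    id         INTEGER PRIMARY KEY,
    agent_id   TEXT NOT NULL,
    role       TEXT NOT NULL,          -- 'user' or 'assistant'
    content    TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Long-term memory: structured facts promoted out of the log.
CREATE TABLE long_term (
    id        INTEGER PRIMARY KEY,
    agent_id  TEXT NOT NULL,
    entity    TEXT NOT NULL,           -- e.g. 'coffee'
    kind      TEXT NOT NULL,           -- 'preference', 'rule', 'fact'
    statement TEXT NOT NULL,           -- e.g. 'user dislikes coffee'
    source_id INTEGER REFERENCES short_term(id)
);
CREATE INDEX idx_long_term_lookup ON long_term(agent_id, entity);
""")

# 1. Everything lands in short-term memory first.
cur = conn.execute(
    "INSERT INTO short_term (agent_id, role, content) VALUES (?, ?, ?)",
    ("agent-1", "user", "I don't like coffee."),
)

# 2. Important facts get promoted into long-term memory as structured records.
conn.execute(
    "INSERT INTO long_term (agent_id, entity, kind, statement, source_id) "
    "VALUES (?, ?, ?, ?, ?)",
    ("agent-1", "coffee", "preference", "user dislikes coffee", cur.lastrowid),
)

# 3. Retrieval is a plain indexed query, joined back to the original message.
rows = conn.execute(
    """
    SELECT lt.statement, st.created_at
    FROM long_term lt
    JOIN short_term st ON st.id = lt.source_id
    WHERE lt.agent_id = ? AND lt.entity = ?
    """,
    ("agent-1", "coffee"),
).fetchall()
print(rows)  # -> [('user dislikes coffee', <timestamp>)]
```

Keeping the `source_id` link means a promoted fact can always be traced back to the message that produced it, which is hard to do cleanly once everything is flattened into an embedding.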

This is the approach we’ve been working on at Gibson. We built an open-source project called Memori (https://memori.gibsonai.com/), a multi-agent memory engine that gives your AI agents human-like memory.

It’s kind of ironic: after all the hype around vectors and graphs, one of the best answers to AI memory might be the tech we’ve trusted for 50+ years.

I would love to know your thoughts about our approach!
