Long-term Memory for LLMs

Add two lines to your OpenAI call to automatically personalize responses based on past conversations or internal documents.
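
For illustration, here is a minimal sketch of what that two-line change might look like with the OpenAI Python SDK. The proxy base URL and the x-memory-user-id header below are placeholders for this sketch, not Remembrall's documented values; see the docs for the real endpoint and header names.

```python
# Sketch only: route OpenAI chat calls through a memory proxy.
# The base URL and header name here are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-remembrall-proxy>/api/openai/v1",  # line 1: send requests via the proxy (placeholder URL)
    default_headers={"x-memory-user-id": "user_123"},          # line 2: identify whose memories to retrieve (hypothetical header)
)

# The request itself is unchanged; the proxy retrieves relevant
# memories or documents for this user and adds them to the context
# before forwarding the call to OpenAI.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Can you recommend me some shoes?"}],
)
print(response.choices[0].message.content)
```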

Backed by Y Combinator

Can you recommend me some shoes?
I don't have specific information about your preferences or needs. Please provide more details about the type of shoes you're looking for, such as the occasion, style, and any specific requirements.

No Memory

Can you recommend me some shoes?
Sure! I know you love Nike and hypebeast fashion, so how about some Air Jordan 1 Retro High OG 'University Blue'? They are quite popular right now.

With Remembrall Memory

Bager Akbay (@bagerakbay): This is like an LLM learning to sleep when it gets tired. 😴 If it learns to compare new experiences to pretrained experiences and store them with references, it would be huge.

Samin | Ai Specialist 🤖 (@SaminXD): Very interesting, would be fun to tinker with and implement in the right use cases.

MasteringMachines (@MstrMachines): Such an interesting project.

CryptoHAL (@hal_crypto): Genius!!

Jerry Lusato (@jerrylusato): Awesome!

Nelson Carrillo (@nelsonc___): Man, this is clever. Nice job.

Paul Rowan 𓅃 (@jockeyclarke): I don't care what the nitpickers say. This looks really slick and right along the lines of some experiments I wanted to try out. This is effectively compressing the chat history so it fits into the context window, yes? And you re-compress the history as it becomes less relevant?
BadTech Bandit ∞ #AI, #drones, web3 & beyond (@BadTechBandit): This is very interesting!

Sam Gabell (@samgabell): Would love access to this beta.

G.I Joe (@Joedefendre): With an LLM constantly performing CRUD operations on the vector database, the database can become an infinite cache for conversation history.

pystar 🐍 (@pystar): This is very impressive and yet very simple.

Alex Volkov (@altryne): Interesting approach to abstract the context window away, proxy your API calls, run RAG context augmentation during chat calls, <100ms is impressive! Very interesting!

Amit Vyas (@thevyasamit): If this thing is real then it's really gonna be a hit for long text summarization tasks using LLMs.

Pricing

Start using Remembrall for free. Upgrade for API usage in production. Export your data for free on any plan.

Free

Free for everyone

Unlimited Playground Usage
Unlimited LLM Calls
Full Observability
Unlimited Data Export
Get Started

Starter

$50/month

Everything in Free, plus:
Long-term Memory API
Retrieval Augmented Generation
Instant Chat Replay

Enterprise

Contact Us

Everything in Starter, plus:
Access Memory Stores Programmatically
Document Context API
24/7 Dedicated Support
Contact Us
or self-onboard @ $2000/month + $0.008/API call