You corrected your agent's coding style last Tuesday. Today it made the same mistake. You explained your project's architecture to Cursor this morning. Tonight, Claude Code has no idea. Every conversation starts from zero.
PLUR gives your agents a shared memory that persists, learns from feedback, and travels between tools and devices. Corrections stick. Preferences compound. Models are commodities — your knowledge isn't. PLUR makes it portable.
| Knowledge type | Record | Win rate |
|---|---|---|
| **House rules:** tag formats, file placement, project conventions | 12–0 | 100% |
| **Tool routing:** finding the right tool among 100+ options | 10–2 | 83% |
| **Past experience:** API quirks, debugging insights, infrastructure details | 4–0 | 100% |
| **Learned style:** communication tone, design preferences | 5–2 | 71% |
| **General tasks:** no memory advantage expected | — | No penalty |
Engrams — memory units modeled after how the human brain stores knowledge. They strengthen with use and fade when irrelevant. The system gets better over time, not just bigger. READ THE SPEC →
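The spec defines the exact mechanics; as a rough sketch of the strengthen-with-use, fade-when-irrelevant idea (class name, constants, and half-life are illustrative assumptions, not PLUR's API):

```python
import math
import time

class Engram:
    """Illustrative memory unit: strength grows on recall, decays with disuse."""

    def __init__(self, content: str, strength: float = 1.0):
        self.content = content
        self.strength = strength
        self.last_used = time.time()

    def recall(self) -> str:
        # Using a memory reinforces it and resets the decay clock.
        self.strength += 1.0
        self.last_used = time.time()
        return self.content

    def current_strength(self, half_life_days: float = 30.0) -> float:
        # Exponential decay: an untouched engram halves every half_life_days.
        age_days = (time.time() - self.last_used) / 86400
        return self.strength * math.exp(-math.log(2) * age_days / half_life_days)

rule = Engram("Use pnpm, never npm, in this repo")
rule.recall()  # each use strengthens the memory
```

A scheme like this is why the system gets better rather than just bigger: stale memories sink below retrieval thresholds instead of accumulating forever.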
Correct something in Claude Code. Cursor already knows. OpenClaw remembers it on Telegram. Switch AI models next month — your memory carries over. No vendor lock-in, ever.
Open the file. Read what your agent remembers. Edit it. Delete it. It's YAML on your disk — not a database, not a cloud, not a black box. MIT licensed, forever.
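As a sketch of what an on-disk memory file could look like (the path, field names, and schema here are illustrative assumptions, not PLUR's actual format):

```yaml
# ~/.plur/memory.yaml (hypothetical path and schema)
engrams:
  - id: house-rule-pnpm
    kind: house_rule
    content: "Use pnpm, never npm, in this repo"
    strength: 3.2
    last_used: 2025-01-14
  - id: api-quirk-rate-limit
    kind: past_experience
    content: "The billing API returns 429 with no Retry-After header"
    strength: 1.1
    last_used: 2024-12-02
```

Because it's plain YAML, editing a memory is editing a line, and deleting one is deleting a list entry.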
Your memory lives on your machine. Open the file, see what's there. No platform can revoke it or hold it hostage.
An AI that remembers your preferences, your patterns, your way of thinking — and carries that across every tool you use.
Your memory outlives any model. Switch from Claude to GPT to whatever comes next. The knowledge transfers.
Nothing shared without your say. Nothing taken without giving back. Private until you choose otherwise.