# mem0-python-server 🧠

A focused FastAPI wrapper around [mem0](https://github.com/mem0ai/mem0) that provides persistent memory over a REST API for OpenClaw and related pipelines.

## Highlights ✨

- Two dedicated collections: **conversational** and **knowledge**
- Local reranking with graceful fallback when the reranker is down
- Clear REST contract for storage, search, and recall
- Docker-first workflow with hot reload

## Quick links 🔗

- **PROJECT.md** — purpose, scope, and operating assumptions
- **API.md** — full endpoint reference (requests + responses)

## Architecture (at a glance) 🧩

- **LLM:** Groq (default: `meta-llama/llama-4-scout-17b-16e-instruct`)
- **Vector store:** Chroma (`192.168.0.200:8001`)
- **Embedder:** Ollama (`nomic-embed-text`)
- **Reranker:** local REST server (`192.168.0.200:5200`)

## Collections 📚

- **Conversational** → Chroma collection: `openclaw_mem` → `/memories`
- **Knowledge** → Chroma collection: `knowledge_mem` → `/knowledge`

## Run it (Docker) 🐳

```bash
docker compose up --build
```

## Config 🔐

Create a `.env` file (never commit it):

```env
GROQ_API_KEY=your_key_here
RERANKER_URL=http://192.168.0.200:5200/rerank
```

## Docs

- API reference: **API.md**
- Project overview: **PROJECT.md**
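As a rough sketch of how a client might address the two collections, the snippet below builds a request for the `/memories` endpoint. The base URL, port, and payload field names (`text`, `user_id`) are assumptions for illustration only — the documented request/response shapes live in **API.md**.

```python
import json
from urllib.parse import urljoin

# Assumed local address of the FastAPI server; adjust to your deployment.
BASE_URL = "http://localhost:8000"

def build_store_request(collection_path: str, text: str, user_id: str):
    """Build the URL and JSON body for storing a memory.

    `collection_path` is "/memories" (conversational) or "/knowledge".
    The body fields here are illustrative, not the documented contract —
    see API.md for the real schema.
    """
    url = urljoin(BASE_URL, collection_path)
    body = json.dumps({"text": text, "user_id": user_id})
    return url, body

url, body = build_store_request("/memories", "User prefers dark mode", "u1")
print(url)  # http://localhost:8000/memories
```

The same helper targets the knowledge collection by passing `"/knowledge"` instead.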