# mem0-python-server 🧠

**Version:** v1.0.0

A focused FastAPI wrapper around [mem0](https://github.com/mem0ai/mem0) that provides persistent memory over a REST API for OpenClaw and related pipelines.

## Highlights ✨

- Two dedicated collections: **conversational** and **knowledge**
- Local reranking with graceful fallback when the reranker is down (see the sketch under Examples below)
- Clear REST contract for storage, search, and recall
- Docker-first workflow with hot reload

## Quick links 🔗

- **PROJECT.md** – purpose, scope, and operating assumptions
- **API.md** – full endpoint reference (requests + responses)

## Architecture (at a glance) 🧩

- **LLM:** Groq (default: `meta-llama/llama-4-scout-17b-16e-instruct`)
- **Vector store:** Chroma (`192.168.0.200:8001`)
- **Embedder:** Ollama (`nomic-embed-text`)
- **Reranker:** local REST server (`192.168.0.200:5200`)

A hedged sketch of how this wiring maps onto mem0's configuration appears under Examples below.

## Collections 📚

- **Conversational** → Chroma collection: `openclaw_mem` → `/memories`
- **Knowledge** → Chroma collection: `knowledge_mem` → `/knowledge`

## Run it (Docker) 🐳

```bash
docker compose up --build
```

## Config 🔐

Create a `.env` file (never commit it):

```env
GROQ_API_KEY=your_key_here
RERANKER_URL=http://192.168.0.200:5200/rerank
```

## Raw conversational writes 🧪

`POST /memories/raw` lets another project inject already-processed memories straight into the conversational collection (`openclaw_mem`), preserving any supplied metadata (including `created_at`) without hitting the mem0 extraction LLM. An example request appears under Examples below.

## Docs

- API reference: **API.md**
- Project overview: **PROJECT.md**
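
## Examples

### Raw conversational write

A minimal sketch of a raw write using Python's `requests`. The base URL and the payload field names (`memory`, `metadata`) are illustrative assumptions; **API.md** defines the authoritative request and response shapes.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed host/port for this FastAPI server

# Field names here are assumptions; see API.md for the real contract.
payload = {
    "memory": "User prefers dark mode in the dashboard.",
    "metadata": {
        "created_at": "2024-01-15T12:00:00Z",  # preserved verbatim by the server
        "source": "external-pipeline",
    },
}

# Raw writes land directly in openclaw_mem, bypassing the mem0 extraction LLM.
resp = requests.post(f"{BASE_URL}/memories/raw", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```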
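
### Reranker fallback pattern

The "graceful fallback" behavior from the highlights can be pictured as follows. This is a sketch of the pattern, not the server's actual code, and the reranker's request/response shape (`query` and `documents` in, a `ranking` index list out) is an assumption.

```python
import requests

RERANKER_URL = "http://192.168.0.200:5200/rerank"

def rerank_or_fallback(query: str, documents: list[str]) -> list[str]:
    """Ask the local reranker to reorder documents; fall back to the
    original vector-similarity order if the reranker is unreachable."""
    try:
        resp = requests.post(
            RERANKER_URL,
            json={"query": query, "documents": documents},
            timeout=5,
        )
        resp.raise_for_status()
        # Assumed response shape: indices sorted by relevance.
        order = resp.json()["ranking"]
        return [documents[i] for i in order]
    except (requests.RequestException, KeyError, ValueError, IndexError):
        # Graceful fallback: keep the order the vector store returned.
        return documents
```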
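
### mem0 configuration sketch

The architecture list maps naturally onto mem0's provider-based configuration. Below is a hedged sketch of that wiring using mem0's documented config schema; the exact keys and defaults this server uses may differ.

```python
from mem0 import Memory

# Provider names and config keys follow mem0's config schema,
# but this is an illustration, not this server's actual setup code.
config = {
    "llm": {
        "provider": "groq",
        "config": {"model": "meta-llama/llama-4-scout-17b-16e-instruct"},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "openclaw_mem",
            "host": "192.168.0.200",
            "port": 8001,
        },
    },
}

memory = Memory.from_config(config)
```

In practice the server would hold two such instances, one per collection (`openclaw_mem` for `/memories` and `knowledge_mem` for `/knowledge`).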