
Add raw conversational memory endpoint

Lukas Goldschmidt 1 month ago
parent commit
ba59dd8a91
3 changed files with 25 additions and 0 deletions
  1. API.md (+17 −0)
  2. README.md (+3 −0)
  3. mem0core/routes.py (+5 −0)

API.md (+17 −0)

@@ -45,6 +45,23 @@ Returns the mem0 add response with the created memory payload.
 
 ---
 
+### `POST /memories/raw`
+Store a pre-processed or pre-summarised memory directly in the conversational collection without running the mem0 extraction LLM. Works just like `/knowledge` in that `infer` is forced to `false` and metadata is stored verbatim, making it a fast way to inject cleaned data. You can also set `metadata.created_at` (ISO 8601 string) to control the stored timestamp; otherwise the server uses the current time.
+
+**Request**
+```json
+{
+  "text": "Remember the new session plans for March.",
+  "user_id": "alice",
+  "metadata": { "created_at": "2026-03-23T16:00:00Z", "source": "planner" }
+}
+```
+
+**Response**
+Same mem0 add response as `/memories`, with `metadata` preserved exactly as submitted.
+
+---
+
 ### `POST /memories/search`
 Search with reranking. Fetches `limit × 3` candidates, reranks locally, returns top `limit`.
 

README.md (+3 −0)

@@ -34,6 +34,9 @@ GROQ_API_KEY=your_key_here
 RERANKER_URL=http://192.168.0.200:5200/rerank
 ```
 
+## Raw conversational writes 🧪
+`POST /memories/raw` lets another project inject already-processed memories straight into the conversational collection (`openclaw_mem`), preserving any supplied metadata (including `created_at`) and skipping the mem0 extraction LLM.
+
 ## Docs
 - API reference: **API.md**
 - Project overview: **PROJECT.md**

mem0core/routes.py (+5 −0)

@@ -45,6 +45,11 @@ def build_router(memory_conv: Memory, memory_know: Memory) -> APIRouter:
         """Store conversational memory with LLM extraction and deduplication enabled."""
         return await handle_add(req, memory_conv, verbatim_allowed=False)
 
+    @router.post("/memories/raw", summary="Store raw conversational memory", tags=["memories"])
+    async def add_raw_memory(req: Request):
+        """Store processed or pre-summarized conversational memory without rerunning the mem0 LLM."""
+        return await handle_add(req, memory_conv, verbatim_allowed=True)
+
     @router.post("/memories/search", summary="Search conversational memory", tags=["memories"])
     async def search_memories(req: Request):
         """Search conversational memory and rerank candidates by relevance."""