Lukas Goldschmidt 1 month ago
parent
commit
bce59d5759
Changed 10 files with 962 additions and 35 deletions
  1. .env.example (+12 -0)
  2. .gitignore (+2 -0)
  3. PROJECT.md (+13 -2)
  4. README.md (+63 -1)
  5. killserver.sh (+47 -11)
  6. requirements.txt (+1 -0)
  7. restart.sh (+11 -1)
  8. run.sh (+7 -0)
  9. test.sh (+193 -9)
  10. virtuoso_mcp.py (+613 -11)

+ 12 - 0
.env.example

@@ -0,0 +1,12 @@
+# Virtuoso MCP example environment
+VIRTUOSO_ENDPOINT=http://localhost:8891/sparql
+# VIRTUOSO_ENDPOINT=http://192.168.0.200:8890/sparql-auth
+# VIRTUOSO_USER=dba
+# VIRTUOSO_PASS=password
+GRAPH_URI=http://world.eu.org/example1
+SPARQL_TIMEOUT=10
+SPARQL_UPDATE_TIMEOUT=15
+SPARQL_DEFAULT_LIMIT=100
+SPARQL_MAX_LIMIT=500
+MCP_ALLOW_EXAMPLE_LOAD=false
+EXAMPLE_GRAPH=http://world.eu.org/cannabis-breeding#test
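The settings above are read at startup with plain `os.getenv` and typed defaults; a minimal sketch of that parsing, mirroring the config block in `virtuoso_mcp.py`:

```python
# Sketch of how virtuoso_mcp.py reads the settings from .env.example:
# numeric values are cast with int(), booleans compare the lowered string.
import os

SPARQL_DEFAULT_LIMIT = int(os.getenv("SPARQL_DEFAULT_LIMIT", 100))
SPARQL_MAX_LIMIT = int(os.getenv("SPARQL_MAX_LIMIT", 500))
# Anything other than the literal string "true" (case-insensitive) is False.
ALLOW_EXAMPLE_LOAD = os.getenv("MCP_ALLOW_EXAMPLE_LOAD", "false").lower() == "true"
```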

+ 2 - 0
.gitignore

@@ -14,3 +14,5 @@ venv/
 
 # VSCode
 .vscode/
+
+examples

+ 13 - 2
PROJECT.md

@@ -8,14 +8,18 @@ Build a minimal MCP server that proxies Virtuoso Community Edition SPARQL endpoi
 
 - Implement `sparql_query` tool that POSTs to `http://localhost:8891/sparql` with Accept header `application/sparql-results+json`.
 - Return parsed JSON straight to the caller; consider timeouts and result limits.
-- Provide sanitization / guardrails to prevent runaway queries.
+- Provide sanitization / guardrails to prevent runaway queries (SELECT-only + LIMIT enforcement).
 - Validate the server works from a simple CLI script before wiring to OpenClaw.
 
 ## Stage 2 — Helper Tools
 
 - `get_entities_by_type`: fetches all subjects of `rdf:type <TYPE>`.
-- `search_by_label`: filters `rdfs:label` via case-insensitive substring matching.
+- `search_label`: filters `rdfs:label` via case-insensitive substring matching.
 - `list_graphs`: enumerates distinct graphs that currently contain triples.
+- `get_predicates_for_subject`: lists distinct predicates for a subject URI.
+- `get_labels_for_subject`: returns labels for a subject URI.
+- `insert_triple`: insert a single triple (debugging updates).
+- `load_examples`: optionally load Turtle example files from `examples/` into a graph (guarded by `MCP_ALLOW_EXAMPLE_LOAD=true`).
 - Later add more semantic tools (predicate discovery, ontology hints) rather than letting the agent write arbitrary SPARQL.
 
 ## Stage 3 — Schema Awareness & Introspection
@@ -60,3 +64,10 @@ Build a minimal MCP server that proxies Virtuoso Community Edition SPARQL endpoi
 - Caching of frequent query results.
 - Hybrid symbolic + vector search mix.
 - Expose MCP server as a possible `tools.json` descriptor for OpenClaw.
+
+## Domain plugin layers
+
+- Introduce a `DOMAIN_LAYERS` environment variable that lists plugin modules (default `garden_layer.plugin`).
+- Each plugin module exposes a `register_layer(tools)` hook that registers domain-prefixed tools (e.g., `garden_add_seedling`).
+- On startup, the MCP server imports those modules, calls their hooks, and the new endpoints appear in the `/mcp` tool list without modifying the single FastAPI route.
+- This keeps the core server generic while letting any specialized layer (garden, almanac, inventory) add helpers via a simple plugin contract.
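The plugin contract above can be sketched as a minimal module. The tool function body is hypothetical (a real layer would issue SPARQL updates through the core helpers); only the `register_layer(tools)` hook and the domain-prefixed tool name follow from the description.

```python
# Minimal sketch of a domain layer plugin, assuming the contract described
# above: the server imports each module listed in DOMAIN_LAYERS and calls
# register_layer(tools) with its tool registry at startup.
from typing import Any, Callable, Dict

ToolFn = Callable[[Dict[str, Any]], Dict[str, Any]]


def garden_add_seedling(input_data: Dict[str, Any]) -> Dict[str, Any]:
    # Hypothetical domain helper; a real implementation would insert
    # triples via the generic update tools rather than return a stub.
    name = input_data.get("name")
    if not name:
        raise ValueError("Missing 'name' field")
    return {"status": "ok", "seedling": name}


def register_layer(tools: Dict[str, ToolFn]) -> None:
    # The hook the MCP server calls at startup; it only adds
    # domain-prefixed entries to the shared tool registry.
    tools["garden_add_seedling"] = garden_add_seedling
```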

+ 63 - 1
README.md

@@ -19,14 +19,76 @@ MCP Server
 └── Vector DBs (e.g., Qdrant)
 ```
 
+## Guardrails (current)
+
+- `sparql_query` is **SELECT-only** and always uses a LIMIT (default `SPARQL_DEFAULT_LIMIT`).
+- Any LIMIT above `SPARQL_MAX_LIMIT` is clamped.
+- Example data loads are disabled unless `MCP_ALLOW_EXAMPLE_LOAD=true` is set.
+
+## Configuration (env)
+
+`run.sh` and `test.sh` will source a local `.env` file if present. Use `.env.example` as a template.
+
+- `VIRTUOSO_ENDPOINT` (default `http://localhost:8891/sparql`; can be `.../sparql-auth` for digest auth)
+- `VIRTUOSO_USER` / `VIRTUOSO_PASS` (optional; enables HTTP Digest auth)
+- `GRAPH_URI` (used for prefix `:`)
+- `SPARQL_TIMEOUT` (seconds)
+- `SPARQL_UPDATE_TIMEOUT` (seconds)
+- `SPARQL_DEFAULT_LIMIT`
+- `SPARQL_MAX_LIMIT`
+- `MCP_ALLOW_EXAMPLE_LOAD` (`true`/`false`)
+- `EXAMPLE_GRAPH` (graph URI for `load_examples`)
+
 ## Design Principles
 
 1. Tool-based abstraction: Provide helpers such as `sparql_query`, `get_entities_by_type`, `list_graphs` instead of exposing raw SPARQL.
 2. Gradual complexity: Ship a minimal working setup, then layer on helper tooling, schema introspection, and connectors.
 3. Separation of concerns: Virtuoso stores RDF, MCP runs tool interfaces, and LLMs focus on reasoning/tool selection.
+4. Guardrails: Raw queries are SELECT-only, bounded by a default LIMIT, and clamped to a maximum size.
 
 ## Success Criteria
 
 - Phase 1: MCP tool (`sparql_query`) returns valid SPARQL JSON results.
-- Phase 2: LLM relies on helper tools instead of free-form queries.
+- Phase 2: LLM relies on helper tools instead of free-form queries (Stage 2 helpers are now present).
 - Phase 3: Multiple data sources accessible through a unified MCP interface.
+
+## Example loading (test instances)
+
+Set `MCP_ALLOW_EXAMPLE_LOAD=true` to enable the `load_examples` tool. It loads Turtle files from `examples/` into the `EXAMPLE_GRAPH` (default `http://world.eu.org/cannabis-breeding#test`). This is meant for test instances only.
+
+**Note:** the example files are Turtle (`.ttl`) and the loader sends them as SPARQL Update with Turtle prefixes preserved.
+
+## Current helper tools
+
+### Core query/navigation
+- `sparql_query` (SELECT-only, LIMIT enforced)
+- `list_graphs`
+- `search_label`
+- `get_entities_by_type`
+- `get_predicates_for_subject`
+- `get_labels_for_subject`
+- `traverse_property` (follow any property link, incoming or outgoing, and get labels/descriptions)
+
+### Ontology discovery (generic, reusable across domain layers)
+- `list_classes` (list ontology classes, optional term filter)
+- `list_properties` (list ontology properties, optional term/domain/range filters)
+- `describe_class` (class label/comment + properties declaring it as domain)
+- `describe_property` (property label/comment/domain/range/type + usage samples)
+
+### Relationship helpers
+- `describe_subject` (see all predicates/objects for a subject with optional labels)
+- `path_traverse` (walk a configured property path from a subject and return each step)
+- `property_usage_statistics` (count property usage and sample subjects/objects)
+- `batch_insert` (send TTL or multiple triples in a single guarded update; useful for staging domain changes)
+
+### Update/test helpers
+- `insert_triple` (single-triple update helper)
+- `load_examples` (optional; requires `MCP_ALLOW_EXAMPLE_LOAD=true`)
+
+## Layering recommendation
+
+Keep ontology discovery in `virtuoso_mcp` so any specialized layer (garden, inventory, analytics, etc.) can reuse it. Domain modules should call these generic tools instead of re-implementing ontology probing logic.
+
+## Domain plugin layers
+
+To expose domain-specific helpers automatically, set the `DOMAIN_LAYERS` environment variable to a comma-separated list of Python modules (default `garden_layer.plugin`). Each module must expose a `register_layer(tools)` function that receives the MCP `TOOLS` dictionary and adds prefixed entries (e.g., `garden_add_seedling`). `virtuoso_mcp` calls those hooks at startup, so to add a layer you install it (e.g., `pip install --upgrade git+https://repo.home.world.eu.org/lucky/garden_layer.git`) and include `garden_layer.plugin` in `DOMAIN_LAYERS`. The new tools then appear in the `/mcp` tool list (`curl -sS http://127.0.0.1:8501/ | jq .tools`) without changing the single `/mcp` endpoint surface.
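Every helper in the tool list above is invoked through the same JSON envelope: a body with `tool` and `input` keys POSTed to `/mcp`. A minimal offline sketch of that envelope (the URL and port assume the defaults from `run.sh`; actually sending it is left to `curl` or `requests`):

```python
# Sketch of the /mcp request envelope used by every helper tool.
# The payload shape follows the calls in test.sh.
import json
from typing import Any, Dict


def mcp_payload(tool: str, input_data: Dict[str, Any]) -> str:
    """Serialize a tool call for POST to http://127.0.0.1:8501/mcp."""
    return json.dumps({"tool": tool, "input": input_data})


# Equivalent to:
#   curl -sS -X POST http://127.0.0.1:8501/mcp \
#     -H "Content-Type: application/json" \
#     -d '{"tool":"search_label","input":{"term":"King","limit":5}}'
payload = mcp_payload("search_label", {"term": "King", "limit": 5})
```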

+ 47 - 11
killserver.sh

@@ -5,18 +5,54 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
 
 PID_FILE="server.pid"
+PORT="${MCP_PORT:-8501}"
 
-if [[ ! -f "$PID_FILE" ]]; then
-  echo "No PID file found; server not running?"
-  exit 1
+echo "[killserver] Checking for running virtuoso_mcp instances..."
+
+if [[ -f "$PID_FILE" ]]; then
+  PID="$(cat "$PID_FILE" 2>/dev/null || true)"
+  if [[ -n "${PID:-}" ]] && kill -0 "$PID" 2>/dev/null; then
+    echo "[killserver] Stopping PID from pidfile: $PID"
+    kill "$PID" || true
+    sleep 0.5
+    if kill -0 "$PID" 2>/dev/null; then
+      echo "[killserver] PID $PID still alive, sending SIGKILL"
+      kill -9 "$PID" || true
+    fi
+  else
+    echo "[killserver] Stale or empty pidfile, removing."
+  fi
+  rm -f "$PID_FILE"
+fi
+
+STRAY_PIDS="$(ps -ef | grep -E 'uvicorn[[:space:]]+virtuoso_mcp:app' | grep -v grep | awk '{print $2}' || true)"
+if [[ -n "${STRAY_PIDS:-}" ]]; then
+  echo "[killserver] Killing stray uvicorn PIDs: $STRAY_PIDS"
+  for p in $STRAY_PIDS; do
+    kill "$p" || true
+  done
+  sleep 0.5
+  for p in $STRAY_PIDS; do
+    if kill -0 "$p" 2>/dev/null; then
+      kill -9 "$p" || true
+    fi
+  done
 fi
 
-PID=$(cat "$PID_FILE")
-if kill -0 "$PID" >/dev/null 2>&1; then
-  kill "$PID"
-  sleep 1
-  echo "Server (PID $PID) terminated."
-else
-  echo "PID $PID is not running."
+if command -v lsof >/dev/null 2>&1; then
+  PORT_PIDS="$(lsof -ti tcp:"$PORT" || true)"
+  if [[ -n "${PORT_PIDS:-}" ]]; then
+    echo "[killserver] Port $PORT still in use by: $PORT_PIDS"
+    for p in $PORT_PIDS; do
+      kill "$p" || true
+    done
+    sleep 0.5
+    for p in $PORT_PIDS; do
+      if kill -0 "$p" 2>/dev/null; then
+        kill -9 "$p" || true
+      fi
+    done
+  fi
 fi
-rm -f "$PID_FILE"
+
+echo "[killserver] Done."

+ 1 - 0
requirements.txt

@@ -2,3 +2,4 @@ fastapi>=0.115
 uvicorn[standard]>=0.23
 pydantic>=2.6
 requests>=2.31
+pytest>=8.4

+ 11 - 1
restart.sh

@@ -4,5 +4,15 @@ set -euo pipefail
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
 
-./killserver.sh >/dev/null 2>&1 || true
+./killserver.sh
 ./run.sh
+
+MAX_WAIT=20
+for i in $(seq 1 "$MAX_WAIT"); do
+  if curl -sS http://127.0.0.1:8501/ >/dev/null 2>&1; then
+    break
+  fi
+  sleep 0.5
+done
+
+curl -sS http://127.0.0.1:8501/ | jq .tools

+ 7 - 0
run.sh

@@ -4,6 +4,13 @@ set -euo pipefail
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
 
+if [[ -f .env ]]; then
+  set -a
+  # shellcheck source=/dev/null
+  source .env
+  set +a
+fi
+
 LOG_DIR="logs"
 mkdir -p "$LOG_DIR"
 PID_FILE="server.pid"

+ 193 - 9
test.sh

@@ -4,19 +4,203 @@ set -euo pipefail
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
 
+if [[ -f .env ]]; then
+  set -a
+  # shellcheck source=/dev/null
+  source .env
+  set +a
+fi
+
 PORT=8501
 BASE_URL="http://127.0.0.1:$PORT"
+TEST_GRAPH="http://world.eu.org/cannabis-breeding#test"
+
+if ! command -v jq >/dev/null 2>&1; then
+  echo "ERROR: jq is required for test output parsing."
+  exit 1
+fi
+
+pass_count=0
+fail_count=0
+
+section() {
+  echo
+  echo "============================================================"
+  echo "$1"
+  echo "============================================================"
+}
+
+pass() {
+  pass_count=$((pass_count + 1))
+  echo "✅ PASS: $1"
+}
+
+fail() {
+  fail_count=$((fail_count + 1))
+  echo "❌ FAIL: $1"
+  echo "    detail: $2"
+}
+
+call_mcp() {
+  local payload="$1"
+  curl -sS -X POST "$BASE_URL/mcp" \
+    -H "Content-Type: application/json" \
+    -d "$payload"
+}
+
+assert_tool_ok() {
+  local label="$1"
+  local payload="$2"
+  local response
+  response="$(call_mcp "$payload")" || {
+    fail "$label" "HTTP request failed"
+    return 1
+  }
+
+  if ! echo "$response" | jq -e . >/dev/null 2>&1; then
+    fail "$label" "Non-JSON response: $response"
+    return 1
+  fi
+
+  local status
+  status="$(echo "$response" | jq -r '.status // empty')"
+  if [[ "$status" != "ok" ]]; then
+    local detail
+    detail="$(echo "$response" | jq -r '.detail // .error // "unknown error"')"
+    fail "$label" "$detail"
+    return 1
+  fi
+
+  pass "$label"
+  TOOL_LAST_RESPONSE="$response"
+  return 0
+}
+
+section "Health check"
+root_json="$(curl -sS "$BASE_URL/")"
+echo "$root_json" | jq '{status, virtuoso, tools, guardrails}'
+if [[ "$(echo "$root_json" | jq -r '.status // empty')" == "MCP server running" ]]; then
+  pass "root endpoint"
+else
+  fail "root endpoint" "unexpected status"
+fi
+
+section "Read-only tools"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "list_graphs" '{"tool":"list_graphs","input":{}}'; then
+  echo "Graphs returned:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[].g.value' | sed 's/^/  - /' || true
+fi
+
+section "Update path (single triple + small Turtle)"
+if [[ "${MCP_ALLOW_EXAMPLE_LOAD:-false}" == "true" ]]; then
+  TOOL_LAST_RESPONSE=""
+  if assert_tool_ok "insert_triple" '{"tool":"insert_triple","input":{"subject":"http://world.eu.org/example1#TestPlant1","predicate":"http://www.w3.org/1999/02/22-rdf-syntax-ns#type","object":"http://world.eu.org/cannabis-breeding#IndividualPlant","object_type":"uri","graph":"http://world.eu.org/cannabis-breeding#test"}}'; then
+    echo "Inserted query preview:"
+    echo "$TOOL_LAST_RESPONSE" | jq -r '.query' | sed 's/^/  /'
+  fi
+
+  TOOL_LAST_RESPONSE=""
+  if assert_tool_ok "load_examples (productioncycle_export.ttl)" '{"tool":"load_examples","input":{"files":["productioncycle_export.ttl"],"graph":"http://world.eu.org/cannabis-breeding#test"}}'; then
+    echo "Loaded files:"
+    echo "$TOOL_LAST_RESPONSE" | jq -r '.result.loaded[] | "  - \(.file) -> \(.graph)"'
+  fi
+
+  TOOL_LAST_RESPONSE=""
+  if assert_tool_ok "load_examples (export large TTL)" '{"tool":"load_examples","input":{"files":["export_2026-03-26T14_10_20.ttl"],"graph":"http://world.eu.org/cannabis-breeding#test"}}'; then
+    echo "Loaded files:"
+    echo "$TOOL_LAST_RESPONSE" | jq -r '.result.loaded[] | "  - \(.file) -> \(.graph)"'
+  fi
+else
+  echo "ℹ️  Skipped update tests (MCP_ALLOW_EXAMPLE_LOAD != true)"
+fi
+
+section "Helper retrieval checks"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "get_entities_by_type(Strain)" '{"tool":"get_entities_by_type","input":{"type_uri":"http://world.eu.org/cannabis-breeding#Strain","limit":5}}'; then
+  echo "Entity IRIs:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[].s.value' | sed 's/^/  - /' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "search_label(term=King)" '{"tool":"search_label","input":{"term":"King","limit":5}}'; then
+  echo "Label hits:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[] | "  - \(.label.value) (\(.s.value))"' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "get_predicates_for_subject(Strain_king_kong)" '{"tool":"get_predicates_for_subject","input":{"subject_uri":"http://world.eu.org/example1#Strain_king_kong","limit":10}}'; then
+  echo "Predicates found:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[].p.value' | sed 's/^/  - /' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "get_labels_for_subject(Strain_king_kong)" '{"tool":"get_labels_for_subject","input":{"subject_uri":"http://world.eu.org/example1#Strain_king_kong"}}'; then
+  echo "Subject labels:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[].label.value' | sed 's/^/  - /' || true
+fi
+
+section "Clone inspection"
+KEROSENE_ROOT="http://world.eu.org/example1#Plant_90d53925-bb5"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "traverse_property(cloneOf incoming)" '{"tool":"traverse_property","input":{"subject_uri":"http://world.eu.org/example1#Plant_90d53925-bb5","property_uri":"http://world.eu.org/cannabis-breeding#cloneOf","direction":"incoming","limit":20}}'; then
+  echo "Clones of Kerosene Krash root:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[] | "  - \(.neighbor.value) (label: \(.label.value // "<no label>"))"'
+fi
+
+section "Ontology discovery"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "list_classes(term=cycle)" '{"tool":"list_classes","input":{"term":"cycle","limit":10}}'; then
+  echo "Class hits:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[] | "  - \(.class.value) (\(.label.value // "<no label>"))"' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "list_properties(term=clone)" '{"tool":"list_properties","input":{"term":"clone","limit":10}}'; then
+  echo "Property hits:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[] | "  - \(.property.value) (\(.label.value // "<no label>"))"' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "describe_property(cb:cloneOf)" '{"tool":"describe_property","input":{"property_uri":"http://world.eu.org/cannabis-breeding#cloneOf","usage_limit":5}}'; then
+  echo "cloneOf metadata rows:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.metadata.results.bindings | length | tostring | "  - rows: " + .'
+fi
+
+section "Subject/path helpers"
+PATH_SUBJECT="http://world.eu.org/example1#Plant_cookie_kerosene_2026_3"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "describe_subject(cookie path)" "{\"tool\":\"describe_subject\",\"input\":{\"subject_uri\":\"${PATH_SUBJECT}\",\"limit\":10}}"; then
+  echo "Subject triples:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.results.bindings[] | "  - \(.predicate.value) -> \(.objectLabel.value // .object.value)"' || true
+fi
+
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "path_traverse(seed lineage)" "{\"tool\":\"path_traverse\",\"input\":{\"subject_uri\":\"${PATH_SUBJECT}\",\"property_path\":[\"http://world.eu.org/cannabis-breeding#grownFromSeedProduct\",\"http://world.eu.org/cannabis-breeding#seedProductFromPollination\"],\"limit\":5}}"; then
+  echo "Path traverse results:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.result.results.bindings[] | "  - " + (.n1Label.value // .n1.value) + " -> " + (.n2Label.value // .n2.value)' || true
+fi
 
-echo "Checking /"
-curl -fsS "$BASE_URL/"
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "property_usage_statistics" '{"tool":"property_usage_statistics","input":{"property_uri":"http://world.eu.org/cannabis-breeding#cloneOf","examples_limit":3}}'; then
+  echo "cloneOf usage count:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.count.results.bindings[0].usageCount.value // "0" | "  - " + .' || true
+  echo "Sample bindings:" 
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.examples.results.bindings[] | "  - " + (.subjectLabel.value // .subject.value) + " -> " + (.objectLabel.value // .object.value)' || true
+fi
 
-echo
+TOOL_LAST_RESPONSE=""
+if assert_tool_ok "batch_insert(test)" '{"tool":"batch_insert","input":{"ttl":"<http://world.eu.org/example1#batch_test_subject> <http://www.w3.org/2000/01/rdf-schema#label> \"batch helper\" .","graph":"http://world.eu.org/cannabis-breeding#test"}}'; then
+  echo "Batch insert query:"
+  echo "$TOOL_LAST_RESPONSE" | jq -r '.result.query' | sed 's/^/  /'
+fi
 
-echo "Calling MCP tool list_graphs"
-curl -sSf -X POST "$BASE_URL/mcp" \
-  -H "Content-Type: application/json" \
-  -d '{"tool":"list_graphs","input":{}}'
+section "Summary"
+echo "Passed: $pass_count"
+echo "Failed: $fail_count"
 
-echo
+if [[ "$fail_count" -gt 0 ]]; then
+  exit 1
+fi
 
-echo "Tests passed (assuming HTTP 200 responses)"
+echo "All checks passed."

+ 613 - 11
virtuoso_mcp.py

@@ -1,22 +1,48 @@
 import logging
 import os
 import re
-from typing import Any, Dict
+from importlib import import_module
+from pathlib import Path
+from typing import Any, Callable, Dict, List, Optional
 
 import requests
+from requests.auth import HTTPDigestAuth
 from fastapi import FastAPI, HTTPException
 from pydantic import BaseModel
 
-logging.basicConfig(level=logging.INFO)
+LOG_LEVEL = os.getenv("MCP_LOG_LEVEL", "INFO").upper()
+logging.basicConfig(level=getattr(logging, LOG_LEVEL, logging.INFO))
 logger = logging.getLogger("virtuoso_mcp")
 
 app = FastAPI(title="MCP Server")
 
 # --- CONFIG ---
-VIRTUOSO_SPARQL = os.getenv("VIRTUOSO_SPARQL", "http://localhost:8891/sparql")
+VIRTUOSO_ENDPOINT = os.getenv("VIRTUOSO_ENDPOINT") or os.getenv(
+    "VIRTUOSO_SPARQL", "http://localhost:8891/sparql"
+)
+VIRTUOSO_USER = os.getenv("VIRTUOSO_USER")
+VIRTUOSO_PASS = os.getenv("VIRTUOSO_PASS")
 SPARQL_TIMEOUT = float(os.getenv("SPARQL_TIMEOUT", 10.0))
+SPARQL_UPDATE_TIMEOUT = float(os.getenv("SPARQL_UPDATE_TIMEOUT", 15.0))
+SPARQL_DEFAULT_LIMIT = int(os.getenv("SPARQL_DEFAULT_LIMIT", 100))
+SPARQL_MAX_LIMIT = int(os.getenv("SPARQL_MAX_LIMIT", 500))
+GRAPH_URI = os.getenv("GRAPH_URI", "http://world.eu.org/example1")
+EXAMPLES_DIR = Path(__file__).resolve().parent / "examples"
+EXAMPLE_GRAPH = os.getenv(
+    "EXAMPLE_GRAPH", "http://world.eu.org/cannabis-breeding#test"
+)
+ALLOW_EXAMPLE_LOAD = os.getenv("MCP_ALLOW_EXAMPLE_LOAD", "false").lower() == "true"
 SESSION = requests.Session()
 
+PREFIXES = f"""
+PREFIX : <{GRAPH_URI}>
+PREFIX cb: <http://world.eu.org/cannabis-breeding#>
+PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+""".strip()
+
 # --- MODELS ---
 class SparqlQueryRequest(BaseModel):
     query: str
@@ -29,28 +55,149 @@ class ToolRequest(BaseModel):
 
 # --- CORE SPARQL FUNCTION ---
 
+def _build_auth() -> Optional[HTTPDigestAuth]:
+    if VIRTUOSO_USER and VIRTUOSO_PASS:
+        return HTTPDigestAuth(VIRTUOSO_USER, VIRTUOSO_PASS)
+    return None
+
+
+def _with_prefixes(query: str) -> str:
+    if re.search(r"^\s*prefix\b", query, re.IGNORECASE):
+        return query
+    return f"{PREFIXES}\n{query}"
+
+
 def run_sparql(query: str) -> Dict[str, Any]:
     """Execute a SPARQL query against Virtuoso and return the JSON payload."""
     logger.debug("Sending SPARQL query: %s", query)
     try:
         response = SESSION.post(
-            VIRTUOSO_SPARQL,
-            data={"query": query},
-            headers={"Accept": "application/sparql-results+json"},
+            VIRTUOSO_ENDPOINT,
+            data=_with_prefixes(query).encode("utf-8"),
+            headers={
+                "Accept": "application/sparql-results+json",
+                "Content-Type": "application/sparql-query",
+            },
             timeout=SPARQL_TIMEOUT,
+            auth=_build_auth(),
         )
-        response.raise_for_status()
+        if not response.ok:
+            logger.warning("SPARQL request failed: %s", response.status_code)
+            response.raise_for_status()
         return response.json()
     except Exception as exc:  # pragma: no cover - propagate for FastAPI
         logger.warning("SPARQL request failed: %s", exc)
         raise HTTPException(status_code=500, detail=str(exc))
 
 
+def run_sparql_update(query: str) -> Dict[str, Any]:
+    """Execute a SPARQL UPDATE (INSERT/DELETE) against Virtuoso."""
+    logger.debug("Sending SPARQL update: %s", query)
+    try:
+        response = SESSION.post(
+            VIRTUOSO_ENDPOINT,
+            data=_with_prefixes(query).encode("utf-8"),
+            headers={"Content-Type": "application/sparql-update"},
+            timeout=SPARQL_UPDATE_TIMEOUT,
+            auth=_build_auth(),
+        )
+        if not response.ok:
+            detail = (response.text or "").strip()
+            logger.warning("SPARQL update failed: %s", response.status_code)
+            raise HTTPException(
+                status_code=500,
+                detail=detail or f"SPARQL update failed with {response.status_code}",
+            )
+        return {"status": "ok"}
+    except HTTPException:
+        raise
+    except Exception as exc:  # pragma: no cover - propagate for FastAPI
+        logger.warning("SPARQL update failed: %s", exc)
+        raise HTTPException(status_code=500, detail=str(exc))
+
+
 # --- TOOL HELPERS ---
 
+def escape_sparql_string(value: str) -> str:
+    """Escape a string for SPARQL literal usage."""
+    if value is None:
+        return ""
+    return (
+        str(value)
+        .replace("\\", "\\\\")
+        .replace('"', "\\\"")
+        .replace("\n", "\\n")
+        .replace("\r", "")
+    )
+
+
 def sanitize_term(term: str) -> str:
     """Escape quotes inside label searches so we can safely interpolate strings."""
-    return re.sub(r"\"", "\\\"", term)
+    return escape_sparql_string(term)
+
+
+def _extract_limit(query: str) -> Optional[int]:
+    match = re.search(r"\blimit\s+(\d+)\b", query, re.IGNORECASE)
+    if not match:
+        return None
+    try:
+        return int(match.group(1))
+    except ValueError:
+        return None
+
+
+def _apply_limit(query: str, default_limit: int, max_limit: int) -> str:
+    limit = _extract_limit(query)
+    if limit is None:
+        return f"{query.strip()}\nLIMIT {default_limit}"
+    if limit > max_limit:
+        return re.sub(
+            r"\blimit\s+\d+\b",
+            f"LIMIT {max_limit}",
+            query,
+            flags=re.IGNORECASE,
+        )
+    return query
+
+
+def guard_select_query(query: str) -> str:
+    """Enforce that raw queries are read-only and bounded by LIMIT."""
+    lowered = query.lower()
+    if re.search(r"\b(insert|delete|load|clear|drop|create|move|copy|add)\b", lowered):
+        raise HTTPException(status_code=400, detail="SPARQL update operations are not allowed")
+    if "select" not in lowered:
+        raise HTTPException(status_code=400, detail="Only SELECT queries are allowed")
+    return _apply_limit(query, SPARQL_DEFAULT_LIMIT, SPARQL_MAX_LIMIT)
+
+
+def ttl_to_sparql_insert(ttl_text: str, graph: Optional[str]) -> str:
+    prefix_lines: List[str] = []
+    body_lines: List[str] = []
+    for raw_line in ttl_text.splitlines():
+        line = raw_line.strip()
+        if not line:
+            continue
+        prefix_match = re.match(r"@prefix\s+([\w-]+):\s*<([^>]+)>\s*\.", line)
+        if prefix_match:
+            prefix_lines.append(
+                f"PREFIX {prefix_match.group(1)}: <{prefix_match.group(2)}>"
+            )
+            continue
+        if line.startswith("@base"):
+            # Skip @base entries for now; they are rare in our exports.
+            continue
+        body_lines.append(raw_line)
+
+    if not body_lines:
+        raise HTTPException(status_code=400, detail="No RDF triples found in input")
+
+    prefixes = "\n".join(prefix_lines)
+    body = "\n".join(body_lines)
+    if graph:
+        insert_body = f"GRAPH <{graph}> {{\n{body}\n}}"
+    else:
+        insert_body = body
+    return f"{prefixes}\nINSERT DATA {{\n{insert_body}\n}}"
 
 
 # --- MCP TOOL IMPLEMENTATIONS ---
@@ -59,7 +206,8 @@ def tool_sparql_query(input_data: Dict[str, Any]) -> Dict[str, Any]:
     query = input_data.get("query")
     if not query:
         raise ValueError("Missing 'query' field")
-    return run_sparql(query)
+    guarded = guard_select_query(query)
+    return run_sparql(guarded)
 
 
 def tool_list_graphs(_input: Dict[str, Any]) -> Dict[str, Any]:
@@ -75,27 +223,475 @@ def tool_list_graphs(_input: Dict[str, Any]) -> Dict[str, Any]:
 def tool_search_label(input_data: Dict[str, Any]) -> Dict[str, Any]:
     term = input_data.get("term", "")
     sanitized = sanitize_term(term)
+    limit = int(input_data.get("limit", 20))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
     query = f"""
     SELECT ?s ?label WHERE {{
         ?s rdfs:label ?label .
         FILTER(CONTAINS(LCASE(?label), LCASE(\"{sanitized}\")))
     }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_get_entities_by_type(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    type_uri = input_data.get("type_uri")
+    if not type_uri:
+        raise ValueError("Missing 'type_uri' field")
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+    query = f"""
+    SELECT ?s WHERE {{
+        ?s rdf:type <{type_uri}> .
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_get_predicates_for_subject(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject_uri = input_data.get("subject_uri")
+    if not subject_uri:
+        raise ValueError("Missing 'subject_uri' field")
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+    query = f"""
+    SELECT DISTINCT ?p WHERE {{
+        <{subject_uri}> ?p ?o .
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_get_labels_for_subject(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject_uri = input_data.get("subject_uri")
+    if not subject_uri:
+        raise ValueError("Missing 'subject_uri' field")
+    query = f"""
+    SELECT ?label WHERE {{
+        <{subject_uri}> rdfs:label ?label .
+    }}
     LIMIT 20
     """
     return run_sparql(query)
 
 
+def tool_traverse_property(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject_uri = input_data.get("subject_uri")
+    property_uri = input_data.get("property_uri")
+    if not subject_uri or not property_uri:
+        raise ValueError("Missing 'subject_uri' or 'property_uri'")
+    direction = input_data.get("direction", "outgoing")
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+
+    if direction not in {"outgoing", "incoming"}:
+        raise ValueError("direction must be 'outgoing' or 'incoming'")
+
+    if direction == "outgoing":
+        triple = f"<{subject_uri}> <{property_uri}> ?neighbor ."
+    else:
+        triple = f"?neighbor <{property_uri}> <{subject_uri}> ."
+
+    query = f"""
+    SELECT ?neighbor ?label ?description WHERE {{
+        {triple}
+        OPTIONAL {{ ?neighbor rdfs:label ?label }}
+        OPTIONAL {{ ?neighbor dc:description ?description }}
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_list_classes(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    """List ontology classes (rdfs:Class and owl:Class) with optional term filtering."""
+    term = sanitize_term(input_data.get("term", ""))
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+
+    term_filter = ""
+    if term:
+        term_filter = f"""
+        FILTER(
+            CONTAINS(LCASE(COALESCE(STR(?label), STR(?class))), LCASE(\"{term}\")) ||
+            CONTAINS(LCASE(COALESCE(STR(?comment), \"\")), LCASE(\"{term}\"))
+        )
+        """
+
+    query = f"""
+    SELECT DISTINCT ?class ?label ?comment WHERE {{
+        {{ ?class rdf:type rdfs:Class . }}
+        UNION
+        {{ ?class rdf:type <http://www.w3.org/2002/07/owl#Class> . }}
+        OPTIONAL {{ ?class rdfs:label ?label }}
+        OPTIONAL {{ ?class rdfs:comment ?comment }}
+        {term_filter}
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_list_properties(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    """List ontology properties with optional term/domain/range filtering."""
+    term = sanitize_term(input_data.get("term", ""))
+    domain_uri = input_data.get("domain_uri")
+    range_uri = input_data.get("range_uri")
+    limit = int(input_data.get("limit", 100))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+
+    filters = []
+    if term:
+        filters.append(
+            f"""
+            FILTER(
+                CONTAINS(LCASE(COALESCE(STR(?label), STR(?property))), LCASE(\"{term}\")) ||
+                CONTAINS(LCASE(COALESCE(STR(?comment), \"\")), LCASE(\"{term}\"))
+            )
+            """
+        )
+    if domain_uri:
+        filters.append(f"FILTER(?domain = <{domain_uri}>)")
+    if range_uri:
+        filters.append(f"FILTER(?range = <{range_uri}>)")
+
+    query = f"""
+    SELECT DISTINCT ?property ?label ?comment ?domain ?range WHERE {{
+        {{ ?property rdf:type rdf:Property . }}
+        UNION
+        {{ ?property rdf:type <http://www.w3.org/2002/07/owl#ObjectProperty> . }}
+        UNION
+        {{ ?property rdf:type <http://www.w3.org/2002/07/owl#DatatypeProperty> . }}
+        OPTIONAL {{ ?property rdfs:label ?label }}
+        OPTIONAL {{ ?property rdfs:comment ?comment }}
+        OPTIONAL {{ ?property rdfs:domain ?domain }}
+        OPTIONAL {{ ?property rdfs:range ?range }}
+        {' '.join(filters)}
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_describe_class(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    """Describe a class and include properties that declare it as rdfs:domain."""
+    class_uri = input_data.get("class_uri")
+    if not class_uri:
+        raise ValueError("Missing 'class_uri' field")
+
+    query = f"""
+    SELECT ?label ?comment ?property ?propertyLabel ?propertyComment ?range WHERE {{
+        OPTIONAL {{ <{class_uri}> rdfs:label ?label }}
+        OPTIONAL {{ <{class_uri}> rdfs:comment ?comment }}
+        OPTIONAL {{
+            ?property rdfs:domain <{class_uri}> .
+            OPTIONAL {{ ?property rdfs:label ?propertyLabel }}
+            OPTIONAL {{ ?property rdfs:comment ?propertyComment }}
+            OPTIONAL {{ ?property rdfs:range ?range }}
+        }}
+    }}
+    LIMIT {SPARQL_MAX_LIMIT}
+    """
+    return run_sparql(query)
+
+
+def tool_describe_property(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    """Describe a property and include usage examples from the graph."""
+    property_uri = input_data.get("property_uri")
+    if not property_uri:
+        raise ValueError("Missing 'property_uri' field")
+
+    usage_limit = int(input_data.get("usage_limit", 10))
+    usage_limit = min(max(usage_limit, 1), SPARQL_MAX_LIMIT)
+
+    metadata_query = f"""
+    SELECT ?label ?comment ?domain ?range ?type WHERE {{
+        OPTIONAL {{ <{property_uri}> rdfs:label ?label }}
+        OPTIONAL {{ <{property_uri}> rdfs:comment ?comment }}
+        OPTIONAL {{ <{property_uri}> rdfs:domain ?domain }}
+        OPTIONAL {{ <{property_uri}> rdfs:range ?range }}
+        OPTIONAL {{ <{property_uri}> rdf:type ?type }}
+    }}
+    LIMIT {SPARQL_MAX_LIMIT}
+    """
+
+    usage_query = f"""
+    SELECT ?subject ?subjectLabel ?object ?objectLabel WHERE {{
+        ?subject <{property_uri}> ?object .
+        OPTIONAL {{ ?subject rdfs:label ?subjectLabel }}
+        OPTIONAL {{ ?object rdfs:label ?objectLabel }}
+    }}
+    LIMIT {usage_limit}
+    """
+
+    return {
+        "metadata": run_sparql(metadata_query),
+        "usage": run_sparql(usage_query),
+    }
+
+
+def tool_describe_subject(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject_uri = input_data.get("subject_uri")
+    if not subject_uri:
+        raise ValueError("Missing 'subject_uri' field")
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+    query = f"""
+    SELECT ?predicate ?object ?objectLabel WHERE {{
+        <{subject_uri}> ?predicate ?object .
+        OPTIONAL {{ ?object rdfs:label ?objectLabel }}
+    }}
+    LIMIT {limit}
+    """
+    return run_sparql(query)
+
+
+def tool_path_traverse(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject_uri = input_data.get("subject_uri")
+    property_path = input_data.get("property_path") or input_data.get("properties")
+    if not subject_uri or not property_path:
+        raise ValueError("Missing 'subject_uri' or 'property_path'")
+    if isinstance(property_path, str):
+        property_path = [p.strip() for p in property_path.split(",") if p.strip()]
+    if not isinstance(property_path, list) or not property_path:
+        raise ValueError("'property_path' must be a non-empty list of property URIs")
+    direction = input_data.get("direction", "outgoing")
+    limit = int(input_data.get("limit", 50))
+    limit = min(max(limit, 1), SPARQL_MAX_LIMIT)
+
+    statements = []
+    optional_lines = []
+    select_terms = []
+    prev_subject = f"<{subject_uri}>"
+
+    for idx, prop_uri in enumerate(property_path, start=1):
+        step_var = f"?n{idx}"
+        if direction == "outgoing":
+            statements.append(f"{prev_subject} <{prop_uri}> {step_var} .")
+        else:
+            statements.append(f"{step_var} <{prop_uri}> {prev_subject} .")
+        # Select the label/description variables too, otherwise the OPTIONAL
+        # patterns below are evaluated but never returned to the caller.
+        select_terms.extend([step_var, f"{step_var}Label", f"{step_var}Description"])
+        optional_lines.append(f"OPTIONAL {{ {step_var} rdfs:label {step_var}Label }}")
+        optional_lines.append(f"OPTIONAL {{ {step_var} dc:description {step_var}Description }}")
+        prev_subject = step_var
+
+    select_clause = " ".join(select_terms)
+    # Join outside the f-string: backslash escapes inside f-string
+    # expressions are a SyntaxError before Python 3.12.
+    pattern_block = "\n        ".join(statements)
+    optional_block = "\n        ".join(optional_lines)
+    query = f"""
+    SELECT {select_clause} WHERE {{
+        {pattern_block}
+        {optional_block}
+    }}
+    LIMIT {limit}
+    """
+    return {
+        "property_path": property_path,
+        "direction": direction,
+        "result": run_sparql(query),
+    }
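A sketch of two equivalent `path_traverse` request bodies accepted by the parsing above (URIs are placeholders); the comma-separated string form is normalized into the list form by the handler:

```python
# Two equivalent path_traverse inputs (hypothetical URIs).
as_list = {
    "subject_uri": "http://example.org/plant1",
    "property_path": ["http://example.org/grownIn", "http://example.org/locatedIn"],
    "direction": "outgoing",
    "limit": 20,
}
as_string = {
    "subject_uri": "http://example.org/plant1",
    "property_path": "http://example.org/grownIn, http://example.org/locatedIn",
}
# The handler normalizes the string form with split(",") + strip().
normalized = [p.strip() for p in as_string["property_path"].split(",") if p.strip()]
```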
+
+
+def tool_property_usage_statistics(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    property_uri = input_data.get("property_uri")
+    if not property_uri:
+        raise ValueError("Missing 'property_uri' field")
+    examples_limit = int(input_data.get("examples_limit", 5))
+    examples_limit = min(max(examples_limit, 1), SPARQL_MAX_LIMIT)
+
+    count_query = f"""
+    SELECT (COUNT(DISTINCT ?subject) AS ?usageCount) WHERE {{
+        ?subject <{property_uri}> ?object .
+    }}
+    """
+    usage_query = f"""
+    SELECT ?subject ?subjectLabel ?object ?objectLabel WHERE {{
+        ?subject <{property_uri}> ?object .
+        OPTIONAL {{ ?subject rdfs:label ?subjectLabel }}
+        OPTIONAL {{ ?object rdfs:label ?objectLabel }}
+    }}
+    LIMIT {examples_limit}
+    """
+    return {
+        "count": run_sparql(count_query),
+        "examples": run_sparql(usage_query),
+    }
+
+
+def tool_batch_insert(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    ttl_text = input_data.get("ttl")
+    triples = input_data.get("triples")
+    graph = input_data.get("graph") or GRAPH_URI
+
+    if not ttl_text and not triples:
+        raise ValueError("Provide either 'ttl' text or 'triples' list")
+
+    def _format_object(obj_value: Any, obj_type: str, datatype: Optional[str], lang: Optional[str]) -> str:
+        if obj_type == "uri":
+            return f"<{obj_value}>"
+        if obj_type == "literal":
+            return f'"{escape_sparql_string(obj_value)}"'
+        if obj_type == "typed_literal":
+            if not datatype:
+                raise ValueError("Missing datatype for typed_literal")
+            return f'"{escape_sparql_string(obj_value)}"^^<{datatype}>'
+        if obj_type == "lang_literal":
+            if not lang:
+                raise ValueError("Missing lang for lang_literal")
+            return f'"{escape_sparql_string(obj_value)}"@{lang}'
+        raise ValueError(f"Unknown object_type: {obj_type}")
+
+    if ttl_text:
+        query = ttl_to_sparql_insert(ttl_text, graph)
+    else:
+        lines = []
+        for triple in triples:
+            subj = triple.get("subject")
+            pred = triple.get("predicate")
+            obj_value = triple.get("object")
+            if not subj or not pred or obj_value is None:
+                raise ValueError("Each triple must provide subject, predicate, and object")
+            obj_type = triple.get("object_type", "uri")
+            datatype = triple.get("datatype")
+            lang = triple.get("lang")
+            obj_text = _format_object(obj_value, obj_type, datatype, lang)
+            lines.append(f"<{subj}> <{pred}> {obj_text} .")
+        ttl_bulk = "\n".join(lines)
+        query = ttl_to_sparql_insert(ttl_bulk, graph)
+
+    result = run_sparql_update(query)
+    return {**result, "query": query}
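For reference, a sketch of a `triples` payload that would exercise each `object_type` branch handled by `_format_object` above (all URIs and values are placeholders):

```python
# Hypothetical batch_insert payload showing each supported object_type.
payload = {
    "graph": "http://world.eu.org/example1",
    "triples": [
        # plain URI object (the default object_type)
        {"subject": "http://example.org/s1",
         "predicate": "http://example.org/knows",
         "object": "http://example.org/s2",
         "object_type": "uri"},
        # untyped string literal
        {"subject": "http://example.org/s1",
         "predicate": "http://www.w3.org/2000/01/rdf-schema#label",
         "object": "Sample node",
         "object_type": "literal"},
        # typed literal: requires 'datatype'
        {"subject": "http://example.org/s1",
         "predicate": "http://example.org/count",
         "object": "3",
         "object_type": "typed_literal",
         "datatype": "http://www.w3.org/2001/XMLSchema#integer"},
        # language-tagged literal: requires 'lang'
        {"subject": "http://example.org/s1",
         "predicate": "http://www.w3.org/2000/01/rdf-schema#comment",
         "object": "Beispielknoten",
         "object_type": "lang_literal",
         "lang": "de"},
    ],
}
```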
+
+
+def tool_insert_triple(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    subject = input_data.get("subject")
+    predicate = input_data.get("predicate")
+    obj = input_data.get("object")
+    obj_type = input_data.get("object_type", "uri")
+    graph = input_data.get("graph")
+
+    if not subject or not predicate or obj is None:
+        raise ValueError("Missing 'subject', 'predicate', or 'object' field")
+
+    if obj_type == "uri":
+        obj_value = f"<{obj}>"
+    elif obj_type == "literal":
+        obj_value = f"\"{escape_sparql_string(obj)}\""
+    elif obj_type == "typed_literal":
+        datatype = input_data.get("datatype")
+        if not datatype:
+            raise ValueError("Missing 'datatype' for typed_literal")
+        obj_value = f"\"{escape_sparql_string(obj)}\"^^<{datatype}>"
+    elif obj_type == "lang_literal":
+        lang = input_data.get("lang")
+        if not lang:
+            raise ValueError("Missing 'lang' for lang_literal")
+        obj_value = f"\"{escape_sparql_string(obj)}\"@{lang}"
+    else:
+        raise ValueError(f"Unknown object_type: {obj_type}")
+
+    ttl = f"<{subject}> <{predicate}> {obj_value} ."
+    update_query = ttl_to_sparql_insert(ttl, graph)
+    try:
+        result = run_sparql_update(update_query)
+    except HTTPException as exc:
+        detail = f"{exc.detail}\n\nSPARQL:\n{update_query}"
+        raise HTTPException(status_code=exc.status_code, detail=detail) from exc
+    return {**result, "query": update_query}
+
+
+def tool_load_examples(input_data: Dict[str, Any]) -> Dict[str, Any]:
+    if not ALLOW_EXAMPLE_LOAD:
+        raise HTTPException(status_code=403, detail="Example loading is disabled")
+
+    files = input_data.get("files") or []
+    if isinstance(files, str):
+        files = [files]
+    if not files:
+        files = [p.name for p in EXAMPLES_DIR.glob("*.ttl")]
+
+    graph = input_data.get("graph") or EXAMPLE_GRAPH
+    results = []
+
+    for filename in files:
+        file_path = (EXAMPLES_DIR / filename).resolve()
+        # Validate containment against the resolved base dir before the
+        # existence check, so a traversal attempt never probes the filesystem.
+        if EXAMPLES_DIR.resolve() not in file_path.parents:
+            raise HTTPException(status_code=400, detail="Invalid example file path")
+        if not file_path.exists():
+            raise HTTPException(status_code=400, detail=f"Missing example file: {filename}")
+        ttl_text = file_path.read_text(encoding="utf-8")
+        update_query = ttl_to_sparql_insert(ttl_text, graph)
+        run_sparql_update(update_query)
+        results.append({"file": filename, "graph": graph})
+
+    return {"loaded": results}
+
+
 # --- TOOL REGISTRY ---
 TOOLS = {
     "sparql_query": tool_sparql_query,
     "list_graphs": tool_list_graphs,
     "search_label": tool_search_label,
+    "get_entities_by_type": tool_get_entities_by_type,
+    "get_predicates_for_subject": tool_get_predicates_for_subject,
+    "get_labels_for_subject": tool_get_labels_for_subject,
+    "traverse_property": tool_traverse_property,
+    "list_classes": tool_list_classes,
+    "list_properties": tool_list_properties,
+    "describe_class": tool_describe_class,
+    "describe_property": tool_describe_property,
+    "describe_subject": tool_describe_subject,
+    "path_traverse": tool_path_traverse,
+    "property_usage_statistics": tool_property_usage_statistics,
+    "batch_insert": tool_batch_insert,
+    "insert_triple": tool_insert_triple,
+    "load_examples": tool_load_examples,
 }
 
+
+def load_domain_layers(tools: Dict[str, Callable[[Dict[str, Any]], Any]]) -> None:
+    raw = os.getenv("DOMAIN_LAYERS", "garden_layer.plugin")
+    modules = [item.strip() for item in raw.split(",") if item.strip()]
+    if not modules:
+        return
+    for module_name in modules:
+        try:
+            module = import_module(module_name)
+        except ImportError as exc:
+            logger.warning("Domain layer '%s' could not be imported: %s", module_name, exc)
+            continue
+        register = getattr(module, "register_layer", None)
+        if not callable(register):
+            logger.warning("Domain layer '%s' does not expose register_layer", module_name)
+            continue
+        try:
+            register(tools)
+            logger.info("Loaded domain layer '%s'", module_name)
+        except Exception as exc:
+            logger.exception("Domain layer '%s' failed to register: %s", module_name, exc)
+
+
+load_domain_layers(TOOLS)
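A minimal domain layer compatible with this loader might look as follows (module and tool names are hypothetical; the only contract is a module-level `register_layer(tools)` callable):

```python
# garden_layer/plugin.py (hypothetical) -- loaded via the DOMAIN_LAYERS env var.
from typing import Any, Callable, Dict


def tool_list_plants(input_data: Dict[str, Any]) -> Dict[str, Any]:
    # A domain-specific tool; a real implementation would call run_sparql().
    limit = int(input_data.get("limit", 10))
    return {"plants": [], "limit": limit}


def register_layer(tools: Dict[str, Callable[[Dict[str, Any]], Any]]) -> None:
    # Mutate the shared registry in place; the server then exposes the tool.
    tools["list_plants"] = tool_list_plants
```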
+
 TOOL_DOCS = {
-    "sparql_query": "Execute arbitrary SPARQL and return the JSON result.",
+    "sparql_query": "Execute a bounded SELECT query and return the JSON result.",
     "list_graphs": "List up to 50 active graph URIs.",
     "search_label": "Search rdfs:label values that contain a term (case-insensitive).",
+    "get_entities_by_type": "List subjects of a given rdf:type.",
+    "get_predicates_for_subject": "List distinct predicates used by a subject.",
+    "get_labels_for_subject": "Fetch rdfs:label values for a subject.",
+    "traverse_property": "Traverse a property (incoming or outgoing) for a subject and return labels/descriptions.",
+    "list_classes": "List ontology classes with optional label/comment term filtering.",
+    "list_properties": "List ontology properties with optional term/domain/range filters.",
+    "describe_class": "Describe a class and list properties that use it as rdfs:domain.",
+    "describe_property": "Describe a property (label/comment/domain/range/type) and sample usage.",
+    "describe_subject": "Return subject predicates/objects (with labels) to inspect an individual node.",
+    "path_traverse": "Follow a property path (list of predicates) from a subject, returning each step's nodes.",
+    "property_usage_statistics": "Count how often a property is used and sample subjects/objects.",
+    "batch_insert": "Insert multiple triples or TTL at once with a single guarded update.",
+    "insert_triple": "Insert a single triple (useful for debugging updates).",
+    "load_examples": "Load Turtle examples from the local examples/ directory into a graph.",
 }
 
 
@@ -129,7 +725,13 @@ def root():
     return {
         "status": "MCP server running",
         "tools": list(TOOLS.keys()),
-        "virtuoso": VIRTUOSO_SPARQL,
+        "virtuoso": VIRTUOSO_ENDPOINT,
+        "guardrails": {
+            "default_limit": SPARQL_DEFAULT_LIMIT,
+            "max_limit": SPARQL_MAX_LIMIT,
+            "allow_example_load": ALLOW_EXAMPLE_LOAD,
+            "turtle_examples": True,
+        },
     }