
📈 Trends MCP Server — Requirements Spec

🎯 Goal

Provide free, normalized entity intelligence for:

  • keywords
  • entities (BTC, AI, ETFs, etc.)
  • topics

👉 This is not a paid trend terminal
👉 It’s an entity resolution + attention context engine

First-order priority

The most reliable value in this project is no longer historical trend curves. It is entity normalization, topic disambiguation, and related-query discovery.

If Google refuses time-series requests, the MCP should remain useful by:

  • returning Knowledge Graph MID candidates
  • resolving canonical entity labels
  • surfacing related queries and topics
  • providing topic-aware context for downstream MCPs
  • keeping a SQLite snapshot history for temporal inspection and later diffing

🧠 Core Insight

Markets often move after attention increases.

So this MCP helps answer:

  • “Is something gaining attention?”
  • “Is this narrative heating up?”
  • “Is attention diverging from price?”

🏗️ 1. Internal Architecture

📊 Data Sources (providers/)

Primary:

  • Google Trends (via unofficial APIs like pytrends)

Optional later:

  • Twitter/X trends
  • Reddit mentions
  • YouTube search trends

🔄 Normalization Layer (CRITICAL)

Problem:

Google Trends data is:

  • relative (0–100)
  • inconsistent across queries

Solution:

Normalize across:

  • timeframes
  • keywords

Techniques:

  • anchor keywords (e.g. always compare vs “bitcoin”)
  • rescaling across batches
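The anchor technique can be sketched as follows: include the anchor keyword in every batch, then re-express each batch relative to the anchor's value so keywords from different requests land on one shared scale. A minimal sketch (batch shape and function name are assumptions):

```python
def rescale_batches(batches, anchor="bitcoin"):
    """Rescale several 0-100 Trends batches onto one shared scale.

    Each batch is {keyword: mean_interest} and must contain the anchor
    keyword. All values are re-expressed relative to the anchor, making
    keywords from separate requests comparable.
    """
    rescaled = {}
    for batch in batches:
        anchor_value = batch[anchor]
        if anchor_value == 0:
            continue  # anchor too weak in this batch; skip, don't divide by zero
        for keyword, value in batch.items():
            rescaled[keyword] = value / anchor_value
    return rescaled

# Two separate requests, each internally capped at 100:
batch_a = {"bitcoin": 100, "ethereum": 40}
batch_b = {"bitcoin": 50, "solana": 25}
print(rescale_batches([batch_a, batch_b]))
# → {'bitcoin': 1.0, 'ethereum': 0.4, 'solana': 0.5}
```

Note that "ethereum" and "solana" become comparable here even though they never appeared in the same request.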

🧠 Entity Mapping Layer

Map:

  • “btc”, “bitcoin”, “bitcoin price” → BTC

👉 This is essential for consistency with your other MCPs


🗃️ Storage / Cache

Cache trend series:

Key:

trends:{keyword}:{timeframe}

TTL:

  • 15–60 minutes (trends don’t change second-by-second)
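An in-memory sketch of that cache, using the key format above (a real server might back this with Redis or the SQLite store instead; the class name and 30-minute default are assumptions):

```python
import time

class TrendCache:
    """TTL cache keyed as trends:{keyword}:{timeframe}."""

    def __init__(self, ttl_seconds: float = 30 * 60):
        self.ttl = ttl_seconds
        self._store = {}

    @staticmethod
    def key(keyword: str, timeframe: str) -> str:
        return f"trends:{keyword}:{timeframe}"

    def get(self, keyword, timeframe):
        entry = self._store.get(self.key(keyword, timeframe))
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > self.ttl:
            return None  # expired; caller should refetch
        return value

    def put(self, keyword, timeframe, value):
        self._store[self.key(keyword, timeframe)] = (time.time(), value)
```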

🧰 2. Agent-Facing Tools

Keep them interpretive, not raw


1. get_interest_over_time

“Show attention trend”

{
  "keyword": "bitcoin",
  "timeframe": "7d"
}

Output:

{
  "series": [12, 18, 25, 40, 65, 80],
  "trend": "rising"
}

👉 Include simple interpretation (“rising”, “falling”, “flat”)
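The interpretation label can come from a least-squares slope over the series, normalized by the mean so the label is scale-independent. A sketch (the 5% flat threshold is an assumed default, not a tuned value):

```python
def classify_trend(series, flat_threshold=0.05):
    """Label a series 'rising', 'falling', or 'flat'.

    Fits a least-squares slope, divides by the series mean so the
    threshold works regardless of absolute interest level.
    """
    n = len(series)
    if n < 2:
        return "flat"
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    slope = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(series)) \
        / sum((i - mean_x) ** 2 for i in range(n))
    relative = slope / mean_y if mean_y else 0.0
    if relative > flat_threshold:
        return "rising"
    if relative < -flat_threshold:
        return "falling"
    return "flat"

print(classify_trend([12, 18, 25, 40, 65, 80]))  # → rising
```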


2. compare_interest

“Which topic is hotter?”

{
  "keywords": ["bitcoin", "ethereum"],
  "timeframe": "7d"
}

Output:

{
  "winner": "bitcoin",
  "ratios": {
    "bitcoin": 1.0,
    "ethereum": 0.72
  }
}
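Given normalized attention scores (e.g. from the anchor-rescaling step), the output shape above is a small transform. A sketch:

```python
def compare_interest(scores: dict) -> dict:
    """Turn normalized attention scores into winner + ratios relative
    to the strongest keyword."""
    winner = max(scores, key=scores.get)
    top = scores[winner]
    ratios = {k: round(v / top, 2) for k, v in scores.items()}
    return {"winner": winner, "ratios": ratios}

print(compare_interest({"bitcoin": 83, "ethereum": 60}))
# → {'winner': 'bitcoin', 'ratios': {'bitcoin': 1.0, 'ethereum': 0.72}}
```

Expressing ratios relative to the winner (winner always 1.0) keeps the output readable even when absolute scores are meaningless in isolation.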

3. detect_trending_entities

“What is gaining attention?”

{
  "category": "crypto"
}

Output:

[
  {
    "entity": "Solana",
    "trend_score": 0.91,
    "velocity": "high"
  }
]

4. get_related_queries

“What are people searching around this?”

{
  "keyword": "bitcoin"
}

Output:

[
  "bitcoin etf",
  "bitcoin price prediction",
  "btc news"
]

5. get_attention_score (VERY useful)

“How much attention does X have right now?”

{
  "entity": "BTC"
}

Output:

{
  "score": 0.78,
  "relative_to_baseline": 1.4
}
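One way to compute both fields from a single interest series: map the latest 0-100 value to a 0-1 score, and divide it by the trailing-window mean to get the baseline ratio (so 1.4 reads as "40% above normal"). A sketch; the 30-point window is an assumption:

```python
def attention_score(series, baseline_window=30):
    """Score current attention against the series' own recent baseline.

    score: latest value mapped to 0-1 (Trends values are 0-100).
    relative_to_baseline: latest value over the trailing-window mean.
    """
    baseline = series[-baseline_window:]
    mean = sum(baseline) / len(baseline)
    latest = series[-1]
    return {
        "score": round(latest / 100, 2),
        "relative_to_baseline": round(latest / mean, 2) if mean else None,
    }
```

Using the series' own history as the baseline sidesteps cross-keyword normalization for this tool: each entity is compared only to itself.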

⚠️ 3. What NOT to expose

Avoid:

  • raw Google Trends responses
  • overly granular time series
  • unnormalized keyword data

❌ Bad:

get_raw_trends(keyword)

🧠 4. Key Challenges

1. Normalization (hardest problem)

Trends are:

  • relative per query
  • not directly comparable

👉 If you skip normalization, your MCP becomes misleading


2. Keyword ambiguity

Example:

  • “apple” → company or fruit?

👉 You need:

  • context-aware mapping
  • or restrict to known entities (better for v1)

3. Sparse data

Some queries:

  • have low volume
  • return noisy signals

👉 Filter aggressively
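A filter can reject a series before any signal is computed from it. A sketch, assuming two illustrative heuristics (both thresholds are guesses to tune, not known-good values): minimum mean interest, and a cap on the fraction of zero points, since sparse Trends queries return long runs of zeros that produce misleading slopes.

```python
def is_reliable(series, min_mean=5, max_zero_ratio=0.5):
    """Reject series with too little volume to trust."""
    if not series:
        return False
    zero_ratio = sum(1 for v in series if v == 0) / len(series)
    mean = sum(series) / len(series)
    return mean >= min_mean and zero_ratio <= max_zero_ratio
```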


⚡ 5. Signal Engineering (where value comes from)

Raw trend data is weak.

Value comes from:


📈 Trend direction

  • rising / falling / flat

🚀 Velocity

  • how fast attention increases

🔥 Spike detection

  • sudden jumps

⚖️ Relative strength

  • vs other entities
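All four signals derive from the same series; spike detection is the least obvious, so here is one sketch: flag the latest point when it sits more than a threshold number of standard deviations above the trailing-window mean (window size and threshold are illustrative defaults, not tuned values).

```python
def detect_spike(series, window=7, threshold=2.0):
    """Flag a sudden attention jump: latest point more than `threshold`
    standard deviations above the trailing-window mean."""
    if len(series) < window + 1:
        return False
    history = series[-window - 1:-1]
    mean = sum(history) / window
    var = sum((v - mean) ** 2 for v in history) / window
    std = var ** 0.5
    if std == 0:
        return series[-1] > mean  # any jump off a perfectly flat line counts
    return (series[-1] - mean) / std > threshold

print(detect_spike([10, 11, 9, 10, 12, 10, 11, 40]))  # → True
```

Velocity and trend direction fall out of the same machinery (slope of the window), while relative strength needs the cross-keyword normalization from Section 1.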

🧩 6. Relationship to Other MCPs

This is your early warning system

Combined with:

  • Crypto MCP → confirms movement
  • News MCP → explains movement

🔥 Example synergy:

Trends MCP → “Ethereum ETF” searches spiking

News MCP → few articles yet

Crypto MCP → price still flat

👉 This is a pre-news signal


🧭 7. Design Philosophy

Each tool should answer:

“Where is attention moving?”


🚀 8. Suggested Build Order

  1. basic keyword trend fetch
  2. caching
  3. simple slope detection (rising/falling)
  4. entity mapping (BTC, ETH, etc.)
  5. comparison tool

👉 normalization improvements can come later