A TikTok hashtag with 2 million views and a Google search term with 2 million monthly searches are not the same thing - and treating them as if they are produces misleading analysis. Cross-platform trend comparison is only valid when the underlying data is normalized to a consistent scale. Trends MCP normalizes all sources to a calibrated 0-100 scale so you can legitimately compare Google, TikTok, Reddit, Amazon, and more in a single query.
Free API access
100 free requests per month. No credit card, no setup fee.
Replaced my manual Google Trends scraper in an afternoon. The data is clean and the latency is surprisingly low for a free tier.
We use it for keyword trend reports. The free monthly quota keeps us batching queries for weekly digests. Upgrading is there when we need more headroom.
Hooked it into my MCP server in like 20 minutes. The JSON response is well-structured and the docs are solid. Exactly what I needed.
We pipe weekly series into BigQuery for a few brand cohorts. Compared to maintaining our old Selenium job, this is boring in the best way. Uptime has been solid.
Great for slide-ready trend screenshots when leadership asks why we are prioritizing a feature. I wish the dashboard had saved views, but the API side is great.
Running it from Cursor with the MCP config took one try. I am not a trends person, but my side project now emails me when a niche keyword spikes hard week over week.
Using the growth endpoints to sanity-check retail names before I write up notes. Occasionally the normalization differs from what I see in the raw Google UI, but it is consistent run to run.
Pulling multi-source ranked lists into a notebook is straightforward. Error payloads are actually readable when I fat-finger a parameter, which matters more than people admit.
Does what it says. I knocked a star because onboarding assumed I already knew MCP wiring; a copy-paste block for Claude Desktop would have saved me 15 minutes.
We track TikTok hashtag momentum against paid spend in a Looker sheet. Not glamorous work, but it is the first tool my team did not argue about during rollout.
Retries are predictable and I have not seen weird HTML in responses (looking at you, scrapers). Would pay for a team key rotation flow, but for now we rotate manually.
Quick checks on retail buzz before we dig into filings. Not a silver bullet, but it is faster than opening twelve browser tabs and reconciling by hand.
Helpful for spotting whether a topic is a one-day meme or sticking around. I still cross-check with Search Console, but this gets me 80% of the signal in one call.
I demo this in workshops when people ask how to ground LLM answers in something fresher than training data. The MCP angle lands well with engineers who hate glue code.
Solid for client reporting. Billing is clear enough that finance stopped asking me what line item this is. Minor nit: peak hours can feel a touch slower, still acceptable.
I wired this behind a small CLI for contributors who want trend context in issues. Keeping the surface area tiny matters for OSS, and the schema has not churned on me yet.
Daily pulls for a 30-day window go straight into our internal scoreboard. Stakeholders finally stopped debating whose screenshot of Trends was newer.
We are pre-revenue, so free tier discipline matters. I hit the cap once during a brainstorm where everyone wanted to try random keywords. Learned to batch smarter.
Security review passed without drama: HTTPS, scoped keys, no bizarre third-party redirects in the chain we could find. That is rarer than vendors think.
I do not need this daily, but when App Store rank shifts look weird, having Reddit and news context in one place saves me from context switching across six apps.
I use it to see if a story is genuinely blowing up or just loud on one platform. It is not a replacement for reporting, but it keeps my ledes honest.
We moved off a brittle Playwright script that broke every time Google shuffled markup. Same data shape every week now, which is all I wanted from life.
Seasonal demand spikes line up with what we see in Amazon search interest here. Merch team stopped sending me screenshots from random tools that never matched.
Solid for client decks. I docked one star only because I still export to Sheets manually; a direct connector would be nice someday.
Steam concurrents plus Reddit chatter in one workflow beats our old spreadsheet ritual before milestone reviews.
Quick pulse on whether a feature name is confusing people in search before we ship copy. Cheap sanity check compared to a full survey.
Monitored from Grafana via a thin wrapper. p95 stayed under our SLO budget last month. One noisy day during a holiday but nothing alarming.
Narrative fights in meetings got shorter once we could point at the same trend line everyone agreed on. Sounds silly until you have lived through it.
Using normalized series as a weak prior in a forecasting experiment. Citation-friendly timestamps in the payload made reproducing runs less painful.
Approved for our pilot group after a quick vendor review. Would love SAML, not a blocker for our size.
YouTube search interest plus TikTok hashtags in one place helps me explain why a sponsor should care about a vertical without hand-waving.
Cron job hits the API before standup; Slack gets a compact summary. Took an afternoon to wire, has been stable for two quarters.
Useful for public-interest topics where search interest is a rough proxy for attention. I still triangulate with primary sources; this is one signal among several.
Runs in a VPC egress-only subnet with allowlisted domains. Fewer exceptions to explain to auditors than our last vendor.
Spotting when a topic is about to flood Discord saves my team from reactive moderation fires. Not perfect, but directionally right often enough.
For lean teams the ROI story writes itself. I would not build an in-house scraper for this anymore unless compliance forced it.
Examples in the docs match what the MCP actually returns. You would be surprised how rare that is in this category.
Pager stayed quiet. When something upstream flaked once, the error string told me which parameter to fix without opening logs first.
Students use it for coursework demos. Budget is tight so free tier matters; we coach them to cache aggressively.
Helps prep talking points when retail interest in our name swings after earnings. Not material disclosure, just context for Q&A prep.
Response sizes stay small enough for mobile hotspots. I hate APIs that dump megabytes for a sparkline.
The most common error in multi-source trend analysis is treating raw volumes from different platforms as if they measure the same thing on the same scale. They do not. Fixing this requires a consistent normalization methodology applied before the data reaches any analysis or visualization layer.
Consider three platforms tracking interest in the same keyword on the same day: Google search volume, 2,200,000 TikTok hashtag views, and 8,400 Reddit mentions.
A naive reading suggests TikTok has by far the highest interest. But this ignores that TikTok's total daily content volume is orders of magnitude larger than Reddit's, and that "hashtag views" and "Reddit mentions" measure fundamentally different behaviors. TikTok's video algorithm shows hashtag content to users who did not actively seek it; Reddit mentions require active posting to a community.
Without normalization, you cannot answer: is 8,400 Reddit mentions a lot or a little for this keyword on this platform? Is 2,200,000 TikTok views above or below average for a topic at this level of cultural penetration?
Normalization converts each platform's raw volume to a position within that platform's own distribution. A normalized value of 60 on Google Search means: this keyword's search volume is at approximately the 60th percentile of Google Search volumes for comparable keywords. A normalized value of 60 on Reddit means: this keyword's discussion volume is at the 60th percentile of Reddit discussion volumes for comparable keywords.
Now the comparison is valid. Both 60s represent equivalent relative penetration on their respective platforms. A keyword scoring 60 on Google and 30 on Reddit is genuinely stronger on Google than Reddit - the cross-platform comparison reflects something real about relative interest.
This is what Trends MCP's 0-100 scale provides. It is not Google's native 0-100 (which changes with every query window), and it is not a raw volume number. It is a consistently calibrated relative position within each platform's distribution.
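As a sketch of the idea only (the service's actual calibration and baseline data are not published here), a percentile-style normalization can be expressed in a few lines of Python, using made-up baseline distributions:

import numpy as np

rng = np.random.default_rng(0)

def normalize_to_0_100(raw_value: float, baseline: np.ndarray) -> float:
    """Percentile rank of a raw volume within one platform's own
    distribution of comparable-keyword volumes, mapped to 0-100."""
    return round(100 * (baseline <= raw_value).mean(), 1)

# Hypothetical per-platform baselines (illustrative distributions only):
# each platform's daily volumes for comparable keywords.
reddit_baseline = rng.lognormal(mean=8, sigma=1.5, size=10_000)   # mentions
tiktok_baseline = rng.lognormal(mean=13, sigma=1.8, size=10_000)  # views

# 8,400 Reddit mentions and 2,200,000 TikTok views become comparable
# only once each is scored against its own platform's distribution.
print(normalize_to_0_100(8_400, reddit_baseline))
print(normalize_to_0_100(2_200_000, tiktok_baseline))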
Google's native normalization is query-dependent. If you query a single keyword, it returns 100 at its peak during the selected date range and all other values scaled to that peak. If you add a second keyword to the same query, both keywords are re-scaled to the new combined peak. Add a third keyword and all three rescale again.
This means:
- You cannot combine data from two separate Google Trends queries and compare the values
- You cannot replicate a study's exact values by running the query again later (the peak may have shifted)
- A keyword that scored 40 in one query batch may score 75 in another batch with different comparison terms
For any multi-keyword or multi-time-period analysis, Google's native normalization produces numbers that look precise but are not comparable to each other. This is the core methodological critique in the academic literature on Google Trends.
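A toy rescaling in Python makes the problem concrete; the volumes are invented for illustration:

def google_style_scale(series: dict[str, list[int]]) -> dict[str, list[int]]:
    """Scale every value in the batch to the batch's own peak = 100,
    which is how Google Trends' native normalization behaves."""
    peak = max(v for values in series.values() for v in values)
    return {k: [round(100 * v / peak) for v in values]
            for k, values in series.items()}

# Queried alone, keyword_a peaks at 100.
print(google_style_scale({"keyword_a": [120, 300, 180]}))
# -> {'keyword_a': [40, 100, 60]}

# Add a higher-volume term and the same raw values rescale downward.
print(google_style_scale({"keyword_a": [120, 300, 180],
                          "keyword_b": [500, 750, 600]}))
# -> {'keyword_a': [16, 40, 24], 'keyword_b': [67, 100, 80]}
# Same raw data, different batch, different numbers: the 40 became 16.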
Trends MCP normalizes all data against a consistent historical baseline, not against the current query. The same keyword returns the same normalized value regardless of what else you query, when you query it, or what time window you select.
Normalization is the correct approach for cross-platform comparison. But there are cases where absolute volume is what you need - specifically, when the magnitude of activity matters, not just the relative position.
Trends MCP provides absolute volume estimates where the underlying data supports it. The estimates are calibrated against search panel data and are not the same as Google Ads search volumes, but they provide a consistent cardinal scale for each source. Where the data quality score for a given data point is high, the absolute estimate is reliable for quantitative use. Where the score is low (typically for niche or low-volume keywords), the normalized value is more reliable than the absolute estimate.
Every data point in Trends MCP includes a data_quality_score (0-1) reflecting how well the underlying data supports the estimate.
A score of 0.9+ means the normalized value is reliable and the absolute estimate (where provided) is well-calibrated. A score of 0.3 means the data is sparse and the normalized value should be treated as directional rather than precise.
For analysis or dashboards, filtering on data quality score - or displaying low-quality points differently - produces more honest output than treating all data points as equally reliable.
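A minimal sketch of such a filter, assuming each series point carries the documented data_quality_score alongside hypothetical value (normalized) and absolute fields:

def usable_points(points, min_quality=0.5):
    """Yield points worth plotting, preferring the absolute volume
    estimate only where the quality score says it is well-calibrated."""
    for p in points:
        q = p["data_quality_score"]
        if q < min_quality:
            continue  # sparse data: drop it (or render it dashed/greyed)
        use_absolute = q >= 0.9 and p.get("absolute") is not None
        yield {
            **p,
            "plot_value": p["absolute"] if use_absolute else p["value"],
            "directional_only": q < 0.9,
        }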
When you plot Google, TikTok, and Reddit trend lines on the same chart using Trends MCP's normalized values, all three series sit on the same calibrated scale, so their levels and crossovers are directly comparable.
This is the condition required for the leading indicator analysis that makes multi-source trend data useful: the hypothesis that TikTok leads Google by 2-4 weeks is only testable if the two signals are on a comparable scale (see the sketch below). Raw volumes cannot support this analysis. Normalized values can.
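One hedged way to test that lag hypothesis, assuming two aligned weekly series of normalized values of equal length:

import numpy as np

def best_lead(tiktok: np.ndarray, google: np.ndarray,
              max_lag_weeks: int = 8) -> int:
    """Return the lag (in weeks) at which shifting TikTok forward best
    correlates with Google. A positive result means TikTok leads.
    Valid only because both series share one calibrated 0-100 scale."""
    best_lag, best_r = 0, -np.inf
    for lag in range(0, max_lag_weeks + 1):
        a = tiktok[: len(tiktok) - lag] if lag else tiktok
        b = google[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag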
Connect
An API key is required to connect. Get your free key above, then copy the pre-filled config for your client.
Cursor
Cursor Settings → Tools & MCP → Add a Custom MCP Server
"trends-mcp": { "url": "https://api.trendsmcp.ai/mcp", "transport": "http", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
Or paste into the config file directly:
Mac / Linux — ~/.cursor/mcp.json
Windows — %USERPROFILE%\.cursor\mcp.json
Get your free key above first — the config won't work without it.
Claude Desktop
User → Settings → Developer → Edit Config — add inside mcpServers
"trends-mcp": { "command": "npx", "args": [ "-y", "mcp-remote", "https://api.trendsmcp.ai/mcp", "--header", "Authorization:${AUTH_HEADER}" ], "env": { "AUTH_HEADER": "Bearer YOUR_API_KEY" } }
Mac — ~/Library/Application Support/Claude/claude_desktop_config.json
Windows — %APPDATA%\Claude\claude_desktop_config.json
Fully quit and restart Claude Desktop after saving.
Claude Code (CLI)
claude mcp add --transport http trends-mcp https://api.trendsmcp.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
Windsurf
Settings → Advanced Settings → Cascade → Add custom server +
"trends-mcp": { "url": "https://api.trendsmcp.ai/mcp", "transport": "http", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
Mac / Linux — ~/.codeium/windsurf/mcp_config.json
Windows — %USERPROFILE%\.codeium\windsurf\mcp_config.json
Or: Command Palette → Windsurf: Configure MCP Servers
VS Code
Extensions sidebar → search @mcp trends-mcp → Install — or paste manually into .vscode/mcp.json inside servers
"trends-mcp": { "type": "http", "url": "https://api.trendsmcp.ai/mcp", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
Paste into .vscode/mcp.json, or:
Command Palette (⇧⌘P / Ctrl+Shift+P) → MCP: Add Server
Data Sources
All data is normalized to a 0-100 scale for consistent cross-platform comparison.
Tools
Four tools, organized by how you start. With a keyword, track history and growth. Without one, use discovery to see ranked movers or what is live right now.
You already have a keyword.
Chart how it moves over time and compare growth across sources.
No keyword required.
Ranked lists on one source with a growth sort you choose, or a live snapshot of what is trending across platforms.
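The tool names themselves are not spelled out on this page, so one way to discover them is the standard MCP handshake over the HTTP transport shown in the configs above; this sketch assumes the server follows the stock streamable-HTTP flow:

import requests

URL = "https://api.trendsmcp.ai/mcp"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
    # Streamable-HTTP servers may answer with plain JSON or SSE.
    "Accept": "application/json, text/event-stream",
}

def rpc(payload):
    r = requests.post(URL, headers=headers, json=payload)
    # Echo the session id on later calls if the server issued one.
    sid = r.headers.get("Mcp-Session-Id")
    if sid:
        headers["Mcp-Session-Id"] = sid
    return r

# Standard MCP handshake, then ask the server to enumerate its tools.
rpc({"jsonrpc": "2.0", "id": 1, "method": "initialize",
     "params": {"protocolVersion": "2025-03-26", "capabilities": {},
                "clientInfo": {"name": "probe", "version": "0.1"}}})
rpc({"jsonrpc": "2.0", "method": "notifications/initialized"})
print(rpc({"jsonrpc": "2.0", "id": 2, "method": "tools/list"}).text)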
Outputs
FAQ