Academic research using internet trend data faces two persistent problems: reproducibility and multi-platform coverage. Google Trends returns relative values that change based on query context, and pytrends breaks without warning. Trends MCP provides structured, API-based access to normalized trend data from Google, Reddit, YouTube, Wikipedia, news, and more - with consistent responses, absolute volume estimates, and data quality scores suitable for peer-reviewed research.
Free API access
100 free requests per month. No credit card, no setup fee.
Replaced my manual Google Trends scraper in an afternoon. The data is clean and the latency is surprisingly low for a free tier.
We use it for keyword trend reports. The free monthly quota keeps us batching queries for weekly digests. Upgrading is there when we need more headroom.
Hooked it into my MCP server in like 20 minutes. The JSON response is well-structured and the docs are solid. Exactly what I needed.
We pipe weekly series into BigQuery for a few brand cohorts. Compared to maintaining our old Selenium job, this is boring in the best way. Uptime has been solid.
Great for slide-ready trend screenshots when leadership asks why we are prioritizing a feature. I wish the dashboard had saved views, but the API side is great.
Running it from Cursor with the MCP config took one try. I am not a trends person, but my side project now emails me when a niche keyword spikes hard week over week.
Using the growth endpoints to sanity-check retail names before I write up notes. Occasionally the normalization differs from what I see in the raw Google UI, but it is consistent run to run.
Pulling multi-source ranked lists into a notebook is straightforward. Error payloads are actually readable when I fat-finger a parameter, which matters more than people admit.
Does what it says. I knocked a star because onboarding assumed I already knew MCP wiring; a copy-paste block for Claude Desktop would have saved me 15 minutes.
We track TikTok hashtag momentum against paid spend in a Looker sheet. Not glamorous work, but it is the first tool my team did not argue about during rollout.
Retries are predictable and I have not seen weird HTML in responses (looking at you, scrapers). Would pay for a team key rotation flow, but for now we rotate manually.
Quick checks on retail buzz before we dig into filings. Not a silver bullet, but it is faster than opening twelve browser tabs and reconciling by hand.
Helpful for spotting whether a topic is a one-day meme or sticking around. I still cross-check with Search Console, but this gets me 80% of the signal in one call.
I demo this in workshops when people ask how to ground LLM answers in something fresher than training data. The MCP angle lands well with engineers who hate glue code.
Solid for client reporting. Billing is clear enough that finance stopped asking me what line item this is. Minor nit: peak hours can feel a touch slower, still acceptable.
I wired this behind a small CLI for contributors who want trend context in issues. Keeping the surface area tiny matters for OSS, and the schema has not churned on me yet.
Daily pulls for a 30-day window go straight into our internal scoreboard. Stakeholders finally stopped debating whose screenshot of Trends was newer.
We are pre-revenue, so free tier discipline matters. I hit the cap once during a brainstorm where everyone wanted to try random keywords. Learned to batch smarter.
Security review passed without drama: HTTPS, scoped keys, no bizarre third-party redirects in the chain we could find. That is rarer than vendors think.
I do not need this daily, but when App Store rank shifts look weird, having Reddit and news context in one place saves me from context switching across six apps.
I use it to see if a story is genuinely blowing up or just loud on one platform. It is not a replacement for reporting, but it keeps my ledes honest.
We moved off a brittle Playwright script that broke every time Google shuffled markup. Same data shape every week now, which is all I wanted from life.
Seasonal demand spikes line up with what we see in Amazon search interest here. Merch team stopped sending me screenshots from random tools that never matched.
Solid for client decks. I docked one star only because I still export to Sheets manually; a direct connector would be nice someday.
Steam concurrents plus Reddit chatter in one workflow beats our old spreadsheet ritual before milestone reviews.
Quick pulse on whether a feature name is confusing people in search before we ship copy. Cheap sanity check compared to a full survey.
Monitored from Grafana via a thin wrapper. p95 stayed under our SLO budget last month. One noisy day during a holiday but nothing alarming.
Narrative fights in meetings got shorter once we could point at the same trend line everyone agreed on. Sounds silly until you have lived through it.
Using normalized series as a weak prior in a forecasting experiment. Citation-friendly timestamps in the payload made reproducing runs less painful.
Approved for our pilot group after a quick vendor review. Would love SAML, not a blocker for our size.
YouTube search interest plus TikTok hashtags in one place helps me explain why a sponsor should care about a vertical without hand-waving.
Cron job hits the API before standup; Slack gets a compact summary. Took an afternoon to wire, has been stable for two quarters.
Useful for public-interest topics where search interest is a rough proxy for attention. I still triangulate with primary sources; this is one signal among several.
Runs in a VPC egress-only subnet with allowlisted domains. Fewer exceptions to explain to auditors than our last vendor.
Spotting when a topic is about to flood Discord saves my team from reactive moderation fires. Not perfect, but directionally right often enough.
For lean teams the ROI story writes itself. I would not build an in-house scraper for this anymore unless compliance forced it.
Examples in the docs match what the MCP actually returns. You would be surprised how rare that is in this category.
Pager stayed quiet. When something upstream flaked once, the error string told me which parameter to fix without opening logs first.
Students use it for coursework demos. Budget is tight so free tier matters; we coach them to cache aggressively.
Helps prep talking points when retail interest in our name swings after earnings. Not material disclosure, just context for Q&A prep.
Response sizes stay small enough for mobile hotspots. I hate APIs that dump megabytes for a sparkline.
Internet trend data has become a standard source in social science, public health, political science, and economics research. Google Trends is cited in thousands of peer-reviewed papers. Reddit and Wikipedia trend data have appeared in research on information diffusion, collective attention, and public discourse. The methodological challenges - reproducibility, data quality, multi-source comparability - are well-documented in the literature but rarely solved at the data access layer.
Trends MCP addresses the data access problem specifically. It does not solve the methodological questions about what trend data measures or how to interpret it - those remain discipline-specific. What it provides is a reliable, structured, API-based data pipeline that reduces the infrastructure burden for researchers.
The canonical issue in academic use of Google Trends is normalization context-dependence. When you query Google Trends for a single keyword, it returns values normalized to the peak of that keyword in your selected time window. When you add a second keyword to the same query, both keywords are renormalized to the overall peak across both keywords in the window. The result: the same keyword returns different values depending on what else is in the query.
This creates a reproducibility problem. A paper that reports Google Trends values for "vaccine hesitancy" cannot be exactly reproduced by another researcher who runs the same query on a different date, because the normalization window has changed. This is documented in academic literature (e.g., Mavragani et al., 2018; Olson et al., 2019) and is a known limitation for research using the native Google Trends interface.
Trends MCP normalizes values consistently against the full data history, not relative to the current query window. Two researchers running the same query on different dates get the same historical values. This is not a complete solution to all reproducibility concerns in trend research, but it removes the most common source of replication failure.
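A toy numeric sketch of the difference (Python; the raw volumes are invented for illustration): query-window normalization rescales to the peak of whatever is in the query, while full-history normalization pins values to a fixed historical peak, so two runs on different dates agree.

# Query-window normalization (Google Trends style): scale so the highest
# point across ALL keywords in this query equals 100.
def normalize_query_window(series_by_keyword):
    peak = max(max(s) for s in series_by_keyword.values())
    return {k: [round(v / peak * 100) for v in s] for k, s in series_by_keyword.items()}

# Full-history normalization (Trends MCP style): scale against a fixed
# all-time peak, independent of the query contents and the query date.
def normalize_full_history(series, all_time_peak):
    return [round(v / all_time_peak * 100) for v in series]

a = [500, 800, 1000]     # hypothetical raw weekly volumes
b = [6000, 9000, 10000]

print(normalize_query_window({"a": a})["a"])          # [50, 80, 100]
print(normalize_query_window({"a": a, "b": b})["a"])  # [5, 8, 10] - same keyword, new context
print(normalize_full_history(a, all_time_peak=1000))  # [50, 80, 100] - stable across runs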
The 0-100 relative scale from Google Trends is useful for identifying directional trends but problematic for quantitative research that requires absolute measures. You cannot directly compare two keywords on a relative scale if they have very different absolute search volumes - a keyword at 50 and another at 50 may have a 10:1 difference in actual volume.
Trends MCP provides absolute volume estimates alongside normalized values where the underlying data supports it. This enables direct comparison of keywords with very different absolute volumes - exactly the case the relative scale cannot handle.
The absolute volume estimates are calibrated against search panel data and are not the same as reported Google Ads search volumes, but they provide a consistent cardinal scale within each source that the native 0-100 does not.
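As a concrete illustration, a single data point might carry both scales. The field names below are assumptions for illustration, not the documented schema, except data_quality_score, which is described further down:

# Hypothetical shape of one data point; field names are illustrative.
point = {
    "date": "2024-06-03",
    "value": 50,                 # 0-100, normalized against full history
    "volume_estimate": 120000,   # absolute estimate (calibrated, not Google Ads volume)
    "data_quality_score": 0.92,  # see the data quality section below
}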
Beyond Google Search, two sources are particularly valuable for academic research:
Wikipedia page views (source='wikipedia'): a clean proxy for public information-seeking on a specific topic - distinct from searching (Google) or discussing (Reddit/social). Page views reflect the population's intent to learn about something. Researchers in public health have used Wikipedia health topic page views to study public attention during disease outbreaks; political scientists have used them to study attention to political events. The signal carries less noise than social media discussion and is available historically.
News volume (source='news'): The volume of news articles covering a keyword over time, with sentiment scores. Useful for media studies research, event detection, and studying the relationship between news coverage and public search behavior.
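A minimal client sketch, assuming the official MCP Python SDK (pip install mcp) and the HTTP endpoint from the Connect section below. The tool name get_trends and source='wikipedia' come from this page; the other argument names are assumptions:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

API_KEY = "YOUR_API_KEY"

async def main():
    headers = {"Authorization": f"Bearer {API_KEY}"}
    async with streamablehttp_client("https://api.trendsmcp.ai/mcp", headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name from this page; argument names are assumptions.
            result = await session.call_tool(
                "get_trends",
                {"keyword": "measles", "source": "wikipedia"},
            )
            print(result.content)

asyncio.run(main())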
Trends MCP returns a data_quality_score (0-1) with each data point - the kind of explicit data quality documentation that academic methodology sections require but rarely receive from commercial data sources.
Filtering or flagging data points below a threshold (e.g., data_quality_score < 0.5) and reporting that threshold in your methodology section provides a more defensible data quality statement than "Google Trends data was used."
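A minimal sketch of that filtering step, assuming each data point arrives as a dict exposing the data_quality_score field (the rest of the shape is illustrative):

QUALITY_THRESHOLD = 0.5  # report this threshold in your methodology section

def filter_by_quality(points, threshold=QUALITY_THRESHOLD):
    """Keep points at or above the threshold and count the rest."""
    kept = [p for p in points if p["data_quality_score"] >= threshold]
    return kept, len(points) - len(kept)

points = [
    {"date": "2024-06-03", "value": 50, "data_quality_score": 0.92},
    {"date": "2024-06-10", "value": 41, "data_quality_score": 0.38},
]
kept, dropped = filter_by_quality(points)
print(f"kept {len(kept)} points, dropped {dropped} below {QUALITY_THRESHOLD}")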
Public health: Track search volume and Reddit discussion for a disease keyword before and after a public health announcement. Compare the cross-platform response pattern using get_growth across Google, Reddit, YouTube, and news.
Political science: Measure public attention to a political event or candidate using Google Search, Wikipedia page views, and news volume as three independent proxies. get_growth with multiple sources provides all three in one call (see the sketch after this list).
Economics: Use Amazon and Google Shopping search data as leading indicators of consumer spending intent for a product category. Track the cross-platform signal chain (TikTok -> Google -> Amazon) using weekly time series to study demand formation.
Media studies: Compare information diffusion across platforms for a breaking event - when did TikTok spike, when did Reddit spike, when did Google Search follow? get_trends for each platform with weekly data answers this directly.
Information science: Study Wikipedia editing and page view patterns for topics in the news using get_trends with source='wikipedia' and correlate with Google Search volume and news coverage volume.
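The political-science pattern above as a runnable sketch - three attention proxies from one get_growth call. Same SDK and endpoint assumptions as the earlier example; get_growth and the source names appear on this page, the argument names do not:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def attention_proxies(keyword: str) -> None:
    headers = {"Authorization": "Bearer YOUR_API_KEY"}
    async with streamablehttp_client("https://api.trendsmcp.ai/mcp", headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # get_growth is named on this page; argument names are assumptions.
            result = await session.call_tool(
                "get_growth",
                {"keyword": keyword, "sources": ["google", "wikipedia", "news"]},
            )
            print(result.content)

asyncio.run(attention_proxies("presidential debate"))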
Connect
An API key is required to connect. Get your free key above, then copy the pre-filled config for your client.
Cursor
Cursor Settings → Tools & MCP → Add a Custom MCP Server
"trends-mcp": { "url": "https://api.trendsmcp.ai/mcp", "transport": "http", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
+ Add to Cursor
Or paste into the config file:
Mac / Linux — ~/.cursor/mcp.json
Windows — %USERPROFILE%\.cursor\mcp.json
↑ Get your free key above first — the config won't work without it.
Claude Desktop
User → Settings → Developer → Edit Config — add inside mcpServers
"trends-mcp": { "command": "npx", "args": [ "-y", "mcp-remote", "https://api.trendsmcp.ai/mcp", "--header", "Authorization:${AUTH_HEADER}" ], "env": { "AUTH_HEADER": "Bearer YOUR_API_KEY" } }
Mac — ~/Library/Application Support/Claude/claude_desktop_config.json
Windows — %APPDATA%\Claude\claude_desktop_config.json
Fully quit and restart Claude Desktop after saving.
Claude Code (CLI)
claude mcp add --transport http trends-mcp https://api.trendsmcp.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
Windsurf
Settings → Advanced Settings → Cascade → Add custom server +
"trends-mcp": { "url": "https://api.trendsmcp.ai/mcp", "transport": "http", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
Mac / Linux — ~/.codeium/windsurf/mcp_config.json
Windows — %USERPROFILE%\.codeium\windsurf\mcp_config.json
Or: Command Palette → Windsurf: Configure MCP Servers
VS Code
Extensions sidebar → search @mcp trends-mcp → Install — or paste manually into .vscode/mcp.json inside servers
"trends-mcp": { "type": "http", "url": "https://api.trendsmcp.ai/mcp", "headers": { "Authorization": "Bearer YOUR_API_KEY" } }
Or: Command Palette (⇧⌘P / Ctrl+Shift+P) → MCP: Add Server
Data Sources
All data is normalized to a 0-100 scale for consistent cross-platform comparison.
Tools
Four tools, organized by how you start. With a keyword, track history and growth. Without one, use discovery to see ranked movers or what is live right now.
You already have a keyword.
Chart how it moves over time and compare growth across sources.
No keyword required.
Ranked lists on one source with a growth sort you choose, or a live snapshot of what is trending across platforms.
Outputs
FAQ