MCP Server

How to build a trends dashboard

A trends dashboard is only as good as its data pipeline. Most attempts fail at the data layer - scrapers that break, single-source coverage, or relative-only values that cannot be meaningfully compared. This guide covers how to build a live trends dashboard using Trends MCP as the data source: what to query, how to structure the data, and how to display it with consistent cross-platform comparisons.

Get your free API key

100 free requests per month. No credit card, no setup fee.

Loved by developers
Marco R.
Quant Developer

Replaced my manual Google Trends scraper in an afternoon. The data is clean and the latency is surprisingly low for a free tier.

2 weeks ago
Jamie L.
SEO Lead @ Growth Agency

We use it for keyword trend reports. The free monthly quota keeps us batching queries for weekly digests. Upgrading is there when we need more headroom.

3 weeks ago
Aisha K.
Full-stack Developer

Hooked it into my MCP server in like 20 minutes. The JSON response is well-structured and the docs are solid. Exactly what I needed.

5 days ago
Daniel P.
Data Engineer @ Fintech

We pipe weekly series into BigQuery for a few brand cohorts. Compared to maintaining our old Selenium job, this is boring in the best way. Uptime has been solid.

Yesterday
Nina S.
Product Manager, B2B SaaS

Great for slide-ready trend screenshots when leadership asks why we are prioritizing a feature. I wish the dashboard had saved views, but the API side is great.

4 days ago
Tom W.
Indie Maker

Running it from Cursor with the MCP config took one try. I am not a trends person, but my side project now emails me when a niche keyword spikes hard week over week.

1 week ago
Ravi K.
Research Analyst

Using the growth endpoints to sanity-check retail names before I write up notes. Occasionally the normalization differs from what I see in the raw Google UI, but it is consistent run to run.

6 days ago
Laura C.
ML Engineer

Pulling multi-source ranked lists into a notebook is straightforward. Error payloads are actually readable when I fat-finger a parameter, which matters more than people admit.

10 days ago
Ben H.
Freelance DevOps

Does what it says. I knocked a star because onboarding assumed I already knew MCP wiring; a copy-paste block for Claude Desktop would have saved me 15 minutes.

2 months ago
Elena M.
Growth PM

We track TikTok hashtag momentum against paid spend in a Looker sheet. Not glamorous work, but it is the first tool my team did not argue about during rollout.

12 days ago
Jordan F.
Backend Developer

Retries are predictable and I have not seen weird HTML in responses (looking at you, scrapers). Would pay for a team key rotation flow, but for now we rotate manually.

18 days ago
Sam O.
Hedge Fund Associate

Quick checks on retail buzz before we dig into filings. Not a silver bullet, but it is faster than opening twelve browser tabs and reconciling by hand.

3 weeks ago
Greta V.
Content Strategist

Helpful for spotting whether a topic is a one-day meme or sticking around. I still cross-check with Search Console, but this gets me 80% of the signal in one call.

9 days ago
Yuki T.
DevRel Contractor

I demo this in workshops when people ask how to ground LLM answers in something fresher than training data. The MCP angle lands well with engineers who hate glue code.

1 month ago
Chris D.
Agency Tech Lead

Solid for client reporting. Billing is clear enough that finance stopped asking me what line item this is. Minor nit: peak hours can feel a touch slower, still acceptable.

22 days ago
Amir M.
Open Source Maintainer

I wired this behind a small CLI for contributors who want trend context in issues. Keeping the surface area tiny matters for OSS, and the schema has not churned on me yet.

16 days ago
Kendra L.
BI Analyst

Daily pulls for a 30-day window go straight into our internal scoreboard. Stakeholders finally stopped debating whose screenshot of Trends was newer.

8 days ago
Priya G.
Startup Founder

We are pre-revenue, so free tier discipline matters. I hit the cap once during a brainstorm where everyone wanted to try random keywords. Learned to batch smarter.

11 days ago
Henrik W.
Solutions Architect

Security review passed without drama: HTTPS, scoped keys, no bizarre third-party redirects in the chain we could find. That is rarer than vendors think.

27 days ago
Isaac Z.
Mobile Developer

I do not need this daily, but when App Store rank shifts look weird, having Reddit and news context in one place saves me from context switching across six apps.

19 days ago
Vera A.
Journalist / Newsletter Writer

I use it to see if a story is genuinely blowing up or just loud on one platform. It is not a replacement for reporting, but it keeps my ledes honest.

14 days ago
Quinn B.
Staff Engineer

We moved off a brittle Playwright script that broke every time Google shuffled markup. Same data shape every week now, which is all I wanted from life.

3 days ago
Fatima S.
E-commerce Director

Seasonal demand spikes line up with what we see in Amazon search interest here. Merch team stopped sending me screenshots from random tools that never matched.

5 days ago
Owen R.
Analytics Consultant

Solid for client decks. I docked one star only because I still export to Sheets manually; a direct connector would be nice someday.

7 days ago
Marcus J.
Game Studio Producer

Steam concurrents plus Reddit chatter in one workflow beats our old spreadsheet ritual before milestone reviews.

13 days ago
Leah N.
UX Researcher

Quick pulse on whether a feature name is confusing people in search before we ship copy. Cheap sanity check compared to a full survey.

17 days ago
Diego W.
SRE

Monitored from Grafana via a thin wrapper. p95 stayed under our SLO budget last month. One noisy day during a holiday but nothing alarming.

24 days ago
Tessa C.
Brand Strategist

Narrative fights in meetings got shorter once we could point at the same trend line everyone agreed on. Sounds silly until you have lived through it.

20 days ago
Uma H.
PhD Candidate, CS

Using normalized series as a weak prior in a forecasting experiment. Citation-friendly timestamps in the payload made reproducing runs less painful.

29 days ago
Xavier E.
IT Manager

Approved for our pilot group after a quick vendor review. Would love SAML, not a blocker for our size.

33 days ago
Nina P.
Creator Economy Analyst

YouTube search interest plus TikTok hashtags in one place helps me explain why a sponsor should care about a vertical without hand-waving.

15 days ago
Gabe K.
Automation Engineer

Cron job hits the API before standup; Slack gets a compact summary. Took an afternoon to wire, has been stable for two quarters.

41 days ago
Sofia Y.
Policy Researcher

Useful for public-interest topics where search interest is a rough proxy for attention. I still triangulate with primary sources; this is one signal among several.

26 days ago
Raj B.
Cloud Architect

Runs in a VPC egress-only subnet with allowlisted domains. Fewer exceptions to explain to auditors than our last vendor.

35 days ago
Clara F.
Community Manager

Spotting when a topic is about to flood Discord saves my team from reactive moderation fires. Not perfect, but directionally right often enough.

21 days ago
Wes L.
Fractional CMO

For lean teams the ROI story writes itself. I would not build an in-house scraper for this anymore unless compliance forced it.

31 days ago
Ingrid K.
Technical Writer

Examples in the docs match what the MCP actually returns. You would be surprised how rare that is in this category.

6 days ago
Jon V.
Night-shift NOC Tech

Pager stayed quiet. When something upstream flaked once, the error string told me which parameter to fix without opening logs first.

45 days ago
Avery E.
University Lab Manager

Students use it for coursework demos. Budget is tight so free tier matters; we coach them to cache aggressively.

38 days ago
Zoe M.
Investor Relations Associate

Helps prep talking points when retail interest in our name swings after earnings. Not material disclosure, just context for Q&A prep.

23 days ago
Hassan T.
Web Performance Lead

Response sizes stay small enough for mobile hotspots. I hate APIs that dump megabytes for a sparkline.

4 days ago

A trends dashboard sounds simple: chart some trend lines, add growth percentages, ship it. The reality is that most early attempts collapse at the data layer within a few weeks. The scraper breaks. The data from different sources uses incompatible scales. The refresh runs into rate limits. This guide walks through building a dashboard that does not have those problems.

Start with the data architecture, not the UI

The most common mistake is starting with the visualization (choosing a chart library, designing the layout) before solving the data problem. Design the UI last. First, answer:

  1. What sources do you need to cover?
  2. What keywords or topics are you tracking?
  3. How frequently does the data need to refresh?
  4. How will you store and serve the data to the frontend?

Trends MCP as a backend simplifies questions 1-3 significantly. One connection gives your data pipeline access to Google, TikTok, Reddit, YouTube, Amazon, Wikipedia, news, web traffic, app downloads, npm, and Steam - all normalized to a consistent 0-100 scale. You still need to answer question 4 based on your stack.

Query architecture for a dashboard

Time series charts: get_trends

For the main trend line charts, use get_trends with data_mode='weekly'. This returns up to 5 years of weekly data per keyword per source. Query each keyword-source pair you want to display and store the results.

For most dashboards, querying multiple sources for each keyword gives you a richer picture. The response structure is consistent across sources - the same date/value/volume format regardless of whether you are querying Google or TikTok.

Practical note: One get_trends call covers one source. If you want to show Google, TikTok, and Reddit lines for the same keyword, that is three calls. Plan your daily query budget accordingly.
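
A quick way to sanity-check that budget before you commit to a keyword list. This is a sketch; `daily_call_budget` is a hypothetical helper, not part of the API:

```python
def daily_call_budget(keywords, sources, discovery_calls=1):
    """One get_trends call per keyword-source pair per refresh, plus any
    daily discovery queries (e.g. one get_ranked_trends call)."""
    return len(keywords) * len(sources) + discovery_calls

# 10 tracked keywords across Google, TikTok, and Reddit:
calls = daily_call_budget([f"kw{i}" for i in range(10)],
                          ["google", "tiktok", "reddit"])
print(calls)  # 31 calls per daily refresh
```

If that number creeps toward your quota, cut sources per keyword before cutting keywords; the scorecards below can cover the remaining sources in one call each.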

Growth scorecards: get_growth

Growth rate numbers (30-day, 90-day, 1-year) are the most-read data points on any trend dashboard. Use get_growth with source='all' to get cross-platform growth in a single call - this is the most efficient query pattern for dashboard scorecards.

The response includes percentage change, start/end volumes, direction (up/down/flat), and a data quality score for each source. The data quality score is useful for suppressing or flagging low-confidence values in your UI rather than displaying misleading growth figures.
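
A sketch of turning a `get_growth(source='all')` response into scorecard rows. The field names here (`pct_change`, `direction`, `data_quality_score`) are illustrative stand-ins for the actual payload keys, so adapt them to what your client receives:

```python
def scorecard_rows(growth_by_source, min_quality=0.4):
    """Build scorecard entries, flagging low-confidence sources instead of
    silently displaying misleading growth figures."""
    rows = [
        {
            "source": source,
            "change_pct": g["pct_change"],
            "direction": g["direction"],
            "low_confidence": g["data_quality_score"] < min_quality,
        }
        for source, g in growth_by_source.items()
    ]
    # Biggest movers first
    return sorted(rows, key=lambda r: r["change_pct"], reverse=True)

sample = {
    "google": {"pct_change": 42.0, "direction": "up", "data_quality_score": 0.9},
    "tiktok": {"pct_change": 130.5, "direction": "up", "data_quality_score": 0.3},
}
print(scorecard_rows(sample))
```

In the UI, render `low_confidence` rows grayed out with a "limited data" badge rather than dropping them, so users know the signal exists but is thin.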

Discovery / new keyword surfacing: get_ranked_trends

Most dashboards have a fixed keyword list and a discovery section. For the discovery section, run get_ranked_trends once per day on your primary source, sorted by wow_pct_change (week-over-week percent change) or yoy_pct_change. This surfaces the keywords growing fastest that week, which you can present as "trending this week" cards that users can add to their tracked list.
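
Filtering that ranked list against what you already track is a one-liner worth getting right. A sketch, assuming each entry carries `keyword` and `wow_pct_change` fields (illustrative names):

```python
def discovery_cards(ranked, tracked, limit=5):
    """Surface the fastest week-over-week movers not already on the
    tracked keyword list."""
    fresh = [r for r in ranked if r["keyword"] not in tracked]
    fresh.sort(key=lambda r: r["wow_pct_change"], reverse=True)
    return fresh[:limit]

ranked = [
    {"keyword": "ai agents", "wow_pct_change": 85.0},
    {"keyword": "solar kits", "wow_pct_change": 210.0},
    {"keyword": "mcp servers", "wow_pct_change": 40.0},
]
print(discovery_cards(ranked, tracked={"ai agents"}))
```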

Live trending / breaking signals: get_top_trends

For a "trending right now" section with no fixed keyword list, get_top_trends requires no seed keyword and returns what is actually trending at query time. This is the appropriate source for a live feed of breakout topics rather than monitored keyword lists.
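
When polling a live feed, you usually only want to surface (or alert on) topics that appeared since the last refresh. A minimal diffing sketch, assuming each entry has a `topic` field (the real payload key may differ):

```python
def new_breakouts(current, previous):
    """Topics present in this get_top_trends refresh but absent from the
    previous one - the candidates for a 'breaking' badge or alert."""
    seen = {t["topic"] for t in previous}
    return [t for t in current if t["topic"] not in seen]

last_poll = [{"topic": "earnings"}, {"topic": "playoffs"}]
this_poll = [{"topic": "playoffs"}, {"topic": "outage"}]
print(new_breakouts(this_poll, last_poll))  # [{'topic': 'outage'}]
```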

Normalization and display

All Trends MCP values are normalized to 0-100 per platform. This is designed for cross-platform comparison on a shared axis. When plotting multiple sources for the same keyword, you can put them on the same chart without a second Y-axis or any transformation.

What you should not do is mix normalized values and absolute volume estimates on the same axis. If you display absolute volume, keep it to single-source charts. If you display normalized values, you can legitimately compare across sources.
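
Because every source shares the 0-100 scale, multi-source charting reduces to a date-alignment problem. A sketch, assuming each point carries `date` and `value` keys as described above:

```python
def align_series(series_by_source):
    """Merge normalized per-source series into rows a chart library can
    plot on one shared 0-100 axis. Missing dates become None (gaps)."""
    dates = sorted({p["date"] for s in series_by_source.values() for p in s})
    lookup = {
        src: {p["date"]: p["value"] for p in series}
        for src, series in series_by_source.items()
    }
    return [
        {"date": d, **{src: lookup[src].get(d) for src in lookup}}
        for d in dates
    ]

rows = align_series({
    "google": [{"date": "2025-01-06", "value": 55},
               {"date": "2025-01-13", "value": 61}],
    "tiktok": [{"date": "2025-01-13", "value": 78}],
})
print(rows)
```

Passing `None` through (rather than zero-filling) matters: a gap means "no data", while 0 is a real normalized value, and conflating them distorts the chart.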

See the normalization methodology page for a full explanation of how the 0-100 scale is calibrated across platforms.

Refresh strategy and caching

Daily refresh for time series: Run a scheduled job (e.g., cron at 2am UTC) that re-fetches the weekly time series for all tracked keywords. Store results in a database or object store. This keeps your charts current without continuous polling.

Real-time for live trending: get_top_trends can be refreshed every 15-60 minutes for a "trending now" section. The underlying data updates frequently for sources like TikTok and Reddit.

Cache growth scorecards for 24 hours: get_growth results change slowly - a 30-day growth figure on Monday and Tuesday is nearly identical. Serving cached results with a timestamp ("updated 6 hours ago") is accurate enough and reduces unnecessary queries.
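
The 24-hour TTL and the "updated N hours ago" label are both a few lines of timezone-aware arithmetic. A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(fetched_at, ttl_hours=24):
    """True if a cached result is still within its TTL."""
    return datetime.now(timezone.utc) - fetched_at < timedelta(hours=ttl_hours)

def cache_label(fetched_at):
    """Human-readable staleness label for the UI."""
    age = datetime.now(timezone.utc) - fetched_at
    return f"updated {int(age.total_seconds() // 3600)} hours ago"

six_hours_ago = datetime.now(timezone.utc) - timedelta(hours=6)
print(is_fresh(six_hours_ago), cache_label(six_hours_ago))
# True updated 6 hours ago
```

Store `fetched_at` in UTC alongside the payload; comparing naive local timestamps against UTC is a classic source of caches that expire a few hours early or late.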

Common mistakes

Mixing relative and absolute values: Google Trends' native 0-100 scale is relative to the peak in your query window. Trends MCP's 0-100 is normalized differently (consistently calibrated across time and sources). Do not mix these. If you are importing data from multiple sources and some comes from pytrends, the scales are not compatible.

Querying too many keywords on initial load: Dashboard load time degrades quickly if you query all keywords on page load. Pre-fetch and cache data server-side; serve stale-while-revalidate. Users should see charts instantly from cache, with a background refresh.

Ignoring data quality scores: Trends MCP returns a data_quality_score with each data point. A score below ~0.4 usually means sparse data or low platform coverage for that keyword. Displaying these values without flagging them misleads users. Either suppress low-quality data points or show them grayed out with a "limited data" indicator.
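
A sketch of the suppress-or-flag split, assuming each point carries a `data_quality_score` field as described (defaulting to full quality when the field is absent):

```python
def split_by_quality(points, threshold=0.4):
    """Partition series points into plottable vs 'limited data' buckets."""
    ok, limited = [], []
    for p in points:
        bucket = ok if p.get("data_quality_score", 1.0) >= threshold else limited
        bucket.append(p)
    return ok, limited

points = [
    {"date": "2025-01-06", "value": 40, "data_quality_score": 0.8},
    {"date": "2025-01-13", "value": 12, "data_quality_score": 0.2},
]
ok, limited = split_by_quality(points)
print(len(ok), len(limited))  # 1 1
```

Render the `limited` bucket as a dashed or grayed segment rather than discarding it, so the chart is honest about where coverage thins out.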

Single-source dashboards: If your dashboard only shows Google Trends, it is showing 1 of 15+ available signals. Users who rely on it for decisions are missing the 2-4 week lead time that TikTok and Reddit signals provide before Google Search catches up. Multi-source is not complexity - it is the minimum for a trustworthy trend picture.

Minimal viable implementation

The fastest path to a working trends dashboard:

  1. Set up a Trends MCP connection (one config entry in your AI client or one HTTP endpoint in your backend)
  2. Run get_growth(keyword='your keyword', source='all', percent_growth=['1M', '3M', '1Y']) for your keyword list to populate scorecard numbers
  3. Run get_trends(keyword='your keyword', source='google', data_mode='weekly') for each keyword to build time series charts
  4. Run get_ranked_trends(source='google', sort='wow_pct_change', limit=20) for a discovery section
  5. Store results, build the UI against the cached data

This three-query pattern covers the core of a production-grade trends dashboard. Add sources, additional keywords, and scheduled refreshes once the baseline is working.
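
The steps above can be sketched as a single nightly refresh job. Here `call_tool(name, args)` is a placeholder for however your MCP client or HTTP wrapper actually invokes tools - inject it rather than hard-coding a transport, so the job is testable without spending quota:

```python
def refresh_snapshot(call_tool, keywords):
    """Fetch scorecards, time series, and discovery in one pass; hand the
    resulting snapshot to your storage layer."""
    snapshot = {"growth": {}, "series": {}}
    for kw in keywords:
        snapshot["growth"][kw] = call_tool("get_growth", {
            "keyword": kw, "source": "all",
            "percent_growth": ["1M", "3M", "1Y"],
        })
        snapshot["series"][kw] = call_tool("get_trends", {
            "keyword": kw, "source": "google", "data_mode": "weekly",
        })
    snapshot["discovery"] = call_tool("get_ranked_trends", {
        "source": "google", "sort": "wow_pct_change", "limit": 20,
    })
    return snapshot
```

A fake `call_tool` in tests lets you assert the expected call count (2 per keyword plus one discovery call) before pointing the job at the real endpoint.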

Add to your AI in 30 seconds

An API key is required to connect. Get your free key above, then copy the pre-filled config for your client.

Cursor

Cursor Settings → Tools & MCP → Add a Custom MCP Server

"trends-mcp": {
  "url": "https://api.trendsmcp.ai/mcp",
  "transport": "http",
  "headers": { "Authorization": "Bearer YOUR_API_KEY" }
}

+ Add to Cursor
Or paste manually into:
Mac / Linux — ~/.cursor/mcp.json
Windows — %USERPROFILE%\.cursor\mcp.json

↑ Get your free key above first — the config won't work without it.

Claude Desktop

User → Settings → Developer → Edit Config — add inside mcpServers

"trends-mcp": {
  "command": "npx",
  "args": [
    "-y",
    "mcp-remote",
    "https://api.trendsmcp.ai/mcp",
    "--header",
    "Authorization:${AUTH_HEADER}"
  ],
  "env": {
    "AUTH_HEADER": "Bearer YOUR_API_KEY"
  }
}

Mac — ~/Library/Application Support/Claude/claude_desktop_config.json
Windows — %APPDATA%\Claude\claude_desktop_config.json

Fully quit and restart Claude Desktop after saving.

Claude Code (CLI)

claude mcp add --transport http trends-mcp https://api.trendsmcp.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"

Windsurf

Settings → Advanced Settings → Cascade → Add custom server +

"trends-mcp": {
  "url": "https://api.trendsmcp.ai/mcp",
  "transport": "http",
  "headers": { "Authorization": "Bearer YOUR_API_KEY" }
}

Mac / Linux — ~/.codeium/windsurf/mcp_config.json
Windows — %USERPROFILE%\.codeium\windsurf\mcp_config.json
Or: Command Palette → Windsurf: Configure MCP Servers

VS Code

Extensions sidebar → search @mcp trends-mcp → Install — or paste manually into .vscode/mcp.json inside servers

"trends-mcp": {
  "type": "http",
  "url": "https://api.trendsmcp.ai/mcp",
  "headers": { "Authorization": "Bearer YOUR_API_KEY" }
}

Paste into .vscode/mcp.json, or:
Command Palette (⇧⌘P / Ctrl+Shift+P) → MCP: Add Server

What you can query

All data is normalized to a 0-100 scale for consistent cross-platform comparison.

What your AI can call

Four tools, organized by how you start. With a keyword, track history and growth. Without one, use discovery to see ranked movers or what is live right now.

Track

You already have a keyword.

Chart how it moves over time and compare growth across sources.

get_trends
Historical time series
Raw normalized data for a single source. Weekly mode returns ~5 years of data; daily mode returns the last 30 days. Each data point includes date, normalized value (0-100), and absolute volume where available. Best for charting, custom calculations, and time series modeling. Note: one source per call.
get_growth
Growth metrics
Point-to-point growth for preset periods (7D, 14D, 1M, 3M, 6M, 1Y, YTD, and more) or custom date ranges. Returns % change, volume, direction, and data quality score. Use source='all' for cross-platform aggregated growth, or pass comma-separated sources like 'amazon, tiktok, youtube' for multi-source comparison in one call.
Discovery

No keyword required.

Ranked lists on one source with a growth sort you choose, or a live snapshot of what is trending across platforms.

get_ranked_trends
Ranked trend lists
Precomputed ranked lists of top trending keywords or companies. Supports keyword, catalyst, company (single), and company (combined) modes. Filter by sector, industry, country, earnings dates, minimum volume, and data quality. Sort by latest value, week-over-week, month-over-month, or year-over-year growth.
get_top_trends
Live trending now
What is trending right now with no keyword required. Covers: Google Trends, TikTok Trending Hashtags, Reddit Hot Posts, Wikipedia Trending, X (Twitter), App Store Top Free & Paid, Google Play, Spotify Top Podcasts, Google News, Top Websites, and Amazon Best Sellers.

What you get back

Normalized value
0-100 scale, consistent across all platforms
Absolute volume
Raw search / view counts where available
Growth %
Period-over-period change with exact dates
Time series
Up to 5 years of weekly data per keyword
Data quality
Coverage score and zero-value detection
Multi-source
get_growth supports 'all' or comma-separated sources in one call

Common questions

What is the hardest part of building a trends dashboard?

The hardest part is not visualization - it is data architecture. Most teams start with a single source (usually Google Trends via pytrends) and hit rate limits within weeks. When they add a second source (say, Reddit), the data scales are incomparable and the visualization becomes misleading. Trends MCP solves both problems: it provides a single endpoint for 15+ sources and normalizes all values to a consistent 0-100 scale, so you can display them on the same chart without misleading your audience.

How often should the data refresh?

For most use cases, daily refresh is sufficient. Google Search interest and most social trend data do not change meaningfully hour to hour except during breaking news events. A daily scheduled query run (e.g., at 2am UTC) covers 95% of dashboard use cases without hitting rate limits. For near-real-time monitoring of breaking events, get_top_trends can be polled more frequently to catch trending spikes - but historical time series from get_trends should still be refreshed daily, not continuously.

How can values from different platforms be compared?

Trends MCP normalizes all sources to a 0-100 scale. A value of 70 on Google Search and 70 on TikTok reflect proportionally equivalent interest levels on each platform, which makes them directly displayable on a shared axis without transformation. If you display raw absolute volume estimates, note that platform volumes are not comparable by default - use the normalized values for any multi-source comparative chart.

Which queries does a keyword monitoring dashboard need?

For a keyword monitoring dashboard: (1) get_trends for each keyword on your primary source, weekly mode, to build the time series chart. (2) get_growth for each keyword across all sources for the scorecard / headline numbers. (3) get_ranked_trends weekly to surface new keywords you are not already tracking. Those three query types cover 90% of dashboard requirements. Each get_growth call with source='all' returns all platforms in one request, keeping the total query count manageable.
