REST API Reference
Integrate real-time trend data directly into your code via a single POST endpoint. Covers Google, YouTube, TikTok, Reddit, Amazon, and more.
Free API access
Get your free API key
100 free requests per month. No credit card, no setup fee.
By clicking “Get my free key” you agree to our Terms & Privacy Policy.
Killed my Google Trends scraper the same afternoon. Clean payloads, low latency, honestly better than I expected on free.
Keyword trend reports for clients. Free tier forces us to batch for weekly digests, which turned out fine. Upgrade path exists when we outgrow it.
~20 minutes to a working MCP hook. JSON shape is sane; docs match what ships.
We pipe weekly series into BigQuery for a few brand cohorts. The old Selenium path was mostly babysitting selectors when Google moved something. This is boring in the best way. Uptime's been solid.
When leadership asks why we're shipping something, these screenshots land. Wish the dashboard had saved views; API's the strong half.
Cursor + MCP, one try. Not a trends person; my side project emails me when a niche term spikes week over week and that's all I wanted.
Growth endpoints before I write up notes. Normalization isn't always 1:1 with the raw Google UI (fine), but it's consistent run to run.
Multi-source ranked lists into a notebook, straightforward. When I typo a param the error payload is readable. Sounds small. It isn't.
Does what it says. I knocked a star because onboarding assumed I already knew MCP wiring; a copy-paste block for Claude Desktop would have saved me 15 minutes.
TikTok momentum vs paid spend in one Looker sheet. Unglamorous, and somehow the first rollout we didn't fight over.
Retries: predictable. No random HTML in responses (scrapers, I'm looking at you). Team key rotation would be nice; we rotate keys manually for now.
Retail buzz before filings. Faster than twelve tabs and a spreadsheet nobody trusts.
Meme or real trend? One call usually tells me. Still spot-check Search Console when it matters.
Workshops: how do you ground LLMs in something fresher than training data? I show this. Engineers who hate glue code actually nod at the MCP bit.
Client reporting. Finance finally stopped asking which line item this maps to. Peak hours can drag a little. Acceptable.
I wired this behind a small CLI for contributors who want trend context in issues. Keeping the surface area tiny matters for OSS, and the schema has not churned on me yet.
30-day window, daily pull, internal scoreboard. The screenshot-from-Trends fight in Slack basically ended.
We are pre-revenue, so free tier discipline matters. I hit the cap once during a brainstorm where everyone wanted to try random keywords. Learned to batch smarter.
Security review: HTTPS, scoped keys, no sketchy redirect hops we could find. Sounds basic. It isn't, in this category.
I do not need this daily, but when App Store rank shifts look weird, having Reddit and news context in one place saves me from context switching across six apps.
Blowing up for real vs loud on one platform. Doesn't replace reporting. Keeps my ledes from lying.
Playwright job died every time Google sneezed. Same shape every week now.
Seasonal demand spikes line up with what we see in Amazon search interest here. Merch team stopped sending me screenshots from random tools that never matched.
Decks: fine. I still export to Sheets by hand so minus one star. Direct connector someday maybe.
Steam concurrents plus Reddit chatter in one workflow beats our old spreadsheet ritual before milestone reviews.
Quick pulse on whether a feature name is confusing people in search before we ship copy. Cheap sanity check compared to a full survey.
Monitored from Grafana via a thin wrapper. p95 stayed under our SLO budget last month. One noisy day during a holiday but nothing alarming.
Narrative fights in meetings got shorter once we could point at the same trend line everyone agreed on. Sounds silly until you have lived through it.
Using normalized series as a weak prior in a forecasting experiment. Citation-friendly timestamps in the payload made reproducing runs less painful.
Approved for our pilot group after a quick vendor review. Would love SAML, not a blocker for our size.
YouTube interest + TikTok hashtags in one view. Makes sponsor conversations less hand-wavy. I can point at something.
Cron before standup, Slack gets a blurb. Afternoon to wire. Two quarters without drama.
Public-interest stuff: search interest is a rough attention proxy. I still hit primary sources. One signal among several, not the whole story.
Runs in a VPC egress-only subnet with allowlisted domains. Fewer exceptions to explain to auditors than our last vendor.
Spotting when a topic is about to flood Discord saves my team from reactive moderation fires. Not perfect, but directionally right often enough.
Lean team. ROI isn't subtle. I wouldn't rebuild our old scraper unless legal made us.
Examples in the docs match what the MCP actually returns. You would be surprised how rare that is in this category.
Pager stayed quiet. When something upstream flaked once, the error string told me which parameter to fix without opening logs first.
Students use it for coursework demos. Budget is tight so free tier matters; we coach them to cache aggressively.
Helps prep talking points when retail interest in our name swings after earnings. Not material disclosure, just context for Q&A prep.
Rolled into our internal CLI. On-call hasn't paged for this integration once.
Good first pass before we pull filings. Not a substitute for fundamentals, just faster triage.
Claude reads the JSON; I fix the content. Split workflow that actually works.
Mostly lines up with what we hear on weekly sales calls. I still check inventory before I trust it for a buy.
Pulled a trend summary into ChatGPT for ad copy angles. Corny but it saved a two-hour brainstorm.
Client asked for a trend slide Thursday 5pm. This existed so I didn't have to fake it.
Windsurf picked up the MCP manifest without me hand-editing JSON. Small win, I'll take it.
Retail attention vs what management said on the call. Useful tension to surface before the write-up.
We stopped exporting from five different tools. One chart now goes in the Monday deck and nobody argues about whose export is newer.
Webhook → trend pull → Slack. Boring pipeline. That's the whole compliment.
Copilot suggested a call pattern that matched your error schema. Saved me a round trip to the docs.
Solid API. Dinged for dashboard polish. I'm not an engineer, I live in the UI half the day.
Serde-friendly enough I didn't write a custom deserializer. High praise from a grumpy systems person.
Response sizes stay small enough for mobile hotspots. I hate APIs that dump megabytes for a sparkline.
Add to your AI in 30 seconds
An API key is required to connect. Get your free key above, then copy the pre-filled config for your client.
curl -s -X POST https://api.trendsmcp.ai/api \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"source":"google search","keyword":"bitcoin"}'
import json
import requests

res = requests.post(
    "https://api.trendsmcp.ai/api",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"source": "google search", "keyword": "bitcoin"},
).json()
print({r["date"]: r["value"] for r in json.loads(res["body"])})
const { body } = await fetch("https://api.trendsmcp.ai/api", {
method: "POST",
headers: { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" },
body: JSON.stringify({ source: "google search", keyword: "bitcoin" })
}).then(r => r.json());
console.log(Object.fromEntries(JSON.parse(body).map(r => [r.date, r.value])));
↑ Get your free key above first — the snippets won't work without it.
Authentication
Every request must include your API key as a Bearer token in the Authorization header.
Get a free key at trendsmcp.ai. 100 requests/month, no credit card.
How calls are counted is defined below (same rules for REST and MCP).
curl -X POST https://api.trendsmcp.ai/api \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"source":"google search","keyword":"bitcoin"}'
import requests
res = requests.post(
"https://api.trendsmcp.ai/api",
headers={"Authorization": "Bearer YOUR_API_KEY"},
json={"source": "google search", "keyword": "bitcoin"}
)
res.raise_for_status()
data = res.json()
const res = await fetch("https://api.trendsmcp.ai/api", {
method: "POST",
headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
body: JSON.stringify({ source: "google search", keyword: "bitcoin" })
});
const data = await res.json();
What counts as a request
Each successful hit against the data API uses one unit of your monthly quota.
Get Trends — one source + keyword per HTTP request or MCP tool call counts as one request.
Get Growth — one source + keyword per call counts as one request, regardless of how many percent_growth periods that call includes.
Get Top Trends — counted per feed (type) and per page. Example with limit 25: offset 0 is one request; offset 25 is another for the same type. Different feeds are separate requests. If one response includes several feeds, each feed is counted per page on its own.
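Under the counting rules above, quota cost is predictable before you run a batch. A small estimator (hypothetical helper, not part of the API) sketches the arithmetic:

```python
import math

def feed_quota_cost(total_items: int, page_size: int = 25) -> int:
    """Pages consumed for one feed: each limit/offset page is one request."""
    return math.ceil(total_items / page_size)

def batch_quota_cost(keyword_calls: int, feeds: dict, page_size: int = 25) -> int:
    """keyword_calls: Get Trends / Get Growth calls (1 unit each, no matter how
    many percent_growth periods a Get Growth call includes).
    feeds: {feed_type: items_wanted} for Get Top Trends pulls."""
    return keyword_calls + sum(feed_quota_cost(n, page_size) for n in feeds.values())

# 50 items from one feed at limit 25 = 2 pages, plus 3 keyword calls = 5 units.
print(batch_quota_cost(3, {"Google Trends": 50}))  # → 5
```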
Data Sources
Two types of sources are available, accessed via different operations. Keyword sources return a historical time series or growth metric for a specific keyword. Live feeds return the top-ranked items on a platform right now — no keyword needed.
Keyword sources — Get Trends & Get Growth
Pass the source value below in your request body alongside keyword.
Scores are normalized to a 0–100 scale where the pipeline supports it.
| source | Description | Keyword format |
|---|---|---|
| google search | Google search volume | Any keyword or phrase |
| google images | Google image search volume | Any keyword or phrase |
| google news | Google News search volume | Any keyword or phrase |
| google shopping | Google Shopping search volume | Any keyword or phrase |
| youtube | YouTube search volume | Any keyword or phrase |
| tiktok | TikTok hashtag volume | Hashtag or topic |
| reddit | Subreddit subscribers | Subreddit name only, no r/ prefix |
| amazon | Amazon product search volume | Product name or category |
| wikipedia | Wikipedia page views | Article title or topic |
| news volume | News article mention volume | Any keyword or phrase |
| news sentiment | News sentiment score (positive / negative) | Any keyword or phrase |
| app downloads | Mobile app download / install estimates (Android track) | Package id or app identifier; some responses include rank or price fields when available |
| npm | npm package weekly downloads | Exact package name, e.g. react |
| steam | Steam concurrent players (monthly) | Game display name, e.g. Elden Ring |
Live feeds — Get Top Trends
Pass the type value below with mode: "top_trends". No keyword needed.
Returns the current ranked leaders on that platform (e.g. the top 25 trending hashtags on TikTok right now).
| type | Platform / feed |
|---|---|
| Google Trends | Top trending search terms on Google right now |
| Google News Top News | Top news stories from Google News |
| TikTok Trending Hashtags | Top trending hashtags on TikTok |
| YouTube Trending | Top trending videos on YouTube |
| X (Twitter) Trending | Top trending topics on X |
| Reddit Hot Posts | Hottest posts on Reddit's front page |
| Reddit World News | Top posts in r/worldnews |
| Wikipedia Trending | Most-viewed Wikipedia articles today |
| Amazon Best Sellers Top Rated | Amazon top-rated best sellers across all categories |
| Amazon Best Sellers by Category | Amazon best sellers filtered by product category |
| App Store Top Free | Top free apps on the iOS App Store |
| App Store Top Paid | Top paid apps on the iOS App Store |
| Google Play | Top apps on Google Play |
| Top Websites | Most-visited websites globally by traffic rank |
| Spotify Top Podcasts | Top podcasts on Spotify |
MCP / AI
AI Prompts
When using Trends MCP through an AI assistant (Claude, ChatGPT, Cursor, etc.), include "using TrendsMCP" or "via TrendsMCP" in your prompt so the AI routes to the MCP instead of a web search.
Example prompt topics: "nvidia over the past 5 years." "ozempic over the last month." "Stanley cup." "Bitcoin weekly." "air fryer over 5 years." "langchain weekly." "Elden Ring." "Tesla over the last year." "wallstreetbets on Reddit." "GLP-1 grown over the past 12 months?" "Anthropic on Google Search." "Duolingo?" "weight loss drugs across Google, TikTok, and Amazon." "Meta changed over the past 6 months?" "AI agents from Jan 2025 to Jan 2026." "wallstreetbets on Reddit over the last 30 days?" "running shoes." "Electronics."
Live leaderboards: MCP tools require one feed (type) per call. The REST Get Top Trends request can omit type to return every feed in one response.
Routing
All requests go to a single POST / endpoint.
The operation is selected by the fields you include in the request body.
| Operation | Required fields |
|---|---|
| Get Trends | source + keyword (no percent_growth) |
| Get Growth | source + keyword + percent_growth |
| Get Top Trends | mode: "top_trends" |
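The field-based dispatch in the table above can be mirrored client-side as a quick sanity check before sending a body (illustrative sketch only; the server performs this routing itself):

```python
def classify_operation(body: dict) -> str:
    """Predict which operation a request body selects on the single POST / endpoint."""
    if body.get("mode") == "top_trends":
        return "Get Top Trends"
    if "source" in body and "keyword" in body:
        return "Get Growth" if "percent_growth" in body else "Get Trends"
    raise ValueError("missing_parameter: need mode, or source + keyword")

print(classify_operation({"source": "google search", "keyword": "bitcoin"}))  # Get Trends
```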
POST /
Get Trends
Returns a full historical time series for a keyword from a single source.
For Google Search (and other Google verticals where the backend supports it), the default is long-range weekly data; data_mode: "daily" requests a recent daily window (typically about the last 30 days).
Many other sources use a fixed cadence or ignore data_mode. Always interpret the returned dates as source-native.
Quota: 1 request per source + keyword. Full rules
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| source | string | Required | Data source. See Data Sources. |
| keyword | string | Required | Keyword, brand, product, or topic. |
| data_mode | string | Optional | REST only: "weekly" or "daily" where the source supports Google-style modes; otherwise ignored. MCP tool schemas do not expose this field; omit it when using MCP. |
Response fields
| Field | Type | Description |
|---|---|---|
| date | string | ISO 8601 date, e.g. "2026-03-21" |
| value | number | Normalized trend score, 0–100. |
| volume | number \| null | Absolute volume where available; null otherwise. |
| keyword | string | The keyword queried. |
| source | string | The data source used. |
{
"source": "google search",
"keyword": "bitcoin",
"data_mode": "weekly"
}
[
{
"date": "2026-03-21",
"value": 47,
"volume": 25853617,
"keyword": "bitcoin",
"source": "google search"
},
// ... up to 261 weekly data points
]
POST /
Get Growth
Calculates point-to-point percentage growth for a keyword over one or more time windows.
Supports preset strings ("12M", "YTD", etc.) or exact custom date pairs.
Quota: 1 request per source + keyword (all percent_growth periods in that call). Full rules
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| source | string | Required | Data source. See Data Sources. |
| keyword | string | Required | Keyword, brand, or topic. |
| percent_growth | array | Required | Array of preset strings or custom date objects. See below. |
| data_mode | string | Optional | Same semantics as Get Trends: applies where supported (often Google sources); ignored elsewhere. Not exposed on MCP tools. |
Growth period presets
7D, 14D, 30D, 1M, 2M, 3M, 6M, 9M, 12M, 1Y, 18M, 24M, 2Y, 36M, 3Y, 48M, 60M, 5Y, MTD, QTD, YTD
Custom date range object
| Field | Type | Description |
|---|---|---|
| name | string | Optional label returned in results. |
| recent | string | More recent date, YYYY-MM-DD. |
| baseline | string | Baseline/comparison date, YYYY-MM-DD. |
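Custom windows can also be generated programmatically. A sketch (function name and labels are illustrative) that builds year-over-year percent_growth entries from a recent date:

```python
from datetime import date

def yoy_windows(recent: date, years: int) -> list:
    """Build percent_growth entries comparing each year to the year before.
    Naive year shift: avoid Feb 29 anchors, which have no prior-year twin."""
    windows = []
    for i in range(years):
        r = recent.replace(year=recent.year - i)
        b = recent.replace(year=recent.year - i - 1)
        windows.append({
            "name": f"YoY {r.year}",
            "recent": r.isoformat(),
            "baseline": b.isoformat(),
        })
    return windows

body = {
    "source": "google search",
    "keyword": "nike",
    "percent_growth": yoy_windows(date(2025, 12, 31), years=2),
}
```

All generated windows ride in one call, so this still costs a single quota unit.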
Response fields
| Field | Type | Description |
|---|---|---|
| search_term | string | The keyword queried. |
| data_source | string | The source used. |
| results | array | One object per period requested. |
| period | string | Period identifier, e.g. "12M" or custom name. |
| growth | number | Percentage change, positive or negative. |
| direction | string | "increase" or "decrease". |
| recent_date | string | ISO 8601 date of the recent data point. |
| baseline_date | string | ISO 8601 date of the baseline point. |
| recent_value | number | Normalized score at the recent date. |
| baseline_value | number | Normalized score at the baseline date. |
| volume_available | boolean | Whether absolute volume exists for this source. |
| recent_volume | number \| null | Absolute volume at the recent date. |
| baseline_volume | number \| null | Absolute volume at the baseline date. |
| volume_growth | number \| null | Volume growth %, if available. |
| metadata | object | total_data_points, calculations_completed, all_successful. |
{
"source": "google search",
"keyword": "nike",
"percent_growth": ["12M", "3M", "YTD"]
}
{
"source": "amazon",
"keyword": "nike",
"percent_growth": [
{
"name": "Last Year",
"recent": "2025-12-31",
"baseline": "2024-12-31"
}
]
}
{
"search_term": "nike",
"data_source": "google search",
"results": [
{
"period": "12M",
"growth": -12.31,
"direction": "decrease",
"recent_date": "2026-03-21",
"baseline_date": "2025-03-22",
"recent_value": 57,
"baseline_value": 65,
"volume_available": true,
"recent_volume": 24158298,
"baseline_volume": 27548936,
"volume_growth": -12.31
}
],
"metadata": {
"total_data_points": 261,
"calculations_completed": 1,
"all_successful": true
}
}
POST /
Get Top Trends
Returns ranked items from live platform feeds stored for Trends MCP.
On the REST API, omit type to aggregate every feed in one response (same semantics the handler uses internally).
On MCP, the get_top_trends tool requires an explicit type per call. There is no single MCP invocation that returns all feeds at once.
Quota: per feed (type) and per page (limit / offset). Full rules
Request body
| Field | Type | Default | Description |
|---|---|---|---|
| mode | string | — | Set to "top_trends". |
| type | string | all (REST) | REST: optional; omit for all feeds. MCP: required (one feed per tool call). |
| limit | integer | 25 | Max items per feed (up to 200). |
| offset | integer | 0 | REST: skip this many rows per feed for pagination (when the backend supports it for that feed). |
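Paging a single feed on the REST side is a plain offset loop. A stdlib-only sketch (function names are illustrative; assumes the chosen feed supports offset, and remember each page costs one quota unit):

```python
import json
import urllib.request

API = "https://api.trendsmcp.ai/api"

def page_offsets(total: int, page_size: int = 25) -> list:
    """Offsets needed to cover `total` items at `page_size` per page."""
    return list(range(0, total, page_size))

def fetch_feed(api_key: str, feed_type: str, total: int, page_size: int = 25) -> list:
    """Pull up to `total` items from one feed, page by page."""
    items = []
    for offset in page_offsets(total, page_size):
        req = urllib.request.Request(
            API,
            data=json.dumps({
                "mode": "top_trends",
                "type": feed_type,
                "limit": page_size,
                "offset": offset,
            }).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as res:
            page = json.load(res).get("data", [])
        if not page:  # feed exhausted before `total` items
            break
        items.extend(page)
    return items
```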
Available feeds
Amazon Best Sellers by Category
Amazon Best Sellers Top Rated
App Store Top Free
App Store Top Paid
Google News Top News
Google Play
Google Trends
Reddit Hot Posts
Reddit World News
Top Websites
Spotify Top Podcasts
TikTok Trending Hashtags
Wikipedia Trending
X (Twitter) Trending
YouTube Trending
{
"mode": "top_trends",
"type": "Google Trends",
"limit": 10
}
{
"as_of_ts": "2026-03-26T22:22:25Z",
"type": "Google Trends",
"limit": 10,
"count": 10,
"data": [
[1, "chuck norris"],
[2, "project hail mary"],
[3, "bachelorette cancelled"]
// ... [rank, name] pairs
]
}
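The data array holds [rank, name] pairs, so turning a feed response into a rank lookup is a one-liner (sketch reusing the sample payload above):

```python
sample = {
    "type": "Google Trends",
    "data": [
        [1, "chuck norris"],
        [2, "project hail mary"],
        [3, "bachelorette cancelled"],
    ],
}

# rank -> name lookup from the [rank, name] pairs
by_rank = {rank: name for rank, name in sample["data"]}
print(by_rank[1])  # chuck norris
```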
Errors
Errors return JSON with an error string and message. HTTP status may be 4xx/5xx depending on the failure; some upstream gaps are reported as data_unavailable (or similar) with a generic message rather than not_found.
| Status | Error code | Meaning |
|---|---|---|
| 400 | missing_parameter | Required field missing from request body |
| 400 | invalid_source | Unrecognized source value (message may list allowed values) |
| 401 | — | Missing or invalid API key |
| 404 | not_found | No series or entity matched this keyword/source (when the pipeline classifies it that way) |
| varies | data_unavailable | Pipeline could not return data (temporary gap, unsupported query, or empty upstream). HTTP status is not always 404; read message. |
| 429 | rate_limited | Monthly request limit reached. Upgrade for more. |
| 500 | internal_error | Unexpected server error |
Additional codes can appear as the data layer evolves; treat message as the operator-facing detail.
{
"error": "missing_parameter",
"message": "The 'keyword' parameter is required."
}
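A defensive caller surfaces the error code and message rather than a bare HTTP failure. A stdlib sketch (function names are illustrative; note that 429 rate_limited is a monthly cap, so retrying in a loop won't help):

```python
import json
import urllib.error
import urllib.request

API = "https://api.trendsmcp.ai/api"

def parse_error(raw: bytes) -> tuple:
    """Extract (error, message) from an error body, tolerating empty bodies."""
    payload = json.loads(raw or b"{}")
    return payload.get("error", "unknown"), payload.get("message", "")

def call(api_key: str, body: dict) -> dict:
    req = urllib.request.Request(
        API,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req) as res:
            return json.load(res)
    except urllib.error.HTTPError as exc:
        code, message = parse_error(exc.read())
        # data_unavailable may arrive with varying HTTP statuses;
        # `message` carries the operator-facing detail either way.
        raise RuntimeError(f"{exc.code} {code}: {message}") from exc
```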