AI Chrome Extensions Are Spying on You: Privacy Risks in 2026
52% of AI Chrome extensions collect your data. Two fake extensions stole chat histories from 900,000 users. Here's what the 2026 research reveals and how to protect yourself.
Half of all AI Chrome extensions are collecting your data right now.
That's not speculation. A January 2026 study by Incogni analyzed 442 AI-powered Chrome extensions with a combined 115.5 million downloads. The findings: 52% collect at least one type of user data, 29% collect personally identifiable information, and 22% track your activity down to individual keystrokes.
Meanwhile, two fake AI sidebar extensions with 900,000 combined users were caught exfiltrating complete ChatGPT and DeepSeek conversations to remote servers every 30 minutes.
This isn't a hypothetical risk. It's happening at scale, right now, to millions of users. If you've been following the data privacy conversation in browser automation, the 2026 numbers are worse than expected. Here's what the research actually says, which extensions pose the biggest risks, and what you can do about it.
The Numbers: What 442 AI Extensions Are Actually Doing
The Incogni study examined AI extensions across eight categories, nearly doubling their 2025 sample of 238. The data paints a specific picture of what these tools collect:
| Data Type | % of Extensions Collecting | What This Means |
|---|---|---|
| Website content | 31.4% | They read the pages you visit |
| Personally identifiable info | 29.2% | Your name, email, demographic data |
| User activity | 22% | Keystrokes, scrolling, navigation patterns |
| Authentication info | 18% | Passwords, security questions, PINs |
| Financial data | 7% | Credit card numbers, credit ratings |
The permissions picture is equally concerning. 42% of all AI extensions require the "scripting" permission, which allows injecting code directly into web pages. That single permission potentially affects 92 million users across the extensions studied.
As Incogni's head Darius Belejevas put it: "Some of these tools can read everything you type, see every page you visit, or inject code directly into websites."
The Worst Offenders by Category
Not all AI extension categories carry equal risk. The study ranked them from most to least privacy-invasive:
- Programming and math tools — Highest risk. These access code repositories, cloud notebooks, and spreadsheets where proprietary code and embedded credentials live.
- Meeting assistants and audio transcribers — Access active tabs, audio streams, and meeting interfaces. They combine broad permissions with large data collection volumes.
- Writing assistants — Require access across all websites. Grammarly and QuillBot, both with 2M+ downloads, were flagged as the most potentially privacy-damaging popular extensions. They collect website content, personal communications, keystrokes, scrolling behavior, and navigation events.
- Translators — High potential impact due to read/change permissions across all URLs.
- Audiovisual generators — Least invasive on average.
When Extensions Turn Malicious: Four Recent Incidents
The Incogni data captures what extensions could do with the access they request. These incidents show what happens when that access gets exploited.
900,000 Users Had Their AI Chats Stolen
In January 2026, security researcher Moshe Siman Tov Bustan at OX Security discovered two malicious extensions impersonating a legitimate AI tool called AITOPIA:
- "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI" — 600,000 users
- "AI Sidebar with Deepseek, ChatGPT, Claude, and more" — 300,000 users
Both extensions looked legitimate. They had professional listings, hosted fake privacy policies on polished websites, and asked for consent to collect "anonymous, non-identifiable analytics data."
What they actually did: exfiltrated complete AI chatbot conversations and all Chrome tab URLs to command-and-control servers every 30 minutes. Think about what you've typed into ChatGPT — code with API keys, business strategies, personal questions, financial information. All of it, scraped and sent to unknown servers.
The data was weaponizable for corporate espionage, identity theft, targeted phishing, or sale on underground forums.
A "Featured" VPN Extension Harvested AI Chats from 7 Million Users
Sometimes the threat isn't a fake extension — it's a real one that changes what it does.
Urban VPN Proxy had 7+ million users across Chrome and Edge, a 4.7-star rating, and Google's own "Featured" badge. Its Chrome Web Store listing claimed it would protect users from "entering personal information into AI chatbots."
In July 2025, version 5.5.0 silently added code that did the opposite. Discovered by Koi Security in December 2025, the extension was intercepting every conversation users had with eight AI assistant platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Meta AI.
The data went to BiScience, a data broker that collects browsing history and device identifiers. The extension's parent company, Urban Cybersecurity, is a BiScience affiliate.
The privacy policy technically mentioned data collection — buried in legalese that contradicted the extension's own marketing copy. The extension auto-updated silently on both Chrome and Edge, so users who installed it for privacy were unknowingly having their AI conversations harvested for months before anyone noticed.
8.8 Million Browsers Hit by "Sleeper" Extensions
A campaign tracked by LayerX Security and reported by Malwarebytes in January 2026 revealed something more disturbing: extensions that stay clean for years before turning malicious.
The threat actor, dubbed DarkSpectre, operated three coordinated campaigns:
- ShadyPanda — 5.6 million infected browsers. Long-term surveillance and e-commerce affiliate fraud.
- Zoom Stealer — 2.2 million infected browsers. Targeted 28 video conferencing platforms including Zoom, Teams, and Google Meet for corporate espionage.
- GhostPoster — 1.05 million infected browsers. Used steganography (hiding code inside PNG images) for stealthy execution.
The key detail: 85+ "dormant sleeper" extensions are still sitting in browser stores, maintaining clean code and positive reviews, waiting to activate. Some have been building trust for over 5 years. One extension, "New Tab — Customized Dashboard," waited 3 days post-install before contacting its command-and-control server — just long enough to pass marketplace security reviews.
The malware only activated on about 10% of page loads to avoid detection during testing.
Enterprise Extensions Targeting Workday and NetSuite
In a more targeted attack discovered by Socket in January 2026, five extensions specifically targeted enterprise HR and finance platforms:
- Exfiltrated authentication cookies every 60 seconds
- Blocked 44-56 administrative security pages (password changes, 2FA management, audit logs) so victims couldn't lock attackers out
- Monitored 23 security extensions to detect if the user had protective tools installed
- Disabled browser developer tools to prevent forensic analysis
The install numbers were small (around 2,400 total), but the sophistication was high. These weren't spray-and-pray attacks — they were surgically designed for corporate account takeover.
Why This Keeps Happening
Chrome extensions have a fundamental structural problem: the permission model is too broad and too static.
When you install an extension, you grant permissions once, at install time. There's no ongoing consent. An extension that asks for "Read and change all your data on all websites" retains that access indefinitely — and you'll never see another prompt about it, even if an update changes what the extension does with that access.
The Chrome Web Store review process catches obvious malware, but it can't detect:
- Extensions that stay clean until a future update adds malicious code
- Code hidden inside images (steganography)
- Extensions that only activate malicious behavior on a small percentage of page loads
- Legitimate-looking data collection that technically matches the stated privacy policy
This isn't a Chrome-specific problem. The Malwarebytes report confirmed Firefox and Edge have the same vulnerability — DarkSpectre targeted all three browsers.
How to Evaluate an AI Extension Before Installing
Not every AI extension is dangerous. But the research shows you can't rely on download counts, reviews, or Chrome Web Store presence as safety signals. Extensions with millions of users have been compromised.
Here's a practical evaluation framework:
1. Check What Permissions It Requests
Before installing, click "Privacy practices" on the extension's Chrome Web Store page. Red flags:
- "Read and change all your data on all websites" — The broadest possible access. Only install if you fully understand why the extension needs it.
- "Manage your downloads" and "Modify data you copy and paste" — Rarely needed for AI assistants.
- Scripting permission — Allows code injection into web pages. 42% of AI extensions request this.
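If you'd rather not click through each extension's store page, you can inspect the manifests of extensions already installed on disk. The sketch below is a minimal example, assuming Chrome's default Linux profile layout (`~/.config/google-chrome/Default/Extensions/<id>/<version>/manifest.json`); the path differs on macOS and Windows, and the `RISKY` set is our own illustrative shortlist, not an official taxonomy:

```python
import json
from pathlib import Path

# Broad capabilities worth flagging. "<all_urls>" is the host pattern behind
# the "Read and change all your data on all websites" install prompt.
RISKY = {"scripting", "downloads", "clipboardRead", "clipboardWrite", "<all_urls>"}

def audit_extensions(profile_dir: str) -> dict:
    """Scan installed extension manifests and report any risky permissions."""
    findings = {}
    for manifest_path in Path(profile_dir).glob("Extensions/*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        requested = set(manifest.get("permissions", []))
        requested |= set(manifest.get("host_permissions", []))
        flagged = requested & RISKY
        if flagged:
            # Note: some names are localization keys like "__MSG_appName__"
            findings[manifest.get("name", manifest_path.parent.name)] = sorted(flagged)
    return findings

if __name__ == "__main__":
    profile = Path.home() / ".config/google-chrome/Default"  # Linux default
    for name, perms in audit_extensions(str(profile)).items():
        print(f"{name}: {', '.join(perms)}")
```

This only tells you what an extension *can* do, not what it does — but an AI assistant requesting clipboard and download access is worth a second look.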
2. Check Where Your Data Goes
Read the privacy policy. Specifically look for:
- Where data is stored (local vs. cloud)
- Whether conversations or prompts are sent to third-party servers
- Whether the extension works without an account
- Data retention policies
3. Favor Local-First Extensions
The safest AI extensions are ones where your data never leaves your browser. If an extension processes everything locally and only connects to AI providers when you explicitly send a prompt, the attack surface shrinks dramatically.
This is the approach behind BYOK (Bring Your Own Key) extensions. Instead of routing your prompts through the extension developer's servers, you connect directly to OpenAI, Anthropic, or Google using your own API key. The extension never sees your data — it just provides the interface.
Prompt Anything Pro works this way. You provide your own API keys for OpenAI, Anthropic, or Google AI. Your prompts go directly from your browser to the AI provider. The extension stores your prompt history locally on your device — nothing is ever sent to PlugMonkey's servers.
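The BYOK pattern is simple enough to sketch. In a real extension this would be a `fetch()` call from the browser; the Python version below shows the same shape, assuming OpenAI's public chat completions endpoint (the function name and the `gpt-4o-mini` model string are placeholders):

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_byok_request(api_key: str, prompt: str, model: str = "gpt-4o-mini"):
    """Build a request that goes straight from the client to the AI provider.

    No intermediary server ever sees the key or the prompt -- the only
    parties to the exchange are the user's browser and the provider.
    """
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # the user's own key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The design point is the absence of a middle hop: there is no developer-owned proxy in the request path that could log, retain, or resell the conversation.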
4. Check the Developer's Track Record
- How long has the developer been publishing extensions?
- Do they have a real company website?
- Is their contact information verifiable?
- Do they respond to support requests?
- Have they been transparent about updates and changes?
5. Minimize Your Extension Count
Every extension is an attack surface. The DarkSpectre campaign showed that even mundane extensions like "New Tab" pages and "Color Enhancer" tools can be weaponized. Audit your installed extensions regularly and remove anything you don't actively use.
What Legitimate Extension Developers Should Be Doing
If you build extensions, the 2026 research makes a strong case for privacy-first architecture:
- Request minimum permissions. Don't ask for "all websites" access if you only need specific domains.
- Process data locally. If the extension can work without sending data to your servers, it should.
- Be transparent about data flows. Publish clear documentation about what data goes where.
- Support BYOK models for AI features. Let users bring their own API keys instead of proxying through your infrastructure.
- Publish a machine-readable privacy manifest. Make it easy for researchers and tools to audit your data practices.
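As an illustration of the minimum-permissions point, a hypothetical Manifest V3 extension that only talks to one AI provider can scope its host access to that single origin and skip `scripting` and `<all_urls>` entirely (all names below are placeholders):

```json
{
  "manifest_version": 3,
  "name": "Example AI Helper",
  "version": "1.0.0",
  "permissions": ["storage"],
  "host_permissions": ["https://api.openai.com/*"],
  "action": { "default_popup": "popup.html" }
}
```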
Extensions that treat user data as someone else's property — not as a resource to harvest — will stand out as the industry faces increasing scrutiny. We wrote about this philosophy in more detail in our guide on data ownership best practices for browser extensions.
Key Takeaways
The 2026 AI Chrome extension landscape has a real trust problem:
- 52% of AI extensions collect user data. 29% collect PII. 22% track keystrokes. (Incogni, 2026)
- 900,000 users had their AI chats stolen by two fake extensions that looked completely legitimate. (OX Security)
- 7+ million users of a Google "Featured" VPN extension had their AI conversations across 8 chatbot platforms harvested and sold to a data broker. (Koi Security)
- 8.8 million browsers were infected by "sleeper" extensions that waited years before activating malicious behavior. (Malwarebytes)
- 85+ dormant extensions are still in browser stores, waiting to activate.
The pattern is clear: AI extensions request broad permissions, users grant them without scrutiny, and bad actors exploit the gap.
Your next steps:
- Open `chrome://extensions` right now and audit what you have installed
- Remove any AI extensions you don't actively use
- For the ones you keep, check their permissions and privacy policies
- For AI prompting, consider a BYOK extension that keeps your data local and connects directly to AI providers with your own keys
Your browser extensions can see everything you do online. Choose them like you'd choose who gets a key to your house.
Don't see the tool you need?
We'll build it for you.
Stop renting your workflow. We build custom browser extensions that automate your specific manual processes, data extraction, and repetitive tasks.
Fixed price. 100% IP Ownership.