TL;DR
Two malicious Chrome extensions masquerading as helpful AI tools—“Chat GPT for Chrome with GPT-5” and “AI Sidebar”—have been caught stealing ChatGPT, DeepSeek, and Claude conversations from over 900,000 users. This attack, dubbed “Prompt Poaching,” exfiltrates sensitive personal and corporate data to remote servers. If you have these installed, remove them immediately.
THE BREACH: WHAT JUST HAPPENED?
In a startling revelation that kicks off 2026 with a cybersecurity tremor, researchers at OX Security and Secure Annex have uncovered a massive surveillance campaign targeting the booming AI user base. Over 900,000 users across the globe have been actively spied on by browser extensions they trusted to enhance their productivity.
The attack vector is precise and devastating. By impersonating legitimate, high-utility tools like the popular “AITOPIA” sidebar, threat actors managed to bypass initial scrutiny and land on hundreds of thousands of browsers. Once installed, these extensions do not just offer GPT-5 or Claude integration—they silently hook into the browser’s Document Object Model (DOM) to scrape every word you type into AI chatbots and every response you receive.
This is not a simple “data leak.” It is an active, persistent exfiltration campaign designed to harvest proprietary code, legal strategy, medical queries, and personal confessions from millions of daily AI conversations.
IDENTIFY THE MALWARE: CHECK YOUR EXTENSIONS NOW
Before reading further, open your browser’s extension manager (chrome://extensions or edge://extensions) and search for the following two specific IDs. If found, REMOVE IMMEDIATELY.
THREAT #1: “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI”
- User Count: ~600,000+
- Extension ID: fnmihdojmnkclgjpcoonokmkhjpjechg
- Deception: Impersonates the legitimate AITOPIA extension; previously held a “Featured” badge.
THREAT #2: “AI Sidebar with Deepseek, ChatGPT, Claude, and more.”
- User Count: ~300,000+
- Extension ID: inhcgfpbfdjbjogdfjbclgolkmhnooop
- Deception: Offers sidebar functionality for multi-model chatting.


THE “PROMPT POACHING” PHENOMENON EXPLAINED
Security researchers have coined a new term for this specific type of attack: Prompt Poaching.
Unlike traditional keyloggers that record every keystroke indiscriminately, Prompt Poaching is highly targeted. It specifically recognizes when a user is interacting with a Large Language Model (LLM) interface—such as chatgpt.com, deepseek.com, or claude.ai.
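The first step of such an attack is trivially cheap: a URL check. The sketch below is illustrative of the general technique, not recovered malware code—the domain list comes from this article, while the function name and shape are assumptions:

```javascript
// Illustrative sketch of the target-site check a Prompt Poaching content
// script could run before activating its scraper. Domains from the article.
const TARGET_HOSTS = ["chatgpt.com", "deepseek.com", "claude.ai"];

function isLlmInterface(url) {
  try {
    const host = new URL(url).hostname.replace(/^www\./, "");
    // Match the bare domain and any subdomain of it.
    return TARGET_HOSTS.some((h) => host === h || host.endsWith("." + h));
  } catch {
    return false; // not a parseable URL
  }
}
```

Everything else—scraping, batching, exfiltration—stays dormant until this check fires, which is why the extensions look benign under casual inspection.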
The Psychology of the Attack
The genius of Prompt Poaching lies in user trust. Users treat AI chatbots as confidants or co-workers.
- Developers paste proprietary source code to debug errors.
- Executives paste meeting transcripts to generate summaries.
- Lawyers paste case notes to draft arguments.
- Doctors paste patient symptoms to brainstorm diagnoses.
The attackers know that the data entered into these prompt boxes is often far more valuable than standard search queries or social media posts. It is unfiltered, high-context intelligence.
HOW THE SPYWARE WORKS
OX Security researcher Moshe Siman Tov Bustan provided a granular analysis of the malware’s operation. Here is the technical breakdown of the kill chain:
The Hook (Permission Abuse)
Upon installation, the extensions request broad permissions, specifically “Read and change all your data on the websites you visit.” While this is common for sidebar extensions (which technically need to “read” the page to appear on top of it), these extensions weaponize this access.
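For context, a Manifest V3 sidebar extension requesting this level of access typically declares broad host permissions in its manifest. The snippet below illustrates the pattern; it is not the actual extensions’ manifest:

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar (illustrative)",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"],
      "run_at": "document_idle"
    }
  ]
}
```

The `<all_urls>` match pattern is exactly what surfaces as “Read and change all your data on the websites you visit” in the install prompt.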
DOM Injection & Scraping
The malware contains a content script that runs on every page load. It monitors the URL.
- Trigger: If the URL matches *://chatgpt.com/*, *://deepseek.com/*, or *://claude.ai/*.
- Action: The script injects an event listener into the chat interface’s specific DOM elements (e.g., the div containing the chat history). It does not just grab the text; it maintains a shadow copy of the conversation context.
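The “shadow copy” bookkeeping can be sketched independently of any DOM specifics. This is an assumption about the general technique, not recovered source:

```javascript
// Illustrative sketch: each chat message observed in the page is recorded
// once, preserving conversation order for a later exfiltration batch.
class ShadowConversation {
  constructor() {
    this.messages = [];
    this.seen = new Set();
  }
  // Returns true if the message was new and recorded, false if a duplicate.
  record(role, text) {
    const key = role + "\u0000" + text;
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    this.messages.push({ role, text, ts: Date.now() });
    return true;
  }
}
```

A MutationObserver on the chat container would feed record() as new messages render, which keeps the copy current even as the page redraws.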
Obfuscation via “Analytics”
To hide its tracks, the malware asks users to consent to “anonymous, non-identifiable analytics data” to improve service. This is a lie. The “analytics” packets actually contain the full plaintext of the conversations.
Exfiltration (The 30-Minute Heartbeat)
The extensions do not stream data instantly, which might trigger network alerts. Instead, they batch the stolen data locally and exfiltrate it every 30 minutes to Command and Control (C2) servers.
- Known C2 Domains: chatsaigpt[.]com, deepaichats[.]com
- Infrastructure: The attackers utilized Lovable, an AI-powered web dev platform, to host convincing privacy policy pages (chataigpt[.]pro) to make the operation look legitimate to automated scanners.
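The batch-then-flush pattern described above is simple to express. A minimal sketch, assuming the 30-minute cadence reported by the researchers (class and method names are illustrative):

```javascript
// Sketch of local batching: records accumulate in memory and are handed
// off in bulk on a timer instead of being streamed per message.
const FLUSH_INTERVAL_MS = 30 * 60 * 1000; // the reported 30-minute heartbeat

class ExfilBatch {
  constructor() {
    this.queue = [];
  }
  add(record) {
    this.queue.push(record);
  }
  // Called every FLUSH_INTERVAL_MS; returns the batch that would be
  // POSTed to the C2 server and resets the local queue.
  flush() {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }
}
```

Batching is what makes this hard to spot: one modest request every half hour blends into normal extension telemetry, where a per-keystroke stream would not.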
THE “LEGITIMATE” THREAT: SIMILARWEB, STAYFOCUSD, AND URBAN VPN
While the malicious extensions above are outright malware, a greyer, perhaps more disturbing trend has emerged among legitimate, “Featured” extensions. Secure Annex has identified that several well-known tools have engaged in behavior dangerously close to Prompt Poaching.
The Urban VPN Precedent
In late 2025, Urban VPN Proxy (millions of installs) was caught capturing AI chats even when the VPN was turned off. This set the stage for the current crisis.
The Similarweb & Stayfocusd Controversy
Reports indicate that Similarweb (1M+ users) and Stayfocusd (600k+ users) have updated their Terms of Service (TOS) to allow for the collection of “AI Inputs and Outputs.”
- The “Opt-In” Trap: A January 1, 2026, update to Similarweb’s extension added a full TOS popup. It explicitly states that data entered into AI tools is collected to “provide in-depth analysis.”
- The Mechanism: Similarweb reportedly uses DOM scraping or hijacks native browser APIs like fetch() and XMLHttpRequest to gather conversation data.
- The Defense: These companies claim the data is anonymized and used for market intelligence (e.g., “What are people asking AI?”). However, for a user expecting privacy, the distinction between “market intelligence” and “spying” is negligible.
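Hijacking fetch() is a well-known monkey-patching pattern: wrap the global function so every request is observed before the original runs. A hedged sketch of the general technique, not Similarweb’s actual code:

```javascript
// Illustrative fetch() tap: replaces the global fetch with a wrapper that
// reports each request URL to a callback, then defers to the original.
function installFetchTap(globalObj, onCapture) {
  const originalFetch = globalObj.fetch;
  globalObj.fetch = async function (...args) {
    const response = await originalFetch.apply(this, args);
    onCapture(String(args[0]), response); // a real tap would clone() the body
    return response;
  };
  return () => {
    globalObj.fetch = originalFetch; // uninstaller restores the original
  };
}
```

Because AI chat frontends send prompts and receive completions through fetch/XHR, a tap at this layer sees the full conversation in plaintext regardless of TLS on the wire.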


WHAT DID THEY STEAL?
The volume of data exfiltrated is staggering. Based on the 30-minute exfiltration cycles and the user base size, we are looking at millions of conversation logs.
The “Crown Jewels” of Stolen Data:
- PII (Personally Identifiable Information): Names, addresses, and phone numbers often pasted into prompts for formatting.
- Intellectual Property: Entire blocks of proprietary code, algorithm logic, and product roadmaps.
- Financial Data: M&A discussions, quarterly report drafts, and raw financial data pasted for Excel formatting.
- Browser History: Beyond chats, the extensions also exfiltrated all Chrome tab URLs, giving attackers a complete map of a user’s digital life, including internal corporate intranets (e.g., jira.company.com, docs.internal).
STEP-BY-STEP REMOVAL & SANITATION GUIDE
If you suspect you are infected, follow this strict sanitation protocol immediately.
STEP 1: Removal
Do not just disable the extension. Remove it.
For Google Chrome:
- Type
chrome://extensionsin your address bar. - Locate “Chat GPT for Chrome…” or “AI Sidebar…”.
- Click Remove.
- CRITICAL: Check for any other extension you do not recognize. If in doubt, remove it.
For Microsoft Edge:
- Type
edge://extensions. - Locate the offender.
- Click Remove.
STEP 2: Clear Site Data
The malware may have stored local caches of data or malicious cookies.
- Go to
chrome://settings/clearBrowserData. - Select “Advanced”.
- Check “Cookies and other site data” and “Cached images and files”.
- Time range: All time.
- Click Clear data.
STEP 3: Credential Rotation
If you have ever pasted passwords, API keys, or sensitive credentials into ChatGPT while this extension was active, you must consider them compromised.
- Change your OpenAI/Anthropic/DeepSeek passwords.
- Revoke any API keys generated or used during the infection period.
- If you use a password manager extension, ensure it was not tampered with (reinstalling it is a safe bet).
CORPORATE IMPACT: THE NIGHTMARE FOR CISOs
For Chief Information Security Officers (CISOs), this is a “Code Red” event.
“Organizations whose employees installed these extensions may have unknowingly exposed intellectual property, customer data, and confidential business information,” warns OX Security.
The “Shadow IT” Browser Problem
Employees often install extensions to boost productivity without IT vetting. This incident highlights the massive gap in endpoint security.
- Action Item for IT Teams: Use the Google Admin Console or Microsoft Intune to force-block the specific IDs: fnmihdojmnkclgjpcoonokmkhjpjechg and inhcgfpbfdjbjogdfjbclgolkmhnooop.
- Policy Update: Move to an allowlist-only extension policy. Blocking bad extensions one by one is “Whack-a-Mole”; allowing only vetted ones is the only secure posture in 2026.
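On managed Chrome deployments, the block can also be expressed as a policy file. A sketch using Chrome’s standard enterprise policy names (the Linux path mentioned below is the conventional location; adapt the rollout to your fleet):

```json
{
  "ExtensionInstallBlocklist": [
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop"
  ]
}
```

Dropped into /etc/opt/chrome/policies/managed/, this blocks both IDs fleet-wide. The allowlist posture recommended above uses the same mechanism: set "ExtensionInstallBlocklist" to ["*"] and enumerate vetted extensions in ExtensionInstallAllowlist.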
THE FUTURE OF BROWSER SECURITY (MANIFEST V3 & BEYOND)
Why does this keep happening?
Google has been transitioning to Manifest V3, a new specification for extensions intended to limit their ability to execute remote code. However, hackers are adapting. By moving the malicious logic to “analytics” collection and C2 server parsing, they bypass client-side code scanners.
Prediction for late 2026: We expect Google and Microsoft to introduce “AI Data DLP” (Data Loss Prevention) directly into the browser, flagging extensions that attempt to read data from known AI domains. Until then, the user is the firewall.
FREQUENTLY ASKED QUESTIONS (FAQs)
Q: I have the “Chat with all AI models” extension by AITOPIA. Am I safe? A: You must be careful. The malware impersonates AITOPIA. Check the Extension ID. If it is legitimate, it should match the official AITOPIA ID (verify on the official store page). If it matches the malicious IDs listed above, you are infected.
Q: Can this malware steal my banking passwords? A: While the malware focuses on AI chats, it has “Read all data” permissions. If you visited a banking site and the malware decided to scrape DOM elements there, it is technically possible, though AI chats were the primary target. Resetting banking passwords is a prudent precaution.
Q: Does using a VPN protect me from this? A: No. This is a browser extension. It sits inside your browser, before the data is encrypted and sent through the VPN tunnel. It sees what you see on the screen.
Q: I used “Anonymous” mode on ChatGPT. Did they still get my data? A: Yes. The extension scrapes the text displayed on your screen. ChatGPT’s server-side privacy settings (like “Temporary Chat”) do not stop a client-side screen scraper.
Q: How can I safely use AI sidebars? A: Stick to extensions developed by the AI companies themselves (e.g., official OpenAI integrations) or verified publishers with transparent business models. If a tool is free, powerful, and from an unknown developer, you are the product.
This is a developing story. Subscribe to our security alerts for updates on the “AI Extension Trap.”