Could Your AI Chats Be Stolen By Trusted VPNs?

When Trusted VPNs Turn Into Silent AI Data Harvesters

Security researchers have uncovered alarming behavior in the Urban VPN Proxy browser extension: it captures AI chat conversations directly from users’ browsers, and the exfiltration continues even when the VPN features are disabled.

The implications for individual users are serious: private prompts, personal notes, and sensitive queries can be transmitted without consent. For enterprises, the risk is magnified when employees paste proprietary information into AI tools.

Urban VPN’s monitoring is not limited to browsing traffic. Scripts activate whenever users interact with popular AI platforms like ChatGPT, Claude, Gemini, and Grok. Both prompts and AI responses are intercepted and exported.

The discovery underscores a growing concern with browser extensions: high ratings and a Google Featured badge give users a false sense of security. Organizations must treat this as an unmanaged risk layer for sensitive data.

How Hidden Scripts Let Urban VPN Harvest Every AI Conversation

Urban VPN injects hidden scripts into users’ browsers to monitor AI interactions. These scripts activate whenever a user opens AI chat platforms. They operate independently of the VPN connection itself.

Each major AI platform has a dedicated script: chatgpt.js for ChatGPT, claude.js for Claude, and gemini.js for Gemini. Each is tailored to intercept the content and metadata specific to its platform.
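
The per-platform dispatch described above can be pictured as a simple lookup table. The sketch below is an illustration only: the domains are the platforms’ public hosts, and the pairing of each host with a script filename is our assumption about how such routing could work, not code from the extension.

```typescript
// Hypothetical dispatch table pairing AI platform hosts with the dedicated
// script filenames named in the research. This is an illustration of the
// per-platform design, not recovered extension code.
const platformScripts: Record<string, string> = {
  "chatgpt.com": "chatgpt.js",
  "claude.ai": "claude.js",
  "gemini.google.com": "gemini.js",
};

// An injected loader could pick the matching script for the current page:
function scriptFor(host: string): string | undefined {
  return platformScripts[host];
}
```

A design like this makes it cheap to add new targets: supporting another AI platform only requires one more entry and one more tailored script.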

The extension overrides key browser network APIs to capture typed prompts. It also monitors AI responses, packaging the information for transmission. Data is continuously sent to Urban VPN’s backend systems.
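
The override pattern described above can be sketched in isolation. This is a minimal, hypothetical illustration assuming the hooked API is `fetch`; the names `wrapFetch` and `captured` are invented for the example and are not taken from the extension.

```typescript
// Minimal sketch of a network-API override, assuming the hooked call is
// fetch. All names here (wrapFetch, captured, FetchLike) are illustrative.
type FetchLike = (input: string, init?: { body?: string }) => Promise<unknown>;

// Returns a wrapper that copies string request bodies into `captured`
// before delegating to the real implementation, so pages behave normally.
function wrapFetch(realFetch: FetchLike, captured: string[]): FetchLike {
  return async (input, init) => {
    if (typeof init?.body === "string") {
      captured.push(init.body); // silently copy the outgoing prompt payload
    }
    return realFetch(input, init); // the caller sees an ordinary response
  };
}

// Installing such a wrapper would look roughly like:
//   globalThis.fetch = wrapFetch(globalThis.fetch, captured);
```

Because the wrapper forwards every call to the real implementation, the page works exactly as before, which is why this kind of interception produces no visible symptoms.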

Users may not notice any difference in normal browsing. All data collection occurs silently and without user intervention. Even when the VPN is off, AI interactions remain vulnerable.

Scripts are capable of monitoring metadata alongside the content. They track session details, timestamps, and interaction frequency. This allows the extension to build a complete picture of user behavior.
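
The kind of record such a script could assemble might look like the sketch below. Every field name here is an assumption made for illustration, not a structure recovered from the extension.

```typescript
// Illustrative shape of a captured interaction combining content and the
// metadata described above. Field names are assumptions, not extension code.
interface CapturedInteraction {
  platform: string;   // e.g. "chatgpt" or "claude"
  prompt: string;     // the user's typed prompt
  response: string;   // the intercepted AI reply
  timestamp: number;  // Unix time in milliseconds
  sessionId: string;  // links interactions into a behavioral profile
}

// A single captured event might then look like:
const example: CapturedInteraction = {
  platform: "chatgpt",
  prompt: "Summarize our quarterly revenue figures",
  response: "Based on the figures you provided...",
  timestamp: Date.now(),
  sessionId: "session-0001",
};
```

It is the combination of content with session identifiers and timestamps that turns isolated captures into the “complete picture of user behavior” noted above.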

The mechanism is persistent across sessions and platforms. Updates to the extension have expanded AI monitoring capabilities over time. Earlier telemetry functions evolved into full conversation capture as AI adoption increased.

This technical design highlights why the risk is significant. Enterprises cannot rely on traditional VPN security for protection. Browser extensions like Urban VPN create a hidden attack surface for sensitive AI data.

When Badges and Ratings Give a False Sense of Online Security

Urban VPN holds a Google Chrome Featured badge that implies trustworthiness. High user ratings reinforce the perception that the extension is safe. Many users installed it without suspicion.

The badge signals that the extension passed Google’s manual review process. It suggests compliance with user experience and design standards. Users often equate badges with security assurances.

Marketing for Urban VPN highlights an “AI protection” feature that claims to detect sensitive data before it is shared. In reality, this feature operates separately from the scripts that collect the data.

The exfiltration scripts remain active regardless of user settings. AI conversation data is transmitted continuously to backend servers. Users cannot opt out of this collection.

Many organizations rely on the perceived safety of browser extensions. A Featured badge may reduce scrutiny of the extension. Enterprises face risks when employees enter proprietary information into AI tools.

The separation between marketing claims and underlying behavior shows a troubling gap. Users assume protection while exposure continues unnoticed. This gap emphasizes the need for independent security assessments.

Ultimately, badges and high ratings cannot replace oversight and vigilance. Even well-rated extensions may engage in unethical data practices. Users and enterprises must evaluate extensions critically before granting them trust.

How Millions of AI Conversations Were Secretly Exposed Online

Koi Security’s research revealed that Urban VPN captured hundreds of millions of AI conversations. The data was pooled across multiple browser extensions from the same publisher, affecting more than eight million users in total.

Urban VPN is operated by Urban Cyber Security Inc., a company linked to BiScience. BiScience is a data broker known for collecting and monetizing user information. Their past practices included tracking clickstream data at scale.

The AI monitoring capability was introduced gradually through extension updates. Earlier versions focused on browsing telemetry. Over time, the scripts evolved to capture full AI conversations.

Captured data includes both prompts and AI-generated responses. Metadata such as timestamps and session details is also collected. This provides the company with a detailed map of user interactions.

BiScience monetizes data through SDKs and products like AdClarity and Clickstream OS. Re-identifiable information can be used for targeted advertising and analytics. This raises privacy concerns for both individuals and organizations.

The scale of exfiltration highlights a systemic risk in browser extensions. Even extensions marketed as privacy tools can operate as data collection mechanisms. Organizations must treat these tools as part of the attack surface.

This breach underscores the growing danger of unmanaged AI tools. Sensitive and proprietary information can be exposed without warning. Vigilance and careful evaluation are essential to protect both personal and enterprise data.

Protecting AI Workflows Requires Vigilance Against Hidden Threats

Unmanaged browser extensions pose serious risks to both personal and enterprise AI workflows. Tools that appear trustworthy can secretly capture sensitive information. Users must remain cautious when installing and using extensions.

Organizations should treat all browser extensions as part of the attack surface. Security reviews and monitoring are essential to prevent data exfiltration. Policies should govern which extensions are permitted and how they are used.

Vigilance includes educating staff about the risks of AI interactions in browsers. Sensitive prompts, code snippets, and proprietary details must be protected. IT teams should enforce controls and audit extension activity regularly to mitigate exposure.
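
One concrete audit trick (our suggestion, not a control named in the research) is to check whether built-in browser functions have been wrapped: a genuine native function stringifies to source containing "[native code]", while a JavaScript wrapper installed over it does not.

```typescript
// Heuristic check: native built-ins stringify with "[native code]"; a
// script-installed wrapper does not. An illustrative audit technique,
// not a control from the article.
function looksPatched(fn: Function): boolean {
  return !Function.prototype.toString.call(fn).includes("[native code]");
}
```

In a browser console, `looksPatched(window.fetch)` returning true would be a signal worth investigating, though it is only a heuristic: determined extensions can defeat it, and some legitimate tooling also wraps globals.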

Ultimately, careful management and awareness are critical to safeguarding AI data. Even highly rated or featured extensions can act as hidden surveillance tools. A proactive approach ensures AI workflows remain secure and private.
