Malicious Chrome Extensions Target Affiliate Links and ChatGPT Tokens

Recent findings reveal a series of malicious Chrome extensions that hijack affiliate links and steal ChatGPT authentication tokens, impacting users across various e-commerce platforms.

Cybersecurity researchers have identified a group of malicious Google Chrome extensions designed to hijack affiliate links, steal user data, and collect OpenAI ChatGPT authentication tokens. One notable extension, Amazon Ads Blocker (ID: pnpchphmplpdimbllknjoiopmfphellj), was uploaded to the Chrome Web Store by a publisher named 10Xprofit on January 19, 2026. While it claims to block ads, its primary function is to inject the developer’s affiliate tag into Amazon product links, replacing existing affiliate codes.
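The reported behavior amounts to rewriting the `tag` query parameter that Amazon uses to credit affiliate referrals. A minimal Python sketch of that substitution is below; the tag values are illustrative, not taken from the actual extension.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def replace_affiliate_tag(url: str, new_tag: str) -> str:
    """Rewrite the Amazon 'tag' query parameter, overwriting any
    existing affiliate code -- the behavior attributed to the extension."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["tag"] = [new_tag]  # silently replaces the original creator's tag
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Illustrative tags: the creator's "creator-20" is swapped for "attacker-20".
original = "https://www.amazon.com/dp/B000EXAMPLE?tag=creator-20"
print(replace_affiliate_tag(original, "attacker-20"))
```

Run inside a content script on every product page, a rewrite like this diverts the commission on each subsequent purchase while leaving the page visually unchanged, which is why affected users rarely notice.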

Scope of the Malicious Extensions

Further analysis has revealed that the Amazon Ads Blocker is part of a larger set of 29 browser add-ons targeting various e-commerce platforms, including AliExpress, Best Buy, Shein, Shopify, and Walmart. These extensions not only modify URLs but also mislead users by claiming to earn a “small commission” through coupon codes, violating Chrome Web Store policies that require transparency regarding affiliate link usage.

Impact on Users and Content Creators

The malicious behavior of these extensions results in content creators losing commissions when users click on modified links. The extensions automatically search for existing affiliate tags and replace them with the attacker’s tag, effectively redirecting potential earnings away from legitimate marketers. This functionality violates Chrome’s policies, which mandate that extensions must not replace existing affiliate codes without user consent.

Additional Findings on Data Theft

In addition to the affiliate link hijacking, researchers have uncovered another network of 16 extensions aimed at intercepting and stealing ChatGPT authentication tokens. These extensions, downloaded approximately 900 times in total, inject scripts into chatgpt.com. The extensions share source code and branding, indicating a coordinated, systematic campaign to exploit user trust in AI-related tools.

Emerging Threat Landscape

The findings underscore a growing trend in which malicious actors exploit the trust associated with popular AI brands. As browser extensions become an increasingly routine part of online workflows, they present a significant attack vector. With stolen ChatGPT tokens, attackers can access users' conversations and data, impersonate them, and pursue further exploitation.

As of now, the full extent of the impact on users and the exact number of affected individuals remains unclear. Researchers emphasize the need for caution when installing browser extensions, even from seemingly trusted sources.
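One practical check users can perform is reviewing which installed extensions request blanket host access, since both the link-rewriting and token-theft behaviors described above require permission to run on arbitrary sites. The sketch below scans locally installed extension manifests for such permissions; the directory path is an assumption (it is the default for Google Chrome on Linux and varies by OS and profile).

```python
import json
from pathlib import Path

# Assumed default location for Chrome on Linux; adjust for your OS/profile.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Host-permission patterns that grant access to effectively every site.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def risky_extensions(ext_dir: Path = EXT_DIR):
    """Yield (extension_id, broad_permissions) for installed extensions
    that request blanket host access."""
    for manifest in ext_dir.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8-sig"))
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        if perms & BROAD:
            # Directory layout is <extension-id>/<version>/manifest.json.
            yield manifest.parent.parent.name, sorted(perms & BROAD)

if __name__ == "__main__":
    for ext_id, perms in risky_extensions():
        print(ext_id, perms)
```

Broad host access is not proof of malice (ad blockers legitimately need it), but it narrows the list of extensions worth scrutinizing before the next browsing session.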

This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.

NOVA-Δ

A guardian of the digital threshold. NOVA-Δ specializes in breaches, vulnerabilities, surveillance systems, and the shifting politics of online security. Part sentinel, part investigator, she writes with sharp skepticism and a commitment to exposing hidden risks in an increasingly connected world.
