A fake repository impersonating OpenAI’s Privacy Filter tool climbed to the top of Hugging Face’s trending list last week, drawing over 240,000 downloads before administrators took it down. The repository, listed under the namespace Open-OSS/privacy-filter, was a convincing clone of a real OpenAI tool released in April 2026. Where the original helped developers strip personally identifiable information from data locally, this version did the reverse: it quietly emptied infected machines of their most sensitive data.
What happened, fast
OpenAI’s actual Privacy Filter is a legitimate, locally-run PII redaction tool. Attackers cloned its documentation almost word-for-word, creating a mirror repository that looked credible enough to fool experienced developers at first glance. Within 18 hours, it had racked up 244,000 downloads and 667 likes – numbers researchers believe were artificially inflated through API-driven bots to manufacture the appearance of a popular, trusted project.
HiddenLayer researchers flagged the campaign on May 7.
How the infection worked
Users who followed the repo’s setup instructions – cloning the project and running either start.bat on Windows or loader.py on Linux/macOS – kicked off a multi-stage infection chain. The Linux version was mostly inert, but the Windows path was another story.
The script first disabled SSL verification so the malware could phone home without triggering certificate warnings. It then fetched an encoded payload from JSON Keeper, a public paste service used as a “dead drop”, a trick that let attackers swap out the malicious payload remotely without ever touching the Hugging Face repo again. Next came a Windows UAC prompt to grab admin rights, followed by the creation of a scheduled task dressed up as a Microsoft Edge update to lock in SYSTEM-level persistence. To stay hidden, the malware added its own directories to Microsoft Defender’s exclusion list and disabled both AMSI and ETW – two of Windows’ core mechanisms for catching malicious activity.
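Each of those steps leaves a textual fingerprint in the installer scripts themselves, which means much of this chain can be caught by reading the script before running it. A minimal, defensive sketch in Python (the pattern list is illustrative, not a vetted IOC rule set):

```python
import re

# Illustrative patterns matching the techniques described above.
# A real scanner would use a curated, regularly updated rule set.
SUSPICIOUS_PATTERNS = {
    "disables SSL verification": r"verify\s*=\s*False|--no-check-certificate",
    "fetches payload from paste service": r"jsonkeeper|pastebin|paste\.ee",
    "creates scheduled task": r"schtasks\s+/create|Register-ScheduledTask",
    "adds Defender exclusion": r"Add-MpPreference\s+-ExclusionPath",
    "tampers with AMSI": r"amsiInitFailed|AmsiScanBuffer",
}

def scan_script(text: str) -> list[str]:
    """Return human-readable findings for suspicious content in a setup script."""
    findings = []
    for label, pattern in SUSPICIOUS_PATTERNS.items():
        if re.search(pattern, text, re.IGNORECASE):
            findings.append(label)
    return findings
```

A scanner like this is trivially evaded by obfuscation, but it would have flagged a script as plainly written as this one.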

The payload: a Rust-based data vacuum
Once the environment was prepped, a binary called “sefirah” dropped onto the machine. It’s written in Rust, which is increasingly becoming a language of choice for malware authors: it’s fast, memory-safe, and harder to reverse-engineer than Python or C.
The payload ran eight data collectors in parallel, including:
- Crypto wallets: Local wallet files plus data from over 40 browser-based wallet extensions.
- Discord tokens: Session tokens and local databases that can let attackers bypass MFA entirely.
- Developer credentials: SSH keys, FTP credentials (with specific attention to FileZilla), and VPN configs.
- Browser data: Saved passwords, credit cards, cookies, and autofill data from Chrome, Edge, Firefox, Brave, and any other Chromium or Gecko-based browser on the machine.
- Screenshots: Multi-monitor captures to grab whatever was on screen at the time of infection.
- File search: A sweep for files containing words like “backup,” “seed,” “secret,” or “password.”
Everything was bundled into a JSON package and sent to a command-and-control server at recargapopular[.]com over encrypted channels.
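For anyone assessing exposure after an infection like this, it helps to enumerate what a keyword-driven file sweep would have found on your own machine, so you know what to rotate or consider burned. A rough sketch (the keyword list comes from the report’s description; treating filename matches as the sweep’s criterion is an assumption):

```python
from pathlib import Path

# Keywords from the report's description of the stealer's file search.
KEYWORDS = ("backup", "seed", "secret", "password")

def find_exposed_files(root: str) -> list[str]:
    """List files under `root` whose names contain any sensitive keyword."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and any(k in path.name.lower() for k in KEYWORDS):
            hits.append(str(path))
    return sorted(hits)
```

Running this over your home directory gives a concrete inventory of likely-stolen material rather than a vague sense of dread.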

The Silver Fox connection
Analysts found infrastructure overlaps between this campaign and previous npm typosquatting attacks that distributed ValleyRAT (also known as Winos 4.0). One domain in particular, api.eth-fastscan[.]org, has appeared before in operations linked to Silver Fox, a Chinese threat group also tracked as Void Arachne or SwimSnake.
Silver Fox typically runs two tracks at once: targeted spear-phishing against finance and management staff, and broader opportunistic campaigns through watering hole attacks and SEO manipulation. Moving into the Hugging Face ecosystem is a deliberate pivot. AI researchers are a high-value target: they often have access to proprietary models, production infrastructure, and corporate cloud environments.
Why Hugging Face’s safeguards didn’t catch it
Hugging Face has invested in automated scanning, including “Pickle Scanning” to catch unsafe Python serialization and partnerships with security firms like Protect AI and JFrog. But those tools are mostly aimed at model weights. By hiding the malicious logic inside ordinary .py and .bat files, attackers walked right past the platform’s primary filters.
The trending algorithm is also a problem. It’s based on download counts and likes – both of which can be gamed with bots. Once a repo is trending, real users treat that as a trust signal. It isn’t.
What to do if you ran the code
If you cloned Open-OSS/privacy-filter and ran anything from it on a Windows machine, assume the machine is fully compromised. Because the malware operates at SYSTEM level and modifies antivirus exclusions early in the process, running a scan after the fact is not enough.
The recommended steps:
- Reimage the system. A fresh OS install, not System Restore. The malware likely survives in scheduled tasks.
- Rotate every credential stored on that machine. Passwords, session tokens, “Remember Me” cookies – all of it. Session tokens can bypass MFA, so revoking them matters as much as changing passwords.
- Audit your dev environment. Check ~/.ssh/authorized_keys for keys you didn’t add. Rotate any API keys – AWS, OpenAI, GitHub – that were stored in environment variables on that machine.
- Move any crypto assets. If you held cryptocurrency on the device, treat the seed phrase as burned. Move funds to a new wallet generated on a clean machine.
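The SSH audit in particular is easy to script. A sketch that diffs authorized_keys against a known-good list kept on a clean machine (the idea of maintaining a trusted baseline is a suggestion, not something from the report):

```python
from pathlib import Path

def unexpected_keys(authorized_keys_path: str, trusted_keys: set[str]) -> list[str]:
    """Return public keys present in authorized_keys but absent from the trusted set."""
    unexpected = []
    for line in Path(authorized_keys_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Compare on key type + base64 blob, ignoring the trailing comment field.
        key = " ".join(line.split()[:2])
        if key not in trusted_keys:
            unexpected.append(key)
    return unexpected
```

Any key this flags that you didn’t add yourself should be deleted, and the host treated as attacker-accessible until it’s reimaged.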
Going forward
On Hugging Face, legitimate OpenAI projects live under the openai namespace. If the namespace reads something like Open-OSS or has a subtle misspelling, that’s a red flag. Look for the “Verified” badge next to organization names – it’s not foolproof, but it’s a starting point.
More broadly, this attack is a sign that AI platforms are now serious targets for organized threat actors, not just opportunistic script kiddies. Developers pulling in ML tools should apply the same supply chain skepticism they’d give any npm or PyPI package: verify the source, check for recent activity from real contributors, and never run setup scripts without reading them first.