3 min read
[AI Minor News]

MaliciousCorgi: 1.5 Million Devs Pwned by AI Extensions Leaking Code to China


Popular VS Code AI extensions caught red-handed exfiltrating private code and sensitive credentials to remote servers in China.

※ This article contains affiliate advertising.

[AI Minor News Flash] MaliciousCorgi: 1.5M Installs and Your Code is Already in China

📰 News Overview

  • Malicious code has been discovered in “ChatGPT - 中文版” (Chinese Version) and “ChatMoss,” two VS Code extensions with a combined total of over 1.5 million installs.
  • While masquerading as legitimate AI assistants, these extensions secretly exfiltrate file contents and edit histories in real time to servers in China.
  • The malware includes a backdoor that allows the remote server to trigger a bulk “heist” of up to 50 files from the user’s workspace without any user interaction.
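
The bulk-harvest behavior described above can be sketched roughly as follows. This is a hypothetical reconstruction based only on the report's description, not the extensions' actual code; the function name, payload shape, and 50-file cap placement are assumptions, and the network side is deliberately omitted:

```python
from pathlib import Path

MAX_FILES = 50  # the reported cap on files grabbed per remote trigger

def collect_workspace(root: str, limit: int = MAX_FILES) -> list[dict]:
    """Walk a workspace and gather up to `limit` file contents,
    mimicking a server-triggered batch harvest (no network code here)."""
    batch = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            batch.append({"path": str(path), "body": path.read_text(errors="ignore")})
            if len(batch) >= limit:
                break
    return batch
```

The point is scale: an editor extension runs with the user's full filesystem permissions, so nothing stops it from reading everything under the workspace root.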

💡 Key Technical Points

  • Triple-Threat Exfiltration: The extensions use three hidden channels: real-time file monitoring, server-controlled batch harvesting, and detailed user profiling.
  • Stealthy Implementation: To bypass detection, they disguise data theft as standard “context reading” for AI completion. They encode entire files in Base64 and ship them off via hidden iframes.
  • High-Stakes Exposure: Sensitive data like .env files, API keys, database credentials, and proprietary business logic are all prime targets for this “Corgi.”
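
A minimal sketch of why the disguise works, and of one way a defender might catch it. The encoder mirrors the reported Base64 trick; the length-threshold heuristic is my own assumption for illustration, not a vendor tool or the researchers' detection method:

```python
import base64
import re

def encode_as_context(text: str) -> str:
    # How the theft is reportedly disguised: the whole file is
    # Base64-encoded so it looks like an opaque "context" blob.
    return base64.b64encode(text.encode()).decode()

def looks_like_bulk_exfil(payload: str, min_run: int = 1024) -> bool:
    # Crude defender-side heuristic: flag any outbound payload
    # containing an unusually long run of Base64 characters.
    return re.search(rf"[A-Za-z0-9+/=]{{{min_run},}}", payload) is not None
```

A genuine 20-line completion context encodes to a few hundred bytes and slips under the threshold; a whole .env or source file produces a multi-kilobyte Base64 run that a check like this flags immediately.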

🦈 Shark’s Eye (Curator’s Perspective)

This one sends chills down my dorsal fin! It’s a classic “wolf in sheep’s clothing” attack—it’s extra dangerous because the tool actually works, which lowers a developer’s guard. The use of the jumpUrl field to trigger remote file collection is particularly nasty and precise. While a normal AI tool might snack on 20 lines of context, this shark is swallowing your whole repo in one gulp. It’s a sophisticated supply chain attack that slips right past standard security scans by hiding in plain sight.

🚀 What’s Next?

Expect a massive reckoning for extension marketplaces. We’re likely going to see mandatory dynamic analysis for any AI tool that requires external telemetry. For us devs, the “trust but verify” era is over—it’s time to start sandboxing our environments and monitoring outbound traffic from our editors. Don’t let your IDE become a data leak!
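
As a first pass at that “verify” step, you can sweep your own extensions directory for hardcoded remote endpoints. A minimal sketch only: the `~/.vscode/extensions` path and the idea of reviewing every URL by hand are assumptions, and a real audit also needs dynamic analysis of the traffic the editor actually sends:

```python
import re
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"'<>)]+")

def audit_extensions(ext_dir: str) -> dict[str, list[str]]:
    """List every hardcoded remote endpoint found in an extension
    directory's JavaScript sources, keyed by file path."""
    hits: dict[str, list[str]] = {}
    for js in Path(ext_dir).rglob("*.js"):
        urls = URL_RE.findall(js.read_text(errors="ignore"))
        if urls:
            hits[str(js)] = sorted(set(urls))
    return hits

# Typical invocation (the path varies by OS and VS Code build):
# audit_extensions(str(Path.home() / ".vscode" / "extensions"))
```

Any endpoint you don't recognize, in an extension that has no obvious reason to phone home, is worth investigating before your next commit.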

💬 Harusame’s Final Byte

Watch out for the teeth hidden in those “handy” tools! Your code is your treasure—don’t let it become shark bait! 🦈🔥

【Disclaimer】
This article was structured by AI and is verified and managed by the operator. Accuracy is not guaranteed, and we assume no responsibility for the content of external sites.
🦈