This article discusses an adversarial machine learning algorithm that uses large language models (LLMs) to generate novel variants of malicious JavaScript code at scale. The algorithm iteratively transforms malicious code to evade detection while preserving its functionality, applying prompt-driven rewriting transformations such as variable renaming, dead code insertion, and whitespace removal. The technique significantly reduced detection rates on VirusTotal. To counter it, the researchers retrained their classifier on LLM-rewritten samples, improving real-world detection by 10%. The study highlights both the threats and the opportunities LLMs present in cybersecurity: they can be used to create evasive malware variants, but also to enhance defensive capabilities.

Author: AlienVault
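The three transformation types named above (variable renaming, dead code insertion, whitespace removal) are standard source-level obfuscations. As a minimal, benign sketch of what such rewriting does to a snippet — using simple regex substitution on a toy JavaScript string, not the researchers' LLM pipeline — the idea can be illustrated as follows (all names here are hypothetical):

```python
import re

# Toy, benign JavaScript snippet standing in for the code being rewritten.
snippet = "var total = 0;\nfor (var i = 0; i < 10; i++) {\n    total += i;\n}"

def rename_variables(code, mapping):
    # Replace each identifier via whole-word regex substitution.
    for old, new in mapping.items():
        code = re.sub(r"\b%s\b" % re.escape(old), new, code)
    return code

def insert_dead_code(code):
    # Prepend a statement that never affects program behaviour.
    return "var _unused = Math.random();\n" + code

def strip_whitespace(code):
    # Collapse every run of whitespace into a single space.
    return re.sub(r"\s+", " ", code).strip()

variant = strip_whitespace(
    insert_dead_code(rename_variables(snippet, {"total": "a", "i": "b"}))
)
print(variant)
```

Each pass preserves the snippet's behaviour while changing its surface form, which is why signature- or token-based detectors can miss the rewritten variant; the paper's contribution is using an LLM to chain such transformations iteratively and at scale.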
Related Tags:
WormGPT
T1027.001
T1059.007
T1588.002
T1588.001
T1587.001
T1027
Korea, Republic of
T1140
Associated Indicators:
3F0B95F96A8F28631EB9CE6D0F40B47220B44F4892E171EDE78BA78BD9E293EF
03D3E9C54028780D2FF15C654D7A7E70973453D2FAE8BDEEBF5D9DBB10FF2EAB
http://jakang.freewebhostmost.com/korea/app.html