Attack technique hiding malware in hosted ML models now targets PyPI

A new malicious campaign has been discovered targeting the Python Package Index (PyPI) by exploiting the Pickle file format used by machine learning models. Three malicious packages posing as an Alibaba AI Labs SDK were detected, each containing an infostealer payload hidden inside a PyTorch model. The packages exfiltrate information about infected machines along with the contents of the .gitconfig file. This attack demonstrates the evolving threat landscape in AI and machine learning, particularly in the software supply chain. The campaign likely targeted developers in China and highlights the need for improved security measures and tools that can detect malicious functionality in ML models. Author: AlienVault
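The attack works because PyTorch model files saved with `torch.save` are Pickle archives, and Pickle's `__reduce__` hook lets a serialized object name an arbitrary callable to invoke at deserialization time. The sketch below is hypothetical and deliberately harmless (it calls the builtin `len` rather than anything an infostealer would use), but it shows the mechanism the campaign abuses:

```python
import pickle

class MaliciousStub:
    """Illustrative only: an object that runs a callable when unpickled."""
    def __reduce__(self):
        # A real payload would return something like (os.system, ("…",));
        # here the harmless builtin len() stands in to show the mechanism.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(MaliciousStub())
result = pickle.loads(payload)  # len() executes during unpickling
print(result)  # → 19, the length of the string passed by __reduce__
```

Because code runs at load time, simply inspecting a model's weights after loading is too late. Recent PyTorch versions offer `torch.load(..., weights_only=True)`, which restricts unpickling to tensor data and rejects arbitrary callables; loading untrusted models without such a restriction should be treated as executing untrusted code.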

Related Tags: pytorch, machine learning, ai, pypi, China, supply chain attack, T1132, T1005, T1057

Associated Indicators:
1F83B32270C72146C0E39B1FC23D0D8D62F7A8D83265DFA1E709EBF681BAC9CE
6DC828CA381FD2C6F5D4400D1CB52447465E49DD
A9AEC9766F57AAF8FD7261690046E905158B5337
2BB1BC02697B97B552FBE3036A2C8237D9DD055E
81080F2E44609D0764AA35ABC7E1C5C270725446
4BD9B016AF8578FBD22559C9776A8380BBDBC076
05DBC49DA7796051450D1FA529235F2606EC048A
8AABA017E3A28465B7176E3922F4AF69B342CA80
17EADDFD96BC0D6A8E3337690DC983D2067FECA7