# It’s only a matter of time before LLMs jump start supply-chain attacks

*‘The greatest concern is with spear phishing and social engineering’*

[Jessica Lyons](/Author/Jessica-Lyons) Sun 29 Dec 2024 // 18:20 UTC

**Interview** Now that criminals have realized there’s no need to train their own LLMs for nefarious purposes – it’s much cheaper and easier to steal credentials and jailbreak existing ones – the threat of a large-scale supply chain attack using generative AI becomes more real.

No, we’re not talking
about a fully AI-generated attack that runs from initial access all the way to business operations shutdown. Technologically, the criminals aren’t there yet. But one thing LLMs are getting very good at is assisting in social engineering campaigns.

And this is why Crystal Morin, former intelligence analyst for the US Air Force and cybersecurity strategist at Sysdig, anticipates seeing highly successful supply chain attacks in 2025 that originate with an LLM-generated spear phish.

When it comes to using LLMs, ‘threat actors are learning and understanding and gaining the lay of the land just the same as we are,’ Morin told *The Register*. ‘We’re in a footrace right now. It’s machine against machine.’

Sysdig, along with other researchers, in 2024 documented an uptick in criminals using stolen cloud credentials to access LLMs. In May, the container security firm documented attackers [targeting Anthropic’s Claude LLM](https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/).

While they could have exploited this access to extract LLM training data, their primary goal in this type of attack appeared to be selling access to other criminals. This left the cloud account owner footing the bill – a hefty $46,000 per day in LLM consumption costs.
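Billing spikes like that suggest one simple defensive habit for cloud account owners: watching LLM API request volumes for sudden jumps. As a minimal sketch – this is not Sysdig’s tooling, and the function and thresholds are hypothetical – daily request counts could be compared against a rolling baseline:

```python
from statistics import mean, stdev

def flag_anomalous_days(daily_requests, threshold_sigma=3.0, baseline_days=7):
    """Return indices of days whose LLM request count spikes far above
    a rolling baseline of the preceding days (a crude LLMjacking tripwire)."""
    flagged = []
    for i, count in enumerate(daily_requests):
        window = daily_requests[max(0, i - baseline_days):i]
        if len(window) < baseline_days:
            continue  # not enough history yet to establish a baseline
        mu, sigma = mean(window), stdev(window)
        # max(sigma, 1.0) avoids a zero threshold on perfectly flat traffic
        if count > mu + threshold_sigma * max(sigma, 1.0):
            flagged.append(i)
    return flagged

# A stolen-credential abuser driving a 10x jump stands out immediately:
# flag_anomalous_days([100, 110, 95, 105, 98, 102, 100, 1000]) flags day 7
```

In practice the counts would come from the cloud provider’s usage or billing metrics; the point is only that the 10x request growth Sysdig reported is trivially detectable if anyone is looking.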
Digging deeper, the researchers discovered that the [broader script](https://github.com/kingbased/keychecker) used in the attack could check credentials for ten different AI services: AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.

> We’re in a footrace right now. It’s machine against machine

Later in the year, Sysdig spotted attackers attempting to use stolen credentials to enable LLMs in victims’ cloud accounts.

The threat research team calls any attempt to illegally obtain access to a model ‘LLMjacking,’ and in September reported that these types of attacks were ‘on the rise, with a [10x increase in LLM requests](https://sysdig.com/blog/growing-dangers-of-llmjacking/) during the month of July and 2x the amount of unique IP addresses engaging in these attacks over the first half of 2024.’

Not only do these attacks cost victims a significant amount of money – more than $100,000 per day when the victim org is using newer models like Claude 3 Opus, according to Sysdig – but victims are also forced to pay for the people and technology needed to stop them. There’s also a risk of enterprise LLMs being weaponized, leading to further potential costs.

### 2025: The year of LLM phishing?

In 2025, ‘the greatest concern is with spear phishing and social engineering,’ Morin said.
‘There’s endless ways to get access to an LLM, and they can use this GenAI to craft unique, tailored messages to the individuals that they’re targeting based on who your employer is, your shopping preferences, the bank that you use, the region that you live in, restaurants and things like that in the area.’

In addition to helping attackers overcome language barriers, this can make messages sent via email or social media messaging apps appear even more convincing because they are expressly crafted for the individual victims.

‘They’re going to send you a message from this restaurant that’s right down the street, or popular in your town, hoping that you’ll click on it,’ Morin added. ‘So that will enable their success quite a bit. That’s how a lot of successful breaches happen. It’s just the person-on-person initial access.’

She pointed to the Change Healthcare ransomware attack – for which, we should make very clear, there is no evidence suggesting it was assisted by an LLM – as an example of one of 2024’s hugely damaging breaches.

In this case, a ransomware crew [locked up](https://www.theregister.com/2024/02/29/alphv_change_healthcare/) Change Healthcare’s systems, [disrupting](https://www.theregister.com/2024/02/22/change_healthcare_outage/) thousands of pharmacies and hospitals across the US and accessing private data belonging to around [100 million people](https://www.theregister.com/2024/10/27/senator_domain_registrars_russia_disinfo/). It took the healthcare payments giant [nine months](https://www.theregister.com/2024/11/20/change_healthcares_clearinghouse_services/) to restore its clearinghouse services following the attack.

> It will be a very small, simple portion of the attack chain with potentially massive impact

Going back to spear phishing: ‘Imagine an employee of Change Healthcare receiving an email and clicking on a link,’ Morin said.
‘Now the attacker has access to their credentials, or access to that environment, and the attacker can get in and move laterally.’

When and if we see this type of GenAI assist, ‘it will be a very small, simple portion of the attack chain with potentially massive impact,’ she added.

While startups and established companies are releasing security tools that use AI to detect and prevent email phishes, there are some really simple steps everyone can take to avoid falling for any type of phishing attempt. ‘Just be careful what you click,’ Morin advised.

### Think before you click

Also: pay close attention to the email sender. ‘It doesn’t matter how good the body of the email might be. Did you look at the email address and it’s some crazy string of characters or some weird address like name@gmail but it says it’s coming from Verizon? That doesn’t make sense,’ she added.

LLMs can also help criminals craft a domain with different alphanumerics based on legitimate, well-known company names, and they can use various prompts to make the sender look more believable.

Even voice-call phishing will likely become harder to distinguish because of AI used for voice cloning, Morin believes.

* [Cast a hex on ChatGPT to trick the AI into writing exploit code](https://www.theregister.com/2024/10/29/chatgpt_hex_encoded_jailbreak/)
* [OpenAI claims its software can clone your voice from 15 seconds of you talking](https://www.theregister.com/2024/04/01/openai_voice_clone/)
* [Microsoft dangles $10K for hackers to hijack LLM email service](https://www.theregister.com/2024/12/09/microsoft_llm_prompt_injection_challenge/)
* [Don’t fall for a mail asking for rapid Docusign action – it may be an Azure account hijack phish](https://www.theregister.com/2024/12/19/docusign_lure_azure_account_takeover/)

‘I get, like, five spam calls a day from all over the country and I just ignore them because my phone tells me it’s spam,’ she noted.

‘But they use voice cloning now, too,’ Morin continued.
‘And most of the time when people answer your phone, especially if you’re driving or something, you’re not actively listening, or you’re multitasking, and you might not catch that this is a voice clone – especially if it sounds like someone that’s familiar, or what they’re saying is believable, and they really do sound like they’re from your bank.’

We saw a preview of this during the run-up to the 2024 US presidential election, when [AI-generated robocalls](https://www.theregister.com/2024/01/23/robocaller_biden_new_hampshire/) impersonating President Biden urged New Hampshire voters not to participate in the state’s presidential primary election.

Since then, the FTC issued a [$25,000 reward](https://www.theregister.com/2024/01/05/ftc_voice_cloning_solution/) to solicit ideas on the best ways to combat AI voice cloning, and the FCC [declared](https://www.theregister.com/2024/02/08/sorry_scammers_the_fcc_says/) AI-generated robocalls illegal.

Morin doesn’t expect this to be a deterrent to criminals. ‘If there’s a will, there’s a way,’ she opined.
‘If it costs money, then they’ll figure out a way to get it for free.’ ®