
Newsflash: Large Language Models Can Be Easily Weaponized

Photo: Leon Neal (Getty Images)

New cybersecurity research from IBM suggests that large language models can be easily weaponized to aid in cybercriminal schemes. The researchers describe ways in which bad actors could access a business's LLM and exploit it to leak sensitive proprietary information or manipulate its output to cause further havoc within the organization.