A new LLM jailbreak technique could allow attackers to coax AI models into detailing how to make weapons and explosives, and Claude, Llama and GPT are all at risk.
![Female software developer working at computer with screen light reflecting in spectacles.](https://trendingdash.com/wp-content/uploads/2024/04/A-new-LLM-jailbreak-technique-could-allow-users-to-exploit.jpg)
Researchers at Anthropic have warned of a new large language model (LLM) jailbreaking technique that could be exploited to force ...