LLM Red Teaming

Luis Josue Cruz Mier

LLM Red Teaming is a hands-on learning path for cybersecurity professionals exploring offensive security techniques against Large Language Models. It covers LLM fundamentals, ethical AI practices, vulnerability enumeration, jailbreaks, prompt injection, output-handling flaws, supply chain risks, and excessive model agency. It is ideal for practitioners with a background in penetration testing or ethical hacking who want to specialize in AI security.

Skills / Knowledge

  • LLM Web App Attacks
  • LLM Supply Chain Attacks
  • LLM Red Teaming

Issued on: June 23, 2025

Expires on: Does not expire