Software Engineer at INE.
Speaker
Software Engineer at INE.
G Khartheesvar is a Software Engineer at INE specializing in web and mobile application security, network penetration testing, and cloud security. With a strong foundation in both Red Teaming and Blue Teaming, he works across offensive and defensive domains to identify vulnerabilities while building effective detection and response strategies. He has hands-on experience assessing real-world systems, focusing on uncovering weaknesses in applications, networks, and cloud environments through practical attack simulations and defensive improvements.
He has presented at Black Hat Asia 2023 and served as a trainer at RootCon 19, delivering technical sessions to security professionals. He is also a core contributor to the open-source project ThreatSeeker, focused on threat detection and analysis using Windows event logs. Holding a Master’s degree in Computer Science and Engineering, he contributes to the cybersecurity community through research, tool development, and knowledge sharing.
LLMGoat is a locally hosted, interactive security environment where you can safely exploit real vulnerabilities in LLM-powered applications, see the impact firsthand, and walk away actually understanding why these attacks work. It covers all 10 vulnerabilities from the OWASP LLM Top 10, things like prompt injection, system prompt leakage, data poisoning, and unbounded consumption, each built as its own live scenario with a real chatbot you can break.
The idea behind LLMGoat is simple: reading about LLM vulnerabilities is one thing, but watching a chatbot hand over confidential credentials because you asked nicely, or seeing a live database get wiped through a single chat message, hits differently. Every challenge in LLMGoat is designed around a realistic scenario. This includes a university HR bot, a financial advisor, and a research assistant. So the risks feel concrete rather than theoretical. The environment runs entirely on your own machine using open-source models, so there are no API costs, no data leaving your system, and no setup friction beyond pulling a model and starting the server. Whether you are a developer who wants to build safer AI features, a security professional exploring a new attack surface, or a student learning about AI risks for the first time, LLMGoat gives you a place to actually get your hands dirty.