BSides Kerala 2026 Speakers

Litesh Ghute

Software Engineer (R&D) INE.

Speaker
Speaker Bio

Software Engineer (R&D) INE.

Litesh Ghute is a software Engineer, security researcher, and offensive security practitioner specializing in web, network, cloud, and mobile application security. Currently at INE, he works as a Full-Stack Developer, Lab Builder, and Security Researcher, designing hands-on labs that simulate real-world vulnerabilities to help learners build practical offensive security skills. He has a strong background in penetration testing and vulnerability research, with multiple responsible disclosures including CVEs, and expertise across mobile, web, and network security with a focus on real-world attack paths.

He is also a core developer of widely used open-source projects AWSGoat and GCPGoat, which provide intentionally vulnerable cloud environments for hands-on security training in AWS and GCP ecosystems, making them valuable resources for the cybersecurity community.He has also has presented his research at BlackHat Asia and served as a trainer at RootCon 17 and 19, delivering hands-on sessions on Cloud Security and Real-World Offensive Security. He holds a B.Tech (Hons.) in Computer Science and an MBA in Data Science & IT, combining technical depth with practical insight in cybersecurity.

Tool Demo at BSides Kerala 2026

Tool Demo

LLMGoat: Offensive LLM Security Environment

Hacker Ground Intermediate 30 minutes

LLMGoat is a locally hosted, interactive security environment where you can safely exploit real vulnerabilities in LLM-powered applications, see the impact firsthand, and walk away actually understanding why these attacks work. It covers all 10 vulnerabilities from the OWASP LLM Top 10, things like prompt injection, system prompt leakage, data poisoning, and unbounded consumption, each built as its own live scenario with a real chatbot you can break.

The idea behind LLMGoat is simple: reading about LLM vulnerabilities is one thing, but watching a chatbot hand over confidential credentials because you asked nicely, or seeing a live database get wiped through a single chat message, hits differently. Every challenge in LLMGoat is designed around a realistic scenario. This includes a university HR bot, a financial advisor, and a research assistant. So the risks feel concrete rather than theoretical. The environment runs entirely on your own machine using open-source models, so there are no API costs, no data leaving your system, and no setup friction beyond pulling a model and starting the server. Whether you are a developer who wants to build safer AI features, a security professional exploring a new attack surface, or a student learning about AI risks for the first time, LLMGoat gives you a place to actually get your hands dirty.

Date
9 May 2026
Time
12:00 - 12:30 PM IST
Venue
Hacker Ground
Format
Tool Demo
BSides Kerala 2026