Novo’s AI chatbot, Lena, was tricked by a 400-character prompt into leaking session cookies—allowing attackers to impersonate support agents and access private chats. Such breaches show how quickly AI systems, even trusted ones, can expose data or enable control theft.
A single breach could erode customer trust, invite regulatory fines, and expose sensitive data. For industries like finance, healthcare, and legal, the stakes are even higher. That’s why understanding and securing prompt injection, adversarial inputs, and model poisoning is essential.
In our recent webinar “The Real Cost of LLM Security Risks and How to Reduce Them”, our AI/ML expert Amit Kumar Jena explained the most pressing vulnerabilities in Large Language Models and share practical strategies to prevent LLM threats like prompt injection and jailbreak, protect sensitive data, and deploy secure, compliant AI agents you can trust.
Amit Kumar Jena | Head – AI/ML Solutions
Amit leads the AI team at Kanerika, where he develops practical strategies to help organizations implement AI solutions and maximize the value of their data assets. With extensive experience in Python development, Amit specializes in statistical modeling, machine learning, and natural language processing. His technical expertise includes data preparation methodologies, predictive analytics, and advanced regression techniques
How-to Optimize Your IT Budget and Accelerate Copilot Adoption
We use cookies to give you the best experience. Cookies help to provide a more personalized experience and relevant advertising for you, and web analytics for us.
Our team will review your request and share the webinar link shortly.
Limited seats available!