Karan Mankodi is a passionate technical leader at Akamai Technologies with over 11 years of experience in cybersecurity, cloud computing, and consulting. As a Senior Service Line Manager, he leads a large post-sales consulting team dedicated to the Financial Services industry, overseeing the successful delivery of security and business solutions for a client portfolio valued at over $100 million.
Karan is a recognized thought leader on cyber threats and innovation. He has authored articles for Akamai's renowned "State of the Internet" report, focusing on attack trends in financial services, and is a regular speaker at industry events, including FS-ISAC Americas Threat Briefings and the Cloud Security Alliance. He holds a Master's in Management from Harvard University, a Master's in Information Systems from Northeastern University, and is a Google Cloud Certified Technical Leader.
Who’s Really in Control? Cloud Security in the Age of AI Agents
We are entering an era where AI agents are no longer experimental side projects. They are being woven into business processes, publishing systems, and cloud applications that operate at global scale. These agents can take action, reason across multiple steps, and interact with APIs and data sources on behalf of people and organizations. This creates incredible opportunities for efficiency and innovation, but it also introduces a new class of security challenges that many teams are not yet prepared to handle.
The central question is one of control. How do we ensure that AI agents are working for us, and not against us? Attackers have already begun exploring ways to manipulate agents through prompt injection, data poisoning, and malicious task chaining. An adversarial agent has the potential to move laterally across cloud services, leak sensitive data, or amplify misinformation with unprecedented speed. At the same time, organizations struggle with governance: how to align the behavior of agents with privacy requirements, compliance frameworks, and ethical use guidelines.
In this session, I will break down the emerging risks that come with deploying AI agents in cloud environments. We will explore scenarios where adversarial agents can compromise trust, misuse publishing pipelines, and even exploit the very defensive tools meant to secure them. I will share lessons learned from observing the evolution of automated bots into autonomous agents, and why traditional detection and policy controls are no longer enough.
Most importantly, we will look at practical approaches to governing and defending AI agents. This includes designing guardrails for agent behavior, applying adversarial testing methods, and building resilience into cloud applications that rely on agent-driven automation. The goal is not to avoid AI agents, but to deploy them with confidence by understanding where the risks lie and how to mitigate them.
Attendees will leave with a clear view of:
The differences between bots and agents, and why this shift matters for cloud security.
The most common adversarial techniques targeting AI agents today.
Governance models and defensive strategies that can reduce the risk of loss, leakage, or abuse at scale.
AI agents represent one of the most exciting frontiers in technology, but also one of the most challenging for security leaders. By addressing the question of who is really in control, we can begin to shape a future where these systems are not just powerful, but trustworthy.
CloudVibing: How GenAI impacts cloud operations
AI functionality is having a profound effect on Cloud Security Operations. From DevOps through Offensive and Defensive operations, GenAI is reshaping how our industry functions and operates. To be clear, I am pro GenAI. Fast, efficient, and scalable, GenAI offers a tremendous value to the Cloud Security industry and will allow us to perform our duties at larger scales and faster time constraints, but the security mindset must remain at the forefront.
Within this talk, I will discuss the Pros and Cons of how vibe coding can be an effective tool for use to better protect ourselves from malicious operations. We will start with a brief overview of how tools like Claude and OpenAI operate, what their limitations are and how those limitations can result in potential security risks for a production environment. The risks we will discuss being, Time constraints, Misconfigurations and Vulnerabilities, and Widow Functions. Then I will give a brief overview of how these risks most likely populate when Vibe Coding the end all be all Infrastructure as Code template your business needed yesterday.
I will wrap the talk with an overview of how the Russian Nexus threat group Void Blizzard successfully used AI to create more effective phishing campaigns. Then I will discuss how GenAI is being used by Security Vendors to provide enhanced filtering, remediation and even autofix functionality. The key takeaways from this talk will be a firm understanding of Vibe Coding and its strengths and limitations, an understanding on how malicious threat actors are using GenAI in their operations, and finally, how security vendors are using GenAI to provide more actionable cloud telemetry. You will also learn a few tricks for how to strengthen your AI coding effectiveness by building your own AI Memory Cache to assist your LLM platform of choice. If you’re interested in the strengths, obstacles, pitfalls, and some solutions for how GenAI is affecting the cloud industry, then this talk is for you!
Nathaniel Quist is the manager of the Cortex Cloud Threat Intelligence team, researching threat actor groups who target and leverage public cloud platforms, tools, and services. He and his team actively focused on identifying the scope of the threat, the malware, and the techniques these threat actor groups use during their operations.
Nathaniel has worked within Government, Public, and Private sectors. He holds a Master of Science in Information Security Engineering (MSISE) from The SANS Institute, where he focused on Network and System Forensics, Malware Reversal, and Incident Response. He is the author of multiple blogs, reports, and whitepapers published by Palo Alto Networks’ Unit 42 and Prisma Cloud and the SANS InfoSec Reading Room.
Chris Hoesly has spent 10+ years in various engineering and sales roles across the data security software landscape. With a blend of product management for go-to-market sales, ensuring the delivery of cutting-edge security offerings and providing consultative executive guidance, Chris focuses on helping organizations adapt to the ever-changing data security industry. Now, he serves as the Regional Vice President - Security Solution Engineering at BigID helping customers and prospects transform their data security strategies and deliver business value.
Connecting the Dots Between Data and AI
In today's rapidly evolving digital landscape, the intersection of data governance and artificial intelligence presents both unprecedented opportunities and complex challenges for enterprises. BigID emerges as a pioneering force in this space, fundamentally transforming how organizations understand, secure, and govern their data ecosystems while simultaneously addressing the emerging risks associated with AI adoption.
AI Governance as the Cornerstone of Secure and Ethical Cloud AI
Artificial Intelligence is transforming business at breakneck speed, embedding itself into everything from cloud-based productivity suites to customer-facing applications. But behind the promise of innovation lies a rapidly escalating cybersecurity and compliance problem: organizations are racing ahead with AI adoption without adequate governance. In fact, as of late 2024, fewer than half of companies had established any policies for employee or business use of AI. The result is an ecosystem rich with opportunity but also riddled with unchecked risk.
This session—“AI Governance as the Cornerstone of Secure and Ethical Cloud AI”—takes a clear-eyed look at the key challenge of AI governance in cloud environments. At its heart, governance is not about slowing innovation, but about enabling it safely. AI governance frameworks establish transparency, accountability, fairness, and security across the AI lifecycle, ensuring that systems align with both organizational values and regulatory requirements. Without them, companies expose themselves to cascading risks: privacy breaches (as in the Grindr analytics scandal), unethical use of sensitive data (as with DeepMind’s patient data controversy), and security vulnerabilities that adversarial actors can exploit.
Yoko Washington-Ruiz is