Brent Maynard is the Senior Director of Security Technology and Strategy at Akamai Technologies, where he leads efforts to shape the future of cybersecurity across global enterprises and cloud environments. With more than two decades of experience in the field, Brent has built and guided high-performing teams across financial services, retail, and cloud providers, consistently driving innovation in threat detection, response, and security operations.
His career reflects a unique balance of technical depth and executive leadership. Brent is a patent holder for automated security investigations, and his contributions have advanced the industry’s ability to modernize SOCs and improve analyst efficiency at scale. He has designed and launched products that redefine how organizations approach adversarial defense, incident response, and AI-driven security operations. These efforts not only strengthen enterprise resilience but also enable security teams to keep pace with rapidly evolving threats.
Brent’s expertise extends beyond the private sector. He has served as a trusted advisor to the intelligence community and federal law enforcement, partnering on high-profile cybercrime and financial fraud investigations. In roles supporting agencies such as the U.S. Secret Service, FBI, and NCIS, he has contributed to dismantling organized criminal networks, mitigating supply chain compromises, and addressing nation-state threats. This work has given him a rare perspective on how public and private sectors must collaborate to defend critical infrastructure.
As a recognized thought leader, Brent frequently speaks at major industry events including AWS re:Invent, Black Hat, RSA, FS-ISAC, and RH-ISAC. His talks blend technical insight with strategic perspective, offering audiences both actionable takeaways and a forward-looking view of the cybersecurity landscape. He is known for his ability to connect complex topics such as AI governance, adversarial machine learning, and cloud security strategy with real-world challenges that CISOs and security practitioners face today.
In addition to his professional work, Brent has been an active contributor to the broader cybersecurity community, helping to build local DEF CON groups and mentoring the next generation of investigators and security leaders. His background includes a degree in Digital Forensics, extensive hands-on work in security operations centers, and leadership roles at organizations including Charles Schwab, Nordstrom, Akamai, Microsoft, and Amazon Web Services.
Brent brings a perspective shaped by both the trenches of investigation and the boardroom of strategy. Whether modernizing SOC workflows, guiding AI adoption, or advising on incident response at the highest levels, he has consistently focused on empowering organizations to stay ahead of emerging threats. His career reflects a commitment to advancing security not only as a discipline, but as a cornerstone of trust in a world where technology, cloud, and AI are increasingly inseparable.
Who’s Really in Control? Cloud Security in the Age of AI Agents
We are entering an era where AI agents are no longer experimental side projects. They are being woven into business processes, publishing systems, and cloud applications that operate at global scale. These agents can take action, reason across multiple steps, and interact with APIs and data sources on behalf of people and organizations. This creates incredible opportunities for efficiency and innovation, but it also introduces a new class of security challenges that many teams are not yet prepared to handle.
The central question is one of control. How do we ensure that AI agents are working for us, and not against us? Attackers have already begun exploring ways to manipulate agents through prompt injection, data poisoning, and malicious task chaining. An adversarial agent has the potential to move laterally across cloud services, leak sensitive data, or amplify misinformation with unprecedented speed. At the same time, organizations struggle with governance: how to align the behavior of agents with privacy requirements, compliance frameworks, and ethical use guidelines.
In this session, I will break down the emerging risks that come with deploying AI agents in cloud environments. We will explore scenarios where adversarial agents can compromise trust, misuse publishing pipelines, and even exploit the very defensive tools meant to secure them. I will share lessons learned from observing the evolution of automated bots into autonomous agents, and why traditional detection and policy controls are no longer enough.
Most importantly, we will look at practical approaches to governing and defending AI agents. This includes designing guardrails for agent behavior, applying adversarial testing methods, and building resilience into cloud applications that rely on agent-driven automation. The goal is not to avoid AI agents, but to deploy them with confidence by understanding where the risks lie and how to mitigate them.
Attendees will leave with a clear view of:
The differences between bots and agents, and why this shift matters for cloud security.
The most common adversarial techniques targeting AI agents today.
Governance models and defensive strategies that can reduce the risk of loss, leakage, or abuse at scale.
AI agents represent one of the most exciting frontiers in technology, but also one of the most challenging for security leaders. By addressing the question of who is really in control, we can begin to shape a future where these systems are not just powerful, but trustworthy.
CloudVibing: How GenAI impacts cloud operations
AI functionality is having a profound effect on Cloud Security Operations. From DevOps through Offensive and Defensive operations, GenAI is reshaping how our industry functions and operates. To be clear, I am pro GenAI. Fast, efficient, and scalable, GenAI offers a tremendous value to the Cloud Security industry and will allow us to perform our duties at larger scales and faster time constraints, but the security mindset must remain at the forefront.
Within this talk, I will discuss the Pros and Cons of how vibe coding can be an effective tool for use to better protect ourselves from malicious operations. We will start with a brief overview of how tools like Claude and OpenAI operate, what their limitations are and how those limitations can result in potential security risks for a production environment. The risks we will discuss being, Time constraints, Misconfigurations and Vulnerabilities, and Widow Functions. Then I will give a brief overview of how these risks most likely populate when Vibe Coding the end all be all Infrastructure as Code template your business needed yesterday.
I will wrap the talk with an overview of how the Russian Nexus threat group Void Blizzard successfully used AI to create more effective phishing campaigns. Then I will discuss how GenAI is being used by Security Vendors to provide enhanced filtering, remediation and even autofix functionality. The key takeaways from this talk will be a firm understanding of Vibe Coding and its strengths and limitations, an understanding on how malicious threat actors are using GenAI in their operations, and finally, how security vendors are using GenAI to provide more actionable cloud telemetry. You will also learn a few tricks for how to strengthen your AI coding effectiveness by building your own AI Memory Cache to assist your LLM platform of choice. If you’re interested in the strengths, obstacles, pitfalls, and some solutions for how GenAI is affecting the cloud industry, then this talk is for you!
Nathaniel Quist is the manager of the Cortex Cloud Threat Intelligence team, researching threat actor groups who target and leverage public cloud platforms, tools, and services. He and his team actively focused on identifying the scope of the threat, the malware, and the techniques these threat actor groups use during their operations.
Nathaniel has worked within Government, Public, and Private sectors. He holds a Master of Science in Information Security Engineering (MSISE) from The SANS Institute, where he focused on Network and System Forensics, Malware Reversal, and Incident Response. He is the author of multiple blogs, reports, and whitepapers published by Palo Alto Networks’ Unit 42 and Prisma Cloud and the SANS InfoSec Reading Room.
Chris Hoesly has spent 10+ years in various engineering and sales roles across the data security software landscape. With a blend of product management for go-to-market sales, ensuring the delivery of cutting-edge security offerings and providing consultative executive guidance, Chris focuses on helping organizations adapt to the ever-changing data security industry. Now, he serves as the Regional Vice President - Security Solution Engineering at BigID helping customers and prospects transform their data security strategies and deliver business value.
Connecting the Dots Between Data and AI
In today's rapidly evolving digital landscape, the intersection of data governance and artificial intelligence presents both unprecedented opportunities and complex challenges for enterprises. BigID emerges as a pioneering force in this space, fundamentally transforming how organizations understand, secure, and govern their data ecosystems while simultaneously addressing the emerging risks associated with AI adoption.
AI Governance as the Cornerstone of Secure and Ethical Cloud AI
Artificial Intelligence is transforming business at breakneck speed, embedding itself into everything from cloud-based productivity suites to customer-facing applications. But behind the promise of innovation lies a rapidly escalating cybersecurity and compliance problem: organizations are racing ahead with AI adoption without adequate governance. In fact, as of late 2024, fewer than half of companies had established any policies for employee or business use of AI. The result is an ecosystem rich with opportunity but also riddled with unchecked risk.
This session—“AI Governance as the Cornerstone of Secure and Ethical Cloud AI”—takes a clear-eyed look at the key challenge of AI governance in cloud environments. At its heart, governance is not about slowing innovation, but about enabling it safely. AI governance frameworks establish transparency, accountability, fairness, and security across the AI lifecycle, ensuring that systems align with both organizational values and regulatory requirements. Without them, companies expose themselves to cascading risks: privacy breaches (as in the Grindr analytics scandal), unethical use of sensitive data (as with DeepMind’s patient data controversy), and security vulnerabilities that adversarial actors can exploit.
Yoko Washington-Ruiz is