Matteo Rebeschini is a Boulder-based cybersecurity expert with over a decade of experience helping organizations secure their most critical assets in the cloud-native era. As a Sr. Field Engineer at Chainguard, he works with enterprises to build security into the software they depend on. Before Chainguard, Matteo spent more than six years at Elastic as a Global Security Specialist, helping some of the world’s largest organizations design and implement scalable solutions for threat detection and incident response.
Known in Colorado’s security community for his engaging talks, Matteo bridges the gap between technical complexity and executive strategy. A frequent speaker at industry events, he equips CISOs and security leaders to understand modern software risk, strengthen partnerships with engineering, and build trust into every component they deploy. When he’s not speaking or securing software supply chains, Matteo can be found traveling, climbing peaks, trekking, and embracing any outdoor adventure.
Securing Open Source Powered AI
Open source software now makes up more than 90% of modern codebases, powering everything from enterprise cloud platforms to the AI models transforming industries. This ubiquity accelerates innovation, but it also hides a dangerous truth: most organizations don’t actually know if the code they run is safe. Popularity, a clean vulnerability scan, or a “trusted” repository does not guarantee protection. High-impact incidents like Log4Shell and the XZ Utils backdoor have shown how a single vulnerable or malicious dependency can silently infiltrate business-critical systems. And when vulnerabilities surface, security leaders face a costly choice: accept prolonged exposure or enter the “patch-and-chase” cycle that drains engineering capacity and burns out teams. This high-impact session reframes open source risk for the age of AI and cloud.
Attendees will:
Gain a clear understanding of how modern software development and containerized delivery work, without needing an engineering background.
Learn the real risks of unverified open source in cloud and AI applications.
Discover how proactive verification, minimal hardened components, and transparent supply chains reduce both security risk and engineering toil.
By understanding these challenges, CISOs and security professionals can not only strengthen security posture but also improve collaboration with engineering, turning a historically tense relationship into a strategic partnership.
AI Governance at Scale: Balancing Innovation, Regulation, and Responsibility
For C-suite executives around the world, the whirlwind rise of artificial intelligence (AI) has raised the stakes on AI adoption. These business leaders rightly view AI with both enthusiasm and skepticism — yes, AI has tremendous potential to transform business, but it also carries tremendous risk.
The greatest risk of all may be inaction: watching your competitors take advantage of this paradigm-shifting technology while you get left behind. When innovation is a must, the question is not an “if” but a “how” — how to innovate safely and with intention. The answer? Creating the structure and guardrails to prioritize and use AI responsibly. An AI governance program builds upon foundations within your organization, such as your existing data governance and risk management programs. It puts people at the center of your approach, equipping your team with the resources and controls to foster safe, compliant, and ethical AI use. In the following session, we’ll explore the bedrock principles and practical frameworks of AI governance. Learn how to tap into the power of AI without sacrificing operational integrity or stakeholder trust.
Agent-based AI systems are gaining momentum in enterprise environments, promising greater autonomy and productivity while introducing an entirely new class of risks.
This session introduces the unique security challenges posed by agentic architectures and why traditional security measures aren’t equipped to handle them. As AI agent ecosystems continue to mature, the need for standardized and robust AI governance and observability has become more apparent. The non-deterministic nature of GenAI creates the need to measure the quality and safety of responses, making that measurement a core part of observability.
Join Peter Holcomb, Founder of Optimo IT, as he breaks down the principles of scalable AI security governance. Drawing on real-world experience advising AI startups, healthcare systems, and high-growth tech firms, Peter will outline a practical framework for aligning security, compliance, and innovation. Topics will include AI-specific threat modeling, risk-based governance controls, AI observability, and how to prepare for laws and standards like ISO 42001, Colorado's SB-205 and the EU AI Act.
Peter Holcomb, CISSP, CISM, is the Founder & CEO of Optimo IT, a Denver-based AI-driven Managed Intelligence Provider. With two decades of experience in security leadership, AI governance, and scalable IT operations, Peter helps scaling startups and mid-market enterprises build resilient, compliant, and future-ready environments. As a fractional CISO, Peter translates board-level risk into actionable roadmaps that align security investments with dynamic business targets. He has designed governance and control frameworks that secure AI/ML initiatives against bias, prompt injection, and emerging regulatory gaps, including Colorado’s landmark SB-205 Artificial Intelligence Act. Peter has guided organizations to successfully obtain SOC 2, ISO 42001, and HIPAA certifications, often under tight funding or federal-contract timelines. He is currently a vCISO to AI innovators such as TestSavant.ai, Hackerverse.ai, and Abacus Intelligence, safeguarding financial data and sensitive intellectual property.
Craig Dennis is
AI Identity Crisis: Can I be proud of this?
The Art (and Weirdness) of Feeling Proud of an AI
As a parent, you prompt and your kids do the work—and you feel proud. What happens when the "kid" is an AI? You give a prompt, it makes something brilliant—or bonkers—and you still feel it: that weird, undeniable pride. It’s a feeling we'd better get used to.
In this interactive talk, we’ll explore co-creation with AI using an SDK to move our slides and your phones for real-time interaction. A talking robot hand will help us navigate this strange new world.
AI & the TPRM Implosion
Just when we thought we had solved TPRM (we were wrong, but whatevs), AI came in and blew it all up. AI has impacted TPRM in three ways:
Your third parties are using AI in their business (and may not be telling you!): How do you find out that they're using AI and judge the risks their specific use of AI poses to your organization?
Overpromises from vendor solutions that are ‘powered by AI’: How do you know what's real?
Bad actors are using AI in creative and scary ways, using all the tools you use and a bunch you don’t/can’t/won’t, and posing new risks to your vendors: How do you know they're using AI securely?
Join us for this fun session where we will expose the hype, the real, and the future of AI in, around, and throughout your TPRM program.
Jeffrey Wheatman is a strategic thought leader with deep expertise in cybersecurity, risk management, and the evolving role of artificial intelligence in the enterprise. Widely regarded as an expert in guiding organizations through the complexities of modern cyber risk, Jeffrey helps companies integrate AI-driven insights into their cybersecurity programs to enhance decision-making, threat detection, and resilience.
Over the course of his career, Jeffrey has worked with organizations to plan, grow, and transform their cyber risk management programs, ensuring ongoing viability in an era where digital ecosystems and AI-powered adversaries are constantly changing the threat landscape. His work has been instrumental in helping businesses leverage advanced technologies while balancing innovation and risk.
In his current role as Cyber Risk Strategist at Black Kite, Jeffrey is focused on raising awareness of the enterprise impacts of third-party cyber risk across both digital and traditional supply chains. He helps organizations understand how AI and automation can be used to continuously assess and monitor supplier ecosystems, while also supporting the broader vision of Black Kite’s leadership team and investors.
Previously, Jeffrey served as a VP, Advisor with Gartner, a global strategic advisory firm. There, he partnered with clients to build next-generation security programs, assess risk, and implement forward-looking strategies that included reporting, metrics, executive engagement, and the alignment of technology, AI initiatives, and business priorities.