Securing the AI SDLC: AI Strategy & Governance and Secure AI Design | The AI TrustOps Masterclass Ch 2

Featuring

  • Snyk

About This Webinar

As AI technologies become a core part of your software, you need a robust strategy and a secure design methodology to manage the new risks they introduce. This chapter focuses on the first two pillars of the AI TrustOps framework: AI Strategy & Governance and Secure AI Design. We will show you how to establish a clear accountability model for AI initiatives, document your risk posture, and build a cross-functional governance team. You'll also learn how to integrate AI-native risk indicators, such as bias, explainability, and hallucinations, into your systems architecture, and how to proactively model the new threat vectors introduced by AI and ML models.

  1. Scott Bekker (Host)

    Webinar Moderator, Future B2B

  2. Clinton Herget (Featuring)

    Field CTO, Snyk

What You'll Learn

  1. Understand how to align AI goals with business objectives and create a cross-functional governance team to manage new risks.
  2. Learn to treat AI-native risks, such as bias and hallucinations, as a fundamental part of your architecture.
  3. Discover how to proactively identify and manage AI threats, including threat modeling for new AI/ML assets and ensuring data integrity by design.