
AI Security Shared Responsibility Model


Clear security ownership for every AI deployment model

Quick Start · Framework · Deployment Models · Security Domains · About


The Problem

AI is transforming industries at an unprecedented pace, but security ownership remains unclear. Organizations deploying AI systems—from simple ChatGPT usage to complex custom models—lack clarity on who is responsible for what.

This gap creates risk. Without clear ownership boundaries, critical security tasks fall through the cracks. Data governance, model security, and compliance requirements become nobody's responsibility—until something goes wrong.

The shared responsibility model solved this for cloud computing. Now AI needs the same clarity.

What This Is

A framework for understanding security responsibilities across AI deployments. Like cloud computing's shared responsibility model, this framework maps who owns what across 8 deployment models and 16 security domains.

Whether you're using ChatGPT, building custom models, or deploying autonomous agents, this framework shows exactly what you're responsible for—and what your providers handle.

Quick Start

| If you are a... | Start here | Focus on |
| --- | --- | --- |
| Security Leader | Responsibility Matrix | Understanding your obligations across all AI initiatives |
| AI Practitioner | Deployment Models | Identifying which model fits your use case |
| Architect | Security Domains | Comprehensive security coverage areas |
| Getting Started | This Section | Step-by-step implementation guide |

Why This Framework vs Others

Think of this as your Day 1 framework—what you need before diving into technical specifications.

| Framework | Best For | When to Use | Limitation |
| --- | --- | --- | --- |
| 🎯 This Framework | Initial alignment & planning | Before deployment decisions | Less technical depth |
| NIST AI RMF | Comprehensive risk management | Mature AI programs | Assumes AI maturity |
| CSA Models | Cloud-specific implementations | Azure/AWS deployments | Too narrow for full AI landscape |
| Microsoft Approach | Azure ecosystem | Technical implementation | Vendor-specific |

Other frameworks assume you already know your deployment model and have organizational alignment. This framework helps you build that alignment first.

The Framework

Core Components

| Component | What It Covers | Key Insight |
| --- | --- | --- |
| 8 Deployment Models | From SaaS to on-premises, agents to assistants | Each model has distinct security boundaries |
| 16 Security Domains | Traditional + AI-specific (marked with ★) | New domains like agent governance are critical now |
| Responsibility Matrix | Complete 8x16 mapping | Visual guide to all responsibilities |

Key Principles

  • No deployment is responsibility-free - Even SaaS requires customer security efforts
  • Control = Responsibility - More control means more security obligations
  • Shared requires coordination - Both parties must fulfill their parts
  • New domains matter now - Agent governance isn't a future problem

Getting Started

  1. 📍 Identify your AI deployment model(s) using the deployment models guide
  2. ✅ Check the responsibility matrix for your obligations
  3. 📋 Review the security domains to understand coverage areas
  4. 🎯 Plan improvements based on identified gaps
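The matrix lookup behind steps 1–2 can be sketched in code. This is a minimal, hypothetical encoding—the model names, domain names, and responsibility assignments below are illustrative placeholders, not the framework's official matrix:

```python
# Hypothetical sketch of a slice of the 8x16 responsibility matrix.
# All assignments here are illustrative, not the official mapping.
from enum import Enum


class Responsibility(Enum):
    PROVIDER = "provider"
    CUSTOMER = "customer"
    SHARED = "shared"


# matrix[deployment_model][security_domain] -> Responsibility
MATRIX = {
    "saas_ai": {
        "infrastructure_security": Responsibility.PROVIDER,
        "user_access_control": Responsibility.CUSTOMER,
        "data_privacy": Responsibility.SHARED,
    },
    "on_premises_ai": {
        "infrastructure_security": Responsibility.CUSTOMER,
        "user_access_control": Responsibility.CUSTOMER,
        "data_privacy": Responsibility.CUSTOMER,
    },
}


def customer_obligations(model: str) -> list[str]:
    """Domains where the customer owns or shares responsibility."""
    return [
        domain
        for domain, owner in MATRIX[model].items()
        if owner in (Responsibility.CUSTOMER, Responsibility.SHARED)
    ]


# Even the SaaS model leaves the customer with obligations,
# illustrating the "no deployment is responsibility-free" principle.
print(customer_obligations("saas_ai"))
```

Comparing `customer_obligations` across models also illustrates the "Control = Responsibility" principle: the on-premises entry assigns every listed domain to the customer.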

8 Deployment Models

Comprehensive coverage from simple SaaS to complex autonomous systems:

Cloud-Based Models

  1. SaaS AI Models - ChatGPT, Claude, Gemini (Public & Private)
  2. PaaS AI Models - Azure OpenAI, AWS Bedrock, Google AI Platform
  3. IaaS AI Models - Custom models on cloud infrastructure

Self-Managed & Specialized

  4. On-Premises AI Models - Local LLMs, air-gapped systems
  5. SaaS Products with Embedded AI - Salesforce Einstein, MS Copilot
  6. Agentic AI Systems - Autonomous multi-agent configurations
  7. AI Coding Assistants - GitHub Copilot, Cursor, Claude Code
  8. MCP-Based Systems - Persistent memory & context systems

→ Full deployment models guide

16 Security Domains

Comprehensive coverage across traditional and emerging AI security areas:

Traditional Domains (1-12)

  • Application Security
  • AI Ethics and Safety
  • Model Security
  • User Access Control
  • Data Privacy
  • Data Security
  • Monitoring and Logging
  • Compliance and Governance
  • Supply Chain Security
  • Network Security
  • Infrastructure Security
  • Incident Response

Emerging AI Domains (13-16)

  • Agent Governance - Control of autonomous AI agents
  • Code Generation Security - AI-generated code protection
  • Context Pollution Protection - Preventing false information injection
  • Multi-System Integration Security - Cross-system AI orchestration

→ Full security domains guide

Securing an AI system is a multi-faceted challenge that requires attention to various domains and usage states. As the deployment models evolve, so too will these focus areas.

Contributing

This framework improves with real-world input. Looking for:

  • Implementation experiences
  • Framework improvements
  • Templates and tools

See CONTRIBUTING.md for details or open an issue to start a discussion.

Evolution

The framework has grown from 4 to 8 deployment models and added 4 emerging security domains based on how AI security has evolved over the past year.

About

Created by Mike Privette, founder of Return on Security.

Questions? Open an issue to start a discussion.

License

MIT - See LICENSE file.
