Most enterprises cannot answer "how many AI systems do we have and what do they do?" Without an inventory, organizations cannot assess risk, demonstrate compliance, or govern AI use — the EU AI Act makes this a legal requirement.
Discovery processes identify AI systems across the organization — from production ML models and GenAI applications to embedded AI in vendor products and employee-adopted AI tools. Each system is classified into risk tiers (unacceptable, high, limited, minimal) following the EU AI Act's risk-based framework. Registration in a centralized inventory captures system purpose, data used, decision scope, deploying team, and risk classification rationale. Periodic review ensures new AI deployments are captured and classifications stay current.
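The discovery-classify-register-review cycle above can be sketched as a minimal inventory model. This is an illustrative sketch only: the class and field names (`AISystemRecord`, `AIInventory`, `needs_review`, the 180-day review window) are assumptions, not a prescribed schema, though the record fields mirror those the registration step calls for and the risk tiers follow the EU AI Act's four categories.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the centralized inventory (field names are illustrative)."""
    name: str
    purpose: str
    data_used: list[str]
    decision_scope: str
    deploying_team: str
    risk_tier: RiskTier
    classification_rationale: str
    last_reviewed: date

class AIInventory:
    """Hypothetical registry supporting registration and periodic review."""

    def __init__(self) -> None:
        self._records: dict[str, AISystemRecord] = {}

    def register(self, record: AISystemRecord) -> None:
        # Registration captures purpose, data, scope, team, and rationale.
        self._records[record.name] = record

    def by_tier(self, tier: RiskTier) -> list[AISystemRecord]:
        # Supports risk reporting, e.g. listing all high-risk systems.
        return [r for r in self._records.values() if r.risk_tier == tier]

    def needs_review(self, as_of: date, max_age_days: int = 180) -> list[str]:
        # Periodic review: flag systems whose classification may be stale.
        # The 180-day default is an assumption, not a regulatory mandate.
        return [
            name for name, r in self._records.items()
            if (as_of - r.last_reviewed).days > max_age_days
        ]
```

A registration might look like `inv.register(AISystemRecord(name="resume-screener", ..., risk_tier=RiskTier.HIGH, classification_rationale="employment decisions", last_reviewed=date(2024, 1, 15)))`, after which `inv.by_tier(RiskTier.HIGH)` drives reporting and `inv.needs_review(as_of=date.today())` drives the review cadence.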
AI inventory platforms, risk classification engines, AI discovery scanners, and regulatory-mapping tools.
No prerequisites recorded yet.
Assessment of risks specific to AI systems procured from or embedded in vendor products — model risk, data governance, bias, transparency.
Governed processes for developing, deploying, monitoring, updating, and retiring AI models with audit trails, version control.
Systematic evaluation of risks posed by individual AI systems — bias, safety, privacy, security, reliability — with documented impact analysis.