AI Inventory & Risk Classification

AI Governance, Responsible AI

A comprehensive registry of all AI systems in use or development, classified by risk tier per EU AI Act and organizational risk appetite.

Problem class

Most enterprises cannot answer "how many AI systems do we have and what do they do?" Without an inventory, organizations cannot assess risk, demonstrate compliance, or govern AI use — the EU AI Act makes this a legal requirement.

Mechanism

Discovery processes identify AI systems across the organization — from production ML models and GenAI applications to embedded AI in vendor products and employee-adopted AI tools. Each system is classified into risk tiers (unacceptable, high, limited, minimal) following the EU AI Act's risk-based framework. Registration in a centralized inventory captures system purpose, data used, decision scope, deploying team, and risk classification rationale. Periodic review ensures new AI deployments are captured and classifications stay current.
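The tiering step above can be sketched as a simple rule lookup. This is a hypothetical, deliberately simplified illustration: the use-case sets below are invented stand-ins, and real classification requires legal review against the full EU AI Act text (Article 5, Article 6, Annex III), not a keyword match.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical trigger sets, loosely patterned on the EU AI Act's categories.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_USES = {"employment_screening", "credit_scoring", "biometric_identification"}
TRANSPARENCY_USES = {"chatbot", "content_generation"}

def classify(use_case: str) -> RiskTier:
    """Map a system's declared use case to a risk tier (simplified sketch)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

In practice the rule table would be maintained jointly by legal and governance teams, with every non-minimal result routed to human review.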

Required inputs

  • Discovery sweep across IT, business units, and vendor ecosystem
  • EU AI Act risk classification criteria (Annex III use cases)
  • System metadata template (purpose, data, scope, owner, vendor)
  • Organizational AI acceptable-use policy defining internal thresholds

Produced outputs

  • Centralized AI system inventory with risk classification per system
  • Gap identification of unregistered or unclassified AI deployments
  • Risk-tier distribution analytics informing governance resource allocation
  • EU AI Act registration-ready documentation per high-risk system
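Two of the outputs above, gap identification and risk-tier distribution analytics, reduce to small set and counting operations over the registry. A sketch with invented sample rows:

```python
from collections import Counter

# Hypothetical inventory rows; in practice these come from the registry.
inventory = [
    {"name": "ResumeRanker", "risk_tier": "high"},
    {"name": "SupportBot", "risk_tier": "limited"},
    {"name": "SpamFilter", "risk_tier": "minimal"},
    {"name": "LogAnomaly", "risk_tier": "minimal"},
]

def tier_distribution(records):
    """Count systems per risk tier and express each as a share of the total."""
    counts = Counter(r["risk_tier"] for r in records)
    total = sum(counts.values())
    return {tier: {"count": n, "pct": round(100 * n / total, 1)}
            for tier, n in counts.items()}

def find_gaps(discovered, registered):
    """Systems found by discovery sweeps but absent from the registry."""
    return sorted(discovered - registered)
```

The distribution feeds governance resource allocation (more high-risk systems implies more review capacity), while the gap list drives the periodic-review loop described under Mechanism.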

Industries where this is standard

  • Financial services under model risk management (SR 11-7) requirements
  • Healthcare deploying clinical AI under FDA SaMD regulatory oversight
  • EU-operating companies under EU AI Act inventory obligations (Article 6/Annex III)
  • Government agencies under OMB AI governance mandates
  • Technology companies managing dozens to hundreds of internal AI systems

Counterexamples

  • Inventorying only ML models built in-house while ignoring GenAI tools, vendor-embedded AI, and RPA-with-ML creates a partial registry that misses the majority of AI risk surface.
  • Classifying every AI system as "minimal risk" to shrink governance burden is likely to be contradicted by the EU AI Act's Annex III use cases; the resulting noncompliance exposes the organization to penalties that reach up to €35 million or 7% of global annual turnover for the most serious violations.

Representative implementations

  • EU AI Act entered into force August 2024; prohibited-AI and literacy obligations applied from February 2025; GPAI obligations from August 2025; high-risk requirements from August 2026.
  • ServiceNow's AI governance module enables organizations to build EU AI Act-compliant inventories with automated risk classification against Annex III use-case criteria.
  • A global insurance company discovered 340+ AI systems across the enterprise during its first inventory — 3× the count leadership expected — including 89 vendor-embedded AI components.

Common tooling categories

AI inventory platforms, risk classification engines, AI discovery scanners, and regulatory-mapping tools.

Maturity required: Low (acatech L1–2 / SIRI Band 1–2)
Adoption effort: Medium (months, not weeks)