Update 2023-06-16
The AI field is seeing a constant rush of newly published models. Yesterday it was classification models and anomaly detection, today it is Large Language Models, tomorrow maybe something completely different. Essentially, the “AI” is contained in those models.
AI models are machine learning artifacts, trained on large amounts of data. In the enterprise they become assets in their own right. Without a model, there is no AI-based product.
Still, AI models are not directly regulated by the emerging “EU Artificial Intelligence Act”. Why is that?
From Model to System
While AI models have value in themselves, they aren’t usable on their own. They are just stored as passive artifacts, much like software binaries. Software is executed “at runtime” on an operating system and the underlying hardware. For AI, the runtime is called “inference”: the model is loaded and can then be used via an interface.
The main effort in AI goes into training. But when dealing with regulation, or when calculating runtime operating costs, inference can quickly become the even more important aspect.
At inference time, the AI model, together with all the software and hardware around it, becomes a usable “system”. And it is only this whole system that may eventually be regulated.
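To make the distinction concrete, here is a minimal sketch in Python, assuming a scikit-learn model serialized with joblib (the file name and input shape are illustrative): the stored artifact is passive, and only the loading-plus-interface code turns it into something usable.

```python
# Minimal sketch of the model/inference distinction, assuming a
# scikit-learn model stored with joblib; path and input shape are
# hypothetical.
import joblib

# The passive artifact: just bytes on disk, no "AI" happening yet.
model = joblib.load("model.joblib")  # hypothetical artifact path

def predict(features):
    """Inference: the loaded model is used via its predict() interface."""
    return model.predict(features)

# Example call; the feature vector shape depends on the trained model.
print(predict([[5.1, 3.5, 1.4, 0.2]]))
```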
Definition, Part 1: “AI System”
The EU AI Act currently defines “AI System” like so:
“artificial intelligence system” (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
It’s all parts combined, not just one (albeit essential) part. The EU AI Act mentions the model a few times, but it also covers training data, tests, and other important system-level aspects.
But the sentence above doesn’t fully define ‘AI system’ yet. The definition is only complete once Annex I is taken into account.
Definition, Part 2: Annex I
What’s added in this annex, titled “ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES”?
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
Section (a) is what one would expect; this is clearly about AI. Sections (b) and (c) might be a bit more surprising, but that’s a topic of its own.
AI as a System
The EU AI Act’s definition explicitly states that an AI system’s objectives must be defined by humans. This is another aspect of “system”: the purpose is clearly defined, so one can reason about whether the system meets its goal or not. Importantly, the purpose also determines whether the AI falls under regulation at all, and if so, into which category it falls (e.g. “high-risk”). This assessment decides whether an organisation needs to invest additional time and money into getting its system certified.
The system also exhibits properties the model cannot provide on its own: ensuring that its users are authorized, scaling under high load by putting more model instances to work, protecting the model against tampering, and many more; a sketch of such system-level concerns follows below.
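As a rough illustration of such system-level properties, here is a sketch assuming a FastAPI service around the model; the token check, paths, and model object are hypothetical placeholders, not a production authorization scheme.

```python
# Sketch of system properties around a model, assuming FastAPI;
# token set and artifact path are illustrative placeholders.
from fastapi import FastAPI, Header, HTTPException
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical model artifact
API_TOKENS = {"secret-token-123"}    # hypothetical authorized tokens

@app.post("/predict")
def predict(features: list[list[float]], x_api_token: str = Header(...)):
    # System property: only authorized users may call the model.
    if x_api_token not in API_TOKENS:
        raise HTTPException(status_code=401, detail="unauthorized")
    # System property: the artifact stays behind the interface, so
    # callers cannot read or tamper with the model file directly.
    return {"predictions": model.predict(features).tolist()}

# Scaling under load would mean running several instances of this
# service behind a load balancer, again outside the model itself.
```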
What does that mean for your own AI Systems?
It’s very important to accompany the development of an AI system with supporting activities from kick-off to deployment: documentation, checklists, archiving of training data, and so on. This way, an AI model becomes a conformant AI system.
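One way to keep such accompanying information at hand is to record it in a machine-readable form; below is a minimal sketch using a plain Python dataclass. The fields and example values are purely illustrative and not an official EU AI Act documentation schema.

```python
# Sketch of machine-readable accompanying documentation; the fields
# are illustrative, not an official EU AI Act schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Documentation collected from kick-off to deployment."""
    system_name: str
    intended_purpose: str             # the human-defined objective
    risk_category: str                # e.g. "high-risk", per assessment
    training_data_archive: str        # where the training data snapshot lives
    checklists_completed: list[str] = field(default_factory=list)

# Hypothetical example record for a fictional system.
record = AISystemRecord(
    system_name="fraud-detector",
    intended_purpose="flag suspicious payments",
    risk_category="high-risk",
    training_data_archive="s3://archive/fraud/2023-06/",
    checklists_completed=["data-quality", "bias-review"],
)
```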
Disclaimer: The EU AI Act is emerging, not yet in effect, and might still see substantial changes. We’ll update this article as new information becomes available. If you’re looking for legal advice, ask a lawyer.