Top 3 priorities when building an AI-based product

Easy measures you should consider when building an AI system that complies with governance requirements & regulation

October 2, 2023 - 6 min.

Let’s take a look at some initial measures that support governance and will also help with upcoming AI regulation.

Why bother?

Adding AI to your offerings or even developing an innovative AI product from scratch is exciting. But the moment you make your AI public or distribute it to partners and customers, it’s out there in the open. From that point onwards, it needs to adhere to the quality standards that your customers and the public expect. Some things will come up that you didn’t have time for, or weren’t even aware of, during development.

In real life, your AI will be attacked and scrutinized more than you anticipated. If it lacks quality and you cannot fix it, you might be out of business swiftly.

Even though you can’t prepare in advance for everything that will go wrong, there are still some measures you can take.

Setting up policies and processes to prevent bad things from happening is an important part of your company’s AI governance.

Obligations

AI is a high-tech system. Any such system that is used either directly or indirectly by end users needs to adhere to the state of the art. For more info about “state of the art”, see our article on the topic.

While the state of the art is rather loosely defined and evolves over time, there’s also regulation, which imposes hard rules laid down by law. For AI, such rules are currently being drafted by regulators in Europe and the U.S., but they are not yet binding law (as of autumn 2023). Still, the top 3 priorities below anticipate those regulations and will be helpful in either case.

And finally, as a business, any company needs to assess risks regularly and put measures in place to deal with them. This also applies to AI risks.

As with every product that is used by consumers, the FTC keeps an eye on your AI products, too. For example, they might take action when your “AI-based” product doesn’t actually contain AI. You should also avoid advertising properties of your AI-driven product that are not actually there or don’t work properly. Otherwise, customers or competitors might complain to the FTC.

1. Keep all training data and models

Say a customer contacts you and reports that your AI model exhibits certain biases, a common problem with AI. You verify the claim and can reproduce the problem with your copy of the model. Now what?

Bias is introduced at training time, so you need to at least take a look into your training data. But the problem might just as well originate in some upstream dataset from which your actual training data was derived. In order to do that reliably, you need to be able to access and identify those exact original datasets. Therefore, keep all the data you used for productive models. Maintain a good record of data and model lineage, so that the input data is at least referenced. If datasets change often, keep a copy of each dataset revision that is used in AI training. Open-source products like mlflow can help with that.
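Below is a minimal sketch of what such lineage tracking could look like with mlflow. The dataset path, run name and the train_model() call are hypothetical placeholders, not prescriptions; adapt them to your own pipeline.

```python
# Sketch: recording dataset and model lineage with mlflow.
# DATASET_PATH, the run name and train_model() are hypothetical placeholders.
import hashlib

import mlflow
import mlflow.sklearn

DATASET_PATH = "data/customers_2023-09.parquet"  # the exact dataset revision used

def file_hash(path: str) -> str:
    """Content hash, so the exact dataset revision can be identified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with mlflow.start_run(run_name="churn-model-v7"):
    # Reference the exact input data of this training run
    mlflow.log_param("dataset_path", DATASET_PATH)
    mlflow.log_param("dataset_sha256", file_hash(DATASET_PATH))
    # Keep a copy of the dataset revision alongside the run
    mlflow.log_artifact(DATASET_PATH, artifact_path="training_data")

    model = train_model(DATASET_PATH)          # your own training code
    mlflow.sklearn.log_model(model, "model")   # store the resulting model with the run
```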

Only this way can you identify and rectify the root causes of any model bug introduced through the training data.

2. Documentation: Create a Model Card or System Card

Minimal documentation is helpful and will be read. Lengthy manuals with dozens of pages typically won’t. For AI, a brief documentation style – called ‘Model Card’ – has been proposed and is increasingly used in practice. A Model Card can be as short as a one-pager. It is structured similarly to a data sheet and captures all the important information about a model.

The Model Card starts with an identification of the model (name, release number, creation date), whom to contact, a copyright notice, the license under which the model is distributed, and other vital properties. If you derived the model from parent models or used certain types of AI, you should put information about that in the card.

This is directly followed by the purpose and intended use of the model. It should give users a good understanding of what they can expect the model to deliver, and what probably won’t work. You can even explicitly exclude certain usages.

Some Model Cards give details about the training data used. This is a form of transparency that improves trust and gives a glimpse into the quality of the underlying data, as well as the effort you put into compiling the model’s inputs.

Furthermore, some metrics and other properties should be added to the document as well. It’s not a problem if they are technical and will only be understood by data scientists.
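To give a sense of the shape of such a card, here is a minimal sketch of the fields described above, written out as a small Python script. All names, values and metrics are illustrative placeholders, not real figures.

```python
# Sketch: capturing Model Card fields in a structured file.
# Every value below is an illustrative placeholder.
import json

model_card = {
    "model_name": "churn-predictor",
    "version": "1.3.0",
    "created": "2023-09-15",
    "contact": "ml-team@example.com",
    "license": "Proprietary",
    "derived_from": "distilbert-base-uncased",  # parent model, if any
    "intended_use": "Estimate churn risk for existing B2C customers.",
    "excluded_uses": [
        "Credit scoring",
        "Automated decisions about individuals without human review",
    ],
    "training_data": "Internal CRM export, revision 2023-09 (see lineage records).",
    "metrics": {"accuracy": 0.91, "f1": 0.87},  # from the held-out test set
}

# Store the card next to the model artifacts
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```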

For complete AI systems, i.e. the productive AI model(s) together with all the components that make them consumable, a System Card might be good to have, too. A System Card is a more extensive documentation than a Model Card and captures other critical parts of your product.

But at least a Model Card should be created for every model.

3. Monitor your production system

When designing an AI model, it’s not possible to anticipate every possible use – or misuse – by a creative end user beforehand. Therefore, it is essential to observe production systems at all times. That means keeping at least a log of all inputs together with the outputs – something you might be doing already to improve training of future models. Regular, maybe daily, reviews of inference logs should quickly surface any bugs or abuse. Enterprises might become liable if they fail to act when their AI models produce malicious responses. Having those records enables your production team to spot issues and react early.
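A minimal sketch of such input/output logging is shown below. The predict() call stands in for whatever your production model call looks like, and the log file location is an assumption as well.

```python
# Sketch: logging every model input together with its output for later review.
# predict() and the log file location are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="inference.log", level=logging.INFO)

def answer_request(user_input: str) -> str:
    output = predict(user_input)  # your production model call
    # Keep a structured record of each input/output pair
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "output": output,
    }))
    return output
```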

Even having a “big red emergency button” might be advisable, so you can switch off your AI system when things go really wrong. It might be better to return an error to all users than to cause more harm to them and your business.
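One simple way to implement such an emergency stop is a flag that operations can flip without redeploying anything. The file path and the serve() wrapper below are illustrative assumptions.

```python
# Sketch: a "big red button" implemented as a kill-switch flag.
# KILL_SWITCH_FILE and the inference call are illustrative assumptions.
import os

KILL_SWITCH_FILE = "/etc/myai/disabled"  # operations creates this file to stop the AI

def serve(user_input: str) -> str:
    if os.path.exists(KILL_SWITCH_FILE):
        # Fail safely instead of producing potentially harmful output
        return "The AI service is temporarily unavailable."
    return answer_request(user_input)  # normal inference path (see the logging sketch above)
```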

Some AI systems even employ artificial intelligence itself to solve this problem: another AI model acts as a real-time watchdog over the main model’s usage. This way, you don’t have to do all of the monitoring yourself.
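A sketch of that pattern: the main model produces an answer, and a second, independent model checks it before it goes out. Both model calls below are hypothetical stand-ins for whatever models you actually deploy.

```python
# Sketch: a second AI model acting as a real-time watchdog over the main model.
# main_model_predict() and classify_harm() are hypothetical stand-ins.
def guarded_answer(user_input: str) -> str:
    output = main_model_predict(user_input)      # primary model
    verdict = classify_harm(user_input, output)  # watchdog / moderation model
    if verdict == "harmful":
        # Block the response and flag the event for human review
        return "Sorry, I can't help with that request."
    return output
```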

Bonus: What else?

Of course, there’s always more that can be done: a risk management audit, more tests, data quality verifications, and so on. Applying AI risk frameworks helps to systematically cover all relevant potential issues. An incident response plan can help when an AI system fails.

Setting up the top 3 measures (store data, have a Model Card, monitor production) lays the foundation for all future governance enhancements. And whatever regulators come up with, all three will support you in creating lawful AI.

We take the risk out of your AI.
Do you need to make your AI products conformant with regulation? algo consult specializes in helping companies to master regulation. Let's go through the process together.