UK needs system to record AI misuse and errors, think tank says

The Centre for Long-Term Resilience (CLTR) recommends that the new government create a system to record AI failures in public services and consider building a central hub where AI-related incidents from across the country can be collected. The think tank says such a system is vital if the technology is to be used successfully.

The report cites 10,000 AI “safety incidents” recorded by news outlets since 2014, which are currently listed in a database compiled by the Organisation for Economic Co-operation and Development (OECD).

Recent examples recorded in the database include approximately 200,000 people being incorrectly flagged by a government algorithm as potentially guilty of benefit fraud, a faked video of Keir Starmer appearing to abuse staff, and Network Rail's monitoring of passengers at stations through facial recognition software.

The report argues that current UK regulation is piecemeal and lacks an effective framework for incident reporting. If this situation continues, the Department for Science, Innovation and Technology (DSIT) will lack visibility and be unable to act quickly enough when incidents occur: biased outputs from foundation models, failures in the government's own use of AI (as in the benefit fraud case), misuse of AI to create and distribute disinformation, or cases where AI is misused to facilitate harm to people.

CLTR sets out three reasons why a central reporting mechanism is the best way forward in such a complex mosaic of risks. First, safety must be monitored in real-world contexts so that regulation and AI deployment can be corrected as necessary. Second, a central hub would expedite responses to, and investigations of, major incidents. Finally, such a hub could serve as an early warning system for larger-scale harms that may arise in the future.

Talking with The Guardian, Tommy Shaffer Shane, policy manager at CLTR and author of the report, said:

“Incident reporting has played a transformative role in mitigating and managing risk in safety-critical industries such as aviation and medicine. But it is largely missing from the regulatory landscape being developed for AI. This is leaving the UK government blind to incidents that are emerging from the use of AI, inhibiting its ability to respond.”