The UK government is using deep learning algorithms, under the catch-all umbrella of AI, to help its various departments make decisions on welfare benefit claims, identify cases of fraud, and even scan passports. That's probably no surprise whatsoever, but as one investigation suggests, it's opening a massive can of worms for all concerned.
If you’re wondering what kind of AI is being talked about here, then think about upscaling. The systems employed by the government aren’t too dissimilar from those developed by Nvidia for its DLSS Super Resolution technology.
The model for that is trained by feeding it millions of very high-resolution frames from hundreds of games. So when the algorithm is then fed a low-resolution image, it can work out how the frame is most likely to appear once it's been upscaled.
DLSS upscaling uses a fairly standard routine to make the jump from 1080p to 4K, for example, then runs the AI algorithm to correct any errors in the image. But as with all such systems, the quality of the end result depends massively on both what you feed into the algorithm and what data it was trained on.
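The two-step process described above can be sketched in a few lines of Python. This is purely illustrative: the "standard routine" is stood in for by a naive nearest-neighbour upscale, and the trained network's correction pass is stood in for by a hypothetical smoothing function, just to show where each step slots in.

```python
import numpy as np

def nearest_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive nearest-neighbour upscale: repeat each pixel `factor`
    times along both axes (the non-AI part of the pipeline)."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def fake_ai_correction(img: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network's correction pass. A real
    DLSS model predicts detail; here a simple 3x3 box blur merely
    marks where that step would run."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].mean()
    return out

# A tiny 2x2 "frame" of brightness values.
low_res = np.array([[0.1, 0.9],
                    [0.9, 0.1]])

high_res = nearest_upscale(low_res, 2)
print(high_res.shape)  # (4, 4)

corrected = fake_ai_correction(high_res)
print(corrected.shape)  # (4, 4)
```

The point of the sketch is the structure, not the maths: the quality of `corrected` can only ever be as good as what the correction step learned from.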
An investigation by the Guardian into the UK government's use of AI highlights what happens when there are problems with both of those aspects. For example, the publication reports that the Home Office was using AI to read passports at airports, to help flag up potential fake marriages for further investigation.
The Guardian says an internal Home Office evaluation shows the algorithm is highlighting a disproportionate number of people from Albania, Greece, Romania, and Bulgaria. If the model was trained on data that already over-emphasises particular nationalities, then the AI will be just as biased in its calculations.
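It's worth seeing how mechanically that bias carries through. Here's a deliberately crude, hypothetical "flagging model" that simply learns the rate at which each nationality was flagged in its training data and reuses those rates as risk scores; the nationalities and numbers are invented for illustration and bear no relation to the Home Office system.

```python
from collections import Counter

# Invented, skewed training data: past cases as (nationality, flagged)
# pairs. One group was investigated far more often than the other.
training_cases = (
    [("Albania", True)] * 80 + [("Albania", False)] * 20 +
    [("France", True)] * 5 + [("France", False)] * 95
)

flagged = Counter(nat for nat, hit in training_cases if hit)
total = Counter(nat for nat, _ in training_cases)

# The "model": each group's historical flag rate becomes its risk score.
risk_score = {nat: flagged[nat] / total[nat] for nat in total}

print(risk_score)  # {'Albania': 0.8, 'France': 0.05}
```

The model faithfully reproduces whatever skew was in its inputs: if past investigations over-targeted one group, every new applicant from that group inherits the inflated score regardless of their own case. Real systems are far more sophisticated, but the failure mode is the same.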
News reports of government organisations getting things seriously wrong because of an over-reliance on AI aren't rare. The hype surrounding the potential of artificial intelligence has led to the likes of ChatGPT being treated as one of the most important inventions of our time, and yet it can easily produce some highly questionable and shocking results.
The UK government naturally defends the use of AI and in the case of welfare benefit claims, says that the final decision is made by a person. But does that person base their decision on the algorithm’s output or do they go back and check everything again? If it’s the latter, the use of AI has been a total waste of time and money.
But if it’s the former, and the AI has been trained on information that’s already biased, then that final, ultimate decision made by a real living human being will be biased too. Even seemingly innocent use scenarios are affected by this, such as identifying which people are most at risk if a pandemic occurs, as the wrong people could be selected, or those most in need skipped entirely.
Such is the potential for deep learning to be used in all things, for good and bad, that no government is going to turn its back on it now. What’s needed is greater transparency behind the algorithms used, along with allowing experts access to the code and dataset to ensure that the systems are used fairly and appropriately.
In the UK, such a move has already taken place, but when you are simply ‘encouraged to complete an algorithmic transparency report for every algorithmic tool’, there’s not much incentive or legal pressure for any organisation to do so.
This may change in time but until then, I’d like to see a widespread training programme for all government employees who are using AI in their roles. Not on how to use it, but on understanding its limitations, so that people are in a better position to question an algorithm’s output.
We’re all biased, one way or another, but we have to remember, so is AI.