Regulating AI: Frankenstein's Monster or Guardian Angel?


Article by the Minister of Truth

Regulating and controlling AI may seem like trying to nail a jelly to a wall. But that isn’t stopping the world’s governments and leading tech companies from having a crack at it.

The European Union, for example, has bravely stepped into the ring, drafting an EU Artificial Intelligence Act that attempts to assign one of three risk categories to AI apps. The ‘unacceptable risk’ category, for example, would include government-run social scoring algorithms as used in China. It’s not hard to appreciate how intrusive and coercive such programs are. You’d never dare utter even a mildly critical political opinion on social media again. Just the way repressive regimes like it.

‘High-risk’ would include CV-scanning tools that rank job applicants, for example. These would have to conform to specific legal requirements, presumably to ensure that the rankings do not discriminate on the basis of protected characteristics such as age, race, sex or religion. The inherent biases within some early large language model (LLM) generative AI programs have highlighted the need for close scrutiny of such tools. If AI has learned from racist or sexist data, it will reflect those biases in its answers, unless countermeasures are put in place at the design stage.
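To make that concrete, here is a minimal, purely illustrative sketch of the kind of basic check an auditor or regulator might run on a CV-ranking tool: comparing shortlisting rates across a protected characteristic. The tool, data and figures below are invented for the example, not drawn from any system mentioned in this article.

```python
# Purely illustrative sketch: a crude disparity check on a hypothetical
# CV-ranking tool. All names and figures below are invented for the example.

def selection_rates(candidates, shortlisted_ids, attribute):
    """Return the shortlisting rate for each group of a protected attribute."""
    rates = {}
    for group in {c[attribute] for c in candidates}:
        members = [c for c in candidates if c[attribute] == group]
        chosen = [c for c in members if c["id"] in shortlisted_ids]
        rates[group] = len(chosen) / len(members)
    return rates

# Invented example: six applicants, four shortlisted by the hypothetical tool.
candidates = [
    {"id": 1, "sex": "F"}, {"id": 2, "sex": "F"}, {"id": 3, "sex": "F"},
    {"id": 4, "sex": "M"}, {"id": 5, "sex": "M"}, {"id": 6, "sex": "M"},
]
shortlisted_ids = {3, 4, 5, 6}

# A large gap between groups' rates (here roughly 0.33 vs 1.0) is exactly the
# kind of signal that would trigger closer scrutiny under a 'high-risk' regime.
print(selection_rates(candidates, shortlisted_ids, "sex"))
```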


[AI-generated image]

The EU’s third risk category is, er, everything else not included in the first two categories. Which sounds pretty broad-brush to me. While libertarian zealots will welcome such a light-touch approach as a sign that legislators have no desire to stifle innovation and competition, others will see this as a tacit acknowledgement that the horse has well and truly bolted and cannot easily be recaptured.

On the plus side, the EU has at least attempted to enshrine those touchy-feely liberal and progressive democratic values, such as freedom of thought and speech, into law. For example, tech companies will have to notify us when we’re interacting with a chatbot, no matter whether it has passed the Turing test with flying colours and we can’t tell the difference between human and machine. The same notifications will have to be given when we’re looking at AI-generated content, interacting with emotion recognition systems, or having our biometrics scanned.

It may seem odd, particularly to younger generations groomed by social media companies to haemorrhage personal data in return for cute cat videos, but we have privacy and data protection rights. That data is our data and companies cannot just use it in any way they please. Scraping facial images from CCTV and social media to build facial recognition systems without consent is already banned in the EU, as are social scoring and predictive policing based on algorithms alone (watch the film Minority Report to understand the full dangers of subverting the ‘innocent until proven guilty’ principle).

So it’s comforting to know that these obligations imposed on AI practitioners will be legally binding - with hefty fines of between 1.5% and 7% of global sales for non-compliance - even if there’s doubt about how effectively the EU will be able to enforce them.

The point about data and who owns it is important for copyright, too. People spend a lot of time and money trying to protect the things they invent and original work they produce. A rampaging AI foundation model shouldn’t be allowed to hoover up whatever data it likes, in breach of patents, copyright, ownership and privacy laws. At least the EU is attempting to grasp this nettle, even as law enforcement authorities argue that surveillance is necessary to protect these freedoms. As ever, there is a trade-off between freedom and security.

But what is a little odd is the EU’s idea that the strictest rules will only apply to the most powerful AI models, as measured by the computing power needed to train and operate them. How do you accurately measure this, and can regulators rely on the big tech companies to be completely frank? Surely an incentive to fudge the truth is built into this approach?
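For a sense of why this is slippery, consider the back-of-envelope arithmetic involved (the model below is hypothetical and the figures are illustrative, not taken from the Act or from any real system): training compute is typically estimated with a rough rule of thumb of about six floating-point operations per parameter per training token, and the Act’s threshold for presuming a general-purpose model poses ‘systemic risk’ has been set at around 10^25 such operations, so the classification ultimately rests on estimates and disclosures that only the developer can provide.

```python
# Back-of-envelope illustration only: the "6 * parameters * tokens" rule of
# thumb for training compute, compared against the EU AI Act's 1e25 FLOP
# threshold for presuming 'systemic risk'. The model below is hypothetical.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute in floating-point operations."""
    return 6.0 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~8.40e+23
print("Above systemic-risk threshold:", flops >= EU_SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```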

[AI-generated image]

That said, a legally binding approach to AI means that anyone who wants to do business in the world’s second-largest economic bloc will have to toe the line. This contrasts with the US’s more voluntary approach to AI regulation, encouraging ideas such as watermarking, content labelling and transparency to help users know what they’re dealing with.

The UK – now no longer in the EU – seems to be following the same ‘pro-innovation’ light-touch approach as the US, using five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Now you don’t have to be a legal whizz to recognise that the word ‘appropriate’ is doing some heavy lifting there, giving businesses masses of wiggle room to challenge definitions. Regulators are going to have their work cut out.

In February 2024 the UK government claimed it was “investing more in AI safety than any other country in the world” and announced a £100-plus million investment “to help realise new AI innovations and support regulators’ technical capabilities”. It has set up the Frontier AI Taskforce – a state-funded body staffed by AI experts from academia and national security organisations “dedicated to the safety of advanced AI” – as well as the Artificial Intelligence Safety Institute, which will evaluate new AI applications. But, unlike the EU, it has stopped short of new legislation.

Given the huge impact AI could have on the UK economy and national security, this may appear to be a curiously relaxed approach to take.

Meanwhile, the big tech companies are busy forming alliances and setting up talking shops to show the world’s governments that they take this issue seriously, all while furiously competing with each other to dominate a sector potentially worth $1 trillion a year. No private sector companies ever want the heavy hand of government stifling their freedom to make money.


The question we have to ask ourselves is: which of these companies is likely to be the most responsible and responsive to the legitimate concerns of its customers?

Beyond Control?

A final thought. OpenAI, the organisation behind ChatGPT, has set up a ‘superalignment’ team tasked with working out how to stop a super-intelligent AI running amok and fulfilling the prophecies of every dystopian sci-fi movie ever made.

It’s great that the tech utopians recognise the risks but not so comforting when you realise even they’re not sure yet how to control the beast, or even how it does what it does so effectively. How can we regulate and control something its creators can’t even explain?

AI has the potential to help us achieve great things in so many fields. But it also has the potential to cause great harm in the hands of warmongers, repressive regimes and criminals. Inevitably, one thinks of Mary Shelley’s Frankenstein, whose famous monster says: “I have love in me the likes of which you can scarcely imagine and rage the likes of which you would not believe. If I cannot satisfy the one, I will indulge the other”.

In other words, it could go either way. Regulation is the world’s attempt to keep AI on the side of the angels.
