AI governance challenges and UK approach analysed in govt report

  • 31 August 2023

An interim report published today by the Science, Innovation and Technology Committee has highlighted twelve essential challenges of AI governance that must be addressed and raised concerns about the UK-specific approach.

The Committee launched an inquiry on 20 October 2022 to examine the impact of AI on different areas of society and the economy, whether and how AI and its different uses should be regulated, and the UK Government’s AI governance proposals.

AI governance challenges

The report identifies a series of challenges of AI governance for policymakers, many of which are relevant for the health technology industry. These are:

  1. The Bias challenge: AI can introduce or perpetuate biases that society finds unacceptable.
  2. The Privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
  3. The Misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
  4. The Access to Data challenge: The most powerful AI needs very large datasets, which are held by few organisations.
  5. The Access to Compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
  6. The Black Box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
  7. The Open-Source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
  8. The Intellectual Property and Copyright Challenge: Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
  9. The Liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harm done.
  10. The Employment challenge: AI will disrupt the jobs that people do and that are available to be done. Policymakers must anticipate and manage the disruption.
  11. The International Coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
  12. The Existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.

Chair of the Committee Rt Hon Greg Clark MP said: “Artificial Intelligence is already transforming the way we live our lives and seems certain to undergo explosive growth in its impact on our society and economy.

“AI is full of opportunities, but also contains many important risks to long-established and cherished rights – ranging from personal privacy to national security – that people will expect policymakers to guard against.

“Our interim report identifies twelve challenges that must be addressed by policymakers if public confidence in AI is to be secured.”

The UK-specific approach and the risks involved

In March of this year, the UK Government set out its proposed “pro-innovation approach to AI regulation” in a white paper. While it did not intend to introduce AI-specific regulation immediately, it said it anticipated the need to legislate to introduce “a statutory duty on our regulators requiring them to have due regard to the principles” of AI governance.

The Committee states in today’s report that this “commitment alone – in addition to any further requirements that may emerge – suggests that there should be a tightly-focused AI Bill in the new session of Parliament”.

The Committee adds in the report: “Our view is that this would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader.

“We see a danger that if the UK does not bring in any new statutory regulation for three years it risks the Government’s good intentions being left behind by other legislation – like the EU AI Act – that could become the de facto standard and be hard to displace.”

Clark added: “The UK’s depth of technical expertise and reputation for trustworthy regulation stand us in good stead and our Committee strongly welcomes the AI Safety Summit taking place at Bletchley Park in November.

“However, if the Government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.

“We will study the Government’s response to our interim report, and the AI white paper consultation, with interest, and will publish a final set of policy recommendations in due course.”

Back in June, the importance of governance and regulation of AI in health technology was analysed in detail, supporting the view that adopters and regulators of AI in the health industry need a clear statutory framework and direction in order to deploy AI confidently and safely.

