Evidence from the ground contributes to Government Inquiry

Topic: Governance
The AI Centre submitted evidence for the Government Inquiry on AI governance.

In the emerging AI ecosystem, the AI Centre for Value Based Healthcare is uniquely positioned to provide the real-world experience needed to bridge the gap between ethical principles and the aims of government policy. With our consortium partners spanning the whole product lifecycle – from research and development to deployment – we are well placed to comment on the use of AI in clinical settings. Our expertise in this area is further strengthened by our Senior Research Data Governance Manager, Robin Carpenter, who is pioneering ethics and data governance models tailored to this new product lifecycle.

Data ethics and governance is a space that is ever-changing, but one constant is the need for transparency and dialogue with the public.

Robin Carpenter, Senior Research Data Governance Manager, AI Centre

In November 2022 the Government launched an inquiry:

The use of artificial intelligence (AI) has increased significantly in recent years. It offers a range of potential benefits such as quicker analysis of large datasets allowing more accurate information, forecasts and predictions, and more personalised public services. However, there are a number of concerns, such as the possibility of biased algorithms, a lack of transparency and unexplained decision-making. The Government is expected to publish a white paper on AI governance later this year to address these issues.

Further details are available on the official government inquiry webpage.

Our evidence 

As part of our commitment to transparency and collaborative learning amongst peers, we have submitted a document detailing our real-world learnings from the field. The full document is available to explore; for a quick overview:

How effective is current governance of AI in the UK?   

Current governance of AI in healthcare is inadequate across the AI lifecycle. Healthcare AI development needs best practice in the form of guidance; best practice built into infrastructure, as we have done with FLIP; and more resources to put that governance into practice. Furthermore, deployment and monitoring of this AI need a clinical governance strategy.

What measures could make the use of AI more transparent and explainable to the public?  

Explainability and transparency can be supported in three ways: first, by holding a national dialogue to clarify the definition of AI, because the public need to understand what AI is if they are to understand what AI does; second, by defining explainability; and third, by enforcing public involvement.

How should decisions involving AI be reviewed and scrutinised in both public and private sectors? 

Reviewing healthcare AI involves evaluating evidence generated on its use. However, generating evidence to evaluate AI in the NHS currently requires expensive, ad hoc infrastructure, which is a problem that can be addressed by our platform, AIDE.   

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?  

Regulation and regulatory bodies already exist, but the ecosystem that supports them needs to be more comprehensive and mature before regulation is efficiently enforced.  

To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?  

The framework for data protection is fit for purpose, but Article 22 of the UK GDPR needs more guidance. Article 22 addresses automated decision-making, including profiling. The Medical Devices Regulations also need updating.

What lessons, if any, can the UK learn from other countries on AI governance?   

As no regulator was set up specifically for AI, there may be utility in a UK AI Board to which regulators can submit AI-related questions.

Submitting evidence to the government is a valuable way of supporting the safe and effective use of AI. As the regulatory landscape continues to evolve, it is important for stakeholders to stay informed and engaged in the development of policies and guidelines related to AI governance. By maintaining this transparency, we can help address public “nervousness” surrounding the use of AI in healthcare.