Nicola Byrne: ‘Digital formats will always have an inherent risk of attack’
- 2 January 2025
As a practising psychiatrist, Dr Nicola Byrne, the National Data Guardian for health and social care in England, is acutely aware of the significance of health data for patients.
“Working in psychiatry, people are sharing personal information that they’ve perhaps never shared with anyone before, so that gives you a deep awareness of how important confidentiality is,” Byrne told Digital Health News.
Ahead of her appearance at Rewired 2025, we spoke to Byrne to find out more about her views on the federated data platform (FDP), cyber security and the forthcoming NHS 10 year health plan.
How does your career in psychiatry relate to your role as the National Data Guardian?
I was previously a chief clinical information officer and that gave me an enthusiasm for using data to improve people’s care.
Being a clinician is helpful in that it gives you a sense that healthcare is a relationship business first and foremost, and nothing good happens unless there are good working relationships between patients, colleagues and organisations. That holds true whether you’re talking about people’s individual care and treatment, or whether you’re talking about large-scale digital innovations and implementation.
Fundamentally, as a clinician, I get a blast of frontline reality, and that’s useful to make sure that I remain grounded in how things will work in practice, whether that’s policies or guidelines. It’s important to keep that perspective.
What privacy issues around patient data does the use of AI in healthcare bring up?
There are always going to be challenges about maintaining people’s confidentiality and privacy, but also maintaining their trust that data uses are safe, appropriate, ethical and for the public benefit. With AI, the question is how the technology potentially minimises or amplifies risks.
The thing I’d like to focus on with AI is the question of how humans relate to the technology and who is in the driving seat. What decisions are humans making about it? What decisions and choices are we making about what problems we seek to use it for, how it’s designed and how it’s deployed? Those are essentially human problems.
How do you build doubt and uncertainty into the use of these tools so they can be questioned and challenged when that’s appropriate? ‘In the public benefit’ is a term that gets used as if that’s always self-evident, but we know from research that it isn’t, and what one organisation or researcher may assume is in the public interest, the public won’t necessarily agree with.
Some major cyber attacks affected the NHS in 2024. How can we reassure the public that their health data is safe?
Firstly, I think we need to be very careful about what we’re reassuring the public about, because the reality is that cyber attacks can and do occur, and false reassurance is no reassurance. The reality is that on occasion, cyber criminals will cause harm to our system and put people’s data at risk. But I think you can seek to reassure the public that realistic and effective safeguards are being put in place, and the system continues to learn as these threats evolve.
Something like the NHS Data Security and Protection Toolkit is an important part of being able to demonstrate how a particular organisation is seeking to improve its protections against attacks and its resilience for when those attacks occur.
The second point is making sure that we always hold the risks and benefits alongside each other when it comes to digitally available data. Anything we want to do in terms of making people’s health information available to the right people, or improving care and treatments in the future, requires data to be in digital formats. That will always carry an inherent risk of attack, but if we were to go back to a world of paper, we would lose so many opportunities to improve things that people’s care and treatment would be compromised.
I think it’s a question of balancing these risks alongside each other, and I hope that if you do that, it’s possible to have quite a mature conversation with the public.
Do you think there is enough transparency about how data is being used within the FDP?
My reflection is that NHS England took the concerns I raised, and those of other stakeholders, very seriously. For example, the engagement portal they developed was a very important transparency step.
I think it’s important to continue to be transparent about how the programme is evolving. That’s not simply about how people’s data is being used but transparency around what the outcomes of the programme are, how it’s progressing, what benefits are being seen and how that value is realised.
As much as possible, I continue to encourage the system to have these conversations out in the open, because I think that helps build both public, but also crucially professional, trust in the platform.
Earlier this year it was reported that data from UK Biobank had been used for race science research. How can we ensure that health data is being used for the right research purposes?
Noting that UK Biobank refuted the story, I think the important thing is that it illustrates how deeply people care about how their data is used. That holds true whether you’re talking about data that’s been pseudonymised or even anonymised. People still care about how it’s used because it’s come from them.
It comes back to the point I was making earlier about good governance and accountability for decision making about data access. Who gets to decide who accesses data, and for what purposes? Are the public involved in that, and are there always experts involved who have expertise in the sensitivities of health and care data?
Are you involved in helping to form the NHS 10 year health plan that will be published in 2025?
I’m not currently actively involved in the 10 year health plan itself. However, I’m actively involved, from my independent standpoint, in the initiatives and work that it will entail. For example, I’m delighted to be quite closely involved in initiatives that have been announced, such as the single patient record.
I’ve been involved in the large-scale public engagement work on that and I’m interested to see how that gets translated into a policy that public and professionals can trust.
Do you think people will choose to opt out of having a single patient record?
I am eager to review the findings from the large-scale public engagement exercise on the single patient record. We do know that many people are surprised to discover that their health and care information is not routinely shared across organisations to support their care and treatment.
Most expect those providing their care to have seamless access to this information, so I anticipate significant public support for a single patient record.
Can you give us a taste of what you’ll be speaking about at Rewired 2025?
I’m going to be talking about the importance of public and professional trust – building it, maintaining it, and not catastrophically losing it when it comes to usage of people’s data. I’ll extend that into thinking about AI tools and what is required to build trust with patients and the public.
Byrne will be speaking at Rewired 2025 at the NEC in Birmingham, 18-19 March 2025. The event is co-headline sponsored by The Access Group and Microsoft. Alcidion, Nervecentre, Solventum and Cynerio are also sponsors.