
By Jurgita Lapienytė, Editor-in-Chief at Cybernews

A recent Sky News investigation has revealed that doctors in the UK are using unapproved artificial intelligence software to record and transcribe patient meetings. 

This has prompted warnings from NHS England about potential breaches of data protection rules and risks to patient privacy. 

The revelation raises cybersecurity concerns that healthcare institutions cannot afford to ignore if they value patient privacy and safety, as well as their own reputation.

Data Protection and Compliance Risks

The core issue is the use of AI tools, such as Ambient Voice Technology (AVT), that do not meet minimum national standards for data security and clinical safety.

Unapproved software may lack robust encryption, secure data storage, and strict access controls – essential features for protecting sensitive patient information. 
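To make the access-control point concrete, here is a minimal sketch of the kind of consent-and-role check an approved clinical tool would be expected to enforce before releasing a transcript. All names here (the roles, the function) are illustrative assumptions, not taken from any real NHS system.

```python
# Hypothetical sketch: strict access control for patient transcripts.
# Only named clinical roles may read a transcript, and only when the
# patient has consented to AI-assisted recording.

ALLOWED_ROLES = {"clinician", "records_officer"}

def can_read_transcript(user_role: str, patient_consented: bool) -> bool:
    """Grant access only to authorized roles, and only with patient consent."""
    return user_role in ALLOWED_ROLES and patient_consented

print(can_read_transcript("clinician", True))     # True
print(can_read_transcript("receptionist", True))  # False
```

The deny-by-default shape matters: anything not explicitly permitted is refused, which is exactly the property unapproved consumer-grade tools tend to lack.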

NHS England has explicitly warned that such unauthorized tools could violate data protection regulations, potentially exposing patient data to unauthorized access or misuse.

Potential Cybersecurity Threats

Cybersecurity threats are evolving fast across sectors, and healthcare is no exception. According to Cybernews' Business Digital Index, 65% of the 100 largest US hospitals and health systems have suffered a recent data breach. Do the UK's health institutions have stronger safeguards?

At a time when everyone should be doing more than the bare minimum to protect user data, the use of unapproved software comes as a surprise.

What could go wrong? 

Unapproved software may transmit or store recordings in insecure cloud environments, increasing the risk of data breaches.

Without rigorous vetting, third-party AI providers may have access to sensitive conversations, raising concerns about data monetization or unauthorized sharing.

Non-compliant tools may not maintain detailed logs of who accesses patient data, complicating investigations in the event of a security incident.
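The last point, detailed access logs, can be sketched in a few lines. This is an illustrative assumption of what a compliant tool would record for each access, not the logging format of any real system: one structured, timestamped entry per event, answering who touched which record, when, and how.

```python
import json
from datetime import datetime, timezone

def audit_event(user_id: str, record_id: str, action: str) -> str:
    """Build one structured JSON audit entry for an access to patient data.
    A real system would append these to write-once storage so the trail
    cannot be quietly edited after the fact."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "record": record_id,
        "action": action,
    }
    return json.dumps(entry)

print(audit_event("dr_smith", "patient-1042", "read_transcript"))
```

Without entries like these, incident responders investigating a breach have no way to reconstruct who accessed a recording, which is precisely the gap the article describes.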

Balancing Innovation and Security

While AI-driven solutions like AVT offer clear benefits – such as reducing administrative burdens and allowing clinicians to focus more on patient care – these advantages must be weighed against the imperative to safeguard patient confidentiality. 

The NHS has acknowledged the potential of AVT but emphasizes the need for strict compliance with clinical safety and data security standards.

Best Practices for the Future

It should be written in stone that only AI tools that meet national cybersecurity and data protection standards should be deployed in clinical settings.

Patients should be informed about the use of AI in their consultations and assured that their data will be handled securely.

Regular audits are also important. Ongoing monitoring and auditing of AI systems can help detect and mitigate vulnerabilities before they are exploited.

Cybersecurity Is a Must

The growing use of AI in healthcare is inevitable and offers significant promise. However, the adoption of unapproved software creates real risks to patient privacy and data security. Healthcare providers must prioritize compliance with established standards and ensure that cybersecurity remains non-negotiable in patient care.