
Digital Health Series – Part 5: Key Challenges for Software and AI as a Medical Device


In our previous posts, we explored software tools used in digital health and laid the foundations around potential applications of digital technology in healthcare. Our focus then shifted to the regulatory landscape for SaMD, including Artificial Intelligence applications, in three key markets: the United States (US), the European Union (EU) and the United Kingdom (UK). As we conclude this series of articles, we summarise some key challenges that remain to be tackled through the collaborative effort of industry and regulators, so that innovation can reach the general public in a safe and efficient manner.

Regardless of the specific method implemented, the aim of software and Artificial Intelligence technologies is to maximise the use of collected data for more efficient healthcare decision making. AI can be used to analyse data throughout all stages of the product life cycle, but it can also be incorporated into, or combined with, other medical device instruments and hardware, such as devices containing sensors, wearable devices, and surgical medical devices.

The COVID-19 pandemic has encouraged healthcare experts to consider digital models as alternatives for improving the management of, and response to, future public health threats. However, this raises many privacy protection issues. For example, open-source web solutions in the form of contact-tracing applications, whether centralised (e.g., Pan-European Privacy-Preserving Proximity Tracing, PEPP-PT) or decentralised (e.g., Bluetooth tracking), require careful management of sensitive data [1].

Although the field of AI has experienced explosive growth in recent years, its direct implementation in healthcare is not yet widespread, as key regulatory obstacles must still be overcome; some examples are discussed below. This is largely because regulatory agencies have historically regulated physical devices, which change at a much slower cadence and have operating principles and outputs that are easier to understand and predict.

Software can be developed as a stand-alone product or service, as a medical device accessory, or be incorporated into hardware. Consequently, it is critically important for regulators to clearly understand the inner workings of these algorithms, pre-empt points of possible failure, and explore how these may impact the various stages of a healthcare product’s lifecycle.

Key challenges associated with AI as a medical device

High Quality Datasets

AI and other software are capable of analysing large datasets of personal health data such as images, recordings, genetic tests, or other laboratory results. By learning from the data, they can be used in a variety of ways in healthcare: to assist in risk assessment, for modelling, or to improve prognosis. However, in order to perform optimally, AI algorithms require continuous training and validation.
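As a purely illustrative sketch (the data, model, and acceptance threshold below are hypothetical placeholders, not a prescribed process), the need for ongoing training and validation can be pictured as gating every retrained model on its performance against a held-out, representative dataset before release:

```python
# Minimal sketch: each retrained candidate model must pass validation on
# held-out, representative data before it is released (illustrative only).
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def train_and_validate(X, y, min_auc=0.85):
    """Train a candidate model and gate its release on held-out performance."""
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    # Release only if the validation metric meets the pre-specified acceptance criterion.
    return model if auc >= min_auc else None
```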

Data Sharing

For widespread implementation of such software tools, data will need to be extensively shared. Personal healthcare data is highly sensitive, and thus anonymisation and informed consent are critical parts of the process. This will also entail re-assessing risks and ensuring that available cybersecurity measures can keep up with software evolution.

Data Bias

Healthcare delivery often varies by factors such as race, ethnicity, and socio-economic status. It has also been observed that historical data have under-represented minorities, including gender-diverse communities. Therefore, biases already present in healthcare systems and research may inadvertently be introduced into the algorithms. Since data bias already impacts healthcare decisions, it is essential to explore methods to detect and eliminate bias in AI/ML-based systems, given the opacity of how some of these function.
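One common and purely illustrative way to surface such bias is to compare a model’s performance across demographic subgroups rather than only in aggregate. The sketch below assumes hypothetical column names (y_true, y_score, and the grouping column) and an arbitrary flagging threshold:

```python
# Illustrative sketch: compare model performance per subgroup to surface potential bias.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.Series:
    """AUC per subgroup; 'y_true', 'y_score' and group_col are hypothetical column names."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
    )

# Example usage (hypothetical): flag subgroups whose AUC falls well below the overall figure.
# aucs = subgroup_performance(predictions, "ethnicity")
# overall = roc_auc_score(predictions["y_true"], predictions["y_score"])
# flagged = aucs[aucs < overall - 0.05]
```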

Locked vs. Adaptive Algorithms

Many innovators and healthcare professionals are concerned about autonomous algorithms that evolve as they collect data over time. Initially, the FDA only approved “locked” algorithms, which do not use new data to optimise or alter their performance. This is because autonomous or “adaptive” systems change after deployment, dramatically increasing product risk and requiring repeated, periodic FDA review over time. However, “locked” algorithms do not fully exploit AI’s key advantage: its ability to learn from data and provide improved decision making.
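The distinction can be pictured schematically (hypothetical class names, not a regulator-endorsed design): a locked algorithm’s parameters are frozen at release, whereas an adaptive one keeps updating from incoming data, which is exactly what complicates a one-off approval.

```python
# Schematic sketch only: contrasting a "locked" and an "adaptive" algorithm.

class LockedModel:
    """Parameters are frozen at release; new data never changes behaviour."""
    def __init__(self, trained_model):
        self._model = trained_model  # validated once, then fixed

    def predict(self, x):
        return self._model.predict(x)


class AdaptiveModel:
    """Keeps learning from field data, so its behaviour drifts after release."""
    def __init__(self, trained_model):
        self._model = trained_model

    def predict_and_learn(self, x, outcome=None):
        prediction = self._model.predict(x)
        if outcome is not None:
            # Online update: the approved behaviour changes over time,
            # which is what would trigger repeated regulatory review.
            self._model.partial_fit(x, outcome)
        return prediction
```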

Data & Algorithm Transparency

Related to the previous challenge, it is also critical that software developers provide regulators with transparency regarding algorithm annotations and the datasets used during development. For this, details on the data collection methodology, data labelling, and annotation protocols should be provided. Moreover, there should also be transparency as to the correct interpretation of models, so that results are made clear to regulators, end-users, and other humans interacting with the algorithm.
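In practice, much of this transparency can be captured in a structured, machine-readable record that accompanies the model. The fields below are a hypothetical illustration of the kind of dataset and annotation metadata a reviewer might expect, not a prescribed format:

```python
# Hypothetical sketch of a machine-readable "dataset card" capturing the
# transparency elements mentioned above (not a prescribed or official format).
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetCard:
    name: str
    collection_methodology: str        # how, where and when the data were collected
    labelling_protocol: str            # who labelled the data and against which reference
    annotation_guidelines_version: str
    known_limitations: list = field(default_factory=list)

card = DatasetCard(
    name="chest-xray-training-set-v3",  # hypothetical dataset
    collection_methodology="Retrospective collection from 4 hospital sites, 2018-2022",
    labelling_protocol="Dual read by board-certified radiologists, adjudicated by a third",
    annotation_guidelines_version="1.4",
    known_limitations=["Under-representation of paediatric cases"],
)

print(json.dumps(asdict(card), indent=2))  # record shared with regulators and end-users
```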

Educating End-Users of AI

To ensure software healthcare solutions are harnessed in a safe and efficient manner, it is important to educate all stakeholders on the advantages and limitations of such AI-based tools. In the US, such programmes are already made available to clinicians by the National Library of Medicine. When it comes to lay users, thorough usability studies are more important than ever, given that familiarity with newer technologies varies significantly; therefore, the training and user interface required for the adoption of such technologies need to be appropriate to the target population.

 

Despite the challenges covered above, AI-based software is developing rapidly in healthcare applications with a major image-based component, intended for diagnostic or analytical uses. Fields being disrupted include radiology, pathology, dermatology, and ophthalmology. To manage the implementation of such software tools, it is important not only to educate stakeholders but also to provide support in the form of task-force committees charged with implementing these advanced digital tools. Additionally, standards such as “BS/AAMI 34971:2023 Application of ISO 14971 to machine learning in artificial intelligence. Guide” help to identify and implement appropriate controls for ML medical device software, reducing some of the unknowns in terms of regulatory expectations.

Whilst AI as a Medical Device presents some unique challenges, additional concerns need to be addressed for the majority of SaMD. These include:

  1. Lack of international harmonisation on qualification, classification, and regulation overall for such products. While IMDRF’s harmonisation guidelines help in this direction, different regions still interpret the output of such efforts differently.
  2. The timelines for concluding a conformity assessment of a medical device or a clinical trial are not aligned with the speed at which software products change; these are lengthy processes tailored to physical devices that change infrequently. Current validation processes require the version being evaluated to be the one that is then made available on the market. By the time any of these activities are completed, the manufacturer would have to repeat part or all of the aforementioned validation steps, since new or updated features and functionalities would have been developed in the meantime. Note that this does not even take adaptive AI into consideration.
  3. Cybersecurity is an area of increased concern for regulators as well as industry, since vulnerabilities of devices with network access could result in significant, even if indirect, clinical risks for patients, in particular by preventing a medical device from delivering its intended medical outcomes.
  4. Similarly, Information Security is a field of increased scrutiny due to the substantial amount of sensitive/personal data collected, manipulated, and possibly shared by devices for medical, administrative or research purposes. Robust controls and transparency in the use of the data are more important than ever.
  5. While distributing software devices internationally is much easier than distributing physical devices, localisation is crucial for the successful adoption and safety of such products. This includes not only language requirements or differences amongst patient populations, but also tailoring to the intricacies of each healthcare system, from roles to standard practices in each region.
  6. App stores effectively hold the role of distributors, and sometimes importers, of medical device software apps, yet the responsibilities associated with such economic operator roles are yet to be fully implemented.[2]

Finally, researchers have proposed that regulatory agencies begin to view software as a medical device through a “system” lens. Regardless of whether the software is developed as a stand-alone product or not, it will be incorporated into an ecosystem of human users, interpreters, insurers, and other stakeholders, and this ecosystem must be evaluated in its entirety. However, this would require regulatory agencies to move beyond the regulation of products alone and to provide guidelines and exercise jurisdiction over medical practice at large.[3]

 

References:

[1] N. Naik et al., 2022

[2] Sadare, O., Melvin, T., Harvey, H. et al. Can Apple and Google continue as health app gatekeepers as well as distributors and developers? npj Digit. Med. 6, 8 (2023). https://doi.org/10.1038/s41746-023-00754-6

[3] S. Gerke et al. (2020) The need for a system view to regulate artificial intelligence/machine learning-based software as medical device, NPJ Digital Medicine, 3:53 https://doi.org/10.1038/s41746-020-0262-2

 

Published on: March 14, 2024

 

blog post by

Andromachi Kaltampani, M.Eng. M.Sc.
Director, Medical Device
Christophe Amiel, M.Sc.
Senior Director, Medical Devices & Digital Life Sciences