The Medical Device Coordination Group (MDCG) recently published a new guidance document entitled “Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software (MDSW)”. Whilst not binding, this document provides a very solid framework for fulfilling the requirements set out in the European Medical Device Regulation (MDR) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) 2017/746. In this blog, we summarize the main aspects of this guidance, focusing on the MDR, and share our own experience on this topic.
With the introduction of the “famous and dreaded” Rule 11, the MDR has significantly changed the classification landscape for stand-alone software. Rule 11 states:
“Software intended to provide information which is used to make decisions with diagnosis or therapeutic purposes is classified as class IIa, except if such decisions have an impact that may cause:
- Death or an irreversible deterioration of a person’s state of health, in which case it is in class III; or
- Serious deterioration of a person’s state of health or surgical intervention, in which case it is classified as class IIb.
Software intended to monitor physiological processes is classified as class IIa, except if it is intended for monitoring of vital physiological parameters, where the nature of variations of those parameters is such that it could result in immediate danger to the patient, in which case it is classified as class IIb.
All other software is classified as Class I.”
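To make the rule’s branching concrete, its decision logic can be sketched as a small classification helper. This is a simplified illustration only, not a regulatory tool; the parameter names and impact categories are our own paraphrase of the rule’s wording:

```python
def classify_rule_11(provides_decision_info: bool,
                     decision_impact: str = "none",
                     monitors_physiology: bool = False,
                     vital_params_immediate_danger: bool = False) -> str:
    """Simplified sketch of MDR Rule 11 for stand-alone software.

    decision_impact is one of: "none",
    "serious_deterioration_or_surgery",
    "death_or_irreversible_deterioration".
    """
    if provides_decision_info:
        # Information used for diagnostic or therapeutic decisions.
        if decision_impact == "death_or_irreversible_deterioration":
            return "Class III"
        if decision_impact == "serious_deterioration_or_surgery":
            return "Class IIb"
        return "Class IIa"
    if monitors_physiology:
        # Monitoring of vital parameters whose variation could put the
        # patient in immediate danger raises the class to IIb.
        return "Class IIb" if vital_params_immediate_danger else "Class IIa"
    return "Class I"
```

Walking a product through a sketch like this is a useful first sanity check, but the actual classification must of course be justified against the regulation’s full text.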
Consequently, we believe that the vast majority of (if not all!) medical device software classified until now (rightly or wrongly…) as Class I devices under the Medical Device Directive (MDD) 93/42/EEC will likely be up-classified under this new regulation.
In light of this significant impact (notably on the Notified Bodies), the regulator has offered a grace period for existing Class I devices to comply with the MDR, provided there are no significant changes in the design or intended purpose of this software. In addition, the COVID-19 crisis has led the EU to prolong the transitional period by one year, postponing until 26 May 2021 the application of the MDR and the repeal of the MDD.
The boom in Artificial Intelligence has recently led many new actors, specialized in data science but with little (if any) knowledge of medical device regulations, to enter the domain of healthcare applications; some of them are envisaging, or are already in the process of, developing medical device software based on Machine Learning and other AI technologies. They now face the challenge of complying with all the regulatory requirements that apply to medical devices. In our experience, this may be perceived as a daunting task with some unknowns.
In this context, this new guidance from the MDCG is of great interest to existing and new manufacturers of e-health/m-health technologies, as it sets out the regulator’s expectations in terms of the clinical evaluation of software as a medical device.
This guidance applies to software for which the manufacturer claims a specific medical intended purpose. As such, the manufacturer must produce clinical evidence demonstrating the safety and the clinical benefits claimed for the device. While rather intuitive for a therapeutic drug, defining the clinical benefits of a digital medical device may not be trivial in some situations, as the software may be one of a series of elements affecting the patient and thus have very indirect effects. However, being clear on the device’s clinical claims is a prerequisite to developing a sound and relevant clinical evaluation.
Clinical evaluation should not be seen as a one-shot task required only during the certification phase. On the contrary, it is an ongoing process that should take place throughout the life cycle of the MDSW, even after CE marking of the device. Once the product is on the market, the manufacturer should regularly assess whether any new data is available or needed to re-evaluate the safety and clinical benefits of its device (encompassing vigilance events, information on analogous MDSW from competitors, etc.). As a consequence, the guidance recommends developing a dedicated clinical evaluation process embedded in the company’s quality management system.
The clinical evaluation consists of three components: valid clinical association, technical performance, and clinical performance. These three components are pure common sense.
As a first step, you want to make sure that the output of your software device can be associated with the targeted physiological state or clinical condition. This can usually be achieved through literature searches, professional guidelines, or the manufacturer’s own proprietary studies. In this phase, it is key to work methodically so that the data supporting the clinical association is appropriately appraised and analyzed.
The second step is rather intuitive to software developers: it consists of demonstrating the ability of the MDSW to accurately, reliably, and precisely generate the intended output. Typically, the verification and validation activities will be used for this demonstration. Note that the cybersecurity of the device should also be addressed at this stage.
The last step is clinical performance, which consists of demonstrating that the MDSW has a positive impact in line with its clinical claims. The clinical performance assessment should also address the usability of the device. In a nutshell, this step should demonstrate that when the product is put in the hands of the targeted end-users, they are able to use it and it benefits the patients. Clinical performance should be reassessed at each new release of the software, which strengthens the value of having the clinical evaluation integrated into your process, so that it is not overlooked in the rush to deliver a new release.

Clinical performance may be perceived as a challenge by software companies, which may have a hard time assessing the level of evidence required for their specific product and its claims, notably with respect to the need for large clinical studies. The guidance makes recommendations, based on the type of claims, regarding the need for a prospective or retrospective study. Specifically, if the MDSW is used for the determination of a patient’s future state (e.g. predisposition, prognosis, prediction), or if the output of the MDSW impacts clinical outcomes (e.g. treatment efficacy) or patient management decisions, then a prospective study may be required. The guidance also provides a detailed list of sources that can be used at each step, along with examples of products and the associated demonstrations for each step.
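The guidance’s recommendation on study design can be roughly illustrated as a lookup over claim characteristics. This is a hypothetical sketch only: the claim categories are paraphrased from the examples above, and the returned phrases are our own wording, not the guidance’s:

```python
# Claim types about a patient's future state, as cited in the guidance.
FUTURE_STATE_CLAIMS = {"predisposition", "prognosis", "prediction"}

def suggested_study_design(claim_type: str,
                           impacts_outcomes_or_management: bool) -> str:
    """Rough sketch of when a prospective study may be required.

    claim_type: e.g. "prognosis", "prediction", "diagnosis" (our own labels).
    impacts_outcomes_or_management: True if the MDSW output affects
    clinical outcomes or patient management decisions.
    """
    if claim_type in FUTURE_STATE_CLAIMS or impacts_outcomes_or_management:
        return "prospective study may be required"
    return "retrospective data may suffice"
```

The actual level of evidence must, of course, be argued case by case in the clinical evaluation, in proportion to the device’s claims and risk.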
In essence, there is a need for continuous updates of the clinical evaluation with data obtained from the implementation of the manufacturer’s Post-Market Clinical Follow-up (PMCF) plan. The guidance promotes the use of real-world performance data.
This guidance aims to bridge the gap between “knowing and doing” in a relatively short format (21 pages). It provides a good overview of the theory and gives concrete tips for putting into practice a sound and efficient clinical evaluation, which should be beneficial for manufacturers and ultimately for patients. A must-read for all developers of software medical devices!