A short guide to regulation for disruptive technologies


Introduction

Regulation, by necessity, introduces rigidity to otherwise flexible processes. Done proportionately, this can be an efficient societal device for preventing harm. At the same time, inherent regulatory rigidity creates particular challenges when the nature of the regulatory target changes quickly or unexpectedly.

Disruptive technologies in life sciences, a very dynamic field of activity, are a good example of this. Disruptive technologies challenge the way a sector operates, and it is self-evident that (in most cases) this will also have an impact on the relevant normative framework. This effect is most visible in areas which have a direct impact on human life and wellbeing, as these areas are tightly (and often, rather specifically) regulated, and a failure to control a technology appropriately may lead to undesirable outcomes.

The dual purposes of preventing harm through proportionate regulation and maintaining trust in innovation mean that it is all the more important to ensure that regulation is adequately responsive and flexible enough to react to a disruptive technology. This can be a difficult line to tread, particularly in fields where research and development is also morally or ethically contentious.

We will illustrate the context and challenge of regulating disruptive technologies by discussing two specific case studies: artificial intelligence, and cell and gene therapy. In both cases, we suggest that the current regulatory framework in the UK strikes an appropriate balance between precaution and freedom of research, allowing for innovation subject to strict controls and licensing frameworks. There are, however, numerous challenges which need to be considered and addressed as these technologies advance. Regulators, policy makers and innovators working in this sector must continue to work together to ensure that responsible science is allowed to flourish.

Artificial intelligence

"The science of making machines do things that would require intelligence if done by people" (definition of artificial intelligence from the Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 1955).

Artificial intelligence (AI) technologies hold the potential to significantly improve health and care, providing faster and more accurate diagnosis and speedier treatment, and facilitating medical breakthroughs through drug discovery.

This is particularly the case in contexts where the pattern-recognition strengths of AI can be deployed to their fullest potential. Tasks such as the correct identification of tumour cells, recognition of areas of concern in medical imaging, and the processing of large amounts of genomic data can be carried out with much greater speed and accuracy by algorithms that learn from previous datasets, and develop their own datasets from which to learn in the future. The ability to check a patient's image or test result against all other available and comparable datasets is, at first glance, far superior to a clinician's ability to make an assessment on the basis of his or her experience.
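
To make the mechanics concrete, the sketch below shows the basic pattern: an algorithm is fitted to previously labelled cases and then assesses new, unseen cases against what it has learned. This is a minimal, hypothetical illustration using a public scikit-learn dataset of tumour measurements, not a clinical pipeline; the model choice and metric are assumptions made purely for illustration.

```python
# Minimal sketch: an algorithm learning from previously labelled cases,
# then assessing unseen cases against the learned patterns. The dataset,
# model and metric are illustrative only, not a real clinical pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Labelled historical data: tumour measurements with benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)      # learn patterns from prior cases

preds = model.predict(X_test)    # assess new cases against what was learned
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.3f}")
```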

At the same time, this does give rise to risk. For example, there is an inherent (and proven) risk that an algorithm which learns on the basis of historic human-generated data also takes on the biases that human decision-making has inevitably introduced. So how does regulation play a part in addressing this risk?
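
The mechanism behind this risk is easy to demonstrate on synthetic data. In the hypothetical sketch below, historical referral decisions are skewed against one group at equal clinical severity; a model trained on those decisions faithfully reproduces the skew. The variable names, numbers and model are assumptions made purely for illustration.

```python
# Hypothetical illustration of inherited bias: if historical decisions were
# skewed against one group, a model trained on them reproduces the skew.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
severity = rng.normal(size=n)        # true clinical need
group = rng.integers(0, 2, size=n)   # e.g. a demographic attribute
# Biased historical labels: group 1 was referred less often at equal severity.
referred = (severity - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.c_[severity, group], referred)
# Predicted referral probability for each group at identical (average) severity.
probs = model.predict_proba(np.c_[np.zeros(2), [0, 1]])[:, 1]
print(f"Referral probability at equal severity: group 0 = {probs[0]:.2f}, "
      f"group 1 = {probs[1]:.2f}")   # the gap is the inherited bias
```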

The first point to make is that no one body is solely responsible for regulating the adoption of AI technologies in the UK healthcare sector. A number of different regulatory bodies have a remit to oversee aspects of AI, including the Medicines and Healthcare products Regulatory Agency (MHRA) and the Information Commissioner's Office (ICO). In addition, there are non-regulatory bodies which also play an important role, including the National Institute for Health and Care Excellence (NICE) and NHSX. However, no one institution has overall responsibility for policing, for example, the prevention of bias in AI algorithms. The most effective way of addressing this risk at present is to avoid exclusively automated decision-making, so that the use of AI technologies in the clinical setting focuses instead on assisted decision-making and triage. The application of this approach will come down to individual healthcare payors and providers: in the absence of any direct regulation, it is left to them to decide how best to mitigate risk, and whether (and if so, how) to apply non-binding codes of conduct which seek to address the risk, such as the Department of Health and Social Care's code of conduct for data-driven technologies.
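
What assisted decision-making and triage can look like in software is sketched below: the algorithm produces an automated routing only at the confident extremes, and everything in the ambiguous middle is referred to a clinician. The thresholds, names and scores are hypothetical assumptions for illustration.

```python
# Minimal sketch of assisted (not fully automated) decision-making: the
# algorithm only triages cases it is very confident about; everything else
# is routed to a clinician for review. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Assessment:
    case_id: str
    risk_score: float   # model output in [0, 1], assumed calibrated

def triage(assessment: Assessment,
           low: float = 0.05, high: float = 0.95) -> str:
    """Route a case: automated outcomes only at the confident extremes."""
    if assessment.risk_score >= high:
        return "urgent: escalate to clinician immediately"
    if assessment.risk_score <= low:
        return "routine: schedule standard follow-up"
    # The ambiguous middle band always goes to a human decision-maker.
    return "refer: clinician review required"

for case in [Assessment("A1", 0.98), Assessment("A2", 0.40), Assessment("A3", 0.02)]:
    print(case.case_id, "->", triage(case))
```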

Reliance on non-binding codes of conduct as a substitute for regulation may not be ideal and can result in a lack of certainty. Equally, overlapping codes, rules and regulations also pose a risk: for example, as to how NICE's evidence standards framework for digital health technologies interacts with MHRA regulations concerning software as a medical device in relation to clinical evidence. The risk is lack of clarity; the mitigation is raising awareness.

Another challenge arises where regulation designed for a specific purpose is used for a new purpose, for example the application of MHRA regulations designed for traditional medical devices to software incorporating algorithms. A recent state-of-the-nation survey on the use of AI in health and care revealed that half of all software developers were not intending to seek CE mark classification, with the most commonly cited reason being that they did not believe the medical device classification was applicable. It is essential that the sector raises awareness of these requirements, even though they are complex and sometimes impenetrable.

One significant area of concern is how existing laws relating to negligence, liability and insurance apply to the clinical use of AI, whether in assisting decision-making about a patient's treatment or in the operation of medical devices. Currently, claims are almost always brought against the treating clinician or healthcare provider, but for a clinician using big data analysis as well as his or her own experience, where does the division of responsibility lie? If a patient is injured as a result of a malfunction in an AI-driven device, does liability lie with the manufacturer of the device, the programmer who wrote the code which operates the device, the clinical team, the hospital, or all of the above? It remains to be seen whether this will give rise to novel constellations of liability, such as an increase in manufacturers' liability or a change in statutory and wider insurance requirements.

One of the major areas of opportunity for AI-based technologies is biomedical research, where the strengths of speed and range have huge potential. The extrapolation of the potential of certain compounds against huge databases of similar compounds is commercially powerful. The ability to quickly check clinical trial design against public registries of published results, to avoid unnecessary duplication of human-based experimentation, is ethically desirable. But as innovators seek to improve drug discovery using AI, it will be important to continue to keep under review laws relating to intellectual property and how they apply to AI-based technologies.
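
A flavour of that database-screening step can be given in a few lines. The hypothetical sketch below compares a candidate compound against a tiny "database" by molecular fingerprint similarity using the open-source RDKit library; the chosen compounds, SMILES strings and similarity metric are illustrative assumptions, and a real discovery pipeline would be far more elaborate.

```python
# Hypothetical sketch of compound screening of the kind used in AI-assisted
# drug discovery: compare a candidate against a small "database" by
# fingerprint similarity. Requires RDKit; compounds are examples only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

database = {
    "aspirin":     "CC(=O)Oc1ccccc1C(=O)O",
    "paracetamol": "CC(=O)Nc1ccc(O)cc1",
    "ibuprofen":   "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}
candidate = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)OC")  # an aspirin ester

def fingerprint(mol):
    # Morgan (circular) fingerprint, a standard structural descriptor.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

cand_fp = fingerprint(candidate)
for name, smiles in database.items():
    sim = DataStructs.TanimotoSimilarity(
        cand_fp, fingerprint(Chem.MolFromSmiles(smiles)))
    print(f"{name}: Tanimoto similarity = {sim:.2f}")
```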

Cell and gene therapy

The area of cell and gene therapy is of particular significance, and great potential, in regenerative medicine. It has seen a decades-long genesis, and it does not immediately strike one as a field that meets the definition of a disruptive technology. At the same time, however, it provides a good illustration of how a technology may mature for a long time, or be repurposed in an unexpected way, before it becomes disruptive.

The field has come a long way since the first systematic trials in 1989, and by now there are 17 FDA-approved cell and gene therapy products. Beyond technical questions of the safety of the vectors used for the manipulation of cells, there are few remaining ethical and legal issues in relation to somatic cell gene therapy for particularly debilitating conditions (i.e. where the manipulation does not lead to heritable genetic characteristics).

From a regulatory and ethical perspective, however, cell and gene therapy becomes more complex where germline gene therapy is used. The modification of the human germline is subject to significant debate and, in some jurisdictions, strongly prohibitive regulation. The advent of disruptive technologies such as CRISPR/Cas9 and prime editing techniques, with their associated precision and purported safety, has already reignited the debate around the prohibition of germline manipulation, with some commentators calling for a relaxation of the regulation while others demand either a global ban or at least a moratorium.

Although the United Kingdom has a reputation of being a liberal jurisdiction for research, it is in fact very tightly regulated and only potentially permissive. UK law reflects a compromise: we permit research (including research involving germline gene editing), but we subject such research to strict scrutiny, licensing and oversight, and we criminalise unlicensed research. That being said, the legislation is drafted in such a way as to facilitate a broad variety of research, including (again, potentially) the introduction of novel techniques, and few procedures are prohibited. Overall, this framework helps allay public and political concern about what is often controversial research and provides a degree of protection for researchers operating under a licence, facilitating innovation. Such a robust framework is particularly valuable when it comes to considering how best to address the clinical application of germline genome modification. In circumstances where UK law is comprehensive and clear in its application to gene editing, there is no merit or purpose in a moratorium or further restriction on the use of this technology, as some have demanded.

Concluding remarks

The UK has a mature and robust regulatory framework governing research and development in life sciences. We have a successful history of regulating numerous disruptive and controversial new technologies, such as stem cell research, the creation of human-animal hybrids, the clinical use of preimplantation genetics, and mitochondrial donation, all of which are testaments to the strength of this framework and its capacity to adapt to accommodate new technologies. This success, however, has been built upon a vital foundation of open and accessible dialogue between innovators, parliamentarians, policy makers and the public, and it is to be hoped that a similar transparency will be maintained in the future. Such dialogue will also ensure that if there are gaps or restrictions in regulation that need to be addressed to avoid stifling innovation, these can be pre-empted.
