Artificial intelligence (AI) is increasingly able to perform tasks that normally require human intelligence, such as understanding natural language, interpreting images, and learning from data. AI has the potential to transform activity across many sectors of the economy, including life sciences, by enhancing innovation, productivity, and quality. However, it also brings risks and legal challenges that need to be addressed through regulation, as well as novel intellectual property issues around the use of databases and the ownership of inventions.
We see AI being deployed in many ways across the life sciences sector. Medical devices and healthtech/medtech solutions are being developed in which AI carries out key tasks, such as analysing diagnostic images or guiding treatment pathways. AI can also help pharma and medical device companies accelerate their research and development, improve their clinical trials, and optimise their manufacturing and distribution. Using generative models and deep learning, AI can help to generate novel drug candidates, design drug molecules, and predict drug interactions. It can also improve the efficiency and accuracy of clinical trials by selecting patients and trial sites and by monitoring trial outcomes and adverse events.
The use of AI in life sciences raises a variety of legal issues, and challenges existing legal structures in new ways. I will focus in this article on regulatory compliance and ownership issues.
Regulation of AI-based products and services
The novel ways of operating and innovating using AI present particular difficulties for regulators. Elements of how the algorithms function may not be fully understood, and the technology will often evolve through use. Normal regulatory systems may not work effectively in this context, and regulators face a challenge in keeping up.
Approaches to regulation vary, with the EU taking a cross-sector approach. The EU recently passed its AI Act, which will apply a risk-based assessment of AI across all sectors, with differential regulation based on risk category, once it takes effect in two years' time. As with other EU legislation, the AI Act emphasises safety and human rights while seeking to promote innovation through measures such as regulatory sandboxes. It will interact with the EU's existing medical devices regime in order to ensure the safe development and use of AI in healthtech/medtech.
Other countries are adopting a more focused sector-based stance. The UK, for example, is addressing the use of AI in the context of regulation applicable to different sectors. A current review of medical device legislation will address AI as part of the wider update, and involves both innovators and users in developing appropriate legislative change and guidance to support safe and effective innovation.
In the US, sector-based regulation has seen an innovation-friendly approach, with a wide range of AI-enabled medical devices approved. However, an October 2023 Presidential Executive Order on AI sets out a more comprehensive approach. This mandates the development of standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. It also provides for an AI Safety and Security Board within the Department of Homeland Security, and it directs the National Institute of Standards and Technology (NIST) to develop standards and guidance for managing AI risks, building on its AI Risk Management Framework.
Innovators will need to monitor these developments with a view to building in compliance as their products and services evolve.
Ownership and control: training data and inventions
AI innovation raises a set of novel issues around ownership of intellectual property.
First, the extensive use of databases of information is often essential to the development of new products and services. Control of this underlying information can be difficult, as much of it is in the public domain or is held in databases that may be difficult to protect. The New York Times has recently filed a copyright infringement lawsuit against Microsoft and OpenAI in a Manhattan federal court, challenging the extensive use of its material in building their AI systems. In the life sciences arena, publicly available information and resources form the basis of AI analysis, and the outcome of cases like these will be important in determining whether this can continue without a new financial and ownership model. Unpublished databases can also become subject to use in this way, whether through inadequate protection measures or unauthorised access.
Second, the protection of innovations developed using AI presents its own problems. Where an invention is made by a researcher or research team, the patent process operates to assess novelty and inventive step, leading to protection for innovations that meet the necessary standards. Patent offices and courts vary considerably in their attitudes to inventions made using AI, with some permitting patents to be granted and others ruling them out. Issues around inventiveness and ownership are reaching the courts: the UK Supreme Court, for example, has recently ruled against a patent applicant who sought to name an AI machine as the inventor.
These cases demonstrate that well-trodden paths for intellectual property licensing and ownership cannot be relied upon by innovators using AI. New approaches will be required to reduce the risk of costly litigation and best protect research investment.
I have focused in this article on two fast-moving legal areas likely to affect the use of AI in life sciences applications. Developers will need to stay on top of these issues in order to address risk and maximise their opportunities in this rapidly evolving field.