With upcoming EU regulation poised to have a massive impact on how AI is governed ethically in Europe, Bayer Head of Digital Transformation Saskia Steinacker argues that the time is now for pharma executives to start preparing themselves and their teams for the future.

 

“We are in healthcare. We are ethical by nature, and on top of that, we have a lot of regulations. Why should ethics in artificial intelligence be relevant to us?” This is an attitude I encounter frequently when speaking with peers at other pharma companies.


To grasp the importance of the topic, one first has to look at the changing nature of the business: Artificial Intelligence (AI) is helping to transform patient health by enabling earlier diagnosis of diseases, precision medicine with treatments and healthcare solutions tailored to individual needs, and faster, more efficient development of new medicines.

But with all these benefits from using AI, there are also certain risks: Algorithms are more than simple calculators. They are fed by data which could be faulty, biased, inconsistent, incomplete, or simply not available in sufficient quantity. Other potential risks include a lack of technical robustness or an unmanaged lifecycle of the AI application. This can lead to algorithms becoming unethical very quickly and posing a risk to patients. So how can we make sure an AI application is ethical and trustworthy? And is there any regulation on the horizon?
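To make these data risks tangible, here is a minimal sketch of the kind of automated checks a team might run before training a model on patient data. The function name, the column roles ("label" for the outcome, "group" for a sensitive attribute such as sex or age band) and all thresholds are hypothetical choices for illustration, not requirements from any guideline:

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, label: str, group: str) -> list:
    """Flag common data-quality risks before a model is trained (sketch)."""
    findings = []

    # Incompleteness: columns with a high share of missing values.
    missing = df.isna().mean()
    for col in missing[missing > 0.10].index:
        findings.append(f"{col}: {missing[col]:.0%} missing values")

    # Insufficient quantity: too few rows to train reliably.
    if len(df) < 1000:
        findings.append(f"only {len(df)} rows available")

    # Representation bias: a subgroup barely present in the data.
    for value, share in df[group].value_counts(normalize=True).items():
        if share < 0.20:
            findings.append(f"{group}={value} makes up only {share:.0%} of rows")

    # Outcome imbalance: the label rate differs strongly between subgroups.
    rates = df.groupby(group)[label].mean()
    if rates.max() - rates.min() > 0.20:
        findings.append(f"label rate varies across {group}: {rates.to_dict()}")

    return findings
```

Checks like these do not make an application ethical on their own, but they surface exactly the data problems described above before a flawed model ever reaches a patient.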

 

Ethics in AI: available guidelines and upcoming EU regulation

There are more than 200 AI ethics guidelines and proposals published worldwide, e.g. the OECD Principles on AI or the Beijing AI Principles. Europe published its Ethics Guidelines for Trustworthy AI in 2019, drafted by the High-Level Expert Group on Artificial Intelligence (HLEG) appointed by the European Commission. I was honored to be part of this group, which consisted of representatives from NGOs, public administration, engineering, philosophy, and companies of all sizes and sectors. The guidelines are based on the imperative of upholding and protecting the human rights of Europeans, as specified in the Charter of Fundamental Rights of the European Union, and list seven key ethical requirements for Trustworthy AI (see overview below):

[Infographic listing the seven key requirements for Trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability]

 

The requirements need to be applied in practice, so the group also published an easy-to-use online assessment list, the Assessment List for Trustworthy AI (ALTAI). It can already be used by anyone designing, developing, or using AI solutions, and it will likely form the basis for the practical implementation of the upcoming EU AI regulation.

The EU AI regulation is expected in Q1 2021 and could become quite impactful. At least that is the hope of the EU Commission, which expects an international standard-setting effect similar to the one seen with the General Data Protection Regulation (GDPR). The GDPR had to catch up with the problems of the last technological revolution, the internet, when that revolution was already mature and established. The new AI regulation will arrive while AI is still in the early days of transforming economies and societies. That is what makes this regulation so important: it has the potential to pave the regulatory way for the opportunities AI offers, while also setting the limits of this development and including legal and regulatory guardrails to prevent adverse developments.

A look into the White Paper published by the European Commission in February 2020 already reveals much of the thinking behind the new AI regulation. ‘Excellence’ and ‘Trust’ are the two main pillars of the European AI strategy, differentiating it from both the Chinese and the American approach. They stand for the ambition to compete with the leading global players in AI through Europe’s very own brand of ‘Trustworthy AI’. Trust is seen as the core prerequisite for widespread uptake and societal acceptance of AI solutions.

The White Paper proposes risk-based regulation, meaning that high-risk applications will face stronger regulation than low-risk ones. How the risk of each individual application should be determined, and how appropriate risk classes should be defined, is currently the subject of heated and controversial debate. Some point to the potential damage even seemingly harmless applications could cause if not controlled properly and argue for extensive horizontal regulation, especially in sensitive areas like healthcare; they see ethical boundaries as a chance to develop more innovative products. Others fear that too much regulation would stifle innovation and argue in favour of voluntary self-regulation and a labelling system for most applications.

The White Paper itself proposes to determine risk based on the potential impact on people’s lives. It suggests two risk categories (high, low) based on the intended use of an AI application and its expected risk, plus the classification of certain sectors, such as the public sector, transportation, or healthcare, as high-risk per se. If an application is both classified as high-risk and used in a high-risk sector, full regulation becomes mandatory, including a conformity assessment prior to launch and continued assessments throughout the application’s lifecycle.
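The White Paper describes this rule in prose only and defines no formal scheme, but the proposed decision logic is simple enough to sketch in a few illustrative lines (the sector names and the enum below are my own shorthand, not official categories):

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

# Sectors the White Paper considers high-risk per se (illustrative subset).
HIGH_RISK_SECTORS = {"healthcare", "transportation", "public sector"}

def full_regulation_applies(sector: str, application_risk: Risk) -> bool:
    """Full regulation (conformity assessment before launch plus continued
    assessments over the lifecycle) applies only when BOTH the sector and
    the application's intended use are high-risk."""
    return sector in HIGH_RISK_SECTORS and application_risk is Risk.HIGH

# A diagnostic AI tool in healthcare would fall under full regulation...
assert full_regulation_applies("healthcare", Risk.HIGH)
# ...while a low-risk application outside those sectors would not.
assert not full_regulation_applies("retail", Risk.LOW)
```

The hard part, of course, is everything this sketch hides: deciding which intended uses count as high-risk in the first place, which is exactly what is being debated.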

It is important to note that the regulation also extends to the data used for training the systems and to the people involved in their creation and use, and that it demands extensive documentation and record keeping to ensure transparency and auditability.
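What such record keeping could look like in practice is sketched below: every model decision is stored together with the model version and a fingerprint of its training data, in an append-only log. The schema is a hypothetical illustration on my part, not a format prescribed by the White Paper:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One auditable entry per model decision (hypothetical schema)."""
    timestamp: str
    model_version: str
    training_data_sha256: str  # fingerprint of the training dataset
    inputs: dict
    output: str

def log_decision(model_version: str, data_digest: str, inputs: dict,
                 output: str, logfile: str = "audit.jsonl") -> None:
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        training_data_sha256=data_digest,
        inputs=inputs,
        output=output,
    )
    # Append-only JSON-lines file: past decisions cannot be silently rewritten.
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```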

 

How to prepare for the upcoming EU regulation

Regardless of the final details of the EU AI law, healthcare will certainly be among the most affected sectors, as it is generally seen as a high-risk area in need of strict regulation. While waiting for the new regulation, we can recall our experience with the GDPR and the organizational effort and resources it took to implement its requirements. The upcoming AI regulation will play in the same league, and we had best prepare while there is still enough time.


As a start, leaders need to ensure that their organization, including management, is aware of the upcoming regulation. Implementation will impact resource allocation, which needs to be factored into the planning for 2021. Secondly, organizations need to start thinking about the appropriate processes and set-ups to meet the potential requirements, e.g. for transparency. While some are considering installing an ethics committee at their company, others see this as a bottleneck, given that so many departments work with AI applications and need to get to market fast. Whichever model companies choose to meet the requirements, all of them will have to train and upskill their employees. Time will tell which set-up works best for organizations; the key will be to adapt in quick, iterative cycles.

In a nutshell: 2021 will be an interesting year for healthcare leaders, with rapid advancements in technologies transforming patient health. And one thing is for sure: ethics will play a highly relevant role, and you had better prepare now if you want to maximize the benefits of AI while minimizing the risks.

 

Sources

Charter of Fundamental Rights of the European Union

https://www.europarl.europa.eu/charter/pdf/text_en.pdf

High-Level Expert Group on Artificial Intelligence

https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

EU Ethics Guidelines for Trustworthy AI

https://ec.europa.eu/futurium/en/ai-alliance-consultation

Assessment List for Trustworthy AI (ALTAI)

https://altai.insight-centre.org/

White Paper on AI by the European Commission

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

OECD Principles on AI

https://www.oecd.org/going-digital/ai/principles/

Beijing AI Principles

https://www.baai.ac.cn/news/beijing-ai-principles-en.html