CataloniaBio & HealthTech Tribune | Navigating the EU AI Act in Life Sciences

CataloniaBio & HealthTech is launching a new space for analysing current affairs that gives a voice to our main asset: the organisations and companies that make up the association. This new opinion forum invites us to pause and reflect on how changes in the environment may affect companies in the health sector.

On 13 March 2024, lawmakers in the European Parliament approved by majority vote a new law to regulate AI systems, the first-ever legal framework on artificial intelligence. This is what Casper Wilstrup, CEO of Abzu, has to say on the matter.


Casper Wilstrup - CEO, Abzu

After years of negotiations, the EU AI Act has finally become a reality. Initially a source of concern that it might over-regulate AI research, this landmark legislation has instead focused primarily on AI use cases, a welcome relief for many in the field.

At the heart of the Act is the categorization of AI applications into distinct risk tiers. Some specific uses are identified as unacceptable risks, which are outright prohibited, or as high-risk, which must meet stringent requirements. The majority of uses, however, fall into the limited- or minimal-risk tiers and are left essentially unregulated beyond light transparency obligations.
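
As a rough mental model, the tiering can be sketched in code. The following minimal Python sketch is purely illustrative: the tier names follow the Act, but the example use cases are common simplifications for orientation, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict requirements"
    LIMITED = "light transparency obligations only"
    MINIMAL = "essentially unregulated"

# Illustrative mapping only -- the Act defines the categories in its
# articles and annexes; this is a simplification, not legal advice.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI-assisted patient diagnosis": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```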

In the life sciences sector, many AI applications – such as those used in patient diagnosis, treatment recommendations, and drug development – fall under the high-risk category. These are areas where AI decisions have significant implications for human lives and will therefore be subject to heightened scrutiny.

I believe this approach is beneficial. AI systems making critical healthcare decisions must embody fairness, transparency, and explainability. As we integrate AI deeper into life sciences, these principles become the pillars that ensure that technological advancement aligns with human values and ethics. This is not just a regulatory obligation; it is also a moral one.

It's equally important for the AI industry to innovate and fulfill the transformative promise of AI. This necessitates developing technologies that are transparent and interpretable, giving us humans the understanding we need to trust decisions made by AI. In the context of life sciences, where the stakes are exceptionally high, trust in a decision made by AI is earned only if we can understand “the why” behind that decision.
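
To make the point concrete, here is a minimal sketch of what an inherently interpretable model looks like, using scikit-learn purely as a generic illustration (not Abzu's own technology, and not a clinical-grade model): its entire decision logic can be printed and audited by a human, so “the why” behind every prediction is visible.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A deliberately simple, interpretable model: a shallow decision tree
# trained on a public diagnostic dataset.
data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The complete decision logic can be exported as readable rules,
# so a human can follow exactly why any given prediction was made.
print(export_text(model, feature_names=list(data.feature_names)))
```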

Therefore, only technology that ensures safety, fairness, and clarity in high-risk applications should benefit from the support of Europe's scientific, technical, and entrepreneurial communities. As the CEO of Abzu, an AI research and development startup that is now best in class in in silico RNA therapeutics design, I've seen firsthand how AI can drive innovation. And I’ve also seen areas – black, white, and gray – where people weighed a perceived tradeoff between explainability and innovation.

And I’m glad to say that such a tradeoff isn’t a reality: We can demand explainability in high-risk applications and not fall behind in the global AI race. In fact, this unique quality can leapfrog us to the front of the AI innovation line.
