Voeding, Pharma & Recht

Blog on food law, cosmetics rules, European legislation and pharmaceutical law

A blog with knowledge articles on food law, with attention to cosmetics legislation, European regulation and legal advice on pharmaceutical law, plus legal news and office updates from Slijpen Legal BV!

Posts tagged GDPR
EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, rules that apply across all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.
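Since the conformity regime covers training, testing and validation datasets alike, it helps to picture the three partitions explicitly. A minimal Python sketch, with toy split ratios of my own choosing (the Act prescribes no ratios):

```python
import random

def split_dataset(records, seed=0, train=0.8, val=0.1):
    """Split records into training, validation and test partitions.

    Under the draft Act, all three partitions of a High-Risk AI System
    would fall under the pre-market conformity regime. The 80/10/10
    ratios here are illustrative, not regulatory requirements.
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],                 # training set
            shuffled[n_train:n_train + n_val],  # validation set
            shuffled[n_train + n_val:])         # test set
```

Whatever ratios a developer chooses, the regulatory point is that none of the three partitions escapes the conformity assessment.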

Pyramid of criticality

The draft AI Act combines a risk-based approach, structured as a pyramid of criticality, with a modern, layered enforcement mechanism. Among other things, this means that a lighter legal regime applies to AI applications with negligible risk and that applications posing an unacceptable risk are banned; between those extremes, obligations grow stricter as risk increases.
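The tiered logic of the pyramid can be pictured as a simple lookup table. This is a paraphrase for illustration only, not the Act's legal text; the tier names and obligations are summarised from the proposal:

```python
# Illustrative paraphrase of the pyramid of criticality (not legal text).
RISK_REGIMES = {
    "unacceptable": "prohibited outright",
    "high": "pre-market conformity assessment and CE marking required",
    "limited": "transparency obligations (e.g. disclosing chatbots)",
    "minimal": "no extra obligations; voluntary codes of conduct",
}

def regime_for(risk_tier):
    """Return the (paraphrased) regulatory regime for a given risk tier."""
    if risk_tier not in RISK_REGIMES:
        raise ValueError("unknown risk tier: %r" % risk_tier)
    return RISK_REGIMES[risk_tier]
```

The lookup captures the key design choice: the legal burden is a function of risk classification, not of the underlying technology.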

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violations of the rules can reach 30 million euros or, for private entities, 6% of global annual turnover, whichever is higher.
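Assuming the ‘whichever is higher’ reading of the proposal's headline penalty bracket, the ceiling is a one-line computation; the turnover figures below are made-up examples:

```python
def max_fine_eur(global_annual_turnover_eur):
    """Headline penalty ceiling under the draft AI Act: the higher of
    EUR 30 million or 6% of global annual turnover (illustrative sketch,
    assuming the 'whichever is higher' rule in the proposal)."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# For a hypothetical firm with EUR 1 billion turnover, the 6% prong wins:
# max_fine_eur(1_000_000_000) -> 60,000,000.0
```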

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Machine Learning & EU Data Sharing Practices

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2020

New multidisciplinary research article: ‘Machine Learning & EU Data Sharing Practices’.

In short, the article connects the dots between intellectual property (IP) on data, data ownership and data protection (GDPR and FFD) in an easy-to-understand manner. It also provides AI and data policy and regulatory recommendations to the EU legislature.

As we all know, machine learning & data science can help accelerate many aspects of the development of drugs, antibody prophylaxis, serology tests and vaccines.

Supervised machine learning needs annotated training datasets

Data sharing is a prerequisite for a successful Transatlantic AI ecosystem. Hand-labelled, annotated training datasets (corpora) are a sine qua non for supervised machine learning. But what about intellectual property (IP) and data protection?
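The dependence of supervised learning on hand-labelled examples can be made concrete with a toy classifier. Everything below (features, labels, the 1-nearest-neighbour rule) is a made-up minimal illustration, not any particular production pipeline:

```python
# Toy annotated corpus: each example pairs features with a human label.
# Without the labels, no supervised learner can be trained at all.
labelled_corpus = [
    ({"length": 120}, "commercial"),
    ({"length": 800}, "editorial"),
    ({"length": 95},  "commercial"),
]

def predict(sample, corpus):
    """1-nearest-neighbour on a single feature: a stand-in for any
    supervised learner, all of which presuppose labelled training data."""
    nearest = min(corpus, key=lambda ex: abs(ex[0]["length"] - sample["length"]))
    return nearest[1]
```

The legal questions arise precisely because the entries of `labelled_corpus` may themselves be copyright- or database-right-protected works.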

Data that represent IP subject matter are protected by IP rights. Unlicensed (or uncleared) use of machine learning input data potentially results in an avalanche of copyright (reproduction right) and database right (extraction right) infringements. The article offers three solutions that address the input (training) data copyright clearance problem and create breathing room for AI developers.

The article contends that introducing an absolute data property right or a (neighbouring) data producer right for augmented machine learning training corpora or other classes of data is not opportune.

Legal reform and data-driven economy

In an era of exponential innovation, it is urgent and opportune that the Trade Secrets Directive (TSD), the Copyright in the Digital Single Market Directive (CDSM) and the Database Directive (DD) be reformed by the EU Commission with the data-driven economy in mind.

Freedom of expression and information, public domain, competition law

Implementing a sui generis system of protection for AI-generated Creations & Inventions is, in most industrial sectors, not necessary, since machines do not need incentives to create or invent. Where incentives are needed, IP alternatives exist. Autonomously generated non-personal data should fall into the public domain. The article argues that strengthening and articulating competition law is more opportune than extending IP rights.

Data protection and privacy

More and more datasets consist of both personal and non-personal machine generated data. Both the General Data Protection Regulation (GDPR) and the Regulation on the free flow of non-personal data (FFD) apply to these ‘mixed datasets’.
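The consequence for a mixed dataset can be sketched as follows; the boolean `is_personal` flag is a toy stand-in for what is, in practice, a legal assessment rather than a data field:

```python
def applicable_regimes(record):
    """Illustrative sketch: the GDPR governs personal data, the FFD
    Regulation non-personal data; a mixed dataset triggers both.
    The 'is_personal' flag is a toy assumption."""
    return {"GDPR"} if record["is_personal"] else {"FFD"}

def regimes_for_dataset(records):
    """Union of the regimes triggered by every record in the dataset."""
    regimes = set()
    for record in records:
        regimes |= applicable_regimes(record)
    return regimes
```

As soon as one record is personal data, the GDPR is in play for that part of the dataset, alongside the FFD for the rest.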

Besides the legal dimensions, the article describes the technical dimensions of data in machine learning and federated learning.

Modalities of future AI-regulation

Society should actively shape technology for good. The alternative is that other societies, with different social norms and democratic standards, impose their values on us through the design of their technology. With built-in public values, including Privacy by Design that safeguards data protection, data security and data access rights, the federated learning model is consistent with Human-Centered AI and the European Trustworthy AI paradigm.
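Federated learning's core privacy property, that only model parameters travel while raw data stays local, can be sketched with a toy one-parameter model; the clients and update rule below are made-up illustrations, not a production protocol:

```python
def local_update(w, local_data, lr=0.1):
    """One gradient step on a client's own data for a toy model that
    estimates the mean of the local data. Raw data stays on the client."""
    grad = sum(w - x for x in local_data) / len(local_data)
    return w - lr * grad

def federated_average(client_params):
    """The server sees and averages only parameters, never raw records."""
    return sum(client_params) / len(client_params)
```

Each round, every client runs `local_update` on data that never leaves its premises and sends back only the updated parameter, which the server combines with `federated_average`: Privacy by Design in miniature.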

Read more
Course: AI, Data, Privacy and Innovation in Healthcare

Suzan Slijpen, Sander Ruiter and Mauritz Kop on AI in Healthcare

On 31 October 2019, Suzan Slijpen, Sander Ruiter and Mauritz Kop gave a course on AI, Data, Privacy and Innovation in Healthcare at the Maasstad Ziekenhuis Rotterdam. We were guests there at the invitation of Quint Wellington Redwood, a leading consultancy firm that helps organisations design and operationalise their digital strategy, with people, processes and technology at the centre.

Use of patient data, medical devices, data sharing, privacy & AI in the hospital

The aim of the course was to clarify the legal rules on the use of patient data, data sharing, ownership of training datasets, medical devices, privacy and artificial intelligence in the hospital. AIRecht was brought in to provide expertise on this complex and challenging subject, and to remove barriers to innovation. Attendees included the Maasstad Ziekenhuis Rotterdam management team, the CISO (Chief Information Security Officer), several physicians, radiologists and nurses. Data scientists from Parnassia Groep, specialists in mental healthcare, were also invited.

Keynote Digital Healthcare: Medical Devices, Patient Data, MDR & GDPR

New European regulation for medical devices (the MDR), covering care robots, medical products, aids and medical software, viewed from an AI helicopter perspective; the MDR takes effect in the Netherlands in 2020. The relationship between the GDPR (AVG) and the MDR. Use and exchange of patient data, information security and digital healthcare: what is and is not permitted under European privacy legislation (AVG/GDPR)?

Read more