Screenshot from the video shared by the EU of the almost empty Spinelli building, room 1G-3, with Members participating remotely.

Artificial Intelligence Act First Joint Exchange by IMCO and LIBE

Notes on the first parliamentary joint exchange of views between the committees on Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE)

Alex Moltzau
Jan 25, 2022 · 24 min read


The development of regulation in the field of artificial intelligence can seem incredibly complex and confusing. As a person with neither legal experience nor an educational background in law, I think the EU has done its best to make the process transparent and open for participation. What you will find further down are notes I jotted down (almost transcribed) from the meeting that took place on the 25th of January 2022. I take no responsibility for the accuracy of these notes; I simply wanted to share them in case they are useful for someone else.

My main interest lies in AI policy and ethics, and this legal proposal could clearly have a large impact on both of these intersecting fields. It could also impact legislation more broadly in Europe and around the world, and to some extent shape how we develop or use AI-related products and services.

It might be good to know that in the concluding remarks of this meeting it was noted that there will be a hearing with experts from academia in March 2022, followed by the joint LIBE-IMCO hearing in April 2022.

Shortly after the first EU Artificial Intelligence Act proposal in April 2021, I wrote an article in Digital Diplomacy about the first version of the proposed EU AI Act:

The video of the proceedings, the first joint exchange by IMCO and LIBE on the 25th of January, can be found here:

The Procedure File in the European Parliament Legislative Observatory can be found here:

Notes from the joint exchange by IMCO and LIBE

A. Cavazzini

Good afternoon, dear colleagues. I suggest we start with agenda item one: the exchange of views between our committees and committee presidents on the Artificial Intelligence Act.

This is the first horizontal EU legislation on artificial intelligence, meant to make sure that all AI systems in the internal market are fair.

J. F. Lopez Aguilar

We have a piece of legislation worth noticing, and it is up to our two committees to ensure a comprehensive approach, taking into account the various values and interests at stake when analysing the artificial intelligence proposal, including its impact on fundamental values and rights. Today we are organising an exchange of views together. The floor will be taken by Viola, who joins this joint committee meeting remotely. I would ask all speakers to respect the given time so that we have a smooth exchange of views on the artificial intelligence proposal. He has 15 minutes to present it to the members of both IMCO and LIBE. There you go, Roberto Viola, Director-General of DG Connect.

Viola Roberto [13.56]

Thank you for receiving this presentation. This is an important proposal for the future of Europe [sound is lost], welcomed by many in Europe and the world: the first regulation on artificial intelligence in the world. It is being studied as a potential model in multilateral forums such as the G7 and the OECD. Like Europe managed to do for the protection of personal data, we are breaking new ground. This proposal did not come about in a fortnight; there have been two years of preparation to arrive at it, involving some of the best AI experts in the world and stakeholders such as NGOs, with an extensive consultation to which thousands replied. We went through three steps: first the publication of the white paper, which took up the results from the expert group; with the white paper we then went into a public consultation; and the final proposal was presented to the co-legislators. It is a thought-through proposal, weighed against proportionality (we are working on something new) and innovation.

The fundamental choice is that we based our proposal on risks. We have basically said: look, there are many things artificial intelligence can do to change our lives. Taking a better picture is the type of application that does not pose particular risk, but when we speak about AI in medical systems or in cars (braking systems), we speak of something that, if it malfunctions, puts at risk the integrity of the people on board. There is a vast array of applications that demands attention when it comes to algorithmic choices, and this is the scope of the regulation. We are not changing existing laws; we are adding the artificial intelligence dimension to them. Then there is another category of services and products that are standalone. There we also have to consider risk beyond physical injury or death of the user: risk to fundamental rights, because software is used, for example, to select who should get access to public office. If that software is biased it could infringe upon fundamental rights. We want to be masters of our own future, not have a machine telling us whether we can be cured or whether our kids can go to school. The regulation builds into all such systems mitigations against risks to fundamental rights and an array of control mechanisms. These regulated applications are roughly 10–20% of AI applications.

So you have applications that need to be regulated and applications that do not. And you have some that civil society cannot accept, such as social scoring. This cannot be part of a democratic civil society; it needs to be banned outright. Another type of application uses subliminal messages to induce a person to do something that damages that person — think of software that tries to keep a truck driver awake and driving instead of letting him take a nap. We also want to ban software that exploits minors; systems of this type that could be used against kids or people with mental disabilities should be banned. These belong to the tip of the iceberg. All this requires governance, and it also requires conformity assessment bodies able to verify the algorithms. Of course the regulation tries to limit the burden for SMEs. In the interest of time I did not discuss other elements of the regulation, such as the biometrics provisions, and I will limit my talk to these general remarks.
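An aside from me, not the speakers: the tiered structure Viola describes can be summarised in a few lines of toy logic. Here is a minimal sketch in Python, assuming category names and examples paraphrased from my notes above — not the proposal's legal wording.

```python
# A toy sketch of the risk-based approach described above.
# Categories and examples are paraphrased from my meeting notes,
# not from the legal text of the proposal.

PROHIBITED = {
    "social scoring",           # "cannot be part of a democratic civil society"
    "subliminal manipulation",  # inducing people to do something that damages them
    "exploiting minors or people with mental disabilities",
}

HIGH_RISK = {
    "medical system",
    "car braking system",
    "selection for access to public office",
}

def risk_tier(use_case: str) -> str:
    """Map a use case to the rough tiers sketched in the presentation."""
    if use_case in PROHIBITED:
        return "unacceptable risk: banned outright"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment and controls (~10-20% of AI applications)"
    return "minimal risk: no new obligations (e.g. an app that takes better pictures)"

for case in ["social scoring", "car braking system", "photo enhancement"]:
    print(f"{case} -> {risk_tier(case)}")
```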

A. Cavazzini [14.07]

Thank you, now for the IMCO statement.

B. Benifei [14.08]

I am glad to start our parliamentary work on this text. It is important for SMEs and startups; without this regulation, citizens will not trust AI. In relation to the bans there are exceptions; Parliament gave a clear message, and we cannot risk interpretations that are too broad. If systems are badly designed, it could lead to job losses for thousands of workers, or to people being treated as more likely to commit crimes because they come from a poorer district. Now a question about certification. I mentioned the problems last April and they need to be looked at further: the conformity assessment for the high-risk applications in Annex III. It would require a high level of conformity assessment, and we need to look at how it relates to the GDPR and product safety legislation. Part of the assumption is that products have already been through conformity procedures, but that is not always the case. In Annex III there are risks to fundamental rights, so this needs to be looked at in more detail, with another path for individual certifications. Could you explain the reasoning on that point? Would it not be good to have a better balance?

A. Cavazzini [14.10]

Viola, do you want to react?

Viola Roberto [14.11]

I will focus on the central question from Benifei, which concerns the bias of algorithms. We have to look at artificial intelligence in a cost-benefit balance. Human beings are not immune from bias: I read an article comparing mistakes in the identification of a person by humans and by an algorithm, and the humans performed less well. In many cases the bias of a single person deciding someone's fate can be equally dangerous. The regulation starts with the idea that there is a plus for society in a fairer, more equitable way of looking at things. The regulation is a world premiere, so we do not have a solid background in how to evaluate those algorithms through third parties and standards, because those standards do not yet exist. In the perspective of proportionality and risk mitigation, and for all these reasons, we have started with the areas where certification or standards do not exist: with self-assessment that carries legal responsibility with it, and the possibility for the user to verify, to ask a third party to verify, and to seek legal certification. The system can evolve into something more refined. We cannot say we will wait 5–10 years and block every development of AI. I think we are the first in the world to address damage to fundamental rights this way. We have a not too dissimilar system in which the producers are responsible for organisational measures; with this we start the journey.

A. Cavazzini [14.16]

Thank you Mr. Viola, now let’s hear from the LIBE rapporteur.

Tudorache Ioan-Dragos [14.17]

[first question lost — need to backtrack]

The second question goes to the governance structure and whether it aligns sufficiently with the other proposals.

The third question is on high-risk uses of AI. The proposal has done a good job with the use cases, but there are questions around democracy, and why is it not mentioned?

Viola Roberto [14.21]

Mass surveillance and social scoring are forbidden in the regulation — banned, including the indiscriminate use of cameras and facial recognition. There is an exception to the ban, for limited places and limited times, if it is authorised by a judge for cases linked with justice — in particular very serious crimes, European arrest warrants, terrorist attacks and so on. These exceptions apply in very limited spaces and for very limited times; the rest is forbidden.

In terms of the governance of AI, as in the Data Governance Act, the board is composed of member state representatives with their responsibilities, and member states set up the conformity assessment bodies. The Digital Services Act, in the Council's version (we are about to start trilogues after Parliament's reading), stresses the role of the Commission more. Our proposal is aligned with the Data Governance Act, a model in which the Commission has a strong coordinating role; we prefer a model similar to the Data Governance Act so that member states are part of it. For SMEs we have billions in Digital Europe, Horizon Europe and a dedicated fund. We have sandboxing and simplifications to help SMEs.

On elections and why they are not in the list: they should not be a different system; elections fall into a different scope. Matters of elections we consider general law and not part of this legislation.

A. Cavazzini [14.25]

Now for the shadows.

D. Clune [14.25]

Thank you for the presentation; this is new territory and we are forging the way. I support your risk-based and horizontal approach, so that companies know what laws they have to work under. Could I just ask a few questions in my time? The definition of AI in the proposal is broad. Such a broad definition risks over-regulating ordinary software applications, with a negative effect on innovation. Could you outline your thinking about that? How do you envisage it working?

Another question concerns the systems themselves: they can work in a complex chain. How do we place obligations on those developing them without overburdening consumers? If the developer is not involved in the final use, how do you see that in your scenario?

The sandboxes are very welcome and we support their inclusion. How can we encourage their widespread use across sectors? Do you envisage support for this in the legislation? We need to enable innovation and develop our support for the industry.

A. Cavazzini [14.28]

A lot of questions to answer.

Viola Roberto [14.28]

The definition is not an easy task; it is something we have worked on for months. If it is too broad, then everything is AI; if too narrow, important applications could escape. The current definition is aligned with the OECD definition of AI: algorithmic decision-making with a certain amount of autonomy. A system is not necessarily an AI system unless it does more than give input to the decision-maker; once it presents you choices and can enact them, we are in the range of AI systems. Certain traditional systems are already like this, such as the auto-pilot on a plane; the new dimension is the software dimension. It is a work in progress and we cannot guarantee we got it right, but we believe the definition is sufficiently broad to include the right applications and exclude others. To the point you make about the cumulative effect of risk in an application chain (could we regulate the behaviour of kids?): what we do is take, as much as possible, the outcome as the indicator of risk rather than the elements of the subsystems…

A. Cavazzini [14.32]

We need to move on. We move to the LIBE shadow.

A. Voss [14.32]

Thank you for the presentation. I would echo what the previous speaker just said. This proposal is focused on fundamental rights, but I don’t know of any product we have, such as a hammer, that was regulated for its relevance to fundamental rights. How would this apply if you are buying a pair of shoes — would you do a high-risk assessment on whether you are infringing gender-equality rules? I think we need to focus on life, body, health and property. Making the fundamental-rights link to every decision an algorithm makes is not the right approach. Would an AI system be high-risk merely because it is used in critical infrastructure, even when the AI itself is not high-risk but could become a high-risk system? The risk-based approach is right, but in its application we should not repeat what we did for data protection, where we did not look at risk. Parts of the GDPR are not working and are not flexible.

Viola Roberto [14.35]

Thank you; maybe this lets me reply to other questions too. Let’s stick with the shoes example — a low-risk one. With a risk-based approach it is better to focus on the very important areas, and the regulation is clear on where the high-risk element is triggered: biometric identification, admission to the education system, employment, essential public services, law enforcement, migration, and the administration of justice and democratic processes. If you have to ask for a loan, say to buy your first house, we are in a special case where it is important that the algorithms do not introduce bias. We could not ask the user to carry that burden; the balance of this regulation is much more on the producer side, to mention proportionality. If AI is inside critical machines, it does not change how those products are put on the market. Cars and medical products are treated ‘the same’: if a product falls under the AI rules, it will additionally be tested for integrity etc.
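Again an aside from me: the trigger areas Viola lists here correspond to Annex III of the proposal, which is designed as an amendable list sitting outside the core rules (see also the future-proofing exchange with Tudorache further down). A minimal sketch, with area names paraphrased from my notes rather than quoted from the annex:

```python
# Annex III-style trigger areas as enumerated above (paraphrased).
# Keeping them as data, separate from the core logic, mirrors how the
# annex can be amended without rewriting the regulation itself.

ANNEX_III_AREAS = [
    "biometric identification",
    "admission to the education system",
    "employment",
    "essential public services",  # e.g. credit when buying a first house
    "law enforcement",
    "migration",
    "administration of justice and democratic processes",
]

def is_high_risk(area_of_use: str, annex: list[str] = ANNEX_III_AREAS) -> bool:
    """True if the stated area of use appears in the (amendable) annex."""
    return area_of_use.strip().lower() in annex

print(is_high_risk("Employment"))         # True
print(is_high_risk("photo enhancement"))  # False
```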

A. Cavazzini [14.37]

We move to the LIBE shadow from S&D, Vitanov Petar.

Vitanov Petar [14.38]

Should the biometric provisions cover private as well as public actors? Could law enforcement use biometric systems? Judicial oversight can be postponed; are there sufficient safeguards so that we do not become objects of mass surveillance, driven simply by economic logic?

Viola Roberto [14.40]

Thank you for your question. Two things in general. One is that, for legislation to become a reference for the digital field, it is not just the EU Commission that will improve the proposal — that is also the case here. The second general point is what I said in the introduction: are the signs plus, or are the signs minus? The scope of limited biometric identification in specific cases is to make our society more secure. There have been cases where terrorists entered an airport; maybe with an identification system this could have been prevented — hard to say, we cannot predict the future. Is it too much if used for generalised control? No, that is not acceptable. Limited time and control in cases of very serious crimes, authorised by a judge, is different. The use of this technology is forbidden by default by the GDPR, and authorisation determines when those technologies can be used. There have already been decisions ruling that certain usage is not in line with the GDPR — this is clarified within this law. In many cases, private or public, these uses remain high-risk, so the producer must demonstrate conformity through product certification.

Hahn Svenja [14.44]

(IMCO shadow). I am glad we have this meeting between the committees. I would like to make a few comments. Europe with these AI rules should be a hotspot for new technologies; we have to make sure that companies do not leave Europe, and that technology companies come here to set up shop. We need clear rules for the ethical use of AI technologies, even when they are used in classical products, and we have to establish worldwide standards, working with other democratic nations such as the USA. There are things we should not do: we should not simply say AI is dangerous, or keep placing more demands on developers. We cannot overload this legislation with responsibilities that should be set elsewhere, such as product liability. So we need clear rules, leaving no question about the legal basis. I hope we have all had a look at the text from the Commission; it is positive on consumer rights. It is important to avoid overburdening producers with red tape. The AI sandboxes are positive and can be developed in a broad manner, ensuring that in high-risk areas we cover all our bases. We talk about state social scoring. (Chair interjects: thank you, Svenja.) I have one more question quickly: in all areas with biometric scoring we have work to do. I do not know why we do not address social scoring by private stakeholders, and how can we take action against producers?

Viola Roberto [14.49]

Thank you for bringing the excellence dimension into the discussion. We have to consider how Europe can be an attractive place for researchers. We have plans for AI development as a big part of the Digital Europe programme, and the network of supercomputers we are developing for AI will be the largest in the world. There are areas for working together — agriculture, healthcare — to utilise data spaces, so this should go hand in hand. On international collaboration with the United States: it is one of the chapters in the Trade and Technology Council to find common risk-based approaches. We try to present a balanced approach that looks at innovation and works to mitigate risk without hindering usage, creating a system of trust. With AI made in the EU, maybe we do not compete on price, but we can compete on quality.

Van Sparrentak Kim [14.52]

(Shadow for the Greens). Think about being flagged as a fraud risk by the SyRI system, or the childcare benefits scandal that forced the Dutch government to resign. Think about tracking of keystrokes, gaze and so on — universities are doing the same during online exams. Impacts on fundamental rights need to be assessed across societies, and people have to be able to ask about damage… Why has the Commission not included a fundamental rights impact assessment? And why have the people subject to AI decision-making (the users) not been included in the proposal, with a useful review and a role in other parts?

Viola Roberto [14.54]

The questions raised are very important. There have been questions about algorithmic decision-making and whether this regulation would have prevented such cases. We have seen poor accountability that perhaps would have been different with this regulation — that remains a philosophical question. Who should assess? Humans are not always better; it might be the combination, and keeping a human in control is important. The balance between user and producer is a complicated one. Yes, of course, participation of the user can be broadened; there are ways to be more open to four-eyes or six-eyes screening. On fundamental rights impact assessments: there are impact assessments, but one has to be precise about what is asked of the producer. Otherwise smaller companies have to hire lawyers just to understand what they have to do. We need to be relatively precise about what they must do to comply with the regulation.

Lagodinsky Sergey [14.57]

(Shadow from LIBE, Greens). Are there any objective criteria that establish risk, or is it limited to certain domains? Sexual identity, predictive policing — are they covered or not covered here? What about AI physiognomy based on inferring race, religious beliefs and so on — what about this twist? And energy consumption: why is that not considered a risk in data processing, and what do you plan to offer on making it more transparent? Thank you, and I look forward to working together.

Viola Roberto [14.59]

All law enforcement activities in general are considered high-risk. It is part of each legal and judicial system what possibilities the police have to prevent a crime; we are not in a Minority Report scenario here. Using technologies to conduct investigations is considered high-risk in this regulation. On the question about energy consumption, and in general whether there is a risk of those algorithms creating an environmental impact: first of all, mathematics does not pollute. Of course, if an algorithm runs in a data centre that is not the most advanced in terms of energy consumption, that is a problem, and we are engaging companies on green data centres. In terms of the future of Europe with AI chips, there is material scope to improve the regulation. The European Council wants to introduce a minimisation requirement for algorithms, but that might reduce accuracy. We have solar grids, windmills and so on, and algorithms help to stabilise those grids, so I do not think AI is the problem in having a better environment. We need proportionality on this complicated issue.

J. L. Lacapelle [15.03]

We are in a technological revolution. Regulation is welcome; we need a balance between privacy and digital companies, and we need to find compromise. Some uses are high-risk activities: one could not use AI to establish a person’s emotional status, nor use it to change human behaviour. You say you have sandboxes, but will you in practice favour the huge companies that can shoulder the costs? We have to avoid any kind of nightmare development à la Minority Report. And I was sorry to see no mention of the storage of data in the EU.

Viola Roberto [15.07]

Social scoring and emotional manipulation are prohibited — that is the easy case. When it comes to companies, consider ‘marketing’: it induces people to spend a little more than they can afford; you take a loan to get that thing. These activities are acceptable — you cannot forbid instigating dreams through marketing. This is where it becomes difficult to draw the line of what is forbidden. Targeted advertising is dealt with in the Digital Services Act… It is done by means of algorithms, and the aim is to arrive at something proportionate between what is forbidden and what needs regulation through consumer protection law. On the storage of data in Europe: the Data Governance Act explains the limits and what should be acceptable and not acceptable in this respect. On AI, we are an open society and accept products from the other side of the world, but we need to know where they come from. AI in Europe needs to be trained on data in Europe for safety reasons; we need to make sure that AI is fit for Europe.

K. Zlotowski [15.10]

(ECR shadow in IMCO). AI is a change in technology and it can interfere with people’s private lives. There is a focus on entrepreneurship, and there are many points I would like to mention; many things are not covered. We need to look at what the aim is, and at areas where we do not have AI yet. Costs for companies: why is this not reflected in the definition? What about the end-user — who is using this tech? Responsibility will be problematic, for example between doctor and patient: who is using the AI? We need clear criteria for AI; this is not yet clear for companies. There are lots of exceptions in the high-risk uses that affect the development of other products. We need to boost innovation and support SMEs, but the text needs to be reworded; we need more freedoms, for example when we think about administration and data processed for research.

Viola Roberto [15.13]

This is a problem of balance and of knowing the different actors. (Problem with microphone.) Take companies developing software to detect skin cancer: the companies have to be certified for this software and for how the data is collected. It cannot be the doctor’s responsibility to decide whether this software works or not; the doctor carries his own responsibility, and the patient cannot take on the responsibility of the doctor. Maybe the patient can use the camera himself? Even then, there are different responsibilities. The point is that you cannot skew the responsibility; it has to be a fair share, and for the software itself it has to rest on the companies producing it, not the doctor or the patient.

R. Rooken [15.16]

What I hear today is about algorithms running on a server — that is looking at software as in the 1980s. Consider the power of the Internet in 2022: you buy a book online with two clicks. Future AI will be without an ‘algorithm’ in that sense; systems are started by people, create languages that people don’t understand, and nobody asks questions. In December 2021 a mafioso was detected by AI — should police in Spain or Italy not be able to use such a tool in these situations?

Viola Roberto [15.19]

Technology is a field to control; it changes the world and the rules, but the complexity of technology is not an excuse to say nothing can be done. Learning from accidents is the norm in the aviation industry: controls and checks. Inventing fast ways… This regulation does not touch the majority of development; it looks at where there could be damage to people and fundamental rights. That advanced tools are used during investigations is not surprising, as long as they do not introduce bias and are effective. Your example is a very effective way of using such a tool. The regulation is measured in the way it approaches the problem: it does not look at neural networks as such; it is horizontal in scope and about the risks society is facing.

A. Cavazzini [15.21]

We are coming to an end for the shadows.

Konecna Katerina [15.22]

We need to know that algorithms are doing public good; there is a need for transparency in human-mediated processes. We need to understand impact to understand… to understand harmful or unlawful practices… so-called debiasing. It will concentrate power in the hands of the service providers; it gives them the power to decide when discrimination occurs. We should not support a decentralised system that reproduces… It should be regulated as a life-cycle… What do you think about the criticism of the AIA that it draws its inspiration from consumer regulation, addressing the point of the AI life-cycle where rights are drawn in that downstream deployers can use? (Interrupted for the reply of the Commission.)

Viola Roberto [15.25]

On cyber security: there will be cyber attacks; the question is how ready society is when they happen. I do not think we can design algorithms with zero bias or zero errors, so we need digital governance that can identify the effects. You are right that, by choice, the proposal puts more emphasis on the producers than on the users; this is a specific choice by the experts, because we are at the beginning of the journey and we do not want to give users too many responsibilities. The user needs reasonable assurance, which goes into verification, since datasets can evolve and algorithms learn. What is the responsibility of the manufacturer versus the user? If you have an accident, you cannot simply claim it is the responsibility of the producer. For algorithms that continue to learn, the question is what the respective responsibilities of producer and user are; the regulation places more of the responsibility on the producers.

A. Cavazzini [15.28]

We are moving to the second round of speakers.

J. F. Lopez Aguilar [15.29]

Now we open up the floor to other members of LIBE and IMCO.

Andreas Schwab [15.29]

(IMCO). We have two legal tools: one is the machinery directive, and we have the proportionality directive for services. AI could be a regulated service. Could you outline key areas of common concern? In the latter, member states must ensure proportionate rules — how the proposal ensures this is my question.

Viola Roberto [15.31]

It is complementary to the machinery directive. The AI regulation takes three steps. There are no AI-specific provisions in this (the machinery directive?); under it, AI could be a machine or a service. The same conformity assessment bodies should be the ones dealing with algorithms; we have experience with some specialising in testing along the AI path. We do not want to change existing legislation, because then we would lose the neutral and horizontal nature of this proposal. The instruments you mention are not specific to AI.

Adriana Maldonado Lopez [15.33]

We will be the first continent regulating artificial intelligence; we already have the DMA. We need tools to continue to be resilient and to innovate. There have been discussions of impact assessments. There are sectors, such as the health sector, where we could have AI-assisted medical care. How do we assess the risks, and how do we reach the assessments that guarantee safety? Each member state is introducing its own legislation, so how can we ensure that this is harmonised across the EU?

Viola Roberto [15.35]

Thank you for the question. Maybe a navigation map helps. The AI regulation looks at the production of algorithms that might entail a risk for persons or society. The DSA looks at the operation of online platforms, so the angle there is the service angle (with the country-of-origin rule). The DMA looks at the asymmetrical relationship between companies that operate a platform service as gatekeepers and the users of that service. The medical application is clearly identified: we make no change to the Medical Device Regulation, which ensures that what is on the market is safe. What the AI regulation does is integrate medical devices with the AI dimension; the automatic part will be regulated by the AI regulation. Safety in accordance with performance and the AI-specific aspects will be spelled out in the AI regulation.

Tudorache Ioan-Dragos [15.38]

How can we ensure future-proofing? Annex III is exhaustively defined; we can add within categories, but what if other categories pop up in the future? How do you see the future-proofing of this text?

Viola Roberto [15.38]

To be future-proof, the annexes need to be amendable. There will be a need to maintain them and to regularly update the regulation — it is a bit of a layered mechanism. The regulation is horizontal enough and the annexes are specific enough, constructed in a way that they can be amended, subject to the co-legislators and the possibility to come back and change things. It is a long journey, and it will be necessary to revise the architecture some years from now, counting from the approval.

A. Geese [15.40]

(IMCO, Greens). When AI is applied to humans, the datasets are old, and we know that our societies are becoming more diverse, so we carry old stereotypes into our future. How can we address that? You say data has to be representative — but representative of what exactly? Most programmers are male; would that mean that 80% of new employees need to be male, or should data represent society at large? There are frightening tools for surveillance: there is a bracelet that starts vibrating on ‘inefficient’ movements, which infringes human dignity. What does this mean for national workers’ rights? Courts tend to place internal market legislation above workers’ rights. There is a disproportionate impact on women, minorities, women of colour and so on — why can we not have these groups represented in the development of AI tools?

Viola Roberto [15.43]

How many years do I have to respond? Let me start with something easy to answer: the example you mentioned with the bracelet and continuous control is forbidden under many labour laws in Europe. The EU has just presented rules whereby users need to be informed and unions involved; such practices should be addressed… We need other regulation, such as protection for workers, alongside this one. Again and again we learn that humans can be even worse than machines; if machines can be fairer, why not let them deliver? I worked in a bank: if an algorithm looked only at the past it would repeat it, whereas the data should be representative of the society that exists now. A good designer should not do this; there are possibilities to work with the datasets, and you have to describe what you are doing. That is the area this regulation covers.

J. F. Lopez Aguilar [15.47]

Two members not on the speakers’ list.

M. Walsmann [15.47]

This regulation will apply to third countries as well. What about AI services developed partly in EU countries and partly in third countries? Many companies carry out the first part of development in the US or China, with components that have not been validated in the EU. In what way will there be interaction with those developers and users, and what possibilities do we have to regulate this?

Viola Roberto [15.48]

Everyone has to comply, whether the system is produced in Europe or outside of Europe: if you are on the European market, you have to comply. On top of this, we say that whoever produces algorithms has to be in line with the European population. I can quote a specific example of algorithms supplied to hospitals that were in some ways not precise for the local population. It is not about being protectionist: a system recognising a pedestrian must recognise a European pedestrian, whatever this means. For sure there are characteristics of a population that need to be respected when programming the algorithms, and the conformity assessment bodies have to be consulted when products enter Europe.

Basso Alessandro [15.50]

The Commission’s proposal on AI is a vanguard on this issue, but its uniqueness might create implementation problems. How can we be sure that these rules do not give rise to the delocalisation of data to countries that don’t apply the risk-assessment criteria? Data is a set of numbers that can be delocalised; what guarantees do we have that activities considered risky in Europe are not going to be carried out in the US or elsewhere without trace?

Viola Roberto [15.52]

We have other regulation on transfers in the GDPR where personal data is concerned; in any case the GDPR must be respected outside of Europe, and this is clearly valid for AI datasets. It is not possible to say that because ‘the data is for AI’ it can be transferred. We work within those limitations… Other regulation, such as the Data Governance Act, covers non-personal data, and if the user wants to object there are possibilities for redress.

A. Cavazzini [15.53]

(Concluding remarks.) I have to say I really enjoyed the debate, as it went into detail, with Viola giving good answers — good foundations for the months to come on this file. We will have a hearing with experts from academia in March, and later the LIBE-IMCO hearing in April. I look forward to fruitful collaboration between our committees.

J. F. Lopez Aguilar [15.57]

We got the chance to have an open conversation to clear our minds before making laws — surely valuable for the members of both our committees. I hope that in the end the European Commission’s standard of regulation will be the standard in the world. We have set the highest standard worldwide, gone global, and set an example that is difficult to match. We should take the initiative and make a prognosis of this field of scientific and technological development through legislation. This conversation will serve that purpose. We thank you.

This is #1000daysofAI and you are reading article 509. I am writing one new article about or related to artificial intelligence for 1000 days. For the first 500 days I wrote an article every day; from day 500 to 1000 I write at a different pace.


Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau