Why Fortune is Incorrect About EU’s AI Strategy
What’s Wrong With Diverting Attention Away From Risk In the Health Sector?
I do not often write short opinion pieces, but I have to make an exception for a recent piece about the EU's AI strategy.
The browser tab for the article still showed what was perhaps an earlier title, hence my subtitle.
The piece is now called The problem with the EU's A.I. strategy.
The article says that high risk is hard to define, and it questions whether it is possible to choose which areas to draw a line around. Margrethe Vestager, according to the article, is 'best known as Europe's tough anti-trust cop'. She is also currently known as Executive Vice President of the European Commission for a Europe Fit for the Digital Age, and has, according to the article, said she is more concerned with applications in life-critical areas (less so Spotify and Netflix).
The article argues:
“That all sounds reasonable. But in practice, lawmakers are likely to find it much more difficult to draw nice, fine-tipped Montblanc circles around high- and low-risk uses of A.I.”
He then refers to a tweet exchange between two AI experts. Thankfully, AI experts are not legislators in most cases, as they at times seem to ignore aspects of the applications they develop that legislators are more cautious about. He mentions targeted advertising, which is a fair point.
Yet in return we have to ask: how would you do it otherwise?
Of course, it is not always a journalist's role to outline suggestions; probing some faulty thinking or raising a few questions is enough.
Why Is the Fortune Article Incorrect?
- Its assumptions about how explainable artificial intelligence is treated in a European context show a blatant lack of understanding, or of communication, of how this area is being addressed in EU strategies.
- It diverts attention away from health applications and life-threatening or potentially dangerous uses of AI.
The main summary of the article reads like a question mark next to a statement made by a politician rather than a comment on a strategy. One must question whether the writer has read the white paper, or any strategic documents relating to AI for that matter. I have a few thoughts:
- Did the author just jump at the word ‘high risk’ and react to a comment?
- There is little mention of any concrete part of the strategy in the article at all. What is the author even commenting on?
- There is no mention of any of the vast amount of strategic documents from the European Union pertaining to AI. He could at least have checked out the summaries from the Future of Life Institute readily available online.
The author says:
“(Explainable A.I. is a fraught area, in which one always has to ask: Explainable to whom? To the software developer? To the doctor? To the patient?)”
Explainable artificial intelligence is not a 'fraught area', at least not in EU strategies: large steps are being taken and progress is being made, and to a large extent the EU has deemed it important both to invite discussion and to work directly on implementing ethics guidelines. I am a member of the EU AI Alliance; many members report directly to assist in updating these guidelines and making it possible to implement a more responsible approach to artificial intelligence.
In the EU, explainable AI is a well-defined area that is constantly being addressed in a great number of research institutions across Europe, as well as in industry.
In comparison, the US has not taken such steps, and we can consider its principles a list of ways to avoid taking responsibility by delaying regulation, as I have described earlier. The author should perhaps open his mind to such a comparison if he decides to criticise so heavily without any grounds or arguments other than Twitter comments and big words.
The author could perhaps start by reading the documents included in the EU's strategic work on AI.
If the author wanted to dig deeper, he could join the alliance online. It is not hard to become a member of the EU AI Alliance if there is a wish to contribute to a constructive discussion of this 'problem'.
Health Risk is Important
The US health industry has a track record of ignoring its population. If we look beyond the treatment of rich patients, the United States health care system is one of the worst in the world. Clearly a journalist from the US could take this into consideration and look comparatively at the problem in the EU versus the US. In the EU, patients have far more rights, and I do not for a second believe that big data alone can develop good health solutions. The great solutions have been developed together with doctors, and development should to a much higher degree happen in a safe way that retains the rights of citizens. Yes, some concessions can be made, but protecting patient health data is vital.
Margrethe Vestager and politicians who dare to be so forward-leaning may be needed more than ever, given the amount of power large technology companies have to acquire health data.
Yes, understanding our Spotify listening patterns or Netflix watching patterns can be useful, but we have to start somewhere. By attempting to downplay this argument from Vestager, the author does not contribute in any fashion to enlightening the discussion on the topic.
Yes, consumer rights, and the rights of citizens overall, seem to hold a more important position in the EU, especially within health. The US is struggling with a large opioid crisis driven to some extent by large pharmaceutical companies, and this is well documented. Maybe health risk is an important place for the author to start, and we may wonder whether the US should make it a higher priority. Could large technology companies in the US use artificial intelligence in unexpected and irresponsible ways in the health industry? The answer is already yes, and at this point damage control seems to be on the minds of EU legislators. Tech legislation has been notoriously lax in the US; Stuart Russell mentions this in his newest book Human Compatible: AI and the Problem of Control. The general rule has been that there is a lack of rules.
Derogatory Comment about Margrethe Vestager
In addition, the derogatory comment made by the writer, one that has appeared in several other news articles, reflects a slant towards the technology industry. According to the article, Margrethe Vestager is 'best known as Europe's tough anti-trust cop'. I assume the writer is referring to her time as Commissioner for Competition.
In fact, technology companies from the US have infringed heavily on consumer rights and human rights, in several cases overstepping the boundaries of what is acceptable in any given context through lies, deceit, and highly questionable practices. That these malpractices were pursued under competition law is something many can be grateful for, and an extension of the rights provided to citizens in the EU.
The author seems to have a total disregard for, and lack of understanding of, this area, and a complete lack of respect for Vestager in perpetuating this talk brought forward by a series of technology journalists sympathetic to large technology companies.
I speculate whether this stems from discrimination against women in politics by technology journalists or from simple rudeness; perhaps it is both. Regardless, one would hope that writers for Fortune would show more restraint in this regard, and of course be more respectful to politicians such as Vestager. At the very least, get the facts straight, or refer to concrete policy points, when framing the EU's AI strategy as a problem that needs to be solved.
This is #500daysofAI and you are reading article 271. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for 100 days 200–300 is national and international strategies for artificial intelligence.