Some Current Issues in Funding of Ethical Artificial Intelligence

500 days of Artificial Intelligence #3

I am challenging myself to write and think about the topic of artificial intelligence for the next 500 days with the #500daysofAI.

This is inspired by the film 500 Days of Summer where the main character tries to figure out where a love affair went sour, and in doing so, rediscovers his true passions in life.

I started my first day discussing some basic definitions of artificial intelligence (AI). On the second day I wrote an essay on sustainable cities and applied AI. What I mentioned there was that applications of AI rapidly enter a sphere of morality. What I perhaps did not mention was that this can in some cases be due to a lack of existing regulations surrounding this progress, or confusion about how existing regulations should be applied to these seemingly novel solutions. At this point we enter discussions of ethics, also known as moral philosophy.

So who funds ethics?

I only have time for a short article today, but I will bring to light two examples. Two of the largest actors within the field of applied artificial intelligence are Facebook and Microsoft, also two of the biggest tech companies in the world.

Earlier this year, Facebook decided to fund an AI ethics institute at the Technical University of Munich (TUM) with $7.5 million. Sheryl Sandberg announced this at DLD, one of the most prestigious tech conferences.

Microsoft seems to be a close partner of the Stanford Institute for Human-Centered Artificial Intelligence. Although this remains to be confirmed, it has been mentioned that the institute intends to attract $1 billion in funding, and that its staff is currently dominated by mainly male, white researchers.

Why is this an issue?

It may be a crude comparison to draw, but let us remember other fields where possible addiction and severe consequences could lead to a certain conflict of interest. Is there an issue with tobacco companies sponsoring research on whether smoking causes cancer? Has there been an issue with pharmaceutical companies sponsoring research institutes that work on regulation? We know in both cases the answer to be yes. Although it may not be the case for all industry actors, it seems to be a general trend among larger companies. Similar actions have also been taken by mining companies sponsoring research on sustainability within mining.

Who else would fund this research?

There is a clear interest from companies in this direction: funding research that makes them better companies. However, Facebook's own conduct raises possible issues, and how would this dynamic work if unethical conduct by the funder were discovered? Large technology companies understand they have to do better, and many have faced increased regulation in recent years. I believe this question needs to be answered so that we do not repeat, with companies selling technology, the same mistakes made in other industries in the past.

Self-policing in applications of AI

We have to ask ourselves whether research into the ethics of applied AI can, without implications, be sponsored by the same companies it questions. Beyond that, we have to ask how we would otherwise approach this important aspect of developing technology without relying on large companies to police their own applications of AI.

This is day three. I tried to keep it shorter. Hope you enjoyed it.

Please leave a comment or tell me what other areas you think I should explore.

-Alex out



AI Policy and Ethics. Student at University of Copenhagen, MSc in Social Data Science. All views are my own.
