Screenshot from the discussion, taken on the 29th of September

A Discussion Between Makers of Coded Bias & The Social Dilemma

How do we treat people with technology?

Today I watched a discussion between some of the makers of The Social Dilemma and Coded Bias. This article is not extensive coverage of the discussion (you can watch that yourself), but rather a few thoughts and a few questions from the participants.

“I am excited by the organisations that are here, because it is the now. The now what?”

At the end of the discussion, Shalini Kantayya announced that a Declaration of Data Rights as Human Rights will launch at the global premiere of Coded Bias on 11 November 2020.

Data Rights as Human Rights

To be launched. Screenshot from the discussion on the 29th of September.

Looking at the chat

One of the exciting aspects of this event was the parallel conversation in the chat. Here is one message that stuck with me:

“Solutions come from organizing with people. Without pressure coming from organized and activated grassroots, all the advocacy in the world won’t make a difference.” — Lilly Irani

Lilly Irani is an associate professor of communication and science studies at the University of California, San Diego. Other questions from the chat included:

  • I am particularly curious about the conundrum of several of the interviewees in The Social Dilemma becoming wealthy due to their creation of unethical technology — yet now openly admitting they feel these technologies are negative. I would have loved to ask — how do they plan or intend to redistribute wealth to the communities and people they extracted it from by designing these technologies?
  • The Social Dilemma focuses on potential harms to all of us. That’s important, but algorithmic harms are unequally distributed. How can we shift the frame so that solutions focus on those who bear the greatest burden of sociotechnical harms?
  • Can you please share more about the choices around which expertise and perspectives to include in The Social Dilemma?
  • Something Cathy O’Neil has been advocating for in NYC is more transparency into the algorithms that affect New Yorkers’ lives. I would love to hear the panel’s thoughts about algorithmic transparency entering the national conversation, and whether we can begin to create laws to regulate runaway tech companies.
  • Do you have any ideas on what business model could be more compelling than advertising? Aka — if we’re trying to overcome an addiction — what is actually strong enough? Given carrot/stick options, regulation is all stick. Is there any carrot?
  • I would like to know how you received the criticism that most of the people who spoke in the documentary were white men in technology, and that it continues to follow this pattern. Do you plan on making a second documentary with perspectives and work on technology from Black women, LGBT people, and disabled people?
  • Can we talk about how to stop Palantir? Their stock launch is tomorrow; there is a huge and growing campaign against them due to their key work with ICE. https://www.washingtonpost.com/politics/2020/09/29/technology-202-activists-slam-palantir-its-work-with-ice-ahead-market-debut/

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau