A Discussion Between Makers of Coded Bias & The Social Dilemma
Today I watched a discussion between some of the makers of The Social Dilemma and Coded Bias. This article is not extensive coverage of the discussion (you can watch it yourself), but rather a few thoughts and a few questions from the participants.
You can watch the discussion here:
Coded Bias & The Social Dilemma // Ford Foundation & Omidyar Network - Crowdcast
Register now for Together Films' event on Crowdcast, scheduled to go live on Tuesday, September 29, 2020 at 1:00 pm…
One aspect that gripped me was this quote from Joy Buolamwini:
“I am excited by the organisations that are here, because it is the now. The now what?”
At the end of the discussion, Shalini Kantayya announced that a Declaration of Data Rights as Human Rights will launch at the global premiere of Coded Bias on 11 November 2020.
Data Rights as Human Rights
Looking at the chat
One of the exciting aspects of this event was the parallel discussion in the chat. Here is one message that stuck with me:
“Solutions come from organizing with people. Without pressure coming from organized and activated grassroots, all the advocacy in the world won’t make a difference.” — Lilly Irani
Lilly Irani is associate professor of communication and science studies at the University of California, San Diego.
In addition, participants had the opportunity to submit questions. I will post these questions here; perhaps they will spark a few ideas for you.
- “Could you share more about the original intention of The Social Dilemma documentary? I could understand the idea of bringing some important discussions about privacy to the public, but as a woman of color developing technologies, I missed hearing more diverse perspectives from people directly impacted by these technologies and from those working to solve this problem. The documentary leaves whoever is watching without future perspectives on the social consequences and on what is being done to solve this dilemma.
- I am particularly curious about the conundrum of several of the interviewees in The Social Dilemma becoming wealthy due to their creation of unethical technology — yet now openly admitting they feel these technologies are negative. I would have loved to ask — how do they plan or intend to redistribute wealth to the communities and people they extracted it from by designing these technologies?
- The Social Dilemma focuses on potential harms to all of us. That’s important, but algorithmic harms are unequally distributed. How can we shift the frame so that solutions focus on those who bear the greatest burden of sociotechnical harms?
- Can you please share more about the choices around which expertise and perspectives to include in Social Dilemma?
- Something Cathy O’Neil has been advocating for in NYC is more transparency into the algorithms that affect New Yorkers’ lives. I would love to hear the panel’s thoughts on algorithmic transparency entering the national conversation, and whether we can begin to create laws to regulate runaway tech companies.
- Do you have any ideas on what business model could be more compelling than advertising? Aka — if we’re trying to overcome an addiction — what is actually strong enough? Given carrot/stick options, regulation is all stick. Is there any carrot?
- I would like to know how you received the criticism that most of the people who spoke in the documentary are white men in technology, and that it continues to follow this pattern. Would you think of making a second documentary featuring the perspectives and work on technology of Black women, LGBT people, and disabled people?
- Can we talk about how to stop Palantir? Their stock launch is tomorrow; there is a huge and growing campaign against them due to their key work with ICE. https://www.washingtonpost.com/politics/2020/09/29/technology-202-activists-slam-palantir-its-work-with-ice-ahead-market-debut/”
Another contemporary concern was Palantir.
One comment in the chat came from Rasha Abdul Rahim from Amnesty Tech, Amnesty International, based in London.
“Rasha Abdul Rahim: re: Palantir and ICE, Amnesty published a briefing on this yesterday, building on the amazing work of Mijente: https://www.vice.com/amp/en_us/article/qj4y9q/palantir-admits-to-helping-ice-deport-immigrants-while-trying-to-prove-it-doesnt”
At the end of the event, a descriptive poster with ways to take action was shown.
This is #500daysofAI and you are reading article 483. I am writing one new article about or related to artificial intelligence every day for 500 days.