Photo from San Francisco by @therealamogh

Exploring The Partnership on AI

Gathering the biggest technology companies with nonprofits

When I found the Partnership on AI, I was astonished to see that some of the biggest technology companies in the world had come together with nonprofits to consider artificial intelligence. It therefore made sense to explore the consortium further and learn about its current focus, so that I could form a better-informed opinion and understand some of its activities. Although you may see headings such as ‘history’ or ‘goals and work’, this article is not in any way comprehensive. Still, I think it is important to know what the Partnership on AI is, how it came to be, and what it is doing.

Partnership on AI (full name Partnership on Artificial Intelligence to Benefit People and Society) is a technology industry consortium focused on establishing best practices for artificial intelligence systems and on educating the public about AI.

Partnership on AI (PaI) is on a mission to shape best practices, research, and public dialogue about AI’s benefits for people and society. Their members range from for-profit technology companies to representatives of civil society, to academic and research institutions, to start-ups and beyond.

The History of Partnership on AI

When I saw the Partnership on AI I wanted to know its history, brief though it is, since the organisation was founded only a few years ago.

The consortium was announced on September 28, 2016, with Amazon, Facebook, Google, DeepMind, Microsoft, and IBM as its founding members. Apple joined the consortium as a founding member in January 2017.

The interim co-chairs were Eric Horvitz of Microsoft and Mustafa Suleyman of DeepMind. An online ‘letter’ from both of them, published at the founding of the Partnership, is worth a read.

“…with successes come new concerns and challenges based on the effects of those technologies on people’s lives. These concerns include the safety and trustworthiness of AI technologies, the fairness and transparency of systems, and the intentional as well as inadvertent influences of AI on people and society.”
- Eric Horvitz and Mustafa Suleyman, September 28, 2016

Of course, as a side note, this may sound ironic if we consider the controversy over Mustafa Suleyman’s team’s handling of 1.6 million UK citizens’ patient data. However, I think the intention here is what matters.

In October 2017, Terah Lyons joined the organisation as its founding Executive Director. She brought with her expertise in technology governance, with a specific focus on machine intelligence, AI, and robotics policy, having formerly served as Policy Advisor to the United States Chief Technology Officer Megan Smith. Terah and her team report to a board of directors composed of six representatives of for-profit institutions and six Directors from not-for-profit perspectives and institutions. On paper, at least, there is an equal balance.

The Partnership on AI is expressly not a lobbying organisation. By its own account, it is intended as a resource for policymakers, particularly by conducting research that informs AI best practices and by exploring the societal consequences of certain AI systems.

They are additionally developing policies around the development and use of AI systems. “The activities of the Partnership on AI are funded by charitable contributions in the form of membership dues paid by its for-profit Partners and contributions and grants from non-profit organisations and foundations.” Their office is currently located in San Francisco.

Since then, more than 100 notable organisations active on five continents have joined the Partnership, more than half of them civil society and nonprofit groups such as Amnesty International, the Future of Life Institute, and GLAAD.

In April 2019, the organisation released an analysis warning that AI-driven risk assessment tools are not yet ready to replace cash bail systems.

The most recent news I could find is from August 27, 2019, with Terah Lyons talking about ethics, whitewashing, moonshots, and power in VentureBeat.

In September 2019, the Partnership will mark its third year with an annual gathering of member organisations in London. From October to December, the Partnership will collect public input on ABOUT ML from groups typically underrepresented in technology, following the Diverse Voices method devised by the University of Washington’s Tech Policy Lab.

What Are the Partnership on AI’s Goals and Work?

Looking for more information, my first step was to visit the Partnership on AI’s website and read about their current approach. The site contains far more information than I can cover here, so I have attempted to keep this section brief.

The goals of the Partnership on AI are the following:

  • Develop and share best practices.
  • Advance public understanding.
  • Provide an open and inclusive platform for discussion & engagement.
  • Identify and foster aspirational efforts in AI for socially beneficial purposes.

Here PaI talks of outcomes, quality, and costs in healthcare and transport; ‘effective’ and ‘careful’ are two key words. There is an argument that robotic systems can enhance life and prevent needless deaths. However, this comes with great risks to safety, and ethics and people’s preferences are mentioned as well.

“We will pursue studies and best practices around the fielding of AI in safety-critical application areas.”

AI creates value by recognising patterns and drawing conclusions from large amounts of data. Diagnostics and recommendations in a variety of areas are highlighted for their potential ‘breakthroughs’. Again, they ask us to be wary of assumptions and biases in data. They call for the development of ways to detect and correct errors rather than replicate them. We need systems that can explain why a conclusion has been reached.

“We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.”

Here the discussion turns to value, and to the risk that automation generates less value for some. There is a desire for innovation and competition, as well as for widely sharing the ‘fruits of AI advances’. How do we ensure that potential disruptions are kept to a minimum?

“We seek to study and understand best paths forward, and play a role in this discussion.”

Augmenting is a keyword here: more accurate diagnoses, assistance to drivers, and opportunities for R&D in best practices for AI-human collaboration. Trust is not mentioned, but implied, in making AI systems more reliable and easier to understand.

Recommendations can be as bad as they can be useful, especially if they manipulate people and influence their opinions.

“We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.”

The last point is to promote public good in the realms of: “…education, housing, public health, and sustainability.” They talk of moonshots as ambitious big bets or creative ideas harnessing AI.

“We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society’s most pressing challenges.”

In the recent VentureBeat interview mentioned above, Lyons is asked for her thoughts on the term ‘ethics washing’. The reporter asks: “What would your response be to someone who would say that the Partnership on AI is doing good research and bringing people together, but it’s also helping tech giants be involved in some form of ethics washing?”

The argument Lyons makes is that it goes back to the founding, which was mainly led by the R&D divisions of the big technology companies. I am not sure how persuasive this argument is. She thinks AI has the ability to hold a mirror up to ourselves as a society and ask: Where have we experienced discrimination or marginalisation?

Judging the organisation on this excerpt alone would be unfair, yet it is important to consider that we need to go beyond believing that AI can easily surface and judge such decisions. Unravelling racist historical data is important, but the same patterns could have been seen simply by spending a few weeks in different courtrooms or by looking at existing statistics.

The argument that we do not have enough data, or that we need more data, is pervasive; it is certainly to the technology companies’ benefit, and sometimes in our own interest. One thing is certain: Lyons has taken part in a variety of policy processes and understands how to navigate between many actors. It will no doubt be exciting to follow the results of the meeting in London and the research to be done this winter.

This is day 88 of #500daysofAI. I am currently writing about AI Safety.

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau