The US Principles for AI and OECD
The Great Difference Between the OECD Principles and The US Principles
This may be one of the shortest articles I will ever write, but I think there is one major difference between the OECD Principles of AI and the US Principles.
The US is a member of the OECD and could have chosen to adopt these principles, as some other countries have. Instead, it has taken a completely different direction, one that may negatively influence other countries to follow its irresponsible approach.
I will state this clearly: a complete disregard for the current climate crisis, placing fairness after risk management and cost, and a hands-off regulatory approach to AI are not good principles.
Let’s have a brief look at a shortened version as posted by MIT Technology Review:
- Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
- Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
- Scientific integrity and information quality. Policy decisions should be based on science.
- Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
- Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
- Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
- Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
- Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
- Safety and security. Agencies should keep all data used by AI systems safe and secure.
- Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.
Let’s contrast this with the OECD Principles of AI:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Listen to this: people and planet.
Yes! AI does not operate in a vacuum. It gives an indication of responsibility.
These principles put sustainability front and centre, closely followed by human rights and accountability.
Although the US Principles look great at face value, they are an international disgrace for the community and represent the current administration's lack of focus on the climate crisis.
As I have mentioned previously, the Obama administration's plans for AI included these considerations, while the Trump administration's do not.
This was not unexpected, but it was highly unnecessary: it would be better for the US to default to the OECD Principles rather than its own when, for example, introducing laws or solutions.
The US principles in their current shape and form are not very trustworthy.
That is my short opinion on the matter. I would be happy and open to hear other opinions on why I am completely wrong.
This is #500daysofAI and you are reading article 217. I am writing one new article about or related to artificial intelligence every day for 500 days. My current focus for days 200–300 is national and international strategies for artificial intelligence.
Photo in banner from @mrthetrain.