
The Top Myths About Advanced AI

According to the Future of Life Institute

Alex Moltzau


After writing about the most recent research proposals by the Future of Life Institute (FLI), I decided there would be plenty of information to look into, and it may fill up quite a few of my days focusing on AI Safety. FLI has a page on existential threats, where artificial intelligence is listed as one of them, so I decided to read the page and, in a way, share or regurgitate it. After all, I do not fully understand why each point is or is not a myth. Hey, I simply saw an interesting cartoon displayed and it caught my attention, this one:

Future of Life Institute, Benefits and Risks of Artificial Intelligence.

The referenced page does go into more depth on each of these topics; however, let it suffice to say that these are apt summaries. What I find interesting is that FLI has far more information on artificial intelligence than on climate change as an existential threat, but then again AI does seem to be their focal point, given the composition of the founding team.

One of the resources listed at FLI was a link to a collection of abstracts and excerpts on the topic of existential risk, seemingly for a class at Harvard on Catastrophic Risk: Technologies and Policies.

A short thought on existential threats

I am new to security studies, and I find it quite hard to dive into existential threats such as artificial intelligence or climate change. It is as exciting as it is frightening, and my perception wavers between those two, or somewhere in between. “Some emerging technologies promise to significantly improve the human condition, but come with a risk of failure so catastrophic that human civilization may not survive,” according to Seth Baum.

I find it strange to sit down with my wife and think about nuclear warfare or autonomous weapons, and in one way it should not draw my attention away from the present, yet it does. I am on day 68 of writing about artificial intelligence, and day 18 of writing about AI Safety. I consider how hard it must be for people with more responsibility than me for other people’s safety, although in a way we are all responsible. People in government, defence or private security must have a challenging time.

Security is freedom from, or resilience against, potential harm (or other unwanted coercive change) caused by others. Beneficiaries (technically, referents) of security may be persons and social groups, objects and institutions, ecosystems, or any other entity or phenomenon vulnerable to unwanted change by its environment.

That is one common definition, and the relationship it describes is complicated. Protecting: I think most of us want to protect others from harm or potential harm. Then again, it brings to mind the nuclear weapon drills they did in the US; now, with the climate and the ‘war effort’ needed to reduce emissions, it is hard to think about what the situation will be like.

Security seems straightforward, and so does feeling safe, yet both are complicated. That seems to be my answer of late: it is complicated. I have been looking into artificial intelligence, safety, artificial general intelligence, inequality and climate change. Solving large issues on a keyboard seems somewhat unlikely, and I have never been interested in nihilism, but I am getting close.

Nihilism is the rejection of all religious and moral principles, in the belief that life is meaningless. I have heard people talk of it, yet I have never agreed and I still do not agree. I can see where the position is coming from, and from one perspective it is hard to argue with. On the other hand, life is meaningful: if we are to attempt anything, we should attempt to make life slightly better. My perspective is meaningful to me and meaningless to some.

When I say better, it is not better in the sense of ‘improve on or surpass’. After an ethics class, a fellow student claimed I would be placed in virtue ethics. Virtue ethics is a broad term for theories that emphasise the role of character and virtue in moral philosophy, rather than either doing one’s duty or acting in order to bring about good consequences. Maybe my fellow student was right. When someone asks: why do good? The easy answer is: to do good. Then the question becomes: what is good? What is virtuous?

To protect humanity and to ensure our ecosystem functions so we can get along seems virtuous. A common criticism of virtue ethics is that it offers a self-centred conception of ethics, because human flourishing is seen as an end in itself and the theory does not sufficiently consider the extent to which our actions affect other people. Another critique is that it does not provide clear principles, and that the ability to cultivate virtues is affected by other factors such as education, society, friends and family.

I remember reading, in Plato’s accounts of discussions with Socrates, that a man cannot be just in an unjust society, which leads on to the discussions of the polis (city). Can artificial intelligence be just in an unjust society? Justice, in its broadest context, includes both the attainment of that which is just and the philosophical discussion of that which is just. I think I know, yet I am not sure, what just artificial intelligence is.

This is day 69 of #500daysofAI. My current focus for days 50–100 is AI Safety. If you enjoy this, please give me a response, as I do want to improve my writing and discover new research, companies and projects.


Written by Alex Moltzau

Policy Officer at the European AI Office in the European Commission. This is a personal blog and does not represent the views of the European Commission.
