A Google Question Answering Competition at NeurIPS 2020

A Challenge and Workshop in Efficient Open-Domain Question Answering

I came to learn about an interesting competition from the Google AI blog.

This is apparently one of the competition tracks at NeurIPS 2020.

If you want to skip this article and jump to the competition the link is here:

If you are interested in machine learning then you might have heard about NeurIPS.

NeurIPS, or the Conference and Workshop on Neural Information Processing Systems, is a machine learning and computational neuroscience conference held every December since 1987.

It is known as one of the most competitive conferences in artificial intelligence to get a paper accepted to.

So, making your mark in one of its challenges could make a difference, both for you and for the research teams hosting it.

Before we get into the topic itself, a quick thing to know: the competition has several tracks.

Competition Tracks

“This competition has four separate tracks. In the unrestricted track contestants are allowed to use arbitrary technology to answer questions, and submissions will be ranked according to the accuracy of their predictions alone.

There are also three restricted tracks in which contestants will have to upload their systems to our servers, where they will be run in a sandboxed environment, without access to any external resources. In these three tracks, the goal is to build:

the most accurate self-contained question answering system under 6Gb,

the most accurate self-contained question answering system under 500Mb,

the smallest self-contained question answering system that achieves 25% accuracy.

We will award prizes to the teams that create the top performing submissions in each restricted track.

More information on the task definition, data, and evaluation can be found here.”

There is a lot of variation and possibilities within this competition, but what is the topic?

“Open domain question answering is emerging as a benchmark method of measuring computational systems’ abilities to read, represent, and retrieve knowledge expressed in all of the documents on the web.”

In this sense it is important to large companies that have search integrated into their products, and I am sure you can think of a few of these.

The organisers come mainly from Google Research.

The competition was launched in July 2020, so there is still time to participate.

For further information one can read Google AI’s blog.

“One of the primary goals of natural language processing is to build systems that can answer a user’s questions. To do this, computers need to be able to understand questions, represent world knowledge, and reason their way to answers.” — Google AI, the 23rd of June 2020.

The article from Google AI describes what has been done so far.

Traditionally, the answers to a question could be retrieved from a collection of documents or a knowledge graph.

They give the example of: “When was the declaration of independence officially signed?”

After the question is posed, a system can find the most relevant article on Wikipedia and locate the answer within it.

You might have experienced this searching for something on Google.
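The retrieve step of this retrieve-then-read approach can be sketched with a tiny TF-IDF retriever over a toy corpus. This is a minimal illustration under stated assumptions, not the competition's or Google's method: the `DOCS` corpus, the scoring scheme, and the `retrieve` helper are all made up for this example.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for Wikipedia articles (illustrative only).
DOCS = {
    "Declaration of Independence": (
        "The United States Declaration of Independence was officially "
        "signed on August 2, 1776, in Philadelphia."
    ),
    "Jeep": "Jeep is an American brand known for off-road vehicles and SUVs.",
}

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def tf_idf_vectors(docs):
    """Build a simple TF-IDF vector for each document."""
    tokenized = {name: tokenize(text) for name, text in docs.items()}
    df = Counter()  # document frequency per term
    for tokens in tokenized.values():
        df.update(set(tokens))
    n = len(docs)
    vectors = {}
    for name, tokens in tokenized.items():
        tf = Counter(tokens)
        vectors[name] = {
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        }
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(question, docs):
    """Return the name of the document most similar to the question."""
    vectors = tf_idf_vectors(docs)
    # The question vector uses plain term frequencies; terms shared
    # with a document drive its cosine score up.
    q_tokens = tokenize(question)
    q_tf = Counter(q_tokens)
    q_vec = {term: count / len(q_tokens) for term, count in q_tf.items()}
    return max(docs, key=lambda name: cosine(q_vec, vectors[name]))

question = "When was the declaration of independence officially signed?"
print(retrieve(question, DOCS))  # → Declaration of Independence
```

A real open-domain system would of course index millions of passages with learned dense representations rather than word overlap, but the shape of the pipeline — score every document against the question, read the best one — is the same.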

They mention that approaches to this may have changed with transfer learning, for example with the T5 model.

There are also models trained on text that can answer questions directly, without retrieving documents.

So how should knowledge be stored?

That is a burning question that has contributed to this competition taking place.

The following illustration was shown in the blog article from Google AI:

An illustration of how the memory budget changes as a neural network and retrieval corpus grow and shrink. It is possible that successful systems will also use other resources such as a knowledge graph.

“The goal is to develop an end-to-end question answering system that contains all of the knowledge required to answer open-domain questions.”

They mention that there are ‘no constraints’ as to how this knowledge is stored.

That is why there are different tracks, both ‘unrestricted’ and ‘restricted’.

There will also be human evaluation in the competition.

For example, for the question “What type of car is a Jeep considered?” both “off-road vehicles” and “crossover SUVs” are valid answers.

A funny side note is that they will put each of the winning systems up against human trivia experts (the 2017 NeurIPS Human-Computer competition featured Jeopardy! and Who Wants to Be a Millionaire champions).

This time it will be in a real-time contest at the virtual conference.

Google is still asking the big question: how do we answer questions?

This is #500daysofAI and you are reading article 385. I am writing one new article about or related to artificial intelligence every day for 500 days.

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
