Can You Answer These Questions That Facebook Thinks Any Good AI Should?


For decades, the Turing Test has been used as a yardstick by which to measure the abilities of artificial intelligence. But now a team of Facebook researchers has developed a set of reasoning and natural language questions that it thinks any good AI should be able to answer.


Developed by a team in Facebook's AI lab in New York, the questions test whether a system is "able to answer questions via chaining facts, simple induction, deduction and [...] more," according to a research paper published on arXiv. The researchers believe that "many existing learning systems can currently not solve them."

All told, the team has put together a series of 20 questions that test different types of reasoning and the ability to process language. Some are simple questions that require the recall of facts, while others require the answerer to count objects through the use of language, manipulate timelines or reason about the qualities of objects.

We've included three examples of the kinds of questions below (with the answers), but you can find examples of all 20 question types in the research paper. If you stumbled over any, don't worry too much: you'll be pleased to hear that in tests on seven different computer learning algorithms, not a single one got all the answers right. As New Scientist points out, it's the range of questions that makes the test challenging in different ways, so any natural weaknesses are quickly revealed.

Of course, this test goes to show that computers are some way off comprehending the real world in its true complexity. These examples, as you'll see below, reduce our day-to-day experiences to the linguistic level of a children's book, as opposed to the nuanced world humans face at every turn. It's going to be a little while before we're having complex conversations with everyday computers.

There are plenty of researchers around the world working in this field, so it's unlikely that this set of questions in particular will become the de facto test for AI in the future. But for now, it's fun to see what humans are up against. [arXiv via New Scientist]

Example 1

John picked up the apple.

John went to the office.

John went to the kitchen.

John dropped the apple.

Where was the apple before the kitchen?

Example 2

The triangle is to the right of the blue square.

The red square is on top of the blue square.

The red sphere is to the right of the blue square.

Is the red sphere to the right of the blue square? Is the red square to the left of the triangle?


Example 3

The football fits in the suitcase.

The suitcase fits in the cupboard.

The box of chocolates is smaller than the football.

Will the box of chocolates fit in the suitcase?


Answers

Example 1: office

Example 2: yes; yes

Example 3: yes
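The "chaining facts" behind Example 1 can be sketched as a toy state tracker: follow who is carrying what, and record the rooms an object passes through. This is just an illustrative hand-rolled script (the event names and helper are made up for this sketch), not the learning systems evaluated in the paper:

```python
# Toy fact-chaining solver for Example 1: track each actor's location and
# the rooms an object visits while being carried.
# Illustrative only -- not Facebook's actual models.

story = [
    ("John", "picked up", "apple"),
    ("John", "went to", "office"),
    ("John", "went to", "kitchen"),
    ("John", "dropped", "apple"),
]

def object_locations(events):
    """Return a map from object -> ordered list of rooms it was in."""
    actor_room = {}   # actor -> current room
    holding = {}      # actor -> set of objects carried
    trail = {}        # object -> list of rooms visited
    for actor, action, target in events:
        if action == "went to":
            actor_room[actor] = target
            # Carried objects move with the actor.
            for obj in holding.get(actor, set()):
                trail.setdefault(obj, []).append(target)
        elif action == "picked up":
            holding.setdefault(actor, set()).add(target)
            if actor in actor_room:
                trail.setdefault(target, []).append(actor_room[actor])
        elif action == "dropped":
            holding.get(actor, set()).discard(target)
    return trail

# "Where was the apple before the kitchen?"
rooms = object_locations(story)["apple"]
before_kitchen = rooms[rooms.index("kitchen") - 1]
print(before_kitchen)  # office
```

The point of the sketch is that even this trivially scripted question demands memory across sentences, which is exactly what the paper's question types are designed to probe.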

Image by Shutterstock/Olga Nikonova
