

Salesforce’s 10-problem language test
 
 

I expect my colleagues here to do well on it.

https://venturebeat.com/2018/06/20/salesforce-develops-natural-language-processing-model-that-performs-10-tasks-at-once/

My chatbots cannot do these broad sorts of tasks, but we each strive to solve a unique problem. [I am trying to learn how to keep track of context within a narrow conversation.]
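For example, one very simple starting point might be to remember the most recently mentioned entity per grammatical role and use it to resolve pronouns in the next utterance. A rough sketch in Python (the names and the parse format are hypothetical):

# Rough sketch (hypothetical): remember the last-mentioned entity per role so a
# follow-up like "Where does he work?" can resolve "he" to the previous subject.
class ConversationContext:
    def __init__(self):
        self.last_entity = {}  # role -> most recently mentioned entity

    def observe(self, parsed):
        # 'parsed' is assumed to be a dict like {"subject": "John", "object": "Acme"}.
        for role, entity in parsed.items():
            self.last_entity[role] = entity

    def resolve(self, word, role="subject"):
        # Replace a pronoun with the most recent entity seen in the same role.
        if word.lower() in {"he", "she", "it", "they"}:
            return self.last_entity.get(role, word)
        return word

ctx = ConversationContext()
ctx.observe({"subject": "John", "object": "Acme"})
print(ctx.resolve("he"))  # -> John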

 

 
  [ # 1 ]

Interesting. It looks like this is the source:
https://einstein.ai/research/the-natural-language-decathlon

I’m not sure how others fit into the picture; it doesn’t seem to be a contest. As for how my program would do on these types of questions, I’ll go over the tasks:

“What is a major importance of Southern California in relation to California and the US?”
My parser still has an error rate of at least 20% when reading Wikipedia, and these questions are too cryptic for it. A statistical system, on the other hand, could easily match the keyword “major” in the given context. Looks like a job for IBM Watson.
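Roughly what I mean by keyword matching, as a toy sketch (not how Watson actually works): score each sentence in the context passage by word overlap with the question and return the best one.

# Toy sketch of keyword-overlap answer retrieval (illustrative only).
import re

def best_sentence(question, context):
    q_words = set(re.findall(r"\w+", question.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", context)
    # Pick the sentence that shares the most words with the question.
    return max(sentences, key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))))

context = ("Southern California is a major economic center for the state of "
           "California and the United States. It is also home to Hollywood.")
print(best_sentence("What is a major importance of Southern California "
                    "in relation to California and the US?", context))
# -> the first sentence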

“What is the translation from English to German?”
Theoretically my semantic parser is suitable for the task of translation, but actual translation would require an entirely new grammar template per language with a boatload of grammar rules, and more extensive word sense disambiguation. Again, this is much easier for statistical models.
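To give an idea of what such a grammar template involves, here is a deliberately tiny sketch (hypothetical lexicon, and only one of the boatload of rules that would really be needed):

# Tiny sketch of rule-based English -> German translation (hypothetical, hugely simplified).
LEXICON = {"the": "der", "dog": "Hund", "has": "hat", "seen": "gesehen", "cat": "Katze"}

def translate(words):
    german = [LEXICON.get(w.lower(), w) for w in words]
    # One grammar rule: German perfect tense moves the participle to the end of the clause.
    if "gesehen" in german:
        german.remove("gesehen")
        german.append("gesehen")
    return " ".join(german)

print(translate("the dog has seen the cat".split()))
# -> "der Hund hat der Katze gesehen": word order fixed, but the articles are still
#    wrong; case, gender and word sense disambiguation would each need many more rules.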

“What is the summary?”
There is a summarisation function in my program, but I find that it is more effective to summarise at the textual level (keywords, phrase substitution) than at the semantic level (stemming, parsing, knowledge representation). It wouldn’t be the same model as for the other tasks.
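By textual-level summarisation I mean something along these lines, as a rough sketch: rank sentences by the frequency of the words they contain and keep the top few.

# Rough sketch of textual-level extractive summarisation (illustrative only).
import re
from collections import Counter

def summarise(text, n=2):
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Score each sentence by the frequency of its words in the whole text.
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
                    reverse=True)
    keep = set(scored[:n])
    # Return the top-scoring sentences in their original order.
    return " ".join(s for s in sentences if s in keep)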

“Hypothesis: Product and geography are what make cream skimming work. Entailment, neutral, or contradiction?”
Seriously, if you tell my program that “cream skimming has two basic dimensions, product and geography”, it won’t have a clue what “dimension” means here. Too hard a task for me at this time.

“Is this sentence positive or negative?”
Sentiment analysis is relatively easy and I’ve integrated it well enough, I like to think.
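A lexicon-based approach already gets you most of the way there, something like this sketch (the word lists are illustrative only):

# Minimal lexicon-based sentiment sketch (word lists are illustrative only).
POSITIVE = {"good", "great", "excellent", "love", "like"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "dislike"}

def sentiment(sentence):
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    # A fuller integration also handles negation ("not good") and intensifiers ("very").
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this film and the ending is excellent"))  # -> positive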

“What has something experienced?”
This kind of vague phrasing would require a little more tinkering, but my program could readily answer “What had been experienced?”, which is what they mean to ask.

“Who is the illustrator of Cycle of the Werewolf?”
It would be easy enough to broaden the scope of my program to cover synonyms in its knowledge searches (I don’t, because it makes the program slow), so that it could pick up on the word “artist” in the given context. However, first I’d have to set up a way of handling book titles, which has not been a priority.
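The kind of synonym broadening I mean would look roughly like this sketch (hypothetical synonym table and facts); the fan-out over synonyms on every lookup is exactly what makes it slow:

# Sketch of synonym expansion in a knowledge search (hypothetical synonym table).
SYNONYMS = {"illustrator": {"illustrator", "artist"}}
FACTS = {("Cycle of the Werewolf", "artist"): "Bernie Wrightson"}

def lookup(title, relation):
    # Try the relation itself, then every synonym; this fan-out is what costs time.
    for rel in SYNONYMS.get(relation, {relation}):
        if (title, rel) in FACTS:
            return FACTS[(title, rel)]
    return None

print(lookup("Cycle of the Werewolf", "illustrator"))  # -> Bernie Wrightson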

“What is the change in dialogue state?”
I don’t even know what this means.

“What is the translation from English to SQL?”
Er… sure, I could set up some specific treatment for the word “column” and pass the parsed subjects and objects to a template for SQL format. Maybe if I ever get a job in website design, I’ll do just that.
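That specific treatment might amount to no more than plugging the parsed parts into an SQL template, roughly like this sketch (the parse format is hypothetical):

# Sketch: plug parsed question parts into an SQL template (parse format hypothetical).
def to_sql(parsed):
    query = "SELECT {column} FROM {table} WHERE {filter_column} = %s".format(**parsed)
    return query, (parsed["filter_value"],)

query, params = to_sql({"column": "salary", "table": "employees",
                        "filter_column": "name", "filter_value": "Susan"})
print(query, params)  # SELECT salary FROM employees WHERE name = %s ('Susan',)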

“Who had given help? Susan or Joan?”
Winograd Schemas, check. I can handle these, especially this one. However, I only ever developed a partial solution; nobody is great at them yet. It’s a bit odd to see Winograd Schemas treated as one tenth of a test when they were designed as a benchmark test in their own right.
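My partial solution boils down to hand-written implications on the verbs: for instance, the person being thanked for help is the one who gave the help. A rough sketch of that single rule (the rule format is hypothetical):

# Sketch of one hand-written Winograd rule (hypothetical): in "X thanked Y for the
# help", Y is the one who gave the help.
def who_gave_help(sentence, candidates):
    if "thank" in sentence:
        after_thank = sentence.split("thank", 1)[1]
        for name in candidates:
            if name in after_thank:
                return name
    return None

print(who_gave_help("Joan made sure to thank Susan for all the help she had given.",
                    ["Susan", "Joan"]))  # -> Susan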

On the whole, I don’t think this is a test that anyone other than neural network developers will be working to tackle.

 

 