How would you detect a chatbot?
 
 

To the experts here who compete to create chatbots that can act like people, let me ask: what would you do to detect a chatbot? How would you avoid being fooled?

Are the Russian political “bots” detectable? How would you police Twitter?

Is there a single kind of question that would trap all but the most sophisticated bots?

 

 
  [ # 1 ]

The main thing to realise is that there is a great diversity of bots, but each is limited in domain, so your best tactic would be to try very varied questions.

The average chatbot or spambot won't be able to handle deliberately abusive misspelling, but a commercial chatbot will, because they can afford the resources that spellchecking involves. Even so, this remains the most effective tactic.
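For a sense of what that spellchecking resource amounts to, here's a minimal sketch of spell-tolerant keyword matching; the vocabulary and similarity cutoff are invented for the example:

```python
# Spell-tolerant keyword matching, the kind of resource a commercial
# bot can afford but an average spambot skips. Vocabulary is made up.
import difflib

VOCABULARY = ["refund", "order", "delivery", "cancel", "invoice"]

def normalize(word, vocabulary=VOCABULARY, cutoff=0.7):
    """Map a possibly mangled word to the closest known keyword."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(normalize("refnud"))    # -> "refund"
print(normalize("dellivry"))  # -> "delivery"
print(normalize("xyzzy"))     # -> None (no close keyword)
```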

Second-best is to ask word-game questions, like "How many vowels in the xth word of your last response?". Because word games serve no reasonable purpose and allow many variations, few bots are built to answer them.
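The computation itself is trivial, which is the point: the obstacle isn't the arithmetic but recognising the request at all. A hypothetical sketch of the answer side:

```python
# Answering "How many vowels in the 4th word of your last response?"
# once the request is understood; the hard part (parsing the request)
# is exactly what most bots lack.
def vowels_in_nth_word(sentence, n):
    """Count vowels in the nth word (1-indexed) of a sentence."""
    words = sentence.split()
    if not 1 <= n <= len(words):
        return None
    return sum(ch in "aeiouAEIOU" for ch in words[n - 1])

last_response = "I really enjoy talking about the weather"
print(vowels_in_nth_word(last_response, 4))  # "talking" -> 2
```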

However, with chatbots I generally just look at how often they return generic all-purpose replies like "yes, I do" (do what?), or whether they keep referring to what I said as "it" for several turns without adding topic-relevant details. A lot of bots don't handle pronouns because that requires resource-heavy parsing.
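That manual test could be mechanised as a crude heuristic; the phrase list and thresholds below are hypothetical, just to show the shape of it:

```python
# Score a conversation partner by how often they fall back on generic
# replies or short dangling-"it" answers. Phrases and the length cutoff
# are invented for illustration.
import re

GENERIC_REPLIES = {"yes, i do", "i see", "tell me more", "that's interesting"}

def suspicion_score(replies):
    """Fraction of replies that are generic or short dangling-'it' answers."""
    generic = sum(r.strip().lower().strip(".!?") in GENERIC_REPLIES for r in replies)
    dangling = sum(
        "it" in re.findall(r"[a-z']+", r.lower()) and len(r.split()) < 6
        for r in replies
    )
    return (generic + dangling) / max(len(replies), 1)

replies = ["Yes, I do.", "I like it.", "Tell me more", "It is nice."]
print(suspicion_score(replies))  # 1.0 here: every reply is a red flag
```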

As for troll bots, whenever I found insinuating, vile-spewing posts, they typically turned out to be from real Americans. Bots' subject matter seemed focused more on controversy than on provocation. The following features are common:
- Outlandishly foreign-seeming usernames with awkward and long combinations of syllables (generated)
- Repetitive subjects and frequent posting history (only equalled by some obsessive autistics)
- Lack of interactive activities (no replies, no following, followers, interests or subscriptions)
- Bad grammar (similar to Indian and Russian, with verbs and prepositions out of order)
- Bare-bones profile information is a bonus
For the time being, troll bots are best detected through metadata and posting history. Since a bot's purpose is to spam, it will post as often as it can, without, for instance, the inescapable eight-hour period of inactivity known as sleep. I've heard there's a famous Russian bot in America that starts anger-tweeting at 3 AM every day. That's definitely not human.
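That "no sleep" tell is easy to check mechanically once you have posting timestamps. A rough sketch in Python (the five-hour threshold is my guess at the minimum nightly gap a human would show):

```python
# Flag accounts that never pause long enough to sleep.
# The threshold is a guessed heuristic, not an established cutoff.
from datetime import datetime, timedelta

def longest_gap_hours(timestamps):
    """Longest gap between consecutive posts, in hours."""
    ts = sorted(timestamps)
    gaps = (b - a for a, b in zip(ts, ts[1:]))
    return max(gaps, default=timedelta(0)).total_seconds() / 3600

def looks_sleepless(timestamps, min_sleep_hours=5):
    return longest_gap_hours(timestamps) < min_sleep_hours

posts = [datetime(2018, 3, 1, h) for h in range(0, 24, 2)]  # a post every 2 hours
print(looks_sleepless(posts))  # True: no gap long enough to sleep in
```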

 

 
  [ # 2 ]

A wonderful list. Thanks.

I was thinking not too many bots can answer simple questions that combine generalization, arithmetic, and pronouns:
“I have 3 apples and Jon has 4 oranges: how many pieces of fruit does he have?”

 

 
  [ # 3 ]

You’ll also find some interesting suggestions here: https://www.chatbots.org/ai_zone/viewthread/1776/

I'm afraid the questions that one intuitively considers difficult often turn out to be easily sidestepped, mostly because everybody else also thinks they are good questions, so chatbots will have encountered them before. Fruit arithmetic has been such a regular in the Loebner Prize that one can get by with recognising two numbers and "how many" and ignoring everything else, for a pretty good guess. Of course then you think, "What if I ask how many elephants?", and you'd probably find that that also occurred to the chatbot creators (a sketch of the dodge follows below).
But I should mention that this is only true for chatbots, and most particularly Turing Test participants. It is highly unlikely that spambots or commercial bots would be equipped for that line of questioning, because it is not relevant to their domain.
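For illustration, that cheap dodge might look like the sketch below (my own guess at it, not any actual Loebner entry's code). Note that it answers 7 to the fruit question in post #2, where the right answer is 4: exactly the mistake the pronoun is there to expose.

```python
# Ignore the fruit, the names and the pronoun: just spot "how many"
# and two numbers, then gamble on the sum. A guessed reconstruction.
import re

def fruit_arithmetic_guess(question):
    numbers = [int(n) for n in re.findall(r"\d+", question)]
    if "how many" in question.lower() and len(numbers) == 2:
        return sum(numbers)  # often a good guess in Loebner-style tests
    return None

print(fruit_arithmetic_guess(
    "I have 3 apples and Jon has 4 oranges: how many pieces of fruit does he have?"
))  # -> 7 (wrong: he has 4)
```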

 

 
  [ # 4 ]

Here's a relevant image of bots at work. Again, it's metadata like posting times that's a clearer sign than the things they say.

 

 
  [ # 5 ]

Don:
Why doesn't someone come up with a product for Twitter/Facebook users where they can do a quick metadata analysis? Or why doesn't some company (duh: Twitter or Facebook) incorporate these simple truths you've been telling us about, and flag or remove the obvious bots?

 

 
  [ # 6 ]

That is the right question.
Twitter and Facebook have only begun tackling the problem this year, and only due to pressure from the public and the US Congress, who have been looking for someone else to blame for their recent voting behaviour.

According to e.g. this article, the executives of these social media companies are "freedom of speech" fanatics, the American way. The last thing they want is to apply any sort of censorship. Policing is not in their interest: it would require an ongoing research branch, like anti-virus measures, and they're already not making a profit. But mostly they're ideologically pigheaded.

Independent researchers are showing a lot more interest. Last year, shortly after Facebook claimed they couldn't do anything about disinformation, a guy released a browser add-on that automatically traced article links back to unreliable sources.

 

 
  [ # 7 ]

As a friendly contribution to this discussion, there is a secret point of view not available to the general public.

 

 
  [ # 8 ]

After you build one, you quickly start to realize which things are the hardest to code.

I agree that once a question, or type of question, is known, it can be added, making it easier for the bot; i.e. when a question is used in the Loebner Prize, the next year every chatbot knows the answer, so there's no point in using it again.

I find the easiest way is either very short or very long sentences. :)

Since almost all chatbots (except mine, apparently) don't use pronouns, it's easy to ask a question, then follow up with a "why?" or "how?", and watch them get confused.

Or use embedded English clauses, the passive voice, run-on sentences, or other constructs like "what would happen if the quick brown fox, in a sense and immediately, jumped over the very, very lazy canine animal under the river and through the woods . . . ".

I'm working on run-ons like the one above now. It's not easy. :)

 

 
  [ # 9 ]

Stanford's parser can handle sub-clauses like that; that is, it can tie verbs to the correct subjects even when there's a clause in between. However, I don't know of any chatbot that uses it.

Cleverbot and other "sequence-to-sequence" trained chatbots like Microsoft's do indeed only see pronouns as another piece of text. AIML does have a way of handling pronouns, but as I understand it, the referent has to be set up manually for each input. Some of the more modern chatbot services have a more integrated pronoun system, but they still require the developer to specify that e.g. "he", in the context of [movies], may only refer to [actor names]. Again, Stanford has software to resolve pronouns in any context, but I don't know of it being used in bots.
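As a stand-in illustration (spaCy's parser rather than Stanford's, simply because it's easy to run), this is what tying verbs to subjects across an intervening clause looks like:

```python
# Dependency parsing ties each verb to its subject even with a
# sub-clause in between. Requires: pip install spacy, then
# python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The fox, which the hounds had chased all day, jumped over the log.")

for token in doc:
    if token.dep_ == "nsubj":
        print(f"'{token.text}' is the subject of '{token.head.text}'")

# Expected (model permitting):
# 'fox' is the subject of 'jumped'
# 'hounds' is the subject of 'chased'
```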

I recently spotted someone making a Russian bot classifier by observing word usage:
https://www.briannorlander.com/projects/reddit-bot-classifier
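I haven't dug into that project's internals, but word-usage classifiers of this sort usually boil down to something like the sketch below: tf-idf features over an account's posts and a linear classifier over labeled examples. The training data here is invented for illustration.

```python
# A generic word-usage classifier sketch (not the linked project's
# actual code): tf-idf over posts, logistic regression over labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "crooked media hiding truth about election again",     # bot-like
    "share if you agree they are destroying the country",  # bot-like
    "anyone else watching the game tonight?",              # human
    "finally finished that book, worth the hype",          # human
]
train_labels = [1, 1, 0, 0]  # 1 = bot, 0 = human (invented data)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict_proba(["they are hiding the truth about the election"])[0][1])
```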

 

 
  [ # 10 ]

I'm not an expert by any means, but as Don Patrick says, chatbots can't handle misspellings, though a lot of it is context. If someone is repeating you and not making much sense, then it's possible they could be a bot. However, it's very possible that a real person could be mistaken for a bot, especially if the person isn't a native English speaker. The problem is that platforms like Twitter are places where you judge whether a comment is human based on only one reply, which I think would absolutely cause false positives.

I tried to make a substitution file for a robot that automatically added periods based on certain words in English. However, the one thing I was never able to do was get it to work with names.

The main way of telling a robot from a human is simply its ability to elaborate. It's possible Don Patrick has created robots that can dynamically create sentences, but when robots create sentences themselves, I think that's when they have the most potential to ring false. A robot can give you several premises (almost like playing cards) that are the perfect way to describe how it's feeling. However, at least with my robot, if the robot has any elaborate information, that information is static and doesn't change.

 

 