How do you know the suspect is using bad grammar? Maybe he is literally stating that he did NOT kill NOBODY… in his double-negative way, he is intending to confess that he DID kill somebody… or he could just be using bad grammar. You don’t know, do you? lol
Thus, ‘type of user’ should come into play. If the user is a ‘tough guy’ on the street, then take “didn’t * nobody” to mean “didn’t * anyone”. If, on the other hand, it is a literal, boolean-logic type of person, then take “I didn’t kill nobody” to mean an admission of guilt, like Steve pointed out.
Bots need to factor ‘type of user’ into their determination of intended meaning.
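To make the idea concrete, here is a minimal sketch of that heuristic. It assumes a hypothetical `user_type` label (“colloquial” vs. “literal”) that some upstream profiling step would have to supply; the regex is only an illustrative cue for “didn’t … nobody”, not a real parser.

```python
import re


def interpret_double_negative(utterance: str, user_type: str) -> str:
    """Resolve 'didn't ... nobody' according to an assumed user-type label.

    user_type is a made-up tag ('colloquial' or 'literal'); nothing here
    is a real chatbot API, just a sketch of the 'type of user' idea.
    """
    has_double_negative = re.search(
        r"\b(didn't|don't|won't|never)\b.*\bnobody\b", utterance, re.IGNORECASE
    )
    if not has_double_negative:
        return "no double negative detected"
    if user_type == "colloquial":
        # 'tough guy' register: the negatives reinforce each other
        return "speaker denies the act (read 'nobody' as 'anybody')"
    if user_type == "literal":
        # boolean-logic register: the negatives cancel out
        return "speaker admits the act (the two negatives cancel)"
    return "ambiguous: ask the user to clarify"


print(interpret_double_negative("I didn't kill nobody", "colloquial"))
print(interpret_double_negative("I didn't kill nobody", "literal"))
```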
Just parachuting in here, but could it be that a bot IS a bot and as such is usually just expected to respond like one?
At some core level a successful (engaging) bot should, and can be programmed to, discern logical inconsistencies and misspellings, maintain some persistence of conversation/interaction, and ask for clarification of (or ignore) ambiguous input.
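As a rough sketch of those three behaviours, the toy class below keeps a conversation history and asks for clarification when input looks ambiguous. The ambiguity check is just an illustrative stand-in (a crude double-negative cue), not a real NLU component, and the class name and wording are invented for this example.

```python
class ClarifyingBot:
    """Toy sketch: persist the conversation and ask for clarification
    when the input looks ambiguous (here, a crude double-negative cue)."""

    AMBIGUOUS_CUES = ("didn't", "nobody")  # stand-in for real ambiguity detection

    def __init__(self) -> None:
        self.history: list[str] = []  # persistence of conversation/interaction

    def respond(self, text: str) -> str:
        self.history.append(text)
        lowered = text.lower()
        if all(cue in lowered for cue in self.AMBIGUOUS_CUES):
            return "Just to be clear: are you saying you did, or did not, do it?"
        return f"Noted. ({len(self.history)} messages so far.)"


bot = ClarifyingBot()
print(bot.respond("I didn't kill nobody"))
print(bot.respond("I mean I did not do it"))
```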
Most chat bots seem to be used as expert systems, knowledgeable in one or a few specific areas. Does a service bot working the service desk at a warehouse really need to know enough not to go into a recursive meltdown when someone inputs “It is opposite day today.”, or does it just need to be intelligent enough to respond with “Today is [TODAYS DATE], your item is on Tray [TRAYNUMBER], have a nice [“opposite”] day!”?
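For what that template-style answer might look like, here is a small sketch: the bot sticks to the facts it actually knows (today’s date, the tray number) and folds the off-topic remark back into the sign-off instead of trying to reason about it. The function name, tray lookup, and greeting wording are all made up for illustration.

```python
from datetime import date


def service_desk_reply(item_tray: int, user_input: str) -> str:
    """Template reply for a hypothetical warehouse service bot: answer with
    known facts and echo an off-topic remark rather than reasoning about it."""
    off_topic = ""
    if "opposite day" in user_input.lower():
        off_topic = ' "opposite"'
    return (
        f"Today is {date.today():%Y-%m-%d}. "
        f"Your item is on Tray {item_tray}. "
        f"Have a nice{off_topic} day!"
    )


print(service_desk_reply(12, "It is opposite day today."))
```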