A few secret ‘gotchas’ of Neural Nets

Voice recognition works pretty well, but claims that it is a neural net “success” story are a bit disingenuous. It took more than 30 years of research into voice signals to learn which parts of the signal are significant - which features to extract and feed to the neural net as input. For example, talk quietly and the voice signal changes a lot with no change in meaning. Now talk in a whisper. Again the signal changes but the meaning does not. It takes time to learn how to filter out meaning-invariant variation like loud/soft or voiced/whispered speech. Once that is known, the “magic” has already happened: feeding the right “feature vector” into the neural net works because statistics works when applied to good information. Secret #1: choosing the right feature vectors is the magic, not the generic correlation engine that puts it all together at the end.
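To make Secret #1 concrete, here is a minimal sketch (Python with NumPy, not from the original post) of what filtering out loudness can look like: normalize the signal’s energy before extracting a crude spectral feature vector, so quiet and loud renditions of the same sound produce the same neural net input. The function name, the synthetic “utterance”, and the coarse spectral bins are all illustrative assumptions - a crude stand-in for real front-ends such as mel filter banks.

```python
import numpy as np

def loudness_invariant_features(signal, n_bins=8):
    """Toy feature extractor: remove loudness, then summarize
    the spectrum into a few coarse bins."""
    # Normalize RMS energy so loud and soft renditions look alike.
    rms = np.sqrt(np.mean(signal ** 2))
    normalized = signal / (rms + 1e-12)
    # Magnitude spectrum of the normalized signal.
    spectrum = np.abs(np.fft.rfft(normalized))
    # Pool into coarse bins (a stand-in for mel filter banks).
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

# The "same utterance" at two volumes: identical shape, different gain.
t = np.linspace(0, 1, 8000)
utterance = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
quiet, loud = 0.1 * utterance, 3.0 * utterance

f_quiet = loudness_invariant_features(quiet)
f_loud = loudness_invariant_features(loud)
print(np.allclose(f_quiet, f_loud))  # True: loudness has been filtered out
```

The point of the demo is that the invariance is engineered before the net ever sees the data; the “magic” lives in the extractor, not in whatever classifier consumes its output.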

Another secret is the assumption that a neural net developer can determine the right classification categories at the start of the research. For example, a researcher might want to classify handwritten characters using the letters of the alphabet as the underlying categories - when in practice it might be very useful to first determine whether a character is ‘round’, an intermediate but important “feature” on the way to successfully processing a handwritten word. Secret #2: that the most useful classification categories can be determined in advance.
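Here is a hedged sketch of what that intermediate-category idea could look like in code (Python with NumPy). Everything here is hypothetical - the raw measurements, the thresholds, and the attribute names are invented for illustration; the structural point is that sub-categories like “roundness” are predicted first and then fed into the final letter classifier, rather than the letter being the only category the system ever sees.

```python
import numpy as np

# Hypothetical raw measurements for a handwritten character:
# stroke curvature and number of stroke endpoints (both invented here).
def intermediate_attributes(curvature, endpoints):
    """Predict useful sub-categories first, not the letter itself."""
    is_round = curvature > 0.6   # 'o', 'e', 'a' would tend to score high
    has_tail = endpoints >= 2    # 'y', 'g', 'j' would tend to score high
    return np.array([is_round, has_tail], dtype=float)

def letter_features(raw_pixels, curvature, endpoints):
    """Final feature vector: raw input plus discovered attributes.
    The attribute layer is the part that was not obvious in advance."""
    attrs = intermediate_attributes(curvature, endpoints)
    return np.concatenate([raw_pixels, attrs])

# A letter classifier would train on these augmented vectors
# rather than on the pixels alone.
vec = letter_features(np.zeros(16), curvature=0.8, endpoints=1)
print(vec.shape)  # (18,) - 16 raw values plus 2 intermediate attributes
```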

There is at least one more, rather abstract “gotcha” to neural nets - namely the assumption that a human category forms a convex set in whatever feature vector space one happens to use. Less abstractly, this is the assumption that if A and B are in a category, then anything between A and B must also be in that category. Since this is not generally true, it leads to the false belief that adding more and more boundary cases to the training set increases accuracy. Without convexity, accuracy can decrease, and in any case loading up the system with borderline examples probably skews the statistics. By now you may have read about recent research showing wrong classifications arbitrarily close to correct ones. Secret #3: that adding new examples always improves accuracy.
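A tiny, self-contained demonstration of the convexity failure (Python with NumPy; the ring-shaped category is an assumption chosen for clarity): two points can both be in a category while the point exactly between them is not.

```python
import numpy as np

def in_category(x):
    """A ring-shaped category: points whose distance from the
    origin is between 1 and 2. This set is not convex."""
    r = np.linalg.norm(x)
    return 1.0 <= r <= 2.0

A = np.array([ 1.5, 0.0])   # in the category
B = np.array([-1.5, 0.0])   # also in the category
midpoint = (A + B) / 2      # the origin: "between" A and B

print(in_category(A), in_category(B))  # True True
print(in_category(midpoint))           # False: convexity fails
```

Any classifier whose training data assumes points between category members are also members will be systematically misled on sets shaped like this one.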
