This is a study of chatbot interpreter routines coded in PHP, including open source chatterbot knowledge bases converted to pure PHP. It is a small PHP script that uses no XML and no SQL; the “Chatbot study in PHP” is written entirely in plain PHP.
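The study page does not spell out the data layout, but a knowledge base “converted to pure PHP” could be as simple as a script that returns an array of stimulus and response pairs, so the interpreter only has to include it rather than parse XML or query SQL. The following is a minimal sketch of that idea; the file name and structure are assumptions for illustration, not the study's actual code.

<?php
// knowledgebase.php -- hypothetical "pure PHP" knowledge base:
// nothing to parse and nothing to query, just a returned array.
return [
    // stimulus keyword => possible replies
    'hello'   => ['Hi there.', 'Hello, how are you today?'],
    'weather' => ['I do not get outside much.'],
    'bye'     => ['Goodbye!', 'Talk to you later.'],
];

A separate script could then pull the whole brain into memory with a single $kb = include 'knowledgebase.php'; which is presumably part of what keeps the interpreter small.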
Think of this experiment as being like a cheap GPS unit that has prerecorded speech audio but no text-to-speech. Although a cheap GPS does not pronounce street names the way more expensive GPS systems do, it still works surprisingly well. Is a similar phenomenon possible with chatbots? What will be revealed if we peel back advanced routines such as recursion or wildcard matching?
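The study's actual matching routine is not reproduced here, but a stripped-down interpreter with no wildcards and no recursion might look something like the sketch below, which only checks whether a knowledge-base keyword appears verbatim in the input. Every name in it, including the fallback line, is an assumption made for illustration.

<?php
// Hypothetical stripped-down matcher: no wildcards, no recursion.
// It returns a reply for the first keyword found verbatim in the input.
function reply(string $input, array $kb): string
{
    $input = strtolower($input);
    foreach ($kb as $keyword => $responses) {
        if (strpos($input, $keyword) !== false) {
            return $responses[array_rand($responses)];
        }
    }
    return 'Please tell me more.'; // assumed fallback line
}

$kb = include __DIR__ . '/knowledgebase.php'; // the sketch above
echo reply('Hello, is anyone there?', $kb), PHP_EOL;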
The question this study attempts to answer is whether a chatbot simply needs at least several thousand records in its robot brain to cross a certain threshold and become believable.
http://elizabot.com/study/
Suggestion: while testing, please include short paragraphs such as “Hello. I am testing you. Hopefully you will not mind. Is that OK?” Anything else is also fine, for example “I am here. You are there. We are where?” It does not have to be anything special; the “Chatbot study in PHP” parses punctuation, including periods, question marks, commas, etc. With that suggested, please feel free to test in whatever way comes naturally… This is simply a suggestion. Thanks!
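How the punctuation parsing is used internally is not described on the study page; one plausible reading, sketched here purely as an assumption, is that each sentence or clause is split out so it can be matched against the brain on its own.

<?php
// Hypothetical illustration of punctuation-aware parsing: split the
// visitor's paragraph on periods, question marks, exclamation points,
// and commas, then hand each clause to the matcher separately.
$input = 'Hello. I am testing you. Hopefully you will not mind. Is that OK?';

$clauses = preg_split('/[.?!,]+/', $input, -1, PREG_SPLIT_NO_EMPTY);
$clauses = array_filter(array_map('trim', $clauses));

foreach ($clauses as $clause) {
    echo $clause, PHP_EOL; // each clause would be fed to the matcher
}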