To reduce the work of developing a chatbot on the .NET platform, I came up with the idea of building an SDK.
The main problem with the chatbot was how to make a computer understand human language. I eventually concluded that full understanding is too difficult, but it can be done to an extent if the computer can distinguish the type of a sentence (the user input):
1. a command (asking the bot to perform something)
2. a question (asking the bot to search for an answer, not perform an action)
3. a statement (telling the bot to learn something from the input) and
4. a simple response (like "hello" or "how are you doing")
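To give a rough idea of what such a classifier could look like, here is a minimal C# sketch. The keyword lists, enum, and class names are my own assumptions for illustration, not the SDK's actual API:

```csharp
using System;
using System.Linq;

// Sketch: detect sentence type from simple surface cues.
// Keyword lists and names are assumptions for illustration only.
enum SentenceType { Command, Question, Statement, Response }

static class SentenceClassifier
{
    static readonly string[] QuestionWords = { "who", "what", "where", "when", "why", "how", "is", "are", "do", "does", "can" };
    static readonly string[] CommandWords  = { "open", "play", "search", "show", "start", "stop" };
    static readonly string[] Greetings     = { "hello", "hi", "how are you", "good morning" };

    public static SentenceType Classify(string input)
    {
        string text = input.Trim().ToLowerInvariant();
        string firstWord = text.Split(' ').FirstOrDefault() ?? "";

        if (text.EndsWith("?") || QuestionWords.Contains(firstWord))
            return SentenceType.Question;   // ask the bot to search for an answer
        if (CommandWords.Contains(firstWord))
            return SentenceType.Command;    // ask the bot to perform something
        if (Greetings.Any(g => text.StartsWith(g)))
            return SentenceType.Response;   // simple small talk
        return SentenceType.Statement;      // treat anything else as something to learn
    }
}
```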
That was the main idea behind the bot's working logic.
For the database search I used a pattern-matching approach.
The database can have patterns like:
where is * located -> search
* is an actor -> learn -> who is *, * is an actor
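One way the wildcard patterns could be matched against user input is shown in the sketch below. The regex translation and the helper name are my assumption for illustration, not necessarily how the SDK does it internally:

```csharp
using System.Text.RegularExpressions;

// Sketch: turn a "*"-style pattern into a regex and capture what the wildcard matched.
static class PatternMatcher
{
    // e.g. Match("where is * located", "where is paris located") -> "paris"
    public static string Match(string pattern, string input)
    {
        string regex = "^" + Regex.Escape(pattern).Replace(@"\*", "(.+)") + "$";
        var m = Regex.Match(input.Trim(), regex, RegexOptions.IgnoreCase);
        return m.Success ? m.Groups[1].Value.Trim() : null;
    }
}
```

With something like this, "where is * located -> search" would capture "paris" from "where is paris located" and use it as the search key, and "* is an actor -> learn" would capture "srk" and store the learned fact along with its question forms.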
The next problem I faced: if a user says "srk is an actor", how do I make the bot understand that srk is a human and not an object? Otherwise, when another user asks "what is srk", it would be bad to reply "srk is an actor", or worse, to answer "who is bus" with "he is a vehicle".
My solution was to look for keywords in the input that represent humans (like "actor").
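A minimal sketch of that idea, with an assumed keyword list rather than the SDK's actual data:

```csharp
using System;
using System.Linq;

// Sketch: decide whether a learned subject refers to a person by checking the
// predicate for human-indicating keywords. The keyword list is an assumption.
static class EntityTyper
{
    static readonly string[] HumanKeywords = { "actor", "actress", "singer", "doctor", "teacher", "player" };

    public static bool IsHuman(string predicate) =>
        HumanKeywords.Any(k => predicate.ToLowerInvariant().Contains(k));
}

// Example: "srk is an actor" -> IsHuman("an actor") == true,
// so the bot stores the question form "who is srk" rather than "what is srk".
```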
Many more problems followed.
Then, for speech recognition, I used Microsoft Speech. When I finally got it working, it started recognizing garbage sentences because its dictionary was open to all possible words. So the next task was to take the answers (knowledge) from the database, generate the questions that could be asked about them, and add only those to the recognition grammar.
In the end I developed an abstracted class for speech.
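For the grammar restriction, System.Speech lets you build a Grammar from a fixed set of phrases. A sketch of the approach (the question list is passed in hard-coded here for illustration; in the SDK it would be generated from the database):

```csharp
using System.Speech.Recognition;   // reference System.Speech.dll

// Sketch: load only the questions generated from the knowledge base into the
// recognizer, instead of an open dictation grammar, to avoid garbage results.
class RestrictedRecognizer
{
    private readonly SpeechRecognitionEngine _engine = new SpeechRecognitionEngine();

    public void LoadQuestions(string[] questions)   // e.g. generated from the database
    {
        var choices = new Choices(questions);
        var grammar = new Grammar(new GrammarBuilder(choices));

        _engine.LoadGrammar(grammar);
        _engine.SetInputToDefaultAudioDevice();
        _engine.SpeechRecognized += (s, e) =>
        {
            // e.Result.Text is always one of the loaded questions
            System.Console.WriteLine("Heard: " + e.Result.Text);
        };
        _engine.RecognizeAsync(RecognizeMode.Multiple);
    }
}
```

Because the recognizer only knows the loaded phrases, it can no longer return arbitrary dictation that the bot has no answer for.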
After solving these problems, I packaged everything into an SDK to make my work easier.
It is available here with a demo and references; I hope it helps some beginners.
Is this the right way to go about making a chatbot, and is the logic I used relevant?
Comments and questions are welcome.