Everyone:
I think we are blurring the concepts of AI and reasoning inside a bot-mind (wow… a mind?). Also, all these AI topics breed many contradictory thoughts and too many threads to follow without guidance.
Let me lay some things out clearly and try to shed some light on this, or please correct me!
AIML is not AI, nor a 'thinker' engine; it is just a combinational (recursive) pattern matcher with some sort of 'output'.
More precisely: if we restrict randomness in the output (once a pattern is matched), it can be seen as a simple transformation automaton. With some fancy 'output rules' added (like conjugating or mocking the input pattern back into the output), it may give the interlocutor an impression of some 'cleverness', but nothing more than that.
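To make the point concrete, here is a toy sketch of such a transformation automaton (the rules are hypothetical examples, not real AIML): a wildcard pattern maps to a template that echoes the captured text back, which is all the 'cleverness' there is.

```python
import re

# Hypothetical pattern -> template rules, AIML-style. The '*' wildcard
# captures part of the input so the template can mock it back.
RULES = [
    ("I AM *", "Why are you {0}?"),           # echo the captured fragment
    ("DO YOU LIKE *", "I know nothing about {0}."),
    ("*", "Tell me more."),                   # catch-all: no understanding at all
]

def respond(user_input: str) -> str:
    text = user_input.strip().upper().rstrip(".!?")
    for pattern, template in RULES:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*(g.lower() for g in m.groups()))
    return "..."

print(respond("I am tired"))        # -> Why are you tired?
print(respond("Do you like jazz"))  # -> I know nothing about jazz.
```

No semantics, no state, no reasoning: the first matching rule fires and transforms the input string, exactly like a deterministic transducer.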
AI reasoning is based on 3 stages: (1) understanding the facts, then (2) elaboration, and then (3) response generation, which depends on the results of the former two.
1 - Understanding needs pragmatics extraction, which needs semantic extraction, which needs grammatical interpretation, which needs deep morphological analysis, which may need a huge lexicon or a rule system to extract semantics, plus some spell restoration/correction if needed. Every step in this chain may have multiple outputs which must be disambiguated; this is the hard part, or we end up with an NP-hard problem. Here is where the state of the art is stuck! Cyc and other 'common sense frameworks' tried (in vain, in my view) to solve this step by brute-force attack!
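A tiny sketch of why this chain explodes (the lexicon entries are hypothetical): each stage can return several readings, and the candidate set multiplies at every step, which is exactly where disambiguation becomes intractable.

```python
from itertools import product

# Hypothetical mini-lexicon: each token may have several
# morphological/lexical readings (lemma, features).
LEXICON = {
    "saw":  [("see", "verb, past"), ("saw", "noun, tool")],
    "her":  [("her", "pronoun, object"), ("her", "determiner, possessive")],
    "duck": [("duck", "noun, bird"), ("duck", "verb, crouch")],
}

def morphological_readings(word):
    """Morphological analysis: all readings for a token (1 if unknown)."""
    return LEXICON.get(word.lower(), [(word.lower(), "unknown")])

def analyses(sentence):
    """Cartesian product of per-word readings = candidate interpretations."""
    readings = [morphological_readings(w) for w in sentence.split()]
    return list(product(*readings))

# "I"(1) x "saw"(2) x "her"(2) x "duck"(2) = 8 candidate interpretations
print(len(analyses("I saw her duck")))  # -> 8
```

Four words already yield 8 candidates; a realistic sentence with realistic lexicons yields thousands, and picking the right one needs exactly the pragmatic/semantic knowledge we don't yet know how to encode.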
2 - Elaboration needs planning, and needs some solver agents to be available, so the pragmatics-extraction result must drive a planning step to find a proper 'agent' set (like Minsky's agents) to accomplish the job. So you may be limited by the knowledge of a certain bot, which may have only a limited number of thematic 'agents' to solve certain kinds of puzzles or questions.
Here is where the 'problem solving' might use simple reasoners, or theorem proving by FOL, time-warping, and perform the needed deixis resolution, to trace a feasible plan to accomplish the query or intellectual job, using the available agents (seen as resources).
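A toy sketch of this planning step (all agent names and capabilities are hypothetical): the extracted pragmatics yields a set of subgoals, and planning means finding a set of available thematic 'agents' (seen as resources) that covers them, or failing when the bot's limited agent set cannot.

```python
# Hypothetical agents and the capabilities (subgoals) each one can solve.
AGENTS = {
    "unit_converter": {"convert_units"},
    "arithmetic":     {"compute"},
    "date_reasoner":  {"resolve_date"},  # crude stand-in for deixis/time handling
}

def plan(subgoals, agents=AGENTS):
    """Greedy cover: assign an agent to each subgoal, or return None."""
    chosen = []
    for goal in subgoals:
        agent = next((name for name, caps in agents.items() if goal in caps), None)
        if agent is None:
            return None  # no available agent covers this subgoal
        chosen.append((goal, agent))
    return chosen

print(plan({"convert_units", "compute"}))
print(plan({"prove_theorem"}))  # -> None
```

A real planner would also order the agent calls, pass intermediate results between them, and backtrack, but the limitation is the same: the bot can only reason about what its agents cover.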
3 - After all of this is done with some sort of success, you might end up with several responses, many without 'common sense' (depending on how good the responses the 'agents' bring are). Then you need to choose the best or most plausible one (with some coherence metric) and make an elaboration (plan) to say the output in the same language by doing 'Natural Language Generation' based on the retrieved data. (This is far from trivial.)
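A toy sketch of that selection step (the metric is a hypothetical stand-in, nothing like a real coherence model): candidate responses come back from the agents, and we pick the most plausible one by a crude score, here word overlap with the query plus a length penalty.

```python
def coherence(query: str, response: str) -> float:
    """Crude plausibility metric: lexical overlap minus a length penalty."""
    q = set(query.lower().replace(".", " ").split())
    r = set(response.lower().replace(".", " ").split())
    return len(q & r) - 0.01 * abs(len(response) - 40)  # prefer on-topic, medium length

def best_response(query, candidates):
    return max(candidates, key=lambda r: coherence(query, r))

query = "how many meters in a mile"
candidates = [
    "A mile is 1609.344 meters.",
    "Purple elephants dream quietly.",  # an agent response with no common sense
    "Meters measure length.",
]
print(best_response(query, candidates))  # -> A mile is 1609.344 meters.
```

The point is only the shape of the problem: without a genuine model of meaning, any such metric will rank nonsense highly sooner or later.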
So here is my insight into a 'smart-ass' bot.
Has anyone made a serious approximation toward this?
- I guess not at all!
There are too many unsolved problems. The worst is ambiguity, multiplied by the lack of mind-modeling, especially for intellectual planning and understanding. There is also no unique nor easy knowledge representation: all the OWL, thesaurus, and hierarchical databases are 'hardwired' thought based on someone's model or theory, which is not necessarily a good one for doing some calculation or reasoning, nor does it represent real knowledge. So here we are now: stuck until some mind-freak really sheds some light on this!
In my humble opinion, I may only have identified the problem, and therefore I am personally involved in tracing a route to some tiny success, crystallized in some reasoning and inference engine: not a math-rule system but only a small learning engine, based on biological modeling of pragmatics.
Also, for fun (and also some commercial reasons), I am writing a small bot engine. This bot will not be a real smart-ass; he will know nothing at all! Instead, he will have a robust understanding engine to extract pragmatics, and then, with some tiny rule-inference engine, he will deal with evidence and extract new evidence and rules, whether from a conversation or from an already-written text. That's all. He will know a lot of 'common sense' stuff which is 'idiot-safe', like math, physical unit conversions and relations, money and stock valuation, and much other common-world knowledge. This is not AI, it's only a good database and modeling engine, but it required a lot of (my) work to accomplish. It works fine! (although errors are constantly being debugged).
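The kind of tiny rule-inference engine described above can be sketched as classic forward chaining (the facts and rule here are hypothetical illustrations, not the actual engine): it chews on evidence and derives new evidence until nothing new appears.

```python
# Hypothetical evidence extracted from text, as (subject, relation, object) triples.
facts = {("socrates", "is_a", "man")}

# One hypothetical rule: if X is_a man, then X is_a mortal.
rules = [
    (lambda f: f[1:] == ("is_a", "man"),        # condition on a fact
     lambda f: (f[0], "is_a", "mortal")),       # conclusion derived from it
]

def forward_chain(facts, rules):
    """Apply every rule to every fact until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in rules:
            for f in list(derived):
                if cond(f) and concl(f) not in derived:
                    derived.add(concl(f))
                    changed = True
    return derived

print(("socrates", "is_a", "mortal") in forward_chain(facts, rules))  # -> True
```

This is deliberately 'idiot-safe': it only ever asserts what the rules license from the evidence, which is exactly the kind of modest, database-like competence the bot aims for.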
I think 4.5 billion years of evolution cannot be bad! Nor can we make a better model in a thousand lifetimes; we may only, somehow (if we understand it), get it done a bit faster with some electronic clone (silicon and electricity instead of chemical ion movements and transformations). That's all, I guess, for us!