Yes, I do see your point of view regarding providing the AI the logic.
In my model, there is a very heavy dependency on the data… (the ontology). The ontology is what gives meaning to some parse trees. Parse trees that ONLY contain grammar knowledge are basically discarded: unless the bot sees a chain of semantic connections between the words of a given parse tree, that parse tree is ignored.
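To make that concrete, here is a rough sketch of what "discard a parse tree unless its words are chained by semantic connections" could look like. It is purely illustrative Python; the ontology contents and names like `ONTOLOGY`, `connected` and `keep_parse_tree` are hypothetical, not actual CLUES internals.

```python
from collections import deque

# Illustrative only: the ontology as an undirected graph of concept-to-concept
# links.  The entries and names here are hypothetical, not CLUES data.
ONTOLOGY = {
    "stove":   {"element", "kitchen"},
    "element": {"stove", "heat"},
    "heat":    {"element", "skin"},
    "skin":    {"heat", "injury"},
    "injury":  {"skin", "pain"},
    "pain":    {"injury"},
}

def connected(a, b, ontology):
    """Breadth-first search: is there ANY chain of semantic links from a to b?"""
    seen, frontier = {a}, deque([a])
    while frontier:
        node = frontier.popleft()
        if node == b:
            return True
        for nxt in ontology.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def keep_parse_tree(content_words, ontology):
    """Keep a parse tree only if every pair of its content words is linked by
    some chain of semantic connections; otherwise it gets discarded."""
    return all(connected(a, b, ontology)
               for a in content_words for b in content_words)

print(keep_parse_tree(["stove", "heat", "pain"], ONTOLOGY))  # True  -> kept
print(keep_parse_tree(["stove", "zebra"], ONTOLOGY))         # False -> ignored
```

The point being: the grammar alone never decides which trees survive, the ontology does.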
Later the bot will update its ontology KB from conversations.
Now, regarding your point about not hard-coding logic: I do agree with this. I mentioned in another thread that, to me, there seem to be two “levels” of “logic” at work, which are completely independent of each other.
I call them ‘executive logic’ and ‘knowledge logic’.
Knowledge logic is something like telling a child, “If you touch the stove element when it is red, you will hurt yourself”.
Now, this (knowledge) logic does NOT tell the child what to do. What I mean is, when the child takes that input in, it does not actually TAKE CONTROL and totally dictate what the child actually does.
Now, executive logic is what makes the bot ‘tick’. Executive logic takes all inputs, considers their semantics, and matches them up and correlates them. This correlation is done using even more detailed knowledge logic.
For example: a stove element, when red, is hot. Touching a stove element with a finger means heat would be coming into contact with human flesh. Even more detailed (knowledge) logic says ‘excess heat (perhaps within a specified temperature range) applied to human skin can damage it’. Damage to human skin means injury, injury means pain, and pain means ‘hurt yourself’… etc., etc.
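If it helps, here is a toy sketch of that chain of knowledge-logic rules being applied, one step feeding the next. Again purely illustrative Python; the rule format and fact strings are made up for this post, not how CLUES actually stores them.

```python
# Hypothetical knowledge-logic rules as (premises, conclusion) pairs.
RULES = [
    ({"element is red"},                           "element is hot"),
    ({"element is hot", "finger touches element"}, "heat contacts skin"),
    ({"heat contacts skin"},                       "skin is damaged"),
    ({"skin is damaged"},                          "injury"),
    ({"injury"},                                   "pain"),
    ({"pain"},                                     "you hurt yourself"),
]

def forward_chain(facts, rules):
    """Keep applying any rule whose premises are all known, adding its
    conclusion, until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"element is red", "finger touches element"}
print("you hurt yourself" in forward_chain(facts, RULES))  # True
```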
Now, it is important… very important to understand that in CLUES this (executive) logic makes no assumptions. It absolutely doesn’t care about the form of the (knowledge) logic or what it is saying. Executive logic only gives the bot the means to run ‘thought experiments’ with these (knowledge) logics.
What the bot would EXECUTE, that is, what the bot actually does, has nothing to do with the (knowledge) logic… the (executive) logic allows it to experiment by combining these (knowledge) logics, which could perhaps end up rejecting one of them with:
“Well, if that rule were true, then given fact1 & fact2, it would mean fact3… which is saying the opposite of fact4.”
where fact1, fact2 & fact4 were facts you directly told it, and fact3 is the output from the rule (i.e. the argument) provided to it.
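As a rough sketch of that thought experiment (again illustrative Python only; `thought_experiment`, `contradicts` and the fact strings are hypothetical names, not CLUES code):

```python
# The executive logic tentatively applies a candidate (knowledge) rule to the
# facts it was directly told and rejects the rule if the result contradicts
# one of those facts.  Everything here is illustrative.
def thought_experiment(candidate_rule, told_facts, contradicts):
    premises, conclusion = candidate_rule      # rule: premises -> conclusion
    if premises <= told_facts:                 # fact1 & fact2 are known...
        derived = conclusion                   # ...so the rule would give fact3
        for fact in told_facts:
            if contradicts(derived, fact):     # fact3 says the opposite of fact4
                return f"reject rule: {derived!r} contradicts {fact!r}"
    return "rule survives the thought experiment (for now)"

def contradicts(a, b):
    """Toy contradiction test: 'X' versus 'not X'."""
    return a == f"not {b}" or b == f"not {a}"

told = {"fact1", "fact2", "not fact3"}         # here fact4 happens to be "not fact3"
rule = ({"fact1", "fact2"}, "fact3")
print(thought_experiment(rule, told, contradicts))
# -> reject rule: 'fact3' contradicts 'not fact3'
```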
So the “rules” are not hard-coded logic in the engine… they are rules that the bot can acquire when you communicate with it in Natural Language.
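Just to make that last point concrete, something (very crudely) like the following, where an “If X, Y” sentence becomes a piece of data the engine can later experiment with. In reality this would go through the parse trees and the ontology rather than a regex; the code and names are only an illustration.

```python
import re

def acquire_rule(sentence):
    """Crude illustration: turn an 'If <condition>, <consequence>' sentence
    into a (premises, conclusion) pair.  The real system would use parse
    trees plus the ontology, not a regular expression."""
    match = re.match(r"if (.+?),\s*(.+)", sentence.strip(), re.IGNORECASE)
    if match:
        condition, consequence = match.groups()
        return ({condition.strip()}, consequence.strip().rstrip("."))
    return None

knowledge_kb = []  # rules live as data, not as hard-coded engine logic
rule = acquire_rule("If you touch the stove element when it is red, "
                    "you will hurt yourself.")
if rule:
    knowledge_kb.append(rule)
print(knowledge_kb)
# [({'you touch the stove element when it is red'}, 'you will hurt yourself')]
```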