Another thought I had was regarding ‘continual reflection’.
Meaning: your program is asked question Q and, via more than one algorithm (or rule, or piece of learned information, whatever you want to call it), replies with:
Yes
No
Say rule-1 outputs Yes to Q, but rule-2 outputs No.
Before the AI responds, it should look at all the outputs it would have sent to the user, and first pump them into a “level 2” analysis.
A “level 2” analyzer would see… “hmm, we can’t output both Yes and No”… so that level-2 rule would take those outputs as input, do whatever processing, and output something; call it R1. BUT what if there were other level-2 rules that generated R2 and R3?
So NOW the AI should ask: can I output THIS list (R1, R2, R3) to the user? *OR* does it need further processing? Try all “level 3” rules… and *IF* no level-3 rule outputs anything (there are no level-3 rules that have anything to say), only THEN does it know it should stop processing and respond with R1, R2, R3… but if there WERE outputs, the process goes on and on.
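Here’s a rough sketch of that loop in Python. To be clear, everything in it (the rule functions, the RULE_LEVELS table, the stopping condition) is my made-up illustration of the idea, not any real system:

```python
# Hypothetical level-1 rules: each answers the question directly.
def rule1(question):
    return "Yes"

def rule2(question):
    return "No"

# Hypothetical level-2 rule: looks at the level-1 outputs and
# notices we can't send both Yes and No to the user.
def conflict_rule(outputs):
    if "Yes" in outputs and "No" in outputs:
        return "R1: my rules disagree on this question"
    return None  # this rule has nothing to say

# Level 1 takes the question; every higher level takes the
# previous level's list of outputs.
RULE_LEVELS = [
    [rule1, rule2],    # level 1
    [conflict_rule],   # level 2
    # level 3, 4, ... rules could be appended here
]

def reflect(question):
    outputs = [r(question) for r in RULE_LEVELS[0]]
    for level_rules in RULE_LEVELS[1:]:
        next_outputs = [o for o in (r(outputs) for r in level_rules)
                        if o is not None]
        if not next_outputs:   # no higher-level rule fired:
            break              # stop processing, respond with what we have
        outputs = next_outputs # otherwise keep climbing levels
    return outputs

print(reflect("Will it rain?"))
# -> ['R1: my rules disagree on this question']
```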
Another idea .... we need to convert events that are happening into data that can be processed.
“Will you answer no to this question?”
The bot is unsure what you mean and asks:
“What question?????”
But then… the very EVENT of it answering should be (somehow) converted into a statement.
It needs to realize… “Hey!! I didn’t answer NO to that question.” Now that event-to-statement conversion results in a fact.
So now we have:
Fact: user wants to know if I will answer “no” to his question
Fact (converted from an actual “event”): “I did not answer no to his question”
… and deduce from there. You get the idea.
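A tiny sketch of that event-to-statement conversion, again with made-up names (record_event, the facts list) just to illustrate the idea:

```python
# A single store where asserted facts and facts derived from
# events sit side by side, so deduction can run over both.
facts = []

def record_event(statement):
    # The event-to-statement conversion: something that HAPPENED
    # becomes an ordinary fact the bot can reason over.
    facts.append(statement)

# The user's question, stored as a fact about what the user wants.
facts.append("user wants to know if I will answer 'no' to his question")

# The bot replies with a clarifying question instead of "no"...
reply = "What question?????"
print("bot:", reply)

# ...and that very act of replying is itself converted to a fact.
record_event("I did not answer 'no' to his question")

# Both kinds of fact are now available for deduction.
for f in facts:
    print("Fact:", f)
```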