Senior member | Total posts: 974 | Joined: Oct 21, 2009
Gödel’s incompleteness theorems, the halting problem, logic puzzles, self-referential statements.
Will your bot handle a conversation involving them? If you’re entering the Loebner Prize, you’d better hope they don’t attack it with this stuff!
Will you answer no to this question?
‘No’ is the wrong answer: by answering no, you’re saying ‘no, I won’t answer no to this question’, which doesn’t work. If you say ‘yes’, then ‘yes, I’m answering no to this question’ doesn’t work either.
Can a bot handle that analysis, and discuss it in natural language? What a tall order indeed!
“The following sentence is true. The previous sentence is false.”
If sentence 1 is true, then sentence 2 is true; but if sentence 2 is true, then sentence 1 is false, and we assumed it was true, so that doesn’t work. On the other hand, if sentence 1 is false, then sentence 2 is false; in that case, sentence 1 is true, but we assumed it was false, so that doesn’t work either.
Could a bot eventually handle this? I have some ideas on how this could be accomplished. Has anyone else thought about these types of things for their bots to tackle?
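For what it’s worth, the case analysis above can be done mechanically: enumerate every truth assignment for the two sentences and test it against what each sentence claims. A minimal Python sketch (a toy encoding of my own, not any particular bot’s machinery):

```python
from itertools import product

def consistent(s1, s2):
    # Sentence 1 claims "sentence 2 is true":  s2 must match s1's truth value.
    # Sentence 2 claims "sentence 1 is false": s1 must be the opposite of s2.
    return (s2 == s1) and (s1 == (not s2))

solutions = [pair for pair in product([True, False], repeat=2)
             if consistent(*pair)]
print(solutions)  # [] -- no consistent assignment exists: a paradox
```

An empty result is exactly the ‘doesn’t work either way’ conclusion; detecting the paradox is the easy half, and discussing it in natural language is the tall order.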
Posted: Mar 16, 2011 [ # 1 ]
Guru | Total posts: 1081 | Joined: Dec 17, 2010
Skynet-AI’s response:
USER: Will you answer no to this question?
AI: Answer? Would you do my bidding if I said I would?
Posted: Mar 16, 2011 [ # 2 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Interesting, if cryptic, response.
Posted: Mar 16, 2011 [ # 3 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Victor Shulist - Mar 16, 2011: Could a bot eventually handle this?
I’d say this is a simple one:
What you describe is paradoxical reasoning. It is very simple to detect with some boolean logic: if statement A being true implies statement B is true, while statement B being true implies statement A is false, then assuming A true ends up negating A itself, so the bot flags a ‘paradox’.
The same can be done for circular reasoning (A is true if B is true, and B is true if A is true).
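Here is a minimal sketch of that boolean check (my own toy encoding, not Hans’s actual model; the names `propagate` and `is_paradox` are mine): implications are edges between signed statements, and we simply propagate from an assumption.

```python
def propagate(implications, start):
    # Follow implication edges from `start`; return every derived fact.
    # A fact is a (name, truth_value) pair such as ('A', True).
    derived, frontier = {start}, [start]
    while frontier:
        fact = frontier.pop()
        for consequence in implications.get(fact, []):
            if consequence not in derived:
                derived.add(consequence)
                frontier.append(consequence)
    return derived

def is_paradox(implications, assumption):
    # Paradox: assuming a statement lets us derive its own negation.
    name, value = assumption
    return (name, not value) in propagate(implications, assumption)

# Victor's pair: A = "the following sentence is true",
#                B = "the previous sentence is false".
liar_pair = {
    ('A', True):  [('B', True)],
    ('A', False): [('B', False)],
    ('B', True):  [('A', False)],
    ('B', False): [('A', True)],
}
print(is_paradox(liar_pair, ('A', True)))   # True
print(is_paradox(liar_pair, ('A', False)))  # True -> a paradox either way
```

Circular reasoning (A is true if B is true, B is true if A is true) shows up in the same structure: propagation arrives back at the original assumption with the same polarity instead of the opposite one.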
Posted: Mar 16, 2011 [ # 4 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Simple, maybe. But you don’t want to hard-code anything.
If you hard-code the detection of this, yes, that would be simple.
But have the bot detect it by itself, and also discuss it: both the original problem and its discussion in NL.
I hope you’re right though! I’ll see how Grace does with these kinds of problems once she is done all her other training.
Posted: Mar 16, 2011 [ # 5 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
In my own model, the AI can learn these concepts based on polarity and causality: something is true or false (polarity), and one thing leads to (or implies) another thing (causality). From those primitives we can build the compound concept that two truths canceling each other out is a paradox (which in part demonstrates the concept of ‘logic’).
To be able to have a discussion about it, my AI will need to learn a language.
Posted: Mar 16, 2011 [ # 6 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Hmm, yes. With Grace, as you know, I ‘skipped ahead’ and gave her grammar knowledge directly.
However, she determines the semantic meaning of sentences from a) initial knowledge and b) knowledge gained through discussion, so she works with the semantic information that was attached to the grammar trees. The causality she learns will therefore be causality over those semantics, all from NL; for example, “When it rains, it is wet outside.” That’s not a perfect example, but you get the idea. Before that, she’ll be told what ‘rain’ is.
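To make the ‘rain’ example concrete, here is a tiny illustrative sketch (not Grace’s actual grammar-tree machinery; `parse_when_rule` is a name I made up) of turning a “When X, Y” sentence into a usable causal rule:

```python
import re

def parse_when_rule(sentence):
    # Split a "When X, Y" sentence into a (condition, consequence) pair.
    match = re.match(r"[Ww]hen (.+?), (.+?)\.?$", sentence.strip())
    return (match.group(1), match.group(2)) if match else None

rule = parse_when_rule("When it rains, it is wet outside.")
print(rule)  # ('it rains', 'it is wet outside')

# Once the rule exists, asserting the condition yields the consequence:
facts = {"it rains"}
if rule and rule[0] in facts:
    facts.add(rule[1])
print(facts)  # now also contains 'it is wet outside'
```

A real system would of course match on parsed semantics rather than raw strings, which is exactly the part the grammar trees are meant to supply.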
As for the whole recursive problem of defining words with words, with words, etc., etc.: I don’t think it will be a problem, because at some point people don’t go any further either.
When people talk about the weather, they know things down to the level of, for example, rain being water; some people know that water is H2O. You could go even deeper, to sub-atomic particles, but for most practical purposes Grace’s knowledge will only have to extend so deep for her to be an interesting conversational entity.
Since you are designing a mind for a robot, I see why you want to associate actual inputs with words. But for a conversational bot, I’m hoping I won’t have to go to that level. My aim, again, is: a) useful info retrieval (with much more, b) interactive discussion to solve/plan things like doing taxes or research, and c) an entertaining, casual conversation bot.
Posted: Mar 16, 2011 [ # 7 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Some of what I wrote above got cut off... example (b)... and Erwin hasn’t given us back the EDIT button...
For (b) I wanted to say: it won’t simply be a simple sentence followed by a simple question, but perhaps one or more complicated sentences, and then the input question.
So, for example, part of example (b) is the ability to read information from, say, Wikipedia or some other source, and answer questions based on it. This will require some reasoning to “connect the dots”.
Also, I want Grace to develop her own arguments, and then use them to deduce a statement.
Thus, if she cannot find the answer to a question from actually being told it, she will try to find a way it can be deduced (to be true or false). She will also report if there is a contradiction between a directly given fact and a deduced one.
Even more interesting will be the ability to discuss that discrepancy and update her argument logic.
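A hedged sketch of that told-versus-deduced contradiction check (the representation and names are my own invention, not Grace’s actual pipeline, which works over grammar trees): tag each fact with its provenance and chain rules forward.

```python
def forward_chain(told, rules):
    # Derive facts from told ones. A fact is (statement, truth_value);
    # a rule is a (premise_fact, conclusion_fact) pair.
    facts = {fact: "told" for fact in told}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts[conclusion] = "deduced"
                changed = True
    return facts

def contradictions(facts):
    # Statements asserted both true and false, with the provenance of each.
    return [(s, facts[(s, True)], facts[(s, False)])
            for (s, v) in facts
            if v and (s, False) in facts]

told = [("socrates is a man", True), ("socrates is mortal", False)]
rules = [(("socrates is a man", True), ("socrates is mortal", True))]
facts = forward_chain(told, rules)
print(contradictions(facts))
# [('socrates is mortal', 'deduced', 'told')] -- flag this for discussion
```

The provenance tags are what make the reported discrepancy discussable: the bot can say which side was directly given and which side it deduced.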
Posted: Mar 16, 2011 [ # 8 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Hans Peter Willems - Mar 16, 2011:
To be able to have a discussion about it, my AI will need to learn a language
Looking forward to seeing the examples. I know you’re a while away from implementation, but still.
Posted: Mar 16, 2011 [ # 9 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Victor Shulist - Mar 16, 2011: I know you’re a while away from implementation, but still.
Actually, I had a serious breakthrough last night, and another one this morning. It seems I’m very close to solving just about the biggest problem in strong AI (the symbol grounding problem). So it might be only a few months before I start coding a proof of concept, and it shouldn’t be long after that before I have something working (albeit probably a ‘simple’ example), since I’m still convinced I only need to do minimal coding: the data model will be paramount in my system.
Posted: Mar 16, 2011 [ # 10 ]
Experienced member | Total posts: 66 | Joined: Feb 11, 2011
You: Will you answer no to this question?
Bot: I will, but not right now.
You: Will you answer no to this question?
Bot: I just might.
You: Will you answer no to this question?
Bot: I am too tired now, how about tomorrow?
Posted: Mar 16, 2011 [ # 11 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Patti Roberts - Mar 16, 2011: You: Will you answer no to this question?
Bot: I will, but not right now.
You: Will you answer no to this question?
Bot: I just might.
You: Will you answer no to this question?
Bot: I am too tired now, how about tomorrow?
None of those responses was “no”, so you didn’t answer no; thus the correct answer really was “no”, but you can’t actually reply with it.
Posted: Mar 16, 2011 [ # 12 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Hans Peter Willems - Mar 16, 2011: Victor Shulist - Mar 16, 2011: I know you’re a while away from implementation, but still.
Actually, I had a serious breakthrough last night, and another one this morning.
Do you do what I do? Keep a notebook and pencil beside the bed? Most times I get insights half an hour after going to bed, just before falling asleep, and also if I wake up in the middle of the night. Strange, but that seems to be a great time to think. I guess it’s the silence!
Posted: Mar 16, 2011 [ # 13 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
User: Will you answer no to this question?
AI: Either yes or no as an answer will invalidate this answer; you presented a paradox. YOU go figure it out yourself!
Posted: Mar 16, 2011 [ # 14 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
You: Will you answer no to this question?
Bot: I will, but not right now. -> the response is a conditional, time-based ‘yes’, thus not a ‘no’
You: Will you answer no to this question?
Bot: I just might. -> again a conditional ‘yes’, thus not strictly a ‘no’
You: Will you answer no to this question?
Bot: I am too tired now, how about tomorrow? -> again a conditional, time-based ‘yes’, thus not strictly a ‘no’
Thus the true answer is really ‘no’, but as I mentioned above, you can’t actually SAY that!
Posted: Mar 16, 2011 [ # 15 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Hans Peter Willems - Mar 16, 2011: User: Will you answer no to this question?
AI: Either yes or no as an answer will invalidate this answer; you presented a paradox. YOU go figure it out yourself!
That’s the stuff!
That is what a bot needs to do...
But, as always, I want the bot to be able to discuss it in NL.
Interesting work ahead for all of us!