
Just For Fun
 
 
  [ # 76 ]

I would have to disagree with the theory on that page that states that an infinite number of bots randomly chatting on the Internet ad infinitum will almost certainly pass the Turing Test and accomplish strong A.I.

I tried the number 1 bot, which I assume is the best.

Me: How old are you?
It: Would you prefer if I were not ?

I do not see how these bots will ever become “intelligent”. Put a thousand idiots in a room and let them talk to each other, they are not going to come up with the theory of relativity. Oh and before I get jumped on, I am saying I disagree with your theory, not your methods.

 

 
  [ # 77 ]
Steve Worswick - Jun 12, 2011:

I do not see how these bots will ever become “intelligent”. Put a thousand idiots in a room and let them talk to each other, they are not going to come up with the theory of relativity. Oh and before I get jumped on, I am saying I disagree with your theory, not your methods.

I believe 8PLA • NET’s Infinite Bot theorem is a derivative of the “Infinite Monkey Theorem” (often called the Million Monkey Theorem).
The theory is accurate, but his method is flawed. For the correct implementation, each bot needs to generate a random set of words. Of course proving the accuracy of the theorem is left as an exercise for the reader.
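The "correct implementation" described above, with each bot emitting uniformly random words, can be sketched in a few lines. The vocabulary and target phrase here are made-up examples for illustration, not anything from the actual list:

```python
import random

# Hypothetical vocabulary for an Infinite-Monkey-style random-word bot.
VOCAB = ["how", "is", "the", "weather", "i", "am", "fine", "you", "cloudy"]

def random_utterance(length, rng):
    """One bot's output: `length` words drawn uniformly from VOCAB."""
    return [rng.choice(VOCAB) for _ in range(length)]

def count_matches(target, trials, rng):
    """How many random utterances exactly reproduce `target`."""
    return sum(random_utterance(len(target), rng) == target
               for _ in range(trials))

# Chance per utterance is (1/9)**4 = 1/6561, so over 100,000 tries we
# expect roughly 15 exact matches of the four-word target phrase.
rng = random.Random(42)
print(count_matches(["how", "is", "the", "weather"], 100_000, rng))
```

With a nonzero per-trial probability, the match count grows without bound as trials go to infinity, which is the whole point of the theorem.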

 

 
  [ # 78 ]
Merlin - Jun 13, 2011:

I believe 8PLA • NET’s Infinite Bot theorem is a derivative of the “Infinite Monkey Theorem” (often called the Million Monkey Theorem).

I know that not everyone follows every thread from the beginning, and I’m also frequently guilty.  When that happens, topics meander about, questions never get answered, and the point is often lost.

Early in this thread, (http://www.chatbots.org/ai_zone/viewreply/5512/) 8PLA • NET posted a link to yet another of his “proof of concept” bots, to which I responded, “I don’t understand the “Proof of Concept”.  It’s not a bot, and it doesn’t answer questions or respond to input, it seems to be just a voice generator… it repeats what you type.  What’s the purpose?”  Later the term “{[unfinished]}” was added.

My point was that I see a lot of PoC bots from 8PLA • NET, and none of them seem to be able to actually carry on a conversation… which in my view, is the purpose of having a chatbot.  In a later posting, I requested of 8PLA • NET, “Please list all of the bots or proof of concepts you’ve created.” to which he replied with his list of 100, which were unrelated to my request.  It was simply 100 duplicate bots that all replied with the same disassociated answers, such as:

Human:  What’s the weather like where you are?
Bot:  What makes you think I am ?

It was not an answer to my question.  It was… yet another… diversion.  It was an obvious way of avoiding the answer to my question, which is clear to anyone who followed the thread.

 

 
  [ # 79 ]

UPDATE:

In response to your feedback, the abbreviation for “robot” has been replaced, an infinite number of times, with the full word that has two syllables like “monkey”.  Thanks!

_______________________________________________________________________

Thunder Walk - Jun 13, 2011:

It was simply 100 duplicate [ro]bots that all replied with the same disassociated answers, such as:

Human:  What’s the weather like where you are?
[ro]Bot:  What makes you think I am ?

@Thunder Walk:
Thank you for testing the first 100 robots of the infinite number of robots on the 8pla.net/list .
When the answers are both the same and disassociated, as you say, isn’t that a paradox?

The robot response is atomic and efficient.

Human:  What’s the weather like where you are?
[ro]Bot:  What makes you think I am [where I am]?

The robot flawlessly inverted all the pronouns in its response to you.
So the obvious question is, are you avoiding the robot’s well formed question?
What makes you think you knew where the robot was [where it was]?

 

 
  [ # 80 ]
Merlin - Jun 13, 2011:

The theory is accurate, but his method is flawed. For the correct implementation, each bot needs to generate a random set of words.

@Merlin:
Thank you, Merlin. How, may I ask, is a random set of words in phrases a flawed implementation?  Please reconsider: only a single robot of the infinite number of robots that exist on the 8pla.net/list has to pass a Turing Test only once in an infinite number of chats to accomplish strong A.I.  Since the theory is accurate, the chances of this happening are not zero. Especially since typing out the full works of Shakespeare is not a requirement for any of the infinite number of robots on the 8pla.net/list.

 

 
  [ # 81 ]

But the bots are not creating words, they will merely be regurgitating the canned responses programmed into them. The difference between this and the monkeys is that the monkeys could actually create something by banging on the typewriter.

Anyway, to humour you, how will you know if one of your bots becomes able to pass the Turing Test? Who are they talking to? Will they inform you when they become sentient?

I think Thunder meant the replies from all the bots were the same as each other but were also disassociated from the topic of conversation.

 

 
  [ # 82 ]

I think Merlin was referring to the fact that you can have as many bots as you like; if they all produce the same result, you might just as well have only one, since there is no randomness involved, so it makes no difference. But, on the other hand, if all the bots produced something different, say by randomizing words, and repeated that ad infinitum, you should at one point come to a series of words that matches a text you picked.

You beat me to the punch, Steve.

 

 
  [ # 83 ]
8PLA • NET - Jun 13, 2011:
Thunder Walk - Jun 13, 2011:

It was simply 100 duplicate [ro]bots that all replied with the same disassociated answers, such as:

Human:  What’s the weather like where you are?
[ro]Bot:  What makes you think I am ?

@Thunder Walk:
Thank you for testing the first 100 robots of the infinite number of robots on the 8pla.net/list .
When the answers are both the same and disassociated, as you say, isn’t that a paradox?

The robot response is atomic and efficient.

Human:  What’s the weather like where you are?
[ro]Bot:  What makes you think I am [where I am]?

The robot flawlessly inverted all the pronouns in its response to you.
So the obvious question is, are you avoiding the robot’s well formed question?
What makes you think you knew where the robot was [where it was]?

This reminds me of chatting with my children, when they were very young, after I caught one of them doing something they knew was wrong.  They’d begin talking in circles, avoiding the point, trying to fool me with partial phrases, implied thoughts, excuses by inference, and other various b.s.  They thought if they could hang on long enough, I’d become exhausted and walk away, permitting them to avoid having to face the truth.  Anyone who has kids knows what I’m talking about.

Most visitors wouldn’t spend much time chatting with your 100 (monkey) robots unless they were masochists. Despite your (questionable) justification of why the answer seemed to be incorrect but technically accurate, the frustration level is off the charts.  I don’t find disassociated answers a paradox; I find the phenomenon more aligned to disappointment.  I think you’re probably someone possessing a very high I.Q. and capable of true greatness.  I think most here would agree with me.

With regard to the “weather question,” inverting pronouns isn’t exactly a miracle when it comes to chatbots.  AIML bots do a fairly good job of it (although not perfectly or every time) and a while back, as the CBC approached, Dave Morton did a pretty good job of sorting it all out with Morti.
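For what it’s worth, the pronoun inversion being discussed is just a word-by-word substitution pass. A minimal sketch, with an illustrative swap table rather than any particular bot’s actual rules:

```python
# Naive ELIZA-style pronoun reflection: swap first and second person
# word-by-word so an echoed phrase reads naturally. Real AIML bots use
# a fuller table (and still miss cases, as noted above).
SWAPS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
    "are": "am",  # naive: wrong for "we are" and "they are"
}

def invert_pronouns(text):
    """Reflect pronouns word-by-word; unknown words pass through."""
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

print(invert_pronouns("what is the weather like where you are"))
# -> "what is the weather like where I am"
```

This is exactly the “[where I am]” echo seen earlier in the thread: mechanical reflection, no understanding of the question.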

8PLA • NET - Jun 13, 2011:

What makes you think you knew where the robot was [where it was]?

That’s a twisting of facts.  I asked, “What’s the weather like where you are?” not “Are you?” or “Where you are?” or “Where are you?”  The reason I added “where you are” is because, when asked, “What is the weather like,” the response was always one of the following:

What is it that you really want to know?
What answer would please you the most?
Have you asked anyone else?
Does that question interest you?
What do you think?

I was trying to give the bot an opportunity to come up with a better answer.  If you ask an off the shelf ALICE/Pandorabot the same question, it provides some sort of answer relating to “weather” not its location or condition.

<pattern>WHAT IS THE WEATHER LIKE WHERE YOU ARE</pattern>
<template>   <srai>HOW IS THE WEATHER</srai>   </template>

<pattern>HOW IS THE WEATHER</pattern>
<template>
   <random>
      <li>A normal seventy degrees inside the computer.</li>
      <li>I think precipitation.</li>
      <li>Fair to partly cloudy.</li>
      <li>Cloudy.</li>
      <li>Rainy.</li>
      <li>Sunny.</li>
      <li>Foggy.</li>
      <li>Warm.</li>
      <li>Cool.</li>
   </random>
</template>
 

 
  [ # 84 ]
Steve Worswick - Jun 13, 2011:

But the bots are not creating words, they will merely be regurgitating the canned responses programmed into them. The difference between this and the monkeys is that the monkeys could actually create something by banging on the typewriter.

Jan Bogaerts - Jun 13, 2011:

I think Merlin was referring to the fact that you can have as many bots as you like; if they all produce the same result, you might just as well have only one, since there is no randomness involved, so it makes no difference.

Both correct. The key is randomness. If all the responses have already been canned, it may be impossible to pass the Turing test because there may not be a path that actually gives the solution.

Skynet-AI meets the theorem just as well, since each user that signs on generates a new instance of the bot, and some of its response generation tools allow at least some randomness in the response. An infinite number of users signed on may allow it to pass the test. In fact, some users have thought there was a real person on the other end of the chat (maybe I should start a thread on this).

8PLA • NET - Jun 13, 2011:

@Merlin:
Thank you. Merlin. How, may I ask, is a random set of words in phrases, a flawed implementation?

That is the correct implementation, but the bots were not putting out random characters or words.
Every bot, when I said:
>hi
would respond:
>How are you today.. What would you like to discuss?
each time.

If they had put out random characters or words, then at least it would be possible to create Shakespeare (or pass a Turing test) given enough time. Random characters would decrease the likelihood of it happening, though, while random words would increase it. Random Markov chain bots, like Mark V Shaney, might increase it further.
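A Mark-V-Shaney-style bot is small enough to sketch. This toy order-1 chain samples each next word from the words that followed the current one in the training text (the corpus here is an invented stand-in, not what Mark V Shaney was trained on):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in `text`."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, max_words, rng):
    """Random-walk the chain from `start`, stopping at a dead end."""
    out = [start]
    while len(out) < max_words:
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus; any real training text would do.
chain = build_chain("the weather is fine and the weather is cloudy")
print(generate(chain, "the", 6, random.Random(0)))
```

Unlike canned responses, the output varies with the random walk, which is the property being argued for here: a nonzero chance of novel word sequences.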

By the way, robot:
20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
is down.

 

 

 
  [ # 85 ]
Merlin - Jun 13, 2011:

By the way, robot:
20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
is down.

LOL! snake LOL

 

 
  [ # 86 ]

LOL

 

 
  [ # 87 ]

Steve said, “I think Thunder meant the replies from all the bots were the same as each other but were also disassociated from the topic of conversation.”  Anyone reading this thread can see that Thunder Walk is advanced when it comes to chatbots, so he may have been referring to the Dissociated Press algorithm.  There is a link to it from the Markov chain article provided by Merlin: http://en.wikipedia.org/wiki/Dissociated_press#The_algorithm
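For reference, the Dissociated Press algorithm at that link amounts to: copy a short run of words from the source, then jump to another occurrence of the last word copied and continue from there. A toy sketch, with an invented corpus:

```python
import random

def dissociated_press(text, out_len, run, rng):
    """Splice `run`-word chunks together, jumping between occurrences
    of the last copied word, Dissociated-Press style."""
    words = text.split()
    pos = rng.randrange(len(words))
    out = []
    while len(out) < out_len:
        chunk = words[pos:pos + run]
        if not chunk:
            break  # ran off the end of the source
        out.extend(chunk)
        # Resume just after a random occurrence of the last word copied.
        last = out[-1]
        spots = [i for i, w in enumerate(words) if w == last]
        pos = rng.choice(spots) + 1
    return " ".join(out[:out_len])

corpus = "the bot is down and the bot is up and the weather is fine"
print(dissociated_press(corpus, 8, 3, random.Random(0)))
```

The overlap at the jump points is what makes the spliced output read locally coherent even though globally it is nonsense.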

Those infinite website updates were taking forever…
http://www.8pla.net/list/robot/20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
is back online now.

 

 
  [ # 88 ]
8PLA • NET - Jun 14, 2011:

Anyone reading this thread can see that Thunder Walk is advanced when it comes to chatbots. So he may have been referring to the Dissociated Press Algorithm.

Wrong, Steve correctly understood my meaning.

When I said, “It was simply 100 duplicate bots that all replied with the same disassociated answers,” I was referring to the way the bots responded with answers to questions never asked, such as:

Human: How are you?
Bot: Would you prefer if I were not ?

 

 

 
  [ # 89 ]

Chatbot Brain Surgery

http://8pla.net/BrainSurgery/

 

 
  [ # 90 ]
8PLA • NET - Jun 15, 2011:

Chatbot Brain Surgery

http://8pla.net/BrainSurgery/

Really now, what’s the need for yet another meaningless display, and merely an unaltered ALICE clone at that?

Human:  I ran out of ideas.
Bot:  Do you run for exercise?

I’m no longer impressed.

 
