Gary Dubuque - Jul 11, 2011:
@Robert,
Why would an algorithm worry about being in prison for the rest of its existence? It seems your view of them already puts them in a prison.
Why would Watson worry about winning at Jeopardy? It seems the programmers’ view of it already puts it in a game.
“How long would a computer wait on the other computers before it could conclude that there was more to the problem than just logic or at least that it now had the additional information to determine they all were marked red? A couple of microseconds? Computers can figure out all the possibilities much faster than you can. I maintain that the most direct calculations can determine is that there are at least two of the three marked red. The taps don’t tell enough to declare all three are marked red.”
How long would a human wait on the other humans before they could conclude that there was more to the problem than just logic, or at least that they now had the additional information to determine the hats were all marked red? A couple of minutes? ...
In other words, the time factor is irrelevant to whether the entities are computers or humans.
“I’ve never heard of an algorithm that understands what other machines are doing without sensors or instrumentation as agents inside the other machines. Are you suggesting all computers have some kind of built-in protocol so they can handshake these things without arranging any direct communication paths?”
IRC bots are algorithms, and I can communicate with them using natural language…
“If so, why wouldn’t they all three quickly know the answer and be set free?”
All three humans could also simultaneously respond with the correct answer, so again this objection is the same whether the entities taking part in the test are humans or computers.
We could do a test. Put three programs in an IRC channel, and have a “warden” assign colors to them. The programs would have to blindfold themselves so that they wouldn’t record their own colors internally and use that to generate their answers; perhaps you could use IRC ACTIONs (/me messages) to assign the colors and make sure the programs did not read those actions. Or some such contrivance.
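One way to sketch that contrivance (a hypothetical in-process stand-in, not the actual subbot.org code): let the warden hold the color assignments and only ever send each program messages about the *other* programs’ hats, so a program never has its own color available to peek at.

```ruby
# Hypothetical warden: it knows every hat color, but the message stream it
# produces for an agent never mentions that agent's own hat -- the
# "blindfold" is enforced by the warden rather than trusted to the programs.
class Warden
  def initialize(colors)   # e.g. { "a" => "red", "b" => "red", "c" => "red" }
    @colors = colors
  end

  # The only lines agent `name` is allowed to read: one
  # "you see a <color> hat" per *other* agent.
  def messages_for(name)
    @colors.reject { |n, _| n == name }
           .map    { |_, color| "you see a #{color} hat" }
  end
end

warden = Warden.new("a" => "red", "b" => "red", "c" => "white")
warden.messages_for("a")  # => ["you see a red hat", "you see a white hat"]
warden.messages_for("c")  # => ["you see a red hat", "you see a red hat"]
```

An IRC version would deliver these as private messages or ACTIONs; the point is just that an agent’s own color never appears anywhere in its input.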
I’ve started on such a program: subbot.org/hatriddle.
A sample dialog, also available at subbot.org/hatriddle/dialog/three_red_hats.txt (note that the dialog is frightfully synchronous, but an IRC wrapper would take care of that, since IRC is inherently asynchronous):
> you are wearing a red hat
Okay, I am wearing a red hat (but I can’t see it!).
> you see a red hat
Okay, I see a red hat. *TAP*!
> you see another red hat
Okay, I see another red hat.
> you hear a tap
Okay, I hear a tap.
> you hear another tap
Okay, I hear a tap.
> what color is your hat?
...
> what color is your hat?
Red!
----
So, the bot doesn’t answer the first time, because it doesn’t know; it does answer the second time, because it now knows that no one else has answered.
It would be fun to put this program to the test against others in an irc chatroom. Or in another setup. Anyone else up for the challenge?!?
----
The logic boils down to the following if-then statements (expressed in Ruby in hatriddleagent.rb). Note that the tap counts include my own tap: the bot in the dialog tapped once and heard two taps, so the three-tap rules apply to it.
if I heard 0 taps then I am wearing a White hat.
if I heard 2 taps and I tapped then I am wearing a White hat.
if I heard 2 taps and didn’t tap then I am wearing a Red hat.
if I heard 3 taps then wait.
if I heard 3 taps and enough time has passed for the others to process the first four rules, then my hat is Red.
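The five rules can be condensed into a single function. This is my own sketch, not the contents of hatriddleagent.rb, and it assumes the tap total includes the agent’s own tap (which is how the dialog works out: one tap of its own plus two heard makes three). Note that a total of exactly one tap is impossible: a lone red hat is seen, and tapped at, by both of the other two prisoners.

```ruby
# Sketch of the five rules (the real hatriddleagent.rb may differ).
# `taps` is the total number of taps the agent knows about, its own included.
# With three hats only 0, 2, or 3 taps can occur: one red hat draws taps
# from the two *other* prisoners, and two or three red hats draw all three.
def my_hat_color(taps:, i_tapped:, others_silent_long_enough:)
  if    taps == 0              then "white"  # nobody sees a red hat at all
  elsif taps == 2 && i_tapped  then "white"  # the silent one sees two whites
  elsif taps == 2 && !i_tapped then "red"    # both taps must be aimed at me
  elsif taps == 3 && !others_silent_long_enough then "wait"
  else  "red"  # 3 taps and lasting silence: a white hat on me would have
               # let the others work out their own colors and answer by now
  end
end

my_hat_color(taps: 3, i_tapped: true, others_silent_long_enough: false)  # => "wait"
my_hat_color(taps: 3, i_tapped: true, others_silent_long_enough: true)   # => "red"
```

The `others_silent_long_enough` flag is the “enough time has passed” condition from the last rule; in an IRC setting it would be a timeout rather than a turn counter.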