Hi Merlin, Victor, and Jan,
First Merlin,
Regarding {B}, it is assumed to have a probability of 1 (it represents a certainty within the Luce and Raiffa example). So your choice is between a certain outcome {B} and uncertain outcomes {A, C}. I should have stated that prior, and I hope that clears up the misunderstanding there.
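To make that prior concrete, here is a toy sketch of the certain-versus-uncertain comparison. The payoffs and the probabilities over {A, C} are numbers of my own invention for illustration, not values from Luce and Raiffa:

```python
# A minimal sketch of the certain-versus-uncertain choice, with made-up
# payoffs and probabilities (assumptions for illustration only).
certain_B = 5.0                        # {B}: probability 1, payoff 5

lottery_AC = [(0.5, 10.0),             # {A}: assumed probability and payoff
              (0.5, 0.0)]              # {C}: assumed probability and payoff

expected_AC = sum(p * v for p, v in lottery_AC)
choice = "{B}" if certain_B > expected_AC else "{A, C}"
print(f"E[{{A, C}}] = {expected_AC}, certain {{B}} = {certain_B}; choose {choice}")
```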
Regarding the first input, you don’t have to forget about it. I’ll demonstrate in just a second; I just want to get to your other point first.
Let us first consider the difference between applying a Turing Test (in your version, the Merlin version) to you at two years old and applying it to you now. At two, you lacked the ability to communicate, but you still had the ability to reason and learn; otherwise, where would your ability to learn language have come from? This is why the “Terrible Twos” are called that: children at that age are frustrated because they know what they want but cannot communicate it. There is a difference between knowledge and intelligence. Now consider your example of naming a number. Ask a two-year-old to name a number; they can’t, and thus they fail the Turing Test. That conclusion doesn’t follow, because the two-year-old is still intelligent; they merely lack knowledge of the words.
Also, pairwise comparison is a simple way to consider two sets: A is defined as, say, “Barack Obama,” and not-A is all that is not Barack Obama, so that these two sets are {A} and {B}. One may then draw a card from a hat holding many slips of paper, each bearing the name of an item that may belong to either {A} or {B}, and ask whether the item drawn was in fact in set {A} or set {B}. The computer and the human may then learn, by iteration, the prior probability that something is in either set.
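To show what I mean by learning the prior by iteration, here is a minimal sketch of my own (the universe of items and the oracle standing in for the interrogator are both hypothetical, not anything from Mu):

```python
import random

# A toy version of the hat-drawing exercise: repeated labeled draws let
# the learner estimate the prior probability of membership in A vs. B.
universe = ["Barack Obama", "blue", "butterflies", "seven", "Paris"]

def oracle(item):
    """The interrogator's ground truth: is this item in set A?"""
    return item == "Barack Obama"

counts = {"A": 0, "B": 0}
for _ in range(1000):                  # repeated draws from the hat
    item = random.choice(universe)
    counts["A" if oracle(item) else "B"] += 1

total = sum(counts.values())
print("Estimated prior P(A):", counts["A"] / total)
print("Estimated prior P(B):", counts["B"] / total)
```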
So: “Give me a number.” The computer responds, “Blue.” The interrogator responds, “No, blue is not a number,” and Blue is assigned a 0. “Give me a number.” The computer responds, “Butterflies.” The interrogator responds, “No, butterflies is not a number,” and Butterflies gets a 0. Going through a list of everything (a single iteration) results in a 0 or a 1 for those things which are not, and which are, numbers, respectively. The second time around, when the question is asked, the computer will provide an answer from the set of things labeled 1, and that will be a number, until someone says, “No, that thing is not a number.”
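Here is that label-and-filter loop as a short sketch (again my own illustration; the is_number function is a hypothetical stand-in for the interrogator’s feedback):

```python
# First iteration: try every candidate, record the feedback as 0 or 1.
candidates = {"blue": None, "butterflies": None, "seven": None, "42": None}

def is_number(word):
    """Hypothetical interrogator feedback."""
    return word in {"seven", "42"}

for word in candidates:
    candidates[word] = 1 if is_number(word) else 0

# Second time around: answer only from the things labeled 1.
numbers = [w for w, label in candidates.items() if label == 1]
print("Give me a number:", numbers[0])
```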
Getting to plurals: ask a three-year-old what plurals are; they won’t be able to tell you. That’s not because they aren’t intelligent or don’t think, but because they don’t know what a plural is. They haven’t learned it yet. It is over many iterations of feedback that they learn what a plural is.
So the point, Merlin, is that to control the Test, you must begin with a question of which both computer and human are equally ignorant, and then see whether they differ in their ability to learn the concept. Questions like the position of an electron, the location of a moving target, or even putting humans in a maze (just like the rat mazes with a piece of cheese) all work, because both human and computer begin from the same point in their knowledge.
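As a sketch of what such an “equal ignorance” trial might look like (a toy of my own, not a full maze experiment): human and computer would each run the same loop, receiving only right/wrong feedback, so both start from zero knowledge of the target.

```python
import random

# A hidden target cell in a small grid; the learner knows only the grid.
GRID = [(r, c) for r in range(4) for c in range(4)]
target = random.choice(GRID)           # unknown to the learner

remaining = list(GRID)                 # the learner's belief: all cells possible
trials = 0
while True:
    guess = random.choice(remaining)
    trials += 1
    if guess == target:                # feedback: "yes, that's it"
        break
    remaining.remove(guess)            # feedback: "no" rules that cell out

print(f"Found the target in {trials} trials.")
```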
Now Victor,
Victor, my version of the Turing Test is valid; your assessment that it is invalid is incorrect. Strong AI has been achieved, whether you are aware of it or not; differentiating between Strong AI as the ability to learn and Strong AI as consciousness is a non sequitur, because you can’t define consciousness yourself. Regarding feedback being disallowed by the Turing Test: Turing cites a conversation with a computer, and conversation requires feedback. When a child is in school, they listen to a teacher and are then tested on what they have learned. The teacher grades the test; the child receives feedback and learns from the mistakes. That is discourse; that is a conversation, whether the feedback from teacher to child is provided over the course of a semester or in the moment.
Regarding your objection to feedback, in which you state:
“You say allowing the interogator to provide feedback wasn’t specifically disallowed by Turing.. I think common sense dictates that it is… perhaps the computer should be able to ask a human for the appropriate reply ? well, that wasn’t specifically disallowed by Turing was it ? Snap out of it.”
Again, your point is a complete red herring. Here is why: there is absolutely no reason why the computer should not be able to ask the human interrogator for an appropriate reply. ABSOLUTELY NONE. Were the computer to ask this of the interrogator, it would help the interrogator ascertain that the computer is in fact a computer. The computer asking the interrogator for an appropriate response is completely in line with the Turing Test; it just means that in doing so the computer is more likely to fail it, and of course the Turing Test allows for the computer to fail. Furthermore, it does not follow that my program asks the interrogator anything. The interrogator asks both the human and the computer for information; both provide some; the interrogator provides feedback. This is distinctly different from the example you provide (which isn’t a violation of the Turing Test anyway; it’s a criterion for the computer failing). So I don’t need to snap out of anything. You need to formulate logical arguments, as opposed to logical fallacies such as red herrings and non sequiturs.
Regarding prediction: no one is predicting. That is impossible for both human and computer, because prediction involves ascertaining the future, which is a set of information that has yet to occur or be created. What people are doing instead is calculating probabilities of events; that isn’t predicting the future, that is recognition of a preference. As for your argument that a computer must understand the information it is asserting for it to count as asserting, that just isn’t necessarily true. When you are incorrect, you must recognize that you are incorrect and incorporate that into what you previously asserted as fact. Applying this to the next event then demonstrates an understanding of the application of the information you possess. I simply differ on what you define as thinking. Many people confuse thinking with remembering; there is no such thing as thinking. “Thinking” is accessing memory, observing that it doesn’t match the present, updating your memory with the new information, and repeating the same experience of being wrong. Humans decide, they don’t think.
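That access/compare/update cycle is easy to sketch. This is my own toy illustration of the loop I just described, not a claim about how Mu is implemented:

```python
# Access memory, compare it to the present, update on mismatch.
memory = {}                            # what the agent "remembers"

def decide(observation):
    """One pass through the cycle described above."""
    key, actual = observation
    remembered = memory.get(key)
    if remembered != actual:           # memory doesn't match the present
        memory[key] = actual           # update with the new information
        return "wrong, updated"
    return "matched"

print(decide(("sky", "blue")))         # wrong, updated (nothing remembered yet)
print(decide(("sky", "blue")))         # matched
print(decide(("sky", "red")))          # wrong, updated (new observation)
```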
Hi Jan, regarding the Turing Test, which is what I am rehashing: no, you are not correct in presuming that all I have is theory. I have a program, as I have told you; it is called Mu, and it defeats the Turing Test. Given that I have stated this several times, am I correct in presuming that you haven’t read the several times I have told you that I have a program called Mu, which I assert defeats the Turing Test?
I recognize that you don’t agree that what I have stated as the Turing Test is the Turing Test, but just because you don’t agree doesn’t mean that you are correct. I recognize that on my end, but I’ve been spending most of my time defending my version of the Turing Test, and still no one has provided any strong argument against it for which I have not provided a viable defense. Furthermore, I recognize that, because I’m on a chatbot site, there may exist a prior preference toward one particular definition of the Turing Test. I don’t argue that an expansion of decisions requires an expansion of responses, but at the core level of learning (which many seem to confuse with prediction, which is the equivalent of reading tea leaves), which is that of a single decision, the Turing Test is defeated, and AI at that level is very strong.