Posted: Jul 18, 2012
[ # 76 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Do you have something working, and if so, what exactly can it do? (Please don’t say it can pass the Turing Test. I’d prefer an example of a conversation, or anything else that you deem ‘intelligent’.)

Posted: Jul 18, 2012
[ # 77 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Hans,
Regarding the program, you are correct: Mu has absolutely no correspondence with the human also being interrogated. Nevertheless, it generates the exact same responses; that is exactly what it has done in empirical test after empirical test. Zero difference. This happens in quantum physics all the time: two particles acting in the exact same way simultaneously, although there is no transfer of information between the two. This is a big part of the discussion between Einstein and Bohr. It is the argument between special relativity and quantum mechanics, with quantum mechanics in the affirmative and special relativity in the negative. Quantum mechanics has thus far won the argument on empirical data: we observe simultaneity of identical behavior between two identical particles separated in space/time. This goes against deterministic causality (a requirement of both classical Newtonian mechanics and Einsteinian relativity). Quantum mechanics argues against deterministic causality and states that all things are probabilistic in nature. The two particles demonstrate identical behavior, or react to each other simultaneously, despite a separation in space/time.
So Hans, empirically, yes, two things can mimic each other simultaneously and we see that every day. http://www.americanscientist.org/bookshelf/pub/concurrent-events
Regarding human thought, I am talking about decisions; decisions are always binary. What I argue is that we as humans DO NOT think; we confuse memory with thoughts. If I can mimic your decisions in a general system so perfectly as to be completely indistinguishable from you, then the property by which you make decisions exists prior to your awareness of it. Those decisions are probabilistic in nature (not based on deterministic causality). The choices you are presented by life are probabilistic in nature, as are your interpretations of the potentiality of their outcomes.
You posit the question, “How do you feel today?” The response to this question is an answer which is the result of a choice. How you feel today is a matter of binary choice. If you choose to feel happy, then Happy gets a 1, and the set of all other emotions gets a 0 (the set from which happiness is excluded). If you say, “I feel sad,” then you have chosen to feel sad, and Sad gets a 1, whereas the set of all other emotions gets a 0 (every element within that set is assigned a 0).
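A minimal sketch of that 1/0 assignment, with a small illustrative emotion set (the set itself is just an example):

# One-hot assignment: the chosen emotion gets a 1, every other
# element of the emotion set gets a 0.
EMOTIONS = ["happy", "sad", "angry", "calm"]   # illustrative set

def choose_emotion(choice):
    return {e: 1 if e == choice else 0 for e in EMOTIONS}

print(choose_emotion("sad"))
# {'happy': 0, 'sad': 1, 'angry': 0, 'calm': 0}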
Regarding reality, I understand how it might seem that I am narrowing reality down to a point where my program will actually work, and from that point of view I can understand how you feel that it is an academic approach that has nothing to do with our human reality. However, please let me offer an alternative interpretation of what I am doing.
Consider the Mandelbrot Set; here’s a great graphical illustration of what that is: http://www.youtube.com/watch?v=gEw8xpb1aRA
Like the song says, infinite complexity can arise from simple rules. That simple rule is z -> z^2 + c, iterated (where c is a constant in the complex plane); sweeping over c, starting from z = 0, gives the Mandelbrot set, and holding c fixed gives each Julia set.
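For anyone who wants to play with that rule, here is a minimal sketch in Python (the escape radius of 2 and the iteration cap are the standard conventions):

def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0; c is in the set if |z| stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 it provably escapes to infinity
            return False
    return True

print(in_mandelbrot(complex(-1, 0)))  # True: -1 cycles 0, -1, 0, -1, ...
print(in_mandelbrot(complex(1, 1)))   # False: 1+i escapes almost immediately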
What I am trying to show you is that there is a fundamental process which accounts for all the complexity of human thought. Go to this image of the Christchurch Cathedral in Christchurch, New Zealand: http://www.photosbyralf.com/keyword/cathedral/2/1225381841_hxwXh#!i=1225381841&k=hxwXh
When you look at that Cathedral, it is amazing as a construction, but the masonry begins with the elements of brick and mortar. In the same way, Euclid built the wonderful world of traditional geometry with his “elements”, which he referred to as “bricks”.
What I am trying to demonstrate is the element, the brick, the fundamental way by which we make decisions that can be used to build models that account for our “human reality.”
It begins with the decision, and the decision is binary, always. Binary decisions are simultaneous in nature. At every moment you are observing something, the 20 million billion calculations in your brain are asking: what just happened? Before you reason about what just happened, you assert that what just happened is what has been happening (you assume the present mimics the past). You then check, across all of your various “receptors” or nerve responses, whether or not that assertion is in fact true. Just one difference, one change, asserts the existence of time. In the same way that 1,000,000,000,000,000,000,000,000 is different from 1,000,000,000,000,000,000,000,001, we assert that because it was 1,000,000,000,000,000,000,000,000 in the past, it is 1,000,000,000,000,000,000,000,000 now; but in checking to see whether that is true we see 1,000,000,000,000,000,000,000,001, and thus we assert that change is occurring in the 1’s digit place, and time is what we construct to account for that.
Every single decision is binary. The outcome is observed as binary; the experience is probabilistic.
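A toy sketch of that assert-then-check loop, with a made-up vector of “receptors”:

def detect_change(past, present):
    """Assert that the present mimics the past, then check every
    'receptor'; each mismatch is the change we construct time to explain."""
    return [i for i, (p, q) in enumerate(zip(past, present)) if p != q]

past    = [0] * 24          # ...000,000
present = [0] * 23 + [1]    # ...000,001
print(detect_change(past, present))   # [23] -> change detected in the 1's place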

Posted: Jul 18, 2012
[ # 78 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Jan,
So this is what it does.
1. There are three rooms, separated by any distance of space you desire. In one room is the interrogator, in the second room is the human, and in the third room is the computer. Of course the interrogator has no idea which room the computer or the human is in.
2. The interrogator begins the conversation with both rooms with something like this: “Hello. I currently have a child named Kramer in the room. Kramer is going to jump to the left or to the right. Now, Kramer just jumped. Which way did Kramer jump, to the left or to the right?”
3. Both the computer room and the human room provide a response.
4. The interrogator responds to each room accordingly, “That is correct/incorrect. Now, Kramer is going to jump again. Kramer just jumped. Which way did Kramer jump?”
5. Both the computer room and the human room provide a response.
6. Repeat (4) and (5) over and over again.
7. After some iterations of this, the interrogator makes a decision on who is the human and who is the computer.
8. The interrogator will not be able to decide, because the responses are either statistically identical (such that if Kramer’s behavior is random, the responses of both will be random) or identical prima facie (such that each response, other than the first two, is identical).
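For concreteness, here is a toy simulation of steps (1)-(8) in Python; the majority-guess strategy is just an illustrative stand-in for whatever each room actually does:

import random

def kramer_trial(rounds=20, p_left=0.5):
    jumps = ['L' if random.random() < p_left else 'R' for _ in range(rounds)]
    transcripts = {'human': [], 'computer': []}
    for room in transcripts:
        seen = []   # correct answers revealed by the interrogator's feedback
        for jump in jumps:
            # Guess the majority of past jumps; coin-flip on a tie or no data.
            if seen and seen.count('L') != seen.count('R'):
                guess = 'L' if seen.count('L') > seen.count('R') else 'R'
            else:
                guess = random.choice('LR')
            transcripts[room].append(guess)
            seen.append(jump)
    return jumps, transcripts

# With a fair Kramer both rooms score about 50% whatever they do,
# so the two transcripts are statistically indistinguishable.
jumps, transcripts = kramer_trial()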
Thus, in the course of this very limited conversation with discourse and feedback, the Turing Test has been defeated at its most elemental level, because the computer reasons the same way a human does. This is exactly the same as the rat-maze tests (specifically the T-maze) used to ascertain whether rats are learning.
Now, you can expand this test by asking all sorts of questions reduced to binary decisions, and the goal of course is for the result to remain the same. Do humans associate two different words with the same piece of information? If so, we are already able to model that (SOMs, neural nets, etc.), so we assign the 1’s and 0’s to the same memory. But the structure of the decision itself is fundamental; it is binary, and it is how we learn. We defeat the Turing Test (big whoop, huh), but the way we do that is by assigning memory ones and zeros for each option, and using those ones and zeros to make a decision which is tested as a hypothesis against the present.
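A stripped-down sketch of that assign-and-test loop (the Laplace-smoothed counts here are an illustrative choice, not necessarily the real program’s):

from collections import Counter

class BinaryMimic:
    """Assign each observed outcome a 1 and its alternative a 0, then
    hypothesize that the present mimics the observed past."""
    def __init__(self, options=('L', 'R')):
        self.options = options
        self.counts = Counter({o: 1 for o in options})   # Laplace prior

    def predict(self):
        # The hypothesis: whichever option has dominated so far recurs.
        return max(self.options, key=lambda o: self.counts[o])

    def observe(self, outcome):   # interrogator feedback: test the hypothesis
        self.counts[outcome] += 1

bot = BinaryMimic()
for outcome in 'LLRLL':
    hypothesis = bot.predict()
    bot.observe(outcome)
print(bot.predict())   # 'L' -- the past is asserted onto the present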
E=mc^2 is pretty simple and straightforward. The implications are what is important. sigma(position) * sigma(momentum) >= hbar/2 is pretty simple, but it is fundamental to our thinking about quantum mechanics, the uncertainty principle, and the observer effect. Simple doesn’t mean wrong, and it doesn’t mean not powerful. Nature prefers the simplest method that is able to self-replicate. That’s what my program is: the simplest mimicking algorithm there is that is capable of self-replication (as well as dying), in the very spirit of John Conway’s Game of Life.

Posted: Jul 18, 2012
[ # 79 ]
Member
Total posts: 27
Joined: Jul 17, 2012
I hope it is seen that I am posing the question of Schroedinger’s cat, in the form of a Turing Test, to a computer and a human, and discovering that they provide the same response.

Posted: Jul 18, 2012
[ # 80 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Jonathan Charlton - Jul 18, 2012: E=mc^2 is pretty simple and straightforward. The implications are what is important.
E^2 = m^2*c^4 + p^2*c^2
Hoo boy, this thread is a whopper.

Posted: Jul 18, 2012
[ # 81 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Jonathan Charlton - Jul 18, 2012: So Hans, empirically, yes, two things can mimic each other simultaneously and we see that every day.
I’ll predict that when you ask the same question to me personally and to your system, you will not get the same answer. I’ll even predict that no matter how many questions you ask, you will never get an identical answer. I can make this prediction with great precision, simply because I know up front what I’m going to answer, while your system does not know what I’m going to answer. I can constantly choose to give an answer that would be very hard to predict, or that is completely unrelated to the question (people can do that), or whatever else I choose, to deny your system any possibility of mimicking me.
Jonathan Charlton - Jul 18, 2012: You posit the question, “How do you feel today?” The response to this question is an answer which is the result of a choice.
No it’s not; it’s the result of an internal assessment of your emotional state, which in turn is based on the chemical balance (very analogue) between the neurotransmitters in your brain. Even naming your emotional state (as in ‘choosing how to name it’) is not binary; we often struggle when we are asked to describe our emotional state. This is because of the intrinsically analogue nature of emotion, which is by definition hard to capture in unambiguous textual descriptions.

Posted: Jul 18, 2012
[ # 82 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Hans and C.R.,
First, Hans:
You can choose to make your answers hard to predict, but that raises the question of whether you are choosing to answer the question, or choosing to provide random responses. If you choose to answer the question you are being asked, then your choices are in response to the objective of the question.
I too can program a computer whose responses are hard to mimic. Your assertion is beside the point.
Regarding your emotions, I’m sorry, they are a choice. Whether I am happy or sad is independent of whether or not I have a chemical response. I choose stress, and thereby produce the cortisol. It’s not the other way around. Choosing your emotional state is binary: it selects A versus not-A. Emotions are a binary choice.
e.g. Luce and Raiffa:
Let us assume that there are three possible outcomes A, B, and C, such that A is better than B, and B is better than C and assuming transitivity, A is better than C. We at this point have an ordinal understanding of the three outcomes, yet we cannot state to what extent A is better than B, B is better than C or A is better than C.
Now assume that we add to this that A and C are a matter of probability such that the probability of A is P, and the probability of C is 1-P (such that their addition is 1). So our choices are now between the set of outcomes {A, C} and {B}. As P approaches 1, we will prefer the choice {A, C}. However, as P approaches 0, we will prefer choice {B}. It then stands to reason that at some probability P we are indifferent between choices {A, C} and {B}. In observing whether or not someone then prefers one choice over the other we assign it a higher utility. However, we do not state that because one choice is made over the other that it has a higher utility, but that because one choice is preferred over the other WE ASSIGN IT A HIGHER UTILITY. The preference exists prior to the utility. The choice is binary, and the preference is this versus that.
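For concreteness, here is the indifference point computed with made-up utility numbers; note that Luce and Raiffa run the logic the other way, deriving the utility from the observed indifference, so the numbers below are purely illustrative:

# Lottery {A with probability P, C with probability 1-P} versus B for certain.
# Indifference: P*u(A) + (1-P)*u(C) = u(B)  =>  P* = (u(B)-u(C)) / (u(A)-u(C))
uA, uB, uC = 1.0, 0.6, 0.0   # hypothetical utilities with A > B > C

p_star = (uB - uC) / (uA - uC)
print(p_star)   # 0.6 -- above this P the lottery {A, C} is preferred; below it, B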
Now to C.R.
I don’t really understand what the point of your post is. If you are attempting to be snide, that you have achieved. If you are attempting to make an argument, it can hardly be seen.
I will assume that you take some issue with my assertion that E=mc^2 is simple in its assertion of relativistic mass (as opposed to matter) and how this is a property of all energy in that it maintains relativistic mass. This isn’t a difficult concept, unless a physics teacher (one such as yourself) is attempting to make themselves seem smarter by presenting it in an obfuscating way, thereby doing their field an injustice by relegating it to the likes of mysticism, much in the same way that most physics and mathematics teachers choose to teach standard calculus, which is much more difficult to conceptualize than non-standard calculus with infinitesimals, which is much easier for a student to grasp. Both explain the same concepts, but many people get caught up in intellectualism as opposed to pedagogy. In terms of machine learning and the Turing Test, I prefer to consider the latter as opposed to the former, so that I don’t make myself look like an ass by putting down people I don’t know.
Your thoughts?

Posted: Jul 18, 2012
[ # 83 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Jonathan Charlton - Jul 18, 2012: I don’t really understand what the point of your post is. If you are attempting to be snide, that you have achieved. If you are attempting to make an argument, it can hardly be seen.
I was attempting to be snide.
Jonathan Charlton - Jul 18, 2012: I will assume that you take some issue with my assertion that E=mc^2 is simple in its assertion of relativistic mass (as opposed to matter) and how this is a property of all energy in that it maintains relativistic mass. This isn’t a difficult concept, unless a physics teacher (one such as yourself) is attempting to make themselves seem smarter by presenting it in an obfuscating way, thereby doing their field an injustice by relegating it to the likes of mysticism [...]
Just trying to make sure the little photons and other moving bits don’t get left behind. But as to who is obfuscating their point with irrelevant information, I’ll leave that as an exercise for the reader.
Jonathan Charlton - Jul 18, 2012: [...] many people get caught up in intellectualism as opposed to pedagogy. In terms of machine learning and the Turing Test, I prefer to consider the latter as opposed to the former, so that I don’t make myself look like an ass by putting down people I don’t know.
Your thoughts?
My thought is that your approach to “pedagogy” is no more than baseless assertion. I have no interest in it, other than the entertainment value of the tack this thread has taken.
You sound like a well-meaning guy, Jonathan. But you are a little too eager to “educate” about entire fields of study (neurology, psychology, even economics) apparently without any consideration for their literature, or for anything else beyond your own feelings on the matter. Feel free to prove me wrong. I’d be happy to see it (all other emotions = 0).

Posted: Jul 19, 2012
[ # 84 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Addendum: I do realize you cited a paper by Luce and Raiffa. However, a paper on whether or not utility precedes preference (although the definition of utility, as I understand it, is synonymous with preference) is not a sufficient reference for the rather remarkable assertion:
Regarding your emotions, I’m sorry, they are a choice. Whether I am happy or sad is independent of whether or not I have a chemical response. I choose stress, and thereby produce the cortisol.

Posted: Jul 19, 2012
[ # 85 ]
Member
Total posts: 27
Joined: Jul 17, 2012
C.R.,
I can see how, from your perspective, my assertion of pedagogy is baseless. I can also see the other side. I don’t obfuscate; I address things directly. Game theory is highly Bayesian in the way it asserts decisions are made. Asserting that a simple Bayesian technique accounts for how we all make decisions and learn may seem baseless, though. But that simple equations and concepts can account for complexity is proven, and I assert that my (Bayesian) technique can be applied at the most elemental scale of decision-making within the frame of a conversation. I even provide the example of Schroedinger’s cat as a direct analogy to the question I am posing as an interrogator within the Turing Test. The question is relevant to the Test because it asks the interpreter of the test to consider exactly what they mean by the observer (the interrogator).
Regarding neuroscience, psychology, and economics: I have an MBA and an MS in Quantitative and Statistical Finance (aka econophysics), among other degrees, and experience. Let’s just say that as an Arabic and German linguist and intelligence analyst in the US Army I applied a little psychology (aka PSY-OPS), along with SIGINT, ELINT, IMINT, COMINT, MASINT, HUMINT, and FININT; I have also worked as a game theorist for NORAD-USNORTHCOM, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, and the Office of the Secretary of Defense; and regarding neuroscience, I work with several neuroscientists to ascertain certain aspects of human intuition while pursuing contracts for the Office of Naval Intelligence and IARPA. So maybe I’m a little light on neuroscience, but I can definitely understand its concepts and read through its literature. Quantum physics provides an excellent assertion of what might really be going on neurologically. Personally, based on my empirical research, I see that financial systems are just series of decisions given uncertainty, and those decisions are a simulation of what is occurring in the physical world (i.e., a stock return is a set of information which reflects a physical reality). Personally, I don’t agree with deterministic causality; everything is probabilistic. Everything. This is true from the quantum level right up to the macro scale, but I think it is very difficult to shed the concept of deterministic causality, which is why Stephen Hawking said that every time he thinks of Schroedinger’s cat, he wants to get his gun. I understand that feeling. Wavicles seem strange.
Hans, I agree with you that my citation of Luce and Raiffa doesn’t exactly prove the point; it simply indicates where I’m coming from. It’s in watching my machine defeat the Turing Test (albeit a very basic/elementary version of it) that I begin to consider the implications. Two rooms, no communication, the same responses to the question. Simultaneity. The only conclusion I can draw from it is that the process by which we make decisions within any general system is standardized, and thus the decision isn’t really a decision but more of a “waking up” to reality. I thought I had free will until I watched this thing work. But the question I have when I watch the very real performance is: if I can mimic your decisions so perfectly as to be completely indistinguishable from you, then what is free will? That would seem to agree with the conclusion of deterministic causality, but for a different reason. Instead of there being what we call time, I assert (this is the best I can reason) that we are all just going through the exact same motions, over and over again. The process doesn’t change. We may have many decisions, but the process of assigning value, the process by which the decision is made given uncertainty, is utterly constant.

Posted: Jul 19, 2012
[ # 86 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Jonathan Charlton - Jul 18, 2012:
Let us assume that there are three possible outcomes A, B, and C, such that A is better than B, and B is better than C and assuming transitivity, A is better than C. We at this point have an ordinal understanding of the three outcomes, yet we cannot state to what extent A is better than B, B is better than C or A is better than C.
Now assume that we add to this that A and C are a matter of probability such that the probability of A is P, and the probability of C is 1-P (such that their addition is 1).
If the probability of C is 1-P, then the probability of B is 0%.
1-P would normally be thought of as the probability of C OR B, wouldn’t it?
So our choices are now between the set of outcomes {A, C} and {B}.
There is no choice if the probability of B is 0%
As P approaches 1, we will prefer the choice {A, C}.
Which also means that the probability of C approaches 0.
However, as P approaches 0, we will prefer choice {B}.
That is incorrect. As P approaches 0, the probability of C approaches 1; there is never a choice B, for it has probability 0 in all cases.
It then stands to reason that at some probability P we are indifferent between choices {A, C} and {B}.
If the probability of B is 0, we should never pick B. That may be true for A and C though, since we do not have any expected-value differentiation. But if we had expected values, say A = $1,000 and C = $0, we would be smart to always pick A as long as its probability was non-zero.
In observing whether or not someone then prefers one choice over the other we assign it a higher utility. However, we do not state that because one choice is made over the other that it has a higher utility, but that because one choice is preferred over the other WE ASSIGN IT A HIGHER UTILITY. The preference exists prior to the utility. The choice is binary, and the preference is this versus that.
I agree that you can analyze decisions this way. In my bot, I do just that. I test against a specific decision and if it fails I test against the next highest decision.
Bayes’-rule-based analysis is common in AI classes and drives much of how Google looks at AI. The tough part is getting credible models/data to feed the Bayesian learning, and handling underflow, because the probabilities can become so small.
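On the underflow point, the standard fix is to do the update in log space; a generic sketch, not specific to any particular bot:

import math

def log_bayes_update(log_prior, log_likelihood):
    # Work in log space so products of many tiny probabilities never
    # underflow to zero.
    log_joint = [p + l for p, l in zip(log_prior, log_likelihood)]
    m = max(log_joint)                      # log-sum-exp normalization
    log_evidence = m + math.log(sum(math.exp(j - m) for j in log_joint))
    return [j - log_evidence for j in log_joint]

# Two hypotheses, equal priors, extremely small likelihoods:
post = log_bayes_update([math.log(0.5)] * 2,
                        [math.log(1e-300), math.log(3e-300)])
print([math.exp(p) for p in post])   # [0.25, 0.75] -- no underflow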
In the Wikipedia article on the Turing Test, it talks about different versions of the test. Maybe we should just call this the “Charlton version”. You do have some significant differences between your version and the others (i.e., ignore the first input, provide feedback to the participants, repeat the same questions).
I still fail to see how probability will allow you to deduce the answer to a question where you have not seen the answer before (this is also the problem I have with Google’s approach to AI).
Simple examples:
Q1 Name a number
AA1 Three
AB1 96 (assuming the bot knows what a number is. Is bot grounding part of your assumptions?)
Q2 Add one to the number you named
AA2 Four
AB2 13 (Probability alone cannot predict the correct answer: the odds are 1 in 2^32 for a 32-bit number)
Q3 Add one to the number you named
AA3 Five
AB3 Four (100% probability)
The interrogator claims room B is a bot. If he wants to be sure, he can go on repeating the question.
Q3 Add one to the number you named
AA3 six
AB3 Four (50% probability) Five (50% probability)
Q3 Add one to the number you named
AA3 Seven
AB3 Four (1/3 probability) Five (1/3 probability) Six (1/3 probability) - 100% probability of wrong answers without additional reasoning.
If this continued, the bot’s answers would end up looking like random numbers before too long. Now, you could add mathematical reasoning on top of the probability, but then you are extending beyond Bayes’ rule. It is also easy to extend the concept to other areas, since there is no restriction on the types of questions. Without solving the grounding problem, probability alone won’t do the trick.
Q1 - Name an animal we have never talked about.
Q2 - What is the plural of that animal?
Q3 - Name an animal we have never talked about.
Q4 - What is the plural of that animal?
Q5 - Name an animal we have never talked about.
Q6 - What is the plural of that animal?
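The dilution above is easy to reproduce with a toy frequency-only bot (my own construction, purely to illustrate the point):

import random

class FrequencyBot:
    """Answers only from previously confirmed correct answers, chosen
    uniformly -- probability alone, no arithmetic reasoning."""
    def __init__(self):
        self.seen = []

    def answer(self):
        if not self.seen:                         # no feedback yet: wild guess
            return str(random.randint(0, 2**32 - 1))
        return random.choice(self.seen)

    def feedback(self, correct):
        self.seen.append(correct)

bot = FrequencyBot()
for correct in ["Four", "Five", "Six", "Seven"]:
    print(bot.answer())
    bot.feedback(correct)
# First reply is a random number; later replies are spread uniformly over
# past correct answers -- exactly the dilution in the transcript above.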

Posted: Jul 19, 2012
[ # 87 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Hans Peter Willems - Jul 18, 2012: I have said this before: a chatbot is actually a grammatical expert-system (of some sort), and like with all expert-systems, the ‘intelligence’ is brought in by the builders of the system. The bot just follows the predetermined rules. So I agree that it is (still) not thinking.
Yes, you HAVE said that before, and it was no more true then than it is now lol. I don’t care what kind of software you come up with; anyone can ALWAYS argue that “the computer is still just following instructions”. That argument is old. You think you’re coming up with something that uses “emotion”... and somehow YOUR code, even though, like everyone else’s, it is “just being executed by the computer”, is somehow different? If we both write software for a Turing Machine, even if the two programs do the same thing, you call yours “thinking” because the algorithm is different, and you call that algorithm “emotion”. That is utter nonsense.
And I have said THIS before: when the ‘production knowledge’ (what everyone else calls ‘rules’) is sufficiently general, that is, general to a level that compares with the human mind, THAT will be thinking, or at least most people will consider it to be. Still, you will get those who hold that “it is just following instructions”... but then, if that is so, so is the human brain. The human mind is “just following instructions”: the laws of physics governing the chemical reactions going on between its neurons.
And I’ve also said THIS before: I don’t care what you call it, and I don’t care how it works. Thinking will be judged based on what the system CAN DO. Not the theory behind it, not what it IS... what it DOES. Alan Turing knew this 60 years ago.
Also, who says that a chatbot has to be limited to containing grammatical knowledge? Somehow you have fixated on the idea that if a bot is equipped with grammatical knowledge, THEN IT JUST USES THAT KNOWLEDGE AND NOTHING ELSE. Try to break free of that fixation, please.
Jonathan - your version of the Turing Test is invalid. Changing the rules of the game doesn’t allow you to win; it is greatly over-simplified. If we could consider the entire universe to be “Blocks World” ( http://en.wikipedia.org/wiki/Blocks_world ), then Strong AI would have been achieved long ago. That is what you are trying to do. Steve, I see you’ve given up on explaining this to him.
You say allowing the interrogator to provide feedback wasn’t specifically disallowed by Turing. I think common sense dictates that it is. Perhaps the computer should be able to ask a human for the appropriate reply? Well, that wasn’t specifically disallowed by Turing either, was it? Snap out of it.
To the original topic: thinking REQUIRES UNDERSTANDING. We have had systems that PREDICT for a long time; we have had systems that use probability for a long time. We have NOT yet achieved a system that understands concepts and is capable of carrying a conversation. To understand concepts, one can use the power of natural language. There is a reason it is called NATURAL language: it is powerful enough to explain and communicate the concepts of the natural, physical world, in all its splendor and complexity. If a human/animal/computer/alien cannot understand information, it cannot think about that information.

Posted: Jul 19, 2012
[ # 88 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Hey CR,
By the way, how did you vote on the original question? Does Skynet-AI think in the example given?

Posted: Jul 19, 2012
[ # 89 ]
Experienced member
Total posts: 66
Joined: Sep 15, 2011
Victor Shulist - Jul 19, 2012: Hans Peter Willems - Jul 18, 2012: I have said this before: a chatbot is actually a grammatical expert-system (of some sort), and like with all expert-systems, the ‘intelligence’ is brought in by the builders of the system. The bot just follows the predetermined rules. So I agree that it is (still) not thinking.
Yes, you HAVE said that before, and it was no more true then than it is now lol. I don’t care what kind of software you come up with; anyone can ALWAYS argue that “the computer is still just following instructions”. That argument is old. You think you’re coming up with something that uses “emotion”... and somehow YOUR code, even though, like everyone else’s, it is “just being executed by the computer”, is somehow different? If we both write software for a Turing Machine, even if the two programs do the same thing, you call yours “thinking” because the algorithm is different, and you call that algorithm “emotion”. That is utter nonsense.
I think what he’s trying to say is that there is a difference between a system that can go beyond its programmed boundaries and one that is imprisoned by its own code. Whether or not they follow pre-set rules is, I think we can both conclude, irrelevant.
Anyway, this thread is quite interesting, to say the least.

Posted: Jul 19, 2012
[ # 90 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Jonathan, given that you are re-hashing the same test, am I correct in presuming that all you have is a theory?