Posted: Jul 18, 2012
[ # 61 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Steve and Merlin,
First my response to Steve.
Steve, I just have to disagree with you. Whether a conversation is interesting or not is beside the point. The point is whether a conversation is occurring. My basic version of the Turing Test is spot on. Nowhere does Turing preclude feedback. Nowhere. In fact, conversation by its definition includes feedback. When you talk with someone, you receive constant feedback from many sources, including not only the content of the response but also facial expressions, tone, etc., each of which you respond to. If you are having a conversation without any feedback, then it is one-sided, and that isn’t conversation, that’s dictation. Here’s the definition of conversation from Dictionary.com (http://dictionary.reference.com/browse/conversation):
con·ver·sa·tion
[kon-ver-sey-shuhn]
noun
1.
informal interchange of thoughts, information, etc., by spoken words; oral communication between persons; talk; colloquy.
Here is the definition from Websters (http://www.merriam-webster.com/dictionary/conversation):
a (1) : oral exchange of sentiments, observations, opinions, or ideas (2) : an instance of such exchange : talk
Both require exchange which is “feedback.”
Again, the conversation I have defined is the most basic form of conversation and discourse. This does not in any way violate the criteria of Turing’s Test. It is its most fundamental form.
Also, Repetitio mater studiorum est. Repetition is the mother of learning. Here’s the science to back that up - http://blog.brainscape.com/2011/05/repetition-is-the-mother-of-all-learning/ .
You learn through feedback. There is no other learning. When you engage in conversation, you learn, whether it is through content feedback, tonal feedback, expressive feedback, or any other kind. If you’re in a conversation and not learning on some level, then you’re not having a conversation. You’re dictating. We have all experienced situations where people seem to just like to hear themselves speak; even then, they are implicitly learning through the feedback they receive from those with whom they speak, if not from the content of the replies then from the expressions.
My program deals with conversation and learning at its most fundamental level. Just because it’s simple doesn’t make it wrong. E=mc^2 is simple. It is through its implications, the testing of those implications, and the feedback from that testing that we learn a statement to be true: see http://en.wikipedia.org/wiki/Einstein#Academic_career. It was only Sir Arthur Eddington’s empirical data (feedback against the test) that showed the intelligence of the equation; it was then that we learned it was true.
Learning occurs through feedback at a fundamental level, and a machine that mimics the way humans learn, receiving feedback and providing feedback in turn, demonstrates independent learning. You have to construct the world in which you live (just close your eyes and you can still simulate your environment; that’s how you’re able to walk through your house without any light, and why a person who has never been in it would stumble). That is the core of human intelligence, and the Turing Test as I have laid it out is 1:1 with the criteria for a Turing Test at its most fundamental state.
Now my response to Merlin:
Merlin, by your requirements, how can you distinguish between two humans taking the Turing Test at the most fundamental level I have laid out? You cannot. The reason is that they reason the same way. My computer reasons the same way. You can even compare it to a rat learning how to get cheese; in fact, we use the exact same test to ascertain whether or not a rat is learning. See http://www.ratbehavior.org/RatsAndMazes.htm. I have the Howard Hughes Institute backing up this very method of testing intelligence. So I’m sorry, but this isn’t a rewriting of the test. It is a well-established and accepted method.
Regarding the overfitting of the data: that’s just not what it’s doing. It doesn’t do ANY prediction. It looks for the prior probability in the same way that the rat maze seeks to ascertain whether or not the rat has a directional preference. Again, these are very well accepted and highly regarded tests of intelligence and preference. My program uses them as a means to ascertain the propensity of the market to move in a particular direction in the exact same way, and it repeats, looking for self-similarity/self-affinity (aka pattern recognition), and it’s not a complex thing to do at all. The key is to generate (not predict) a series of data of the same fractal dimension as that of the market. Take that and some simple interrogation within the context of the test that I laid out and voila - you get the market’s behavior EXACTLY. When Mandelbrot and I discussed this topic, he told me that when he demonstrated fractal geometry as the nature of nature, people would say, “Oh, that’s simple!” And he would say, “Yes, but I had to show you.” Human intelligence, actually just plain intelligence, the ability to learn, is simple. It’s not complex. Complexity is in the numeracy and structure of the decisions, BUT NOT IN THE DECISION ITSELF. If a caveman had to think through some complex algorithm to figure out whether he should pet the sabre-toothed tiger or run, he’d already be eaten, as I’m sure many cavemen were. It’s the ones who posed the question, “Is this dangerous?”, observed what happened to their buddies, and through probabilistic stylized fact learned that it is dangerous, who lived and procreated.
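To make “generate, not predict” concrete, here is a minimal Python sketch (my own illustration, not the actual program): a persistent up/down walk whose persistence parameter is just a stand-in for matching the market’s fractal dimension (a faithful version would generate something like fractional Brownian motion).

```python
import random

def persistent_walk(n_steps, persistence=0.6, seed=None):
    """Generate (not predict) a toy series of up/down moves.

    `persistence` is the probability that a step repeats the direction
    of the previous one; it stands in for matching the market's
    fractal dimension in this illustration.
    """
    rng = random.Random(seed)
    steps = [1 if rng.random() < 0.5 else -1]
    for _ in range(n_steps - 1):
        prev = steps[-1]
        steps.append(prev if rng.random() < persistence else -prev)
    return steps

# A persistence above 0.5 produces longer runs than a fair coin would,
# i.e. the series has a "preference" rather than being purely random.
print(persistent_walk(20, persistence=0.7, seed=42))
```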
Again, this isn’t overfitting - that only occurs when you are trying to predict (e.g. logistic regression, MANOVA, ANOVA, multiple discriminant analysis, rule-based methods, etc.). All I’m doing is asking my program a simple question over and over again, and it learns the prior behavior.
If I asked you, “Which way does the rat go?” and you answered “right”, and I said “left”, and then I asked you again which way the rat went this time, and you said “right” and I said “left”, and after 20 times I kept saying “left”, at some point you would recognize the propensity of the rat to go left and infer a preference. That’s how you learn: repetition and feedback. If something has a preference, you observe it. If it has no preference - aka random - then you learn that instead. Given that randomness is always a property of markets, people, everything, you have to learn to differentiate when something is acting randomly and when something is acting with preference, NOT when something will act randomly or with preference - that is impossible. So you have to be able to recognize change. Accordingly, in my rat example, you might learn that the rat prefers left. But let’s say over the next 50 trials it went left, right, left, right, etc. You would alter your guessing to a random distribution, and voila - you learned. Learning is not predicting. Predicting is not intelligence. Learning is intelligence. And learning is repetitive observation; it is not repetitive dictation of the future.
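Here is a minimal Python sketch of that rat example (illustrative only, not my actual program): the learner never predicts a single trial; it just updates running counts from feedback and samples its guess from that learned prior.

```python
import random

def learn_preference(trials, seed=0):
    """Guess left/right on each trial, receive the true answer as
    feedback, and update running counts (a simple frequency prior)."""
    rng = random.Random(seed)
    counts = {"left": 1, "right": 1}      # uniform prior
    guesses = []
    for truth in trials:
        p_left = counts["left"] / (counts["left"] + counts["right"])
        guesses.append("left" if rng.random() < p_left else "right")
        counts[truth] += 1                # feedback arrives after the guess
    return guesses

# 20 trials of "left" followed by alternating trials: the prior drifts
# toward "left", then back toward 50/50 as the preference disappears.
trials = ["left"] * 20 + ["left", "right"] * 25
print(learn_preference(trials)[-10:])
```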
So I’m sorry, but the definition of my Turing Test is spot on. It violates no requirements of the Turing Test (and I don’t mean the one Hugh Loebner has constructed, which is a very myopic view of the Turing Test, allocating it to NLP as opposed to decision theory). Furthermore, my algorithm doesn’t overfit anything. It just guesses, and the outcomes of its guesses are what you see on the map. Bayesian learning, not assigning. That’s it. It constantly adapts to and learns new information, and tests to see whether or not what it learned continues to be true, not whether it will be true.
Posted: Jul 18, 2012
[ # 62 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
I guess my point is that if you:
1. Take “the same fractal dimension as that of the market”, for example 1 sample per day for each stock.
2. Take that and some simple interrogation within the context of the market (i.e. “What was the closing stock price of each stock?”).
3. You get the market’s behavior EXACTLY, because you know what happened.
Generation of that series of data should be identical to the original market index because you fit it exactly. Now as you reduce dimensionality, you have the ability to generalize (which is also used in many cases to aid prediction with a certain level of uncertainty).
I also think your Turing test may not account for dynamic variables. For example: What time is it?
Posted: Jul 18, 2012
[ # 63 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Merlin,
Please let me clear up the market aspect. I start with the rat maze analogy. When we begin the process, we are aware of where the cheese is (esp. directionally), but the rat does not. In the same way, with my program, we know which way the market went, but the program does not, and in my general program Mu, Mu does not know which way kramer has jumped although the interrogator does (the interrogator is a separate program within the program).
So the rat’s first inclination (assuming no prior preference) is random. Then it discovers the cheese. After many iterations of putting the cheese in the same place (let’s say the right side of the T-maze, the simplest test, analogous to the basic form of the Turing Test), the rat develops a preference for the right. What the rat is really doing, though, is learning our preference (we prefer to place the cheese on the right). So the rat mimics our preference. Now the rat won’t be able to tell you in words that he is going right (as opposed to left), or that he prefers right (as opposed to left), but he does communicate that to you through his actions.
Now, my financial program learns whether the market has a particular preference (a prior probability), and it turns out that it most definitely does. At each iteration, the preference is reinforced and the fractal dimension (which is self-affine, similar to self-similarity) is updated. Not only does the program learn this preference, but its choice of what it will do (buy/sell) is based on that preference. It chooses that preference until another appears, but what is interesting about the market is that the preference is very stable. That is for daily returns. The preference becomes even more ordered for weekly, quarterly, and annual returns. The program mimics it just the same, but at each decision it has absolutely no idea what the outcome is until it makes its decision and is then informed whether that decision was right or not, just like when you buy a stock one day and learn whether that was the correct decision (the stock provides a return) the next day. You don’t know that in advance, and being able to predict it is not intelligence. Intelligence is being able to ascertain whether or not your decision was correct and incorporating that information into a greater pattern recognition of what is occurring. So if the stock keeps going up, Mu learns that this is the correct “direction” and assumes up. When Mu makes an assumption of the market going up and decides to purchase, and the market goes down, Mu incorporates that information into its prior assumption of the direction of the market. If the market kept going down in general, Mu would learn that and short the stock (sell it), and would continue to do so until that is no longer the prior prevalence.
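Here is a rough Python sketch of that buy/sell loop (my illustration, not the real Mu): the decision is made before the outcome is known, and the outcome only feeds back into the prior.

```python
def trade_on_prior(returns):
    """Each period: act on the learned prior direction, then receive the
    actual return as feedback and update the counts."""
    up, down = 1, 1                                # uniform prior over direction
    decisions, correct = [], []
    for r in returns:
        action = "buy" if up >= down else "sell"   # follow the prior
        decisions.append(action)
        went_up = r > 0                            # feedback arrives after the decision
        correct.append((action == "buy") == went_up)
        if went_up:
            up += 1
        else:
            down += 1
    return decisions, correct

# Mostly-up days followed by mostly-down days: the learner flips from
# buying to selling once the learned prevalence flips.
decisions, hits = trade_on_prior([0.01, 0.02, -0.01, 0.03, 0.01,
                                  -0.02, -0.01, -0.03, -0.01, -0.02])
print(decisions)
print(sum(hits), "correct out of", len(hits))
```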
In terms of decisions, we observe our environment; does the sun come up each and every day? Yes, so when we go outside we assume its presence, which is what we already have stored in memory, and generate it in our heads. Our brains conduct on average 20 million billion calculations per second (100 billion neurons times 200 firings per second times 1,000 connections per firing). We store what happened in prior moments, assume their persistence into the next moment, and confirm whether or not that is the case. Consider a mosquito, or several mosquitoes. They fly in front of you and you try to smash them, only to see that they’re not there, and you try to guess where they are, but you can’t see them because they’re not where you expect them to be. Only when you realize they are after your body, and most likely are near it (as opposed to the air you think you swatted them into), are you able to see them. The same with planes: you hear them, but you only see them if you look ahead of the position from which you hear them. So your mind plays tricks on you. Not everything you see is there. Here’s an example: http://www.youtube.com/watch?v=G_Qwp2GdB1M. The reason you don’t see the hollow mask is the very reason my program works. Your prior probability of seeing a hollow face just won’t let you (although if you are schizophrenic, you will actually see the hollow mask as opposed to the illusion).
Now regarding time. That is my favorite question, and I’m very glad you asked it. Time doesn’t exist (although our experience of it, like that of the hollow mask, does; our experience is a real event, but the context is not). I know that sounds weird, but I intend to prove that with my program. Time only exists in the future, and the future does not exist. Think of how you predict the future. If you say you can tell what will happen in the future, i.e. in dynamic systems, then we know that is just not true. The future represents a series of information that has yet to happen. If you consider yourself internal to a system whose future you intend to predict, then if you were able to do so with certainty, you would act upon it, thereby altering the outcome and rendering your initial assessment of the future incorrect. If, however, you were external to the system whose future you were predicting, you could not alter it in any way, and thus the utility of doing so becomes fruitless (but there is no general system from which you, the observer, are external in terms of measurement; this is true at all scales, including the quantum scale, due to the observer effect as demonstrated through empirical proof of Heisenberg’s Uncertainty Principle and through the gedanken experiment of Schroedinger’s Cat).
So where does that leave us, since we are all trying to predict the future? It leaves us with determining probabilities, and that is done based on previous events. Thus the highest-probability future is that which has already happened. The problem with the proximity of the nearest moment in the future to the present moment is that there is no nearest moment, because there is no deterministic causality in the systems of observation. Nevertheless, when we construct probabilities, we do so with multiple dimensions of time, three specifically: interval, iteration, and horizon. In context, consider the question, “What is the probability of a 20% return?” In order to answer that question you must define the interval of the return (daily, quarterly, annual), the iteration of the return (the frequency of observation), and the horizon over which your iterations of intervals are observed. Changing any of these variable dimensions of time results in a different probability distribution. Thus, the future you are predicting is the set of dimensions of time for which you have the greatest prevalence, which are those that have been most useful (called upon) in your struggle to survive.
Time is a construct, thus asking a program what time it is would not confound it. If I programmed it to select a random time, it would receive feedback and learn the prevalence (preference) of the temporal dimension of the interrogator/observer, such that Mu considers whether the correct time is to the left or the right of its estimate. Mu tracks the percentage of the time (the frequency) that the correct answer falls to the left or the right of its prior guesses, and shifts its next guess by that percentage of the time difference in whichever direction is more frequent. In this way, Mu will always tend toward the correct time but will never actually get it exactly right. That agrees exactly with what would happen if someone were to ask you what time it is at a particular moment but you responded with the prior moment: you would always get it wrong, but eventually you would begin to lead or lag (which would take many iterations if the differential is small, analogous to assuming random deviation).
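A rough Python sketch of that time-guessing loop, under my reading of it (the real mechanism may differ, and the step size here is my own arbitrary choice): Mu is told only whether the true time lies to the left (earlier) or right (later) of its guess, and shifts its next guess by the observed frequency of that feedback times a step size, so it tends toward the moving clock without ever landing on it exactly.

```python
def chase_the_clock(true_time=300.0, rounds=50, step=60.0):
    """Mu's guess drifts toward the true time using only left/right
    feedback; the clock keeps advancing while Mu guesses."""
    left = right = 0
    guess = 0.0
    history = []
    for _ in range(rounds):
        if true_time > guess:             # feedback: truth is to the right
            right += 1
        else:                             # feedback: truth is to the left
            left += 1
        total = left + right
        if right >= left:                 # shift toward the more frequent side
            guess += (right / total) * step
        else:
            guess -= (left / total) * step
        history.append(round(guess, 1))
        true_time += 1.0                  # one unit of "time" passes per round
    return history

# The guesses close in on the moving clock but lead or lag rather than
# ever matching it exactly.
print(chase_the_clock()[-5:])
```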
In any event time is a complete construct, as there is no time other than the present, and that isn’t time. Here’s a thought experiment though.
1. If I could perfectly mimic you, let’s say I could, so as to be indistinguishable from you, and put that into a machine, and
2. If I could ship that machine in the off position and send it some large distance that corresponds to the duration of life in your biological housing, and turn it on, and there is no effect on “you” (that which is housed in the machine), then what is distance? What is time if no change occurs on that which is you? Time is a construct of our mind to account for what appears to be change, but if no change, then no time. I can create no change (relativistic entropy), thus I can demonstrate the non-existence of time.
Consider this: if that beam-me-up device in Star Trek were to exist, and I could disintegrate you here and resolve you somewhere else, do you age in the process? No, thus no time.
Here’s another one. Let’s say we have the same beam-me-up machine, but this time it malfunctions, and while you’re on the Enterprise, the machine didn’t dissolve you on Earth. You now exist in two places at once. Only, let’s now say the one on Earth is left fatally injured (some internal injury) by the whole malfunction, and that version of you only has three days to live, while the one on the ship is just fine. Do you care? You shouldn’t, but you do.
The point is that the implication of perfect imitation is that time is non-existent. You could just keep recreating yourself over and over and over. And that’s what you’re doing (at least that is what your consciousness is doing) each and every day. Time is non-existent because perfect imitation is possible. Thus the question of what time it is is irrelevant to whether a computer is conscious or self-aware. Can a computer, and more specifically Mu, learn the concept of time? Absolutely, but only from another person who is concerned with it. Animals are not, in my opinion, concerned with such an artificial concept; though I am certain they experience change, they do not make a point of constructing a measurement of change in the form of time.
Posted: Jul 18, 2012
[ # 64 ]
Member
Total posts: 27
Joined: Jul 17, 2012
So when you think of the future, don’t think of what might occur. Instead, think of the thing which is generating what is occurring (your noggin).
Posted: Jul 18, 2012
[ # 65 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
Thanks, Andres, for giving us your definition of thinking.
Something which is capable of finding/searching some hidden information among many acquired/accessible data, by designing a strategy which was not previously learned by rules or taught to him by programming, at least not in a direct form.
In other words: something which might be creative and design its own strategies, whether they fail or not!
I have two questions:
1) does this mean all other (human) mental functions are not “thinking” functions?
2) is there a way to test an agent, using this definition, to determine that the agent is thinking?
Posted: Jul 18, 2012
[ # 66 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Jul 17, 2012: If you ask my bot, “Which is larger, a whale or a flobadob?”, it will respond along the lines of “I don’t know what a flobadob is but a whale is pretty big and so is probably larger”. This is an educated guess based on what it DOES know but I doubt it is “thinking”.
The ‘educated’ part in this is not coming from the bot (as it is not ‘educated’ in any way, it has no comprehension of the concepts), so the ‘educated’ part is coming from you, the programmer.
I have said this before: a chatbot is actually a grammatical expert system (of some sort), and as with all expert systems, the ‘intelligence’ is brought in by the builders of the system. The bot just follows the predetermined rules. So I agree that it is (still) not thinking.
Posted: Jul 18, 2012
[ # 67 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Merlin - Jul 17, 2012: The basics of an educated guess is probability. Is it more likely that…
Probability plays a role here for sure, but the main feat is ‘handling analogies’ to calculate the probability. When we make an educated guess, we ‘don’t know for sure’, but we up the probability of our guess by filling in the gaps in our knowledge with analogies.
We can even state this: ‘I’m not sure about X, as I have never seen X before, but it looks a lot like Y, so probably X behaves in a similar way to Y’.
You can clearly see that in such a case we calculate the probability based on analogy. To handle analogies, a system must be capable of comprehension of concepts.
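Purely as a mechanical illustration of ‘probability based on analogy’, here is a tiny Python sketch (my own sketch, not anyone’s actual bot, and not a claim that this amounts to comprehension): score the unknown thing against known concepts by shared features, then borrow the attribute of the most similar one, with the overlap standing in for the probability of the guess.

```python
def analogy_guess(unknown_features, knowledge):
    """Educated guess by analogy: find the known concept that shares the
    most features with the unknown one and borrow its attribute."""
    best_name, best_score = None, 0.0
    for name, (features, _) in knowledge.items():
        overlap = len(unknown_features & features) / len(unknown_features | features)
        if overlap > best_score:
            best_name, best_score = name, overlap
    if best_name is None:
        return "no idea"
    is_large = knowledge[best_name][1]
    return (f"probably {'large' if is_large else 'small'} "
            f"(it looks {best_score:.0%} like a {best_name})")

knowledge = {
    "whale": ({"swims", "huge", "mammal"}, True),
    "mouse": ({"runs", "tiny", "mammal"}, False),
}
# A 'flobadob' we have never seen, described only as a swimming mammal:
print(analogy_guess({"swims", "mammal"}, knowledge))
```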
Posted: Jul 18, 2012
[ # 68 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
True, Hans, but for the sake of discussion, I would compare my educating the bot to the way a teacher educates a small child. The teacher imparts knowledge that whales are very large, and the child can assume that an object it doesn’t know about has a good chance of being smaller.
Once the child is asked, “Which is larger, a whale or a flobadob?”, surely he is thinking of the answer based on his previous education?
The teacher educates the child
The programmer educates the bot
The two are the same in my eyes.
Posted: Jul 18, 2012
[ # 69 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Jonathan Charlton - Jul 17, 2012: Here’s what happens when it goes through the Turing Test.
The interrogator asks a specific question to both rooms. Both rooms provide a response. The interrogator tells the rooms whether they got the question right or not and then asks another question. The more questions the interrogator asks, as a point of mathematical fact, the more likely the computer will mimic the human control variable (in the other room).
I think you misunderstand how the Turing-test works; the computer has NO access to the responses of the human participant. Therefore there is nothing to mimic ‘at that time’. This is exactly where (so far) all systems fall down eventually; the programmers need to anticipate all possible questions in advance and build a system that can anticipate… well almost anything conversation-wise. This is near impossible because the interrogator, being a human, can come up with any question and formulate it in any form imaginable. The chatbot has to be able to cater for that ‘upfront’ of the test.
While I am known on the board here to see chatbots as (very) weak-AI, I do bow to the chatbot developers that manage to score pretty high in a Turing-test, given the task described above.
So from the description you gave, I’m pretty convinced that your system not only will fail the Turing-test, but will be pretty useless in a real Turing-test.
Posted: Jul 18, 2012
[ # 70 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Jul 18, 2012: True Hans but for the sake of discussion,
I like that
Steve Worswick - Jul 18, 2012: I would compare my educating the bot to the way a teacher educates a small child. The teacher imparts knowledge that whales are very large and the child can assume that an object it doesn’t know about has a good chance of being smaller.
Once the child is asked, “Which is larger, a whale or a flobadob?”, surely he is thinking of the answer based on his previous education?
Your example is a good one, as it already hints at ‘something’ that is ‘comprehension’. In this case, the only way for the child to assume the size of an object to be ‘smaller than a whale’ is for it to have comprehension of the concepts of size, smaller, larger, etc. Without understanding spatial references, a system cannot assert such a difference between objects.
Putting ‘rules’ into a system, so the system can infer such a difference without ‘understanding why there is a difference’, is not education, it’s cheating
It’s like the saying goes: you can teach a monkey a trick, but does he understand it?
Also, if putting rules into a computer system is equal to teaching, then we teach a wheel to roll by making it round.
Posted: Jul 18, 2012
[ # 71 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Wow, what a long thread this has become. Fascinating read.
Jonathan, you do understand that if you narrow things down like that, you can actually prove most ideas?
I also think that most traders already use similar systems (I know of at least one, and he is just small fry). The basic problem is that such systems are notoriously bad at handling fundamental changes (like a market crash).
Next, what makes you think that current chatbot systems don’t make binary choices based on previously learned content? A pattern definition is something that is stored (whether it was a learned pattern or predefined, doesn’t really matter, many systems can do both). When new input is received, the computer has to make a binary choice: this section of the input, does it match that pattern or not, and if not, does it match the next pattern?
Which brings me to the next thing: how do you handle sequences? Life is not just a binary choice, you may discard time, but you can not discard sequence. And trying to express sequence in a binary way will not really be cost effective, will it?
Finally, this whole binary idea sounds so familiar, doesn’t this ring a bell with anyone else?
I for one will also be very interested to see what will happen in August, please keep us updated.
Posted: Jul 18, 2012
[ # 72 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Hans and Jan,
I’ll respond to Hans first.
So, Hans, I definitely agree with what you’re saying about the human and the computer not corresponding. To clarify, in my test they do not speak. Essentially, the interrogator has the exact same question for both rooms, engaging in the exact same conversation (albeit the most fundamental and basic form of conversation). What the interrogator gets back from each room is an identical response. That is, imagine you went to one room and had a conversation with that room (through a microphone). Then you went to another room and had a conversation with that room, and the conversation you ended up having (both your side and the respondent’s) was identical to the one you just had with the first room. You (1) wouldn’t be able to tell the difference between the rooms and (2) would wonder, am I talking to identical, very identical twins?
That gets me to Jan. Essentially, Jan, what my program argues is that all decisions are binary; essentially, all decisions are True/False (I just use left/right). For example, in a for-loop you have “for x in i”, and this is evaluated as true/false. The computer must decide true/false before taking action. The same for while loops, the same for do-while, the same for if-then. Every single decision, at its most fundamental, is true/false (binary). Now I embed two levels of binary that result in three choices, but only two outcomes (my program accounts for random).
Moreover, when we realize that all decisions are binary, the WAY that we evaluate true/false is standardized, and furthermore, when we do this, repetition is the mother of learning. So the WAY that we learn is through repeated true/false evaluations of the same thing. That is how we learn about that thing at our most fundamental level. The fact is, we learn in repetitive binary (applying memory), in much the same way the rat learns in the rat-maze T-test. We can expand our decision to one way versus all the others (also a binary decision). So at the core of our learning is decision making, and the fundamental method of decision making is binary. Given that, we are all identical in learning, and so by programming a computer to do this exact thing, it defeats the Turing Test, because the way in which we all make decisions is identical. Essentially, Turing would not be able to tell the difference between a machine and a man based upon the decisions each of us makes.
Now can we expand binary decisions to account for language? Yes. I am building that functionality. It is a tensor structure (a 2nd-order tensor). Mu self-replicates when its performance is better than random and dies when it is worse. If it is equal, it merely lives. There are actually two Mu’s running simultaneously: on one side (i.e. the left brain), Mu selects from all of its versions the version that has the longest life. The other side of the brain selects the Mu replicant with the best performance. This is then referred to a cerebellum-like structure, which is in turn just another version of Mu and selects which brain (left/right) has performed best, and in the event that they are tied… well, that’s part of the secret sauce, but let’s just say there’s a reason for “random” in the random-access memory function within your own brain. Decisions are then queried to Mu (like NP-complete questions that can be broken into binary decisions), and Mu is agnostic to them. All it does is query its brain, and those parts of its brain which have performed best or have lived longest, given the set of all possible or experienced binary decisions, return decisions. The cerebellum then chooses between these decisions and returns them as feedback to the “interrogator”. It then ascertains the outcome to see whether the decision was indeed correct, and informs the cerebellum and all the replicants of that outcome, so that they may update. So you have a huge number of perspectives on the same binary decision, but like in Highlander, there can only be one (that is chosen which is choosing).
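For illustration, here is a very rough Python sketch of that replicant scheme as I just described it (not the actual implementation; the tensor structure is left out, the population cap and the tie-break here are my own placeholders): replicants that beat random clone themselves, those that do worse die, one side nominates the longest-lived, the other the best performer, and a cerebellum-like chooser arbitrates.

```python
import random

MAX_POPULATION = 64   # arbitrary cap so the toy colony stays small

class Replicant:
    """One copy of the decision maker: guesses left/right from its own
    counts and tracks its age and hit rate."""
    def __init__(self, counts=None):
        self.counts = dict(counts) if counts else {"left": 1, "right": 1}
        self.age = 0
        self.hits = 0

    def decide(self, rng):
        p_left = self.counts["left"] / sum(self.counts.values())
        return "left" if rng.random() < p_left else "right"

    def feedback(self, guess, truth):
        self.age += 1
        self.hits += (guess == truth)
        self.counts[truth] += 1

    def hit_rate(self):
        return self.hits / self.age if self.age else 0.5

def step(population, truth, rng):
    """One round: every replicant decides and gets feedback; winners
    (better than random) clone themselves, losers die."""
    for r in population:
        r.feedback(r.decide(rng), truth)
    survivors = [r for r in population if r.hit_rate() >= 0.5]
    clones = [Replicant(r.counts) for r in survivors if r.hit_rate() > 0.5]
    return ((survivors + clones) or [Replicant()])[:MAX_POPULATION]

def cerebellum(population):
    """Left brain nominates the longest-lived replicant, right brain the
    best performer; the chooser takes whichever has the higher hit rate
    (the real tie-break is deliberately not reproduced here)."""
    longest = max(population, key=lambda r: r.age)
    best = max(population, key=lambda r: r.hit_rate())
    return best if best.hit_rate() >= longest.hit_rate() else longest

rng = random.Random(1)
population = [Replicant() for _ in range(4)]
for truth in ["left"] * 30:               # a world with a strong left preference
    population = step(population, truth, rng)
chosen = cerebellum(population)
print(len(population), "replicants; the chosen one decides:", chosen.decide(rng))
```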
Posted: Jul 18, 2012
[ # 73 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
If all decisions are binary, does that mean everything is either false or true, no values in between allowed?
Posted: Jul 18, 2012
[ # 74 ]
Member
Total posts: 27
Joined: Jul 17, 2012
Hi Jan,
That’s close. Essentially, observations are binary. Essentially, the feedback is only binary (0/1). However, that feedback is stored in memory for each option. So if you have two options (Left/Right), based on the feedback, Right gets a 1 or a 0 and Left gets a 1 or a 0. However, these accumulate into a probability distribution over the options. So the program receives binary input (what happened), transforms that into a probability distribution of what has happened, and then asserts that what has happened, in terms of probability, is what is happening: it assumes the next data point follows the probability distribution built from the prior data points, determines whether that assumption was correct, and records yet another 1 or 0.
All decisions are binary. We may predict that such and such stock will have a return of 20% at such and such constructed time. Then at that time we state: did that stock return represent a 20% return, true or false? Even the choice as to whether the stock return of concern is 20% is binary. Am I concerned with 20%, yes or no? Am I concerned with 19%, yes or no? Am I concerned with both possibilities, yes or no? Did I receive a stock return of 20%, yes or no? Nature provides us with the questions relevant to our survival. Our brains simply answer the questions in binary. The fact that one process (self-replicating) answers many decisions results in a memory that is queried to answer all questions, even if they are seemingly unrelated. That is because the Mu’s (the replicants) that survive are the ones whose answers perform better than random across all possible questions that have been asked of them before.
Everything is recorded as true or false (0, 1). Everything is experienced as a probability distribution between 0 and 1. Mathematicians and philosophers have a hard time communicating because of the question of the identity, which is 1, such that the entire basis of all mathematics and scientific observation is that the number 1 exists. I assert that mathematics is incorrect in its assertion that “the number 1 exists (the identity exists).” I assert that philosophers are incorrect in their assertion that “the number 1 does not exist (the identity does not exist).” My primary assertion is that the identity (aka the number 1) MAY exist - that it is a matter of probability. As a result, the identity is fractal.
To give an understanding of why a probabilistic identity is fractal, here’s basically how I do it. Imagine you are a satellite collecting LIDAR data (but still imagine yourself as the LIDAR data collector). Essentially, LIDAR shoots a laser beam directly down and measures the time it takes for that beam to bounce back up. This tells you the distance it has traveled. Through persistent surveillance, the beam bounces back from different heights and you get an understanding of the topography of the earth.
So now that you are a LIDAR beam, imagine there is a square on the ground. It appears for all intents and purposes to be two-dimensional - a square - but as you move some distance orthogonal to the direction of your LIDAR beam, you notice that it is not a square; instead it has a third dimension (height), making it a cuboid. So you go all over the earth and see lots of squares, and sometimes they are only two-dimensional and sometimes they are three-dimensional. Thus the dimension of squares is not 2 or 3 but somewhere in between; let’s say 2.8. This is a fractal dimension.
Now we can reduce this information to the question of whether squares are 2-dimensional (0) or 3-dimensional (1), and the answer would be 0.8. Thus the identity of squares as three-dimensional cuboids is 0.8 (that is the identity of the 3-dimensional cuboid), and the fractal identity of the 2-dimensional square is 0.2. There exist no identities that are with any certainty 1 or 0. If we use the convention (construct) of time as an extra dimension, we see that the square’s position itself is in a state of constant change, and thus we are never observing the same square twice. However, since our sensations (our feedback) for the most part, on a macro level, don’t register subatomic angular momentum or position, we often assert that the same square we saw (i.e. as a LIDAR beam) 20 revolutions around the earth ago is the same square we see now. So we consider it the same square because we ask, “Is this the same square, yes/no?”, and our sensations tell us yes. If, however, we were to look at a cuboid sandcastle on the beach with a moat dug around it, and observed that same cuboid structure for 30 days, we might experience that square at certain observations as a cuboid and at other observations as a square. That is the fractal dimension of our observation. So yes, we record only 0 or 1; however, the result is a fractal identity. Also, for every dimension of which you are certain (a single point could really be a line, like looking directly down at the tip of a flagpole; a line could be the very top of a tennis net; a square could be the top of a cube; a cube could erode into a square, or have additional cubes placed on it, like a child putting a smaller pail of sand on top of a larger one), there always exists the possibility, however improbable, of an additional dimension. This continues out to infinity, but the basis of the information on the identity (“is this that?”) is always recorded in binary and applied probabilistically, as a fractal identity.
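As a toy illustration of that in Python (my sketch, with made-up observations): each pass over the object answers one binary question, and the “identity” is just the fraction of yes answers.

```python
def fractal_identity(observations):
    """Each observation answers a binary question ("did this square show
    a third dimension, yes/no?"); the identity is the fraction of yes
    answers rather than a hard 0 or 1."""
    return sum(observations) / len(observations)

# 8 of 10 LIDAR passes over the "square" revealed height, so its identity
# as a 3-dimensional cuboid is 0.8 and as a flat square is 0.2.
passes = [True, True, False, True, True, True, False, True, True, True]
p = fractal_identity(passes)
print(p, round(1 - p, 2))
```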
Posted: Jul 18, 2012
[ # 75 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Jonathan Charlton - Jul 18, 2012: So Hans I definitely agree with what you’re saying about the human and the computer not corresponding. To clarify, in my test they do not speak. Essentially, the interrogator has the exact same question for both rooms, engaging in the exact same conversation (albeit the most fundamental and basic form of conversation).
I think you still don’t understand the major issue with your idea: during the test, the computer has NO access to the other discussion in any way. So there is simply no way that the computer can mimic anything from the other discussion.
Jonathan Charlton - Jul 18, 2012: That gets me to Jan. Essentially, Jan, what my program argues is that all decisions are binary; essentially, all decisions are True/ False (I just use left/ right).
First of all, human thought is not binary in nature; it is analogue. Secondly, and in conjunction with the first point, discussions are not just a string of decisions. A question like ‘how do you feel today’ cannot be answered by a yes/no response.
It seems to me that you are trying to narrow down reality to the point where your program will actually work. That’s a pretty academic approach and has nothing to do with ‘our human reality’.