Senior member
Total posts: 974
Joined: Oct 21, 2009
Will a computer program ever cross that threshold?
Or is “information processing” all a computer program will ever do, no matter how many levels of inductive reasoning are involved?
Perhaps “information processing” is all humans do as well. After all, if you believe in determinism, everything is just a chain reaction, including our so-called thought processes.
So, the question is: can someone give an example of an interaction with a bot that would show the difference between “information processing” and something more along the lines of ‘thought’ (whatever that is!)?
Some argue that since a computer has no free will (it is simply a deterministic Turing machine), it cannot ‘think’ (no free will = no capacity to think).
Or is there some intermediate stage between “information processing” and “thought”? Or perhaps many levels in between?
What DO you think?
Posted: Feb 9, 2011
[ # 1 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Victor Shulist - Feb 8, 2011: Some argue that since a computer has no free will (it is simply a deterministic Turing machine), it cannot ‘think’ (no free will = no capacity to think).
‘Free will’ can be described as ‘random choice’. Of course, random choice can be implemented in software, so we can let software make ‘random decisions’. Going from there, we introduce ‘boundaries’ to make ‘better decisions’. Now, let’s assume the AI can set its own boundaries based on ‘learning’ and ‘building experience’; that would be getting pretty close to what we call ‘consciousness’: the AI would be able to make a ‘conscious decision’.
The above would lead to an AI being able to disagree with your input; it would actually ‘reason’ with you. I would say that would equate to ‘thinking’, at least within some model or definition.
It might be obvious that the model I’m working on is aiming in this direction.
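To make that concrete, here is a minimal sketch of the idea (illustrative only, with made-up names; not the actual model): choices start out uniformly random, and feedback gradually tightens the ‘boundaries’ so that later decisions are shaped by experience.

import random

class BoundedChooser:
    """Starts with pure random choice; feedback tightens the 'boundaries'."""

    def __init__(self, options):
        # Every option starts out equally plausible.
        self.weights = {opt: 1.0 for opt in options}

    def decide(self):
        # Weighted random choice: near-uniform early on ('free'),
        # increasingly shaped by experience later ('will').
        opts = list(self.weights)
        return random.choices(opts, weights=[self.weights[o] for o in opts])[0]

    def feedback(self, option, reward):
        # Learning: the system adjusts its own boundaries from outcomes.
        self.weights[option] = max(0.01, self.weights[option] + reward)

chooser = BoundedChooser(["greet", "ask", "joke"])
for _ in range(100):
    choice = chooser.decide()
    chooser.feedback(choice, 0.5 if choice == "ask" else -0.1)
print(chooser.weights)  # 'ask' ends up dominating the 'conscious decision'

Early on the choice is essentially ‘free’; after enough feedback the weight distribution expresses something like ‘will’.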
Posted: Feb 9, 2011
[ # 2 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
My first thought when reading Victor’s post was that in order for a bot to have free will, we need to throw a random element into the code. Hans’s thoughts seem similarly directed. However, upon further mulling, I’ve decided this isn’t necessary, and possibly isn’t even helpful. After all, random decisions incorporate “free”, but they don’t really imply “will”.
Our inability to predict a person’s output makes a person seem as though they have free will and intelligence. But just because we cannot predict an outcome does not mean that it is not deterministic. We simply do not know all of the relevant “initial state” information, or we do not know (or cannot follow with our own minds) the algorithms that operate on that information. Thus it seems some random element must be employed to direct the person’s thoughts in the surprising direction they took. And certainly I sometimes feel like a thought popped into my own head randomly. But the semblance of randomness is not (necessarily) randomness in reality.
In physics, one often talks about “emergent” behavior. That is, a macroscopic state that arises from the microscopic details of the system, but which isn’t obvious or easily predicted when directly studying the fundamental forces/interactions at play. These macroscopic states can be rich and complex in behavior, and though we can often write down the fundamental equations that describe the system, they aren’t practically useful in studying the emergent state. I view intelligence in an analogous way: an emergent behavior arising from many interactions (or in the case of AI, algorithmic processes) that, when considered individually, would not necessarily imply the complex and often surprising outcome of the whole.
This analogy also gives me hope that we do not need to directly reproduce the brain in order to reproduce its behavior. The best models of emergent systems are not their fundamental equations, but simpler equation forms that take advantage of our understanding of the emergent state.
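A standard toy illustration of this (my example, nothing bot-specific): Conway’s Game of Life. The update rule below is purely local, yet a ‘glider’ emerges, a shape that travels across the grid even though nothing in the rule mentions motion.

from collections import Counter

def step(live):
    # live is a set of (x, y) cells; count the live neighbours of every
    # cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted by (1, 1): emergent 'motion'

The fundamental rule says nothing about travelling shapes, yet studying gliders directly is far more useful than re-deriving them from the rule each time; that is the sense in which the emergent description is the better model.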
Posted: Feb 9, 2011
[ # 3 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Hans Peter Willems - Feb 9, 2011:
‘Free will’ can be described as ‘random choice’. Of course, random choice can be implemented in software. So we can let software make ‘random decisions’.
Interesting comment. This is still, of course, assuming the entire universe is non-deterministic (i.e., there truly is a source of randomness). But the same thought has crossed my mind: having the system initially select randomly, then, based on feedback from its environment, refine its choices. I have even gone as far as to use a completely external source of entropy, HotBits ( http://www.fourmilab.ch/hotbits/ ), random numbers generated from nuclear decay. CR can provide comments here, I’m sure.
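For what it’s worth, wiring external entropy into a bot can be as simple as the sketch below. The fetch_hotbits_bytes name is hypothetical (I’m not reproducing the actual HotBits API here); the stand-in falls back to the operating system’s entropy pool.

import os
import random

def fetch_hotbits_bytes(n_bytes):
    # Hypothetical stand-in for a real call to an external entropy source
    # such as HotBits; here we fall back to the OS entropy pool.
    return os.urandom(n_bytes)

# Seed the PRNG from external entropy; everything after the seed is
# deterministic, so only the seed itself is 'truly' random.
rng = random.Random(int.from_bytes(fetch_hotbits_bytes(16), "big"))
print(rng.choice(["option A", "option B", "option C"]))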
C R Hunt - Feb 9, 2011:
However, upon further mulling, I’ve decided this isn’t necessary, and possibly isn’t even helpful. After all, random decisions incorporate “free” but they don’t really imply “will”.
Hmm, or does it? So it seems “will” is either:
a) random (or at least initially random)
b) “hard coded”, but sent to us
c) “hard coded” inside of us
or perhaps a combination of all the above.
I myself do not really believe “will” is necessary for intelligence.
C R Hunt - Feb 9, 2011:
In physics, one often talks about “emergent” behavior. […] I view intelligence in an analogous way: an emergent behavior arising from many interactions (or in the case of AI, algorithmic processes) that, when considered individually, would not necessarily imply the complex and often surprising outcome of the whole.
100% agreement! And this is why it is often frustrating in this forum when showing an early demonstration of your bot: you are trying to provide “sneak peeks” at its abilities, and someone says, “Oh no no no… that’s just information processing.”
Yes, it will not be one single algorithm; it will be layers and layers of algorithms (or layers and layers of levels of abstraction of rules).
I don’t believe that if we get to true AI, it will suddenly go BANG!! from “information processing” to “truly thinking”. My opinion is that computer intelligence will evolve in an analog way: the difference between “information processing” and “thought” is simply the level of complexity it can handle, and the levels of abstraction.
So I think more and more people will start referring to computers as “thinking” as they climb that ladder of abstraction (some non-techie people even do right now, when a web page shows “Loading…”, which is of course laughable).
So by year X, Y people will believe computers “think”, because their information-processing abstraction level has reached level Z;
by year X + C1, Y + C2 people will believe computers “think”, because that abstraction level has reached Z + C3;
by year X + C4, Y + C5 people will believe computers “think”, because that abstraction level has reached Z + C6;
the C’s being some constants.
Of course, there are people who argue that EVEN IF a program passes a Turing Test, that does not prove it is thinking, because of the WAY it is done: it is simply INFORMATION PROCESSING!
Posted: Feb 9, 2011
[ # 4 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Which brings me to the statement that I’ve been tempted to make so many times, but refrained from due to considerations of politeness:
All that human thought is, is simply information processing.
Intuition, on the other hand, is the “act” (for lack of a better word) of coming to a conclusion, or making a supposition, without sufficient information to support said conclusion or supposition. Well, that, AND just happening to be correct, of course.
The point here, of course, is “So what if it’s simply information processing? The same can be said about us, as well!”
I think we can now safely say that this particular dead horse is well and truly beaten. Next?
[edit]
Don’t get me wrong here. I think that the above discussion is well worth pursuing. But the negativism of declaring that “item X is just adjective Y!” is counterproductive when it’s repeated like a knell of doom. To make the statement once, perhaps also citing some supporting facts, is one thing. To belabor the point is unnecessary and defeatist.
Ok, I now officially relinquish the Soap Box.
[/edit]
Posted: Feb 9, 2011
[ # 5 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
C R Hunt - Feb 9, 2011:
Our inability to predict a person’s output makes a person seem as though they have free will and intelligence. But just because we cannot predict an outcome does not mean that it is not deterministic. We simply do not know all of the relevant “initial state” information, or we do not know (or cannot follow with our own minds) the algorithms that operate on that information. Thus it seems some random element must be employed to direct the person’s thoughts in the surprising direction they took. And certainly I sometimes feel like a thought popped into my own head randomly. But the semblance of randomness is not (necessarily) randomness in reality.
When I built JAIL (JavaScript Artificial Intelligence Language, used to create Skynet-AI), I recognized that one of the limitations of most bots is their level of predictability. It makes them boring. In fact, based on experience, if a user receives the same response within 10 lines of a conversation, he is much more likely to sign off. Some of the key features I added were to help overcome that.
The “directed randomness” helps provide the illusion of intelligence.
In Skynet-AI it is virtually impossible to have the same conversation twice. It has a built-in “directed randomness”, or in CR’s terms, “emergent behavior”. To see one of the best examples of this, you can ask Skynet-AI the question “Who are you?” and in real time it will answer you and write you a press release. The press releases are not stored as files but generated as an algorithmic process.
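The bookkeeping behind that kind of rule can be tiny. Here is my own sketch of it (illustrative Python, not actual JAIL code): a responder that picks at random among candidates but refuses to repeat anything it said within the last 10 replies.

import random
from collections import deque

class DirectedRandomResponder:
    """Random choice among candidates, but never repeats a response
    used within the last `window` replies."""

    def __init__(self, window=10):
        self.recent = deque(maxlen=window)

    def respond(self, candidates):
        # Prefer responses the user has not seen recently; if everything
        # has been used, fall back to the full list rather than go silent.
        fresh = [c for c in candidates if c not in self.recent]
        reply = random.choice(fresh or candidates)
        self.recent.append(reply)
        return reply

bot = DirectedRandomResponder()
greetings = ["hi there!", "hello!", "hey, good to see you", "greetings!"]
for _ in range(6):
    print(bot.respond(greetings))  # no repeats until the list is exhausted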
Posted: Feb 9, 2011
[ # 6 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Discussing whether the ‘free will’ of humans is deterministic or not has very little merit to me. We (try to) measure AI in relation to ‘human’ behaviour, but it doesn’t matter whether this human behaviour is ‘real’ or just how we humans perceive it to be. That is a whole different discussion that has little bearing on AI research.
As for the ‘random’ thing: I just stated that we can have software do ‘random’ things. However, I don’t think that ‘pure randomness’ has anything to do with reasoning. Even more so, the moment a human defaults to randomness (as in ‘let’s just pick an option’) is the moment real reasoning halts for lack of information, knowledge or insight. Nevertheless, what follows from this is that an AI does need randomness when faced with the same situation of having insufficient data to reach a reasonable result, even if that randomness leads to the answer ‘sorry, not enough information to give a reasonable response’.
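As a sketch of that division of labour (my toy illustration, assuming a trivial fact store): reason deterministically while the data suffices, and let randomness in only once reasoning has already halted.

import random

def answer(question, facts):
    # If we have relevant facts, answer deterministically from them.
    relevant = [f for f in facts if question in f]
    if relevant:
        return relevant[0]
    # Randomness only enters where reasoning has already halted.
    return random.choice([
        "sorry, not enough information to give a reasonable response",
        "I could only guess at that",
    ])

facts = ["the sky is blue", "water boils at 100 C"]
print(answer("sky", facts))     # reasoned: "the sky is blue"
print(answer("quarks", facts))  # random fallback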
And for the sake of conversation I throw in a part of a Rush lyric: if you choose not to decide, you still have made a choice.
Posted: Feb 9, 2011
[ # 7 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Even with “sufficient data”, an AI will be perceived as more intelligent if the output to a given stimulus is not static. The output cannot be purely random, but adding variation makes it seem much more human.
Posted: Feb 9, 2011
[ # 8 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Hans Peter Willems - Feb 9, 2011:
And for the sake of conversation I throw in a part of a Rush lyric: if you choose not to decide, you still have made a choice.
Ah, a Rush fan; me too, they’re great! Maybe they haven’t made a choice, but they have made a “meta choice” (having chosen not to decide).
Merlin - Feb 9, 2011: Even with “sufficient data”, an AI will be perceived as more intelligent if the output to a given stimulus is not static. The output cannot be purely random, but adding variation makes it seem much more human.
Agreed. One very simple example of this in my bot is having it pick synonyms. In addition to that, it will first pick a template (that is, randomly choose its output syntax), then replace certain adjectives with their synonyms. That gives it a little randomness. I have to agree with Hans, though: randomness is perhaps not as important, or not important at all, in the reasoning stage.
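In sketch form, that two-stage variation looks something like this (a minimal illustration; the templates and synonym lists are invented, not my bot’s actual grammar):

import random

# Stage 1: pick a template (the output syntax); stage 2: swap certain
# adjectives for synonyms.
TEMPLATES = [
    "That is a {adj} idea.",
    "What a {adj} idea that is!",
]
SYNONYMS = {"good": ["good", "fine", "excellent", "solid"]}

def vary(adj):
    template = random.choice(TEMPLATES)             # random output syntax
    word = random.choice(SYNONYMS.get(adj, [adj]))  # random synonym
    return template.format(adj=word)

for _ in range(3):
    print(vary("good"))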
Posted: Feb 9, 2011
[ # 9 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Merlin - Feb 9, 2011: Even with “sufficient data”, an AI will be perceived as more intelligent if the output to a given stimulus is not static. The output cannot be purely random, but adding variation makes it seem much more human.
I agree that the AI should be able to ‘formulate’ an answer in such a way that it is not static. However, I think there are better mechanisms than ‘randomness’. In AIML it is very easy to have a list of synonyms to choose from at random while building the response. I experimented with this myself, but this is one of the things that pushed me towards strong-AI research: randomness can give some pretty stupid results in the flow of a conversation. So in my current model the formulation of an answer IS based on synonyms, but a synonym is chosen based on its value in relation to the conversation flow and (this is important) the ‘emotional status’ of the AI (based on the PAD model).
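A sketch of how such a choice could work (my reading of the idea, with invented PAD values; not the actual model): give each synonym a PAD vector (pleasure, arousal, dominance) and pick the one closest to the AI’s current emotional state, instead of picking at random.

# Each synonym carries a PAD vector (pleasure, arousal, dominance);
# the values below are invented for illustration.
SYNONYMS = {
    "happy": {"content":  (0.5, 0.1, 0.3),
              "cheerful": (0.7, 0.6, 0.4),
              "ecstatic": (0.9, 0.9, 0.5)},
}

def pick_synonym(word, mood):
    # Choose the synonym whose PAD vector lies closest to the current
    # emotional state (squared Euclidean distance).
    candidates = SYNONYMS.get(word, {word: mood})
    return min(candidates,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(candidates[s], mood)))

print(pick_synonym("happy", (0.4, 0.1, 0.3)))  # calm state -> "content"
print(pick_synonym("happy", (0.9, 0.8, 0.5)))  # excited state -> "ecstatic"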
Posted: Feb 9, 2011
[ # 10 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Yes, I’d imagine if you let it be TOO random it could get quite humorous; can you think of any examples?
For me, the bot must understand what it is saying. I will limit the randomness to three things:
a) selection of an output template
b) selection of adjective synonyms
c) interjections, to give it a sense of “free will”: even if asked a direct question, it could suddenly say, “Boy, you’re full of questions today, aren’t you? Anyway, your answer is <x>.”
BUT, of course, it will have to count the number of questions asked in this “conversation” so far, and not say that as its first reply of the conversation.
And that is where conversation history and state play a huge role (see the sketch after the example below).
Some bots fall victim to…
me : hello
ai: hi there !
me: hi
ai: hi there !
me: greetings!
ai: hi there !
me: good morning
ai: hi there !
How about: “ok buddy, can we get past the greetings?”
So this is synonym handling on the incoming side.
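Here is the kind of conversation-state bookkeeping this needs (a hypothetical sketch with hard-coded replies, just for illustration): count greetings and questions so the bot can break the pattern instead of replying “hi there!” forever.

GREETINGS = {"hello", "hi", "greetings", "good morning"}

class ConversationState:
    """Tracks greeting and question counts across a conversation."""

    def __init__(self):
        self.greetings = 0
        self.questions = 0

    def reply(self, utterance):
        if utterance.lower().strip("!. ") in GREETINGS:
            self.greetings += 1
            if self.greetings > 2:
                return "ok buddy, can we get past the greetings?"
            return "hi there!"
        if utterance.endswith("?"):
            self.questions += 1
            if self.questions > 3:
                return "Boy, you're full of questions today, aren't you?"
        return "..."

state = ConversationState()
for line in ["hello", "hi", "greetings!", "good morning"]:
    print(state.reply(line))  # the third greeting triggers the pushback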
By the way, on the whole free-will thing, I found
http://www.spaceandmotion.com/Philosophy-Free-Will-Determinism.htm interesting:
“What we need for understanding rational human behaviour - and indeed, animal behaviour - is something intermediate in character between perfect chance and perfect determinism - something intermediate between perfect clouds and perfect clocks. (Popper, 1975)”
Posted: Feb 10, 2011
[ # 11 ]
Senior member
Total posts: 153
Joined: Jan 4, 2010
Perhaps reading a book or two written by Douglas Hofstadter might help here. He has spent much of his life examining thought. He even has a book subtitled “Computer Models of the Fundamental Mechanisms of Thought”.
No, I don’t believe all human thought is information processing. Why would anybody dream? What is god for? Etc.
Does “emergent” behavior include Darwin’s theory of evolution? Maybe evolution is not scientific…
C R says: “This analogy also gives me hope that we do not need to directly reproduce the brain in order to reproduce its behavior.”
I see “analogy”, “hope”, “reproduce”, “behavior”, which are terms full of abstraction that often amount to more than information processing (although “behavior” strives to be as exact as information processing, with no bias added). “Reproduce” is interesting because she intends to simulate the real thing in an artificial way. This is something new to match against the information extracted (processed). It is creating, not inferring or deducing. It is experimenting.
Posted: Feb 10, 2011
[ # 12 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Gary Dubuque - Feb 10, 2011: Perhaps reading a book or two written by Douglas Hofstadter might help here. He has spent much of his life examining thought. He even has a book subtitled “Computer Models of the Fundamental Mechanisms of Thought”
I’ve never read any of his work—thanks for the recommendation.
Gary Dubuque - Feb 10, 2011: Does “emergent” behavior include Darwin’s theory of evolution? Maybe evolution is not scientific…
I can’t comment specifically on evolution, as it isn’t my field. But in general, when considering the many chemical and “nanomechanical” processes that go on within cells and between them, giving rise to an organism that functions as a single unit—yes, that strikes me as quite emergent.
Then to go up in scale, consider the effect that a few (or perhaps even one) nanoscale changes to such an organism have on the development of the whole, and then on the population of organisms, to the point of speciation. Such complex systems may well be emergent.
But, again, this is only an impression and not an analysis.
I searched for a reference to emergent phenomena and found a paper by Vince Darley of the Harvard Division of Applied Sciences titled “Emergent Phenomena and Complexity”. On first glance it looks pretty good*. Here’s a particularly relevant excerpt:
“Now we need no longer deal with any explicit dichotomy between emergent and non-emergent phenomena. The perceived lack of understanding in the former is really just another way of describing the complexity of the map between initial state and final phenomenon. In the sense that a lack of knowledge of the initial conditions will usually cause increasingly poor predictions [...]”
Gary Dubuque - Feb 10, 2011: C R says: “This analogy also gives me hope that we do not need to directly reproduce the brain in order to reproduce its behavior.”
I see “analogy”, “hope”, “reproduce”, “behavior”, which are terms full of abstraction that often amount to more than information processing (although “behavior” strives to be as exact as information processing, with no bias added). “Reproduce” is interesting because she intends to simulate the real thing in an artificial way. This is something new to match against the information extracted (processed). It is creating, not inferring or deducing. It is experimenting.
Experimenting certainly is creating something new—but not out of a void. When we create, we decide that there is an analogy between the object we are designing and some other known objects that do what we want our object to do. We decide they are analogous because we know a vast quantity of information about what the other known objects are composed of, how they act together, etc. We use what we know of physical laws and cause and effect—built on a lifetime of experience—to deduce what our new object will do.
* I want to clarify something that might appear to be a deviation in the text from my description of emergent phenomena. That is, Darley asserts that a system is emergent if it is more efficient to numerically compute the solution to the system rather than develop a “creative analysis”. The numerical computation is analogous to directly solving the fundamental equations, as I discussed before. In practical cases, computational methods introduce problems of their own and the most useful descriptions of systems, while not necessarily capturing all of the physics, are clever models that are deduced using what we know of the special properties of the emergent state.
Posted: Feb 10, 2011
[ # 13 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Gary Dubuque - Feb 10, 2011: Does “emergent” behavior include Darwin’s theory of evolution? Maybe evolution is not scientific…
I was looking more carefully through the Darley paper and it seems that the leap from emergent behavior to evolution has occurred to others as well:
“[...] let us hypothesise the concept of an ‘emergent phenomenon’ as a large scale, group behaviour of a system, which doesn’t seem to have any clear explanation in terms of the system’s constituent parts. Rather than viewing this as the first step in the hierarchical evolution of hyperstructures[1], I am interested in ‘first-order emergence’ in its own right, as it is currently more amenable to a precise formalisation than the elusive increase in complexity one observes in natural evolution.”