Posted: Jul 3, 2012
[ # 31 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Hans Peter Willems - Jul 3, 2012:
I disagree. By that definition, when we think about how to do something, then write an algorithm for it and put it in a machine so the machine can do it, the machine would be ‘thinking’.
And yes, I’d say it is thinking by following instructions. Every time a machine can do something a human can, we stop and say, “oh, a machine can do that now, so that isn’t thinking.” This is called the “AI effect”, and it is a fallacy.
If I gave a human being the code to some of the software I wrote, he or she would fail at some point, losing focus and making errors. A computer ‘keeps it all straight’, following instructions of virtually unlimited depth and complexity. Yes, it is a form of thinking. Much narrower than human-level thinking, but again, thinking is an analog value, not a yes/no.
Dave Morton - Jul 3, 2012: I’d have to debate the concept of “error free”, but I would accept a concept of “minimal error”, instead.
Ok, are we talking about Microsoft Windows here, or a CPU and compiler?
When was the last time a microprocessor made an error? Oh, perhaps the Pentium FDIV bug, back in 1994?
When was the last time a properly tested CPU executed your code incorrectly? Exactly: it doesn’t happen, my friend.
When there is an error, it’s your error, not its.
There will be some people who, no matter *what* the system does (pass the Turing test, surpass every human at absolutely every possible task, even show creativity, come up with new scientific theories, solve the grand unified theory), will say: nope, since a computer is doing it now, it’s not thinking. They’ll reach for something like “it doesn’t have a soul”, or something equally silly, to justify that it doesn’t really think. After all, they’ll say, isn’t it STILL just following a program!?
Posted: Jul 3, 2012
[ # 32 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Victor Shulist - Jul 3, 2012:
Dave Morton - Jul 3, 2012: I’d have to debate the concept of “error free”, but I would accept a concept of “minimal error”, instead.
Ok, are we talking about Microsoft Windows here, or a CPU and compiler?
When was the last time a microprocessor made an error? Oh, perhaps the Pentium FDIV bug, back in 1994?
When was the last time a properly tested CPU executed your code incorrectly? Exactly: it doesn’t happen, my friend.
When there is an error, it’s your error, not its.
Actually, I’m also considering hardware failures. Overheating CPUs, degrading RAM modules, and other hardware faults can introduce errors as well. Granted, most instances of hardware failure produce effects that are far more catastrophic than a simple error, but the possibility exists, however low the probability.
Victor Shulist - Jul 3, 2012:
There will be some people who, no matter *what* the system does (pass the Turing test, surpass every human at absolutely every possible task, even show creativity, come up with new scientific theories, solve the grand unified theory), will say: nope, since a computer is doing it now, it’s not thinking. They’ll reach for something like “it doesn’t have a soul”, or something equally silly, to justify that it doesn’t really think. After all, they’ll say, isn’t it STILL just following a program!?
You’re pulling a “Rush Limbaugh” here, Victor, by using an absurdity to point out an absurdity. I’m not sure it’s working here. But that’s just my opinion.
I used to follow some discussions about AI over at LinkedIn, and one particular discussion comes to mind here. The question that was being debated (and rather passionately, I might add) was “do Humans have ‘Free Will’?”. Seems simple enough, really; or it seems so to me, at least. It was shocking to me just how many otherwise intelligent individuals firmly (even zealously, fanatically, obsessively, pick your favorite extreme adjective) believed that “free will” was nothing more than a biochemical reaction, and that we Humans have no more free will than a nematode. I see the potential for similar “strong polarity of beliefs” happening here, so I just want to ask now, before things (potentially) get out of hand, that we realize that not everyone will agree with our ideas, and that’s ok.
Posted: Jul 3, 2012
[ # 33 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Victor Shulist - Jul 3, 2012: When was the last time a microprocessor made an error? Oh, perhaps the Pentium FDIV bug, back in 1994?
I think AMD screwed up big time just recently, with an error in one of the instructions.
Posted: Jul 3, 2012
[ # 34 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Dave Morton - Jul 3, 2012:
Actually, I’m also considering hardware failures. Overheating CPUs, degrading RAM modules, and other hardware faults can introduce errors as well. Granted, most instances of hardware failure produce effects that are far more catastrophic than a simple error, but the possibility exists, however low the probability.
Extremely low probability, exactly. So my original point stands.
Dave Morton - Jul 3, 2012:
You’re pulling a “Rush Limbaugh” here, Victor, by using an absurdity to point out an absurdity.
Well, no, I’m just saying that at the end of the day, people will believe (or not believe) what they want. I’m not referring to anyone on this site. But there will be those who cling to “it is just following instructions” until the bitter end. Ok, that’s that.
Jan Bogaerts - Jul 3, 2012:
Victor Shulist - Jul 3, 2012: When was the last time a microprocessor made an error? Oh, perhaps the Pentium FDIV bug, back in 1994?
I think AMD screwed up big time just recently, with an error in one of the instructions.
Yes, AMD did. Again, humans: either a human making a hardware design error, or a human introducing a software bug!
About the differing opinions: yep, point taken. I admire everyone’s focus on their own approach. Who knows who is ‘right’; time will tell. It is just nice to have such a varied set of opinions!
Posted: Jul 3, 2012
[ # 35 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Victor Shulist - Jul 3, 2012:
Ok, are we talking about Microsoft Windows here, or a CPU and compiler?
When was the last time a microprocessor made an error? Oh, perhaps the Pentium FDIV bug, back in 1994?
When was the last time a properly tested CPU executed your code incorrectly? Exactly: it doesn’t happen, my friend.
When there is an error, it’s your error, not its.
Actually, Victor, try googling “soft error rates”. The numbers I’ve heard in the past say that you can expect your CPU to give a bit error every two years or so. I can’t find a source for this figure now, but this guy says you can expect one every 4-5 years from atmospheric neutrons alone. Typically, these errors are described in units of “FIT”, or number of errors per billion (10^9) device hours of operation.
This may not seem terribly relevant for you or me, and it isn’t. But as soon as you scale up to supercomputers employing tens of thousands of cores, error rates get serious and you have to design your architecture with these failures in mind.
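To put rough numbers on that scaling argument, here is a minimal sketch in Python. The 25,000 FIT per machine used below is an assumed figure chosen only to land near the “one error every 4-5 years” ballpark above, not a measured rate for any real CPU:

```python
# Expected soft errors from a FIT rating.
# 1 FIT = 1 error per 10^9 device-hours. The 25,000 FIT below is an
# assumption for illustration; real rates depend on altitude, process
# node, and how much memory/logic is being counted.

HOURS_PER_YEAR = 24 * 365

def expected_errors(fit, devices, hours):
    """Expected error count for `devices` devices, each running for
    `hours` hours, at a soft-error rate of `fit` FIT per device."""
    return fit * devices * hours / 1e9

# One desktop over five years: about one error.
print(expected_errors(25000, 1, 5 * HOURS_PER_YEAR))   # ~1.1

# A 10,000-node cluster over a single day: several errors every day.
print(expected_errors(25000, 10000, 24))                # ~6.0
```

Same FIT rating either way, but the cluster sees in a day what the desktop sees in years, which is why the architecture has to be designed around these failures.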
Of course, all of this is an aside from the conversation at hand. I just thought it was interesting that, in a sense, we’ve come full circle from worrying about “bit errors” due to actual bugs in our room-sized computers to having such reliable hardware that we don’t even realize bit errors can be an issue, then on to having such massive computing systems that suddenly those ol’ bugs are back again. (In the form of radioactive particles, but hey, that just makes things all the awesomer. )
Posted: Jul 4, 2012
[ # 36 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
C R Hunt - Jul 3, 2012: every two years or so
I think we can safely ignore that… just add a wee bit of additional error checking.
My point was simply that, compared to a human, it can pretty much be ‘rounded off’ to 0 from a practical engineering point of view. At millions of calculations per second, for two years: one error. How many calculations can a human do before making an error?
When you take any rule of thumb to ridiculous proportions, of course there are exceptions. And if you NEED that amount of processing, recheck your algorithm for efficiency.
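For the record, that “wee bit of additional error checking” can be as crude as recomputing and taking a majority vote, which is essentially software triple modular redundancy. A toy sketch, not anything from a real system discussed here; `compute` is just any deterministic function you want to guard:

```python
from collections import Counter

def vote(compute, *args, runs=3):
    """Run a deterministic `compute` several times and return the
    majority result. A toy guard against rare transient (soft) errors;
    it assumes results are hashable and errors are independent."""
    results = [compute(*args) for _ in range(runs)]
    value, count = Counter(results).most_common(1)[0]
    if count <= runs // 2:
        raise RuntimeError("no majority result; retry or escalate")
    return value

# Usage: wrap a calculation whose result we want double-checked.
print(vote(lambda x, y: x * y, 21, 2))   # 42
```

The cost is a constant factor of recomputation, exactly the kind of overhead you would only pay where the error rate, or the consequence of an error, actually matters.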
Posted: Jul 4, 2012
[ # 37 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Suddenly, the notion that I “need” a good pair of walking shoes, yet “want” a Bugatti Veyron, and have to “settle” for an old Toyota pickup pops to mind. It’s not my algorithms that need to be checked for efficiency, but my income stream.
Posted: Jul 4, 2012
[ # 38 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Roger Davie - Jul 3, 2012: However, what would you say about a system like this:
http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?_r=2&partner=rssnyt&emc=rss
Where the programmer gives the machine the means, but the machine teaches itself…
However impressive, this is what I call ‘peripheral intelligence’. The system does not ‘think’: it learns to identify a pattern that is designated, by a human, to be a cat. It still has no perception of what a cat is; it is only capable of discriminating recognized patterns, based on what a human first identified as the ‘pattern’ (of a cat) to be recognized. This sort of technology of course has its place in AI development, and is in my opinion an important piece of the total puzzle.
To bring back the analogy with humans: there is of course intelligence involved in recognizing a cat, but we start thinking about a cat AFTER we have recognized it (either out in the world or inside our episodic memory).
Posted: Jul 4, 2012
[ # 39 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Jul 4, 2012: However impressive, this is what I call ‘peripheral intelligence’. The system does not ‘think’: it learns to identify a pattern that is designated, by a human, to be a cat.
As I understand it from the article, the algorithm is given no information about what pattern it should be looking for. The only thing people do to “teach” it is to make sure there is a cat in every picture they give it. The algorithm just tries to identify what all the pictures (roughly) have in common.
This is no different than a parent pointing out a cat to their toddler. We just make sure there’s actually a cat there when we point and say “cat”. The toddler does the rest: dissecting its environment, deciding what is always around when the parent says “cat”. When the researchers give the algorithm photos, they are essentially pointing to another cat. The analog to saying “cat” is the algorithm’s assumption that the pictures must all have something in common.
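To make that concrete, here is a toy sketch of “finding what unlabeled pictures have in common”: cluster raw image patches with plain k-means and keep the cluster centres as discovered features. This is only a small stand-in for the vastly larger network described in the article, and every function name and parameter here is made up for the example:

```python
import numpy as np

def extract_patches(images, patch=8, per_image=50, rng=None):
    """Cut random patch x patch squares out of unlabeled grayscale
    images (2-D arrays) and flatten them into vectors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    patches = []
    for img in images:
        h, w = img.shape
        for _ in range(per_image):
            y = rng.integers(0, h - patch)
            x = rng.integers(0, w - patch)
            patches.append(img[y:y + patch, x:x + patch].ravel())
    return np.array(patches, dtype=float)

def learn_features(patches, k=16, iters=20, rng=None):
    """Plain k-means: the k centres that emerge are recurring patterns
    the images (roughly) have in common, found with no labels at all."""
    rng = rng if rng is not None else np.random.default_rng(0)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        dists = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers
```

No one tells the code what a cat (or an edge, or a whisker) looks like; the only “supervision” is whatever regularity happens to be present in the pictures, which is the point of the parent-and-toddler comparison.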
Actually, this reminds me of an anecdote from my own childhood (that I’m frequently reminded of by relatives). On a trip to the zoo, my parents wheeled me from animal to animal in a stroller, pointing to each and telling me its name. Finally, at the giraffe enclosure, I voiced my confusion: “That rock is a giraffe?”
Turns out I couldn’t see any of the animals from my vantage point low in the stroller. Even humans can be misled if teachers don’t offer consistent guidance; in this case, that means not slipping pictures of rocks in with the cats.
Hans Peter Willems - Jul 4, 2012: It still has no perception of what a cat is; it is only capable of discriminating recognized patterns, based on what a human first identified as the ‘pattern’ (of a cat) to be recognized. This sort of technology of course has its place in AI development, and is in my opinion an important piece of the total puzzle.
What would qualify as perception of what a cat is? I’d say being able to identify that a specific picture—taken at a specific angle, under unique lighting conditions and with unique coloring—falls under a category you’ve generated yourself requires critical thinking. Including the development of evolving lists of hypotheses of what qualifies an image or part of an image for your category. And what is thinking but the incorporation, categorization, and generalization of information?
Posted: Jul 4, 2012
[ # 40 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Jul 4, 2012: To bring back the analogy with humans: there is of course intelligence involved in recognizing a cat, but we start thinking about a cat AFTER we have recognized it (either out in the world or inside our episodic memory).
Wouldn’t generating your own amalgam image of the object “cat” count as thinking about a cat after recognizing it?
I’m just trying to flesh out your own definitions of “thinking” and “perception” and so forth, since these are such wily terms. Whether (and to what extent) Google’s algorithm is “thinking” or anything else, I couldn’t begin to comment on until I learned more about 1) what their results were, 2) to what degree/consistency these results were achieved, and 3) additional details about what the algorithm takes for granted and what is generated from the environment (the pictures).
Posted: Jul 4, 2012
[ # 41 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
C R Hunt - Jul 4, 2012: What would qualify as perception of what a cat is? I’d say being able to identify that a specific picture—taken at a specific angle, under unique lighting conditions and with unique coloring—falls under a category you’ve generated yourself requires critical thinking.
I would say that my own ‘thinking about a cat’ involves a lot more than ‘identifying the pattern that represents it’. That is ‘seeing’ instead of ‘thinking’.
I ‘see’ (as in identifying the pattern) a lot of things each day without thinking about them. If I were to think about everything I saw every day, I would go bonkers. We are pretty good at filtering the input from our senses, so we can focus on what we deem important. Seeing is a ‘peripheral’ process; thinking about something is not.
C R Hunt - Jul 4, 2012: And what is thinking but the incorporation, categorization, and generalization of information?
I do agree with that. But I’m sure that you are aware of the real complexity that lies behind this very general description.
Posted: Jul 4, 2012
[ # 42 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Victor Shulist - Jul 3, 2012: This is called the “AI effect”, and it is a fallacy.
The ‘AI effect’ is indeed a fallacy, but it is also wrongly invoked in discussions, exactly the way you are using it here.
There is a difference between predetermined algorithmic computation and human thought, which is based on handling analogies (and capable of doing so pretty loosely), involves emotional deliberation and influences, and a whole lot more. The whole point is that we (as humans) can handle information without having the correct algorithm for it. We can adapt to such situations by treating them as ‘something’ that ‘sorta’ looks like it and has ‘some sort of’ similarity, even if we cannot exactly pinpoint ‘why’ it seems similar. Now THAT’S thinking.
This has nothing to do with the AI-effect.
As soon as I see a computer doing internal deliberation about its current perception of the situation at hand, I will say it is indeed thinking.
Posted: Jul 4, 2012
[ # 43 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Merlin - Jul 3, 2012:
USER: What is the number between twenty one and twenty three?
AI: Twenty two.
This is an actual input/output from Skynet-AI. The numbers 21, 23, and 22 do not exist anywhere in Skynet-AI’s programming or database. Neither do the words “twenty one”, “twenty two” or “twenty three”. The AI writes its own code to understand the natural language input, solve the problem, and produce natural language output.
So let me ask you, in this example, does Skynet-AI “think”?
This would indeed fall within a narrow definition of thinking. However, I also think that to be able to ‘think’ about something, you need to be able to ‘comprehend’ what you are thinking about (or at least experience a lack of comprehension). This is also where I believe that purely grammar-based systems will never actually ‘think’, because in the end it is just grammar, not ‘understanding’.
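For what it’s worth, the computation Merlin describes can be sketched in a few lines. This is emphatically not Skynet-AI’s actual code (which isn’t shown in this thread), just one obvious way to parse the number words, find the value in between, and say it back, so we have something concrete to argue over:

```python
# Hypothetical sketch: map number words to values, compute the value
# in between, and render it back to words (handles 0-99 only).

UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen",
         "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
         "nineteen"]
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def words_to_int(words):
    """Parse e.g. 'twenty one' -> 21."""
    total = 0
    for w in words.lower().split():
        total += TENS[w] if w in TENS else UNITS.index(w)
    return total

def int_to_words(n):
    """Render e.g. 22 -> 'twenty two'."""
    if n < 20:
        return UNITS[n]
    tens_word = {v: k for k, v in TENS.items()}[n - n % 10]
    return tens_word if n % 10 == 0 else f"{tens_word} {UNITS[n % 10]}"

a = words_to_int("twenty one")
b = words_to_int("twenty three")
print(int_to_words((a + b) // 2).capitalize())   # Twenty two
```

Whether carrying out this sort of procedure, written by hand or generated on the fly as Merlin says Skynet-AI does, amounts to ‘comprehending’ the numbers is exactly the question on the table.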
Posted: Jul 4, 2012
[ # 44 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
In this narrow example I believe Skynet-AI actually does comprehend the numbers. Possibly better than humans do. And if this falls into a definition of thinking, then we have answered the question. Machines can think. Now it is just a question of expanding the knowledge base.
It is mostly a question of semantics:
Birds fly, airplanes fly.
Fish swim, do submarines swim?
People think, does Skynet-AI think?
If it accomplishes the same task, should we give it the same label?
Posted: Jul 4, 2012
[ # 45 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Jul 4, 2012: The whole point is that we (as humans) can handle information without having the correct algorithm for it. We can adapt to such situations by treating them as ‘something’ that ‘sorta’ looks like it and has ‘some sort of’ similarity, even if we cannot exactly pinpoint ‘why’ it seems similar. Now THAT’S thinking.
What you’ve described here sounds exactly like what the Google program does. It had no algorithm for determining what a cat (or anything else, for that matter) looks like. It just found chunks of different pictures that ‘sorta’ looked like each other and built a definition of an object based on that.
And it can generate its own “mental image” of what a cat is: a unique representation not found in any of the pictures it has seen, which is exactly what we do when we think about objects we’ve seen before.
I can envision a later version of the program that can process video as well. If the same algorithm, after watching enough YouTube videos, were to produce its own unique set of images portraying a cat pouncing, would that be more persuasive than a single static image? Would that seem more like thought?
And if so, how is a series of images fundamentally different from a single image? (Beyond being closer to our own way of interacting with the physical world, with continuous visual input rather than isolated frames.)
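One toy way to picture that “mental image”: average the inputs that most strongly excite one learned feature, which yields a composite that matches no single picture the system has seen. (The system in the article reportedly visualized its cat feature differently, by optimizing an input to excite the neuron; the averaging below is just the simplest stand-in, and the names here are made up for the sketch.)

```python
import numpy as np

def prototype(images, feature, top_n=20):
    """images: (N, D) array of flattened pictures; feature: (D,) array
    of learned feature weights. Returns a composite 'prototype': the
    average of the pictures that respond most strongly to the feature."""
    activations = images @ feature            # response of each picture
    best = np.argsort(activations)[-top_n:]   # strongest responders
    return images[best].mean(axis=0)          # equal to no single input
```

Whether that composite counts as the program ‘thinking about’ a cat after recognizing one is, of course, the same definitional question as before.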