Gary Dubuque - Aug 13, 2011:
So I looked up the “real” definition instead of your misrepresentations. BTW, your first bullet is a definition of practicality. Who’s playing semantic games?
Every definition I listed appears in a dictionary under “functionality”. I intentionally skipped the definition “The quality of being functional.” because it didn’t seem to add anything beyond noting that the noun also has an adjective form. But let’s take your first definition,
Gary Dubuque - Aug 13, 2011:
the set of functions or capabilities associated with computer software or hardware or an electronic device
That’ll do. I meant the capabilities associated with computer software. If your program is generating functions, it has that capability. If it is creating knowledge (your definition of intelligence), then it has that capability.
I think this was clear from the original context, but now we’ve exhausted the subject. If it takes this much back-and-forth to define a clear term like “functionality”, why are we even bothering with “intelligence”??
Gary Dubuque - Aug 13, 2011:
And I never said functionality was a synonym of a function, but you changed the use of the word “function” into something else quite often.
Yes, you did. After the post where I chose the word “functionality” (a woeful mistake, apparently), you jumped in with,
Gary Dubuque - Aug 13, 2011:
BTW, intelligence isn’t a function, it is the creation of function. If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.
What?? What the hell does that have to do with the statement,
C R Hunt - Aug 13, 2011:
Whether or not you want to call this “understanding” or “intelligence” is irrelevant I think, so long as we’re clear on what functionality we’re interested in.
??? All I was trying to state was that “intelligence” and “understanding” were the capabilities we’re interested in.
Gary Dubuque - Aug 13, 2011:
C R Hunt - Aug 13, 2011:
Gary, I like the way you distinguish knowledge and intelligence. Intelligence is the ability to recognize, structure, and manipulate pieces of information. You need intelligence to store/generate/use knowledge, so testing knowledge is a simple way to tell if a human is smart. Unfortunately, computers are great at storing knowledge, so one needs to be more careful about how to test for intelligence. I think that’s why the first thing many people do when they talk to a chatbot is try word problems and the like to test the program’s ability to manipulate new information. (“Create knowledge” as Gary said.)
I never said intelligence is that ability.
I added back in the part of my quote that you replaced with an ellipsis (in bold). Deleting that part makes it sound like I’m implying knowledge testing is a sufficient test for AI, which is exactly the opposite of what I said. The sentence before that part is basically a restatement of what you said:
Gary Dubuque - Aug 13, 2011:
To simply answer your wish to distinguish knowledge from intelligence: Intelligence creates knowledge. Knowledge does not make intelligence. Knowledge is an artifact of intelligence which is why we can use it to test for IQ.
So yes. Yes, you did say intelligence is that ability.
Gary Dubuque - Aug 13, 2011:
In my mind that is not even a principle (or part of a principle) of intelligence. One knows how to recognize, structure, and manipulate pieces of information. An intelligent entity gains this knowledge.
So you’re saying recognizing, structuring, and manipulating knowledge…is knowledge? Something a bot must learn as well? Am I understanding you correctly?? If so, I have to disagree. From birth, our brains grow and prune connections based on external input; that is, knowledge causes us to physically change in response so that we can better structure that knowledge. And we never learned how to do it.
For example, an infant must learn how to interpret visual stimuli into a coherent picture of its surroundings. If you blind an infant, its brain will never develop the ability to interpret what the eye sees. Neurologists have done experiments with animals in which they forced them to see the world upside down (via a mirror contraption). The animals still became wired to see the world correctly. Later, once the mirror was removed, the animals stumbled around, seeing the world upside down.
My point is, we don’t have to learn to recognize, structure, or manipulate knowledge. Our brains are hard-wired for it, growing and trimming in response to new knowledge. Why should an AI have to learn to do that?
Gary Dubuque - Aug 13, 2011:
I didn’t talk about a bot, I talked about Victor’s context where he has yet to address the halting issue and why.
Victor has already said he doesn’t understand why there’s a halting issue. I’ve said the same. You have yet to clarify why a bot cannot have some internal conditions that mark a piece of knowledge as understood. Humans don’t have a halting issue, and we certainly engage in trying to understand new input.
Gary Dubuque - Aug 13, 2011:
If it seeks foremost to understand, what stops it from meditating on the nature of all things as triggered by the input data, given it has the vast resources of knowledge for really understanding?
Nobody said the bot would be plunged into deep searches of its knowledge base, trying to plumb the depths of what it can associate with new input, into infinity. Again, humans have no problem attempting to understand something to a reasonable depth without getting lost in “meditating on the nature of all things as triggered by the input data”.
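To make that concrete, here’s a toy sketch in Python. Every name in it is invented for the example (it isn’t anything Victor or anyone else has actually proposed); the point is only that halting conditions are trivial to impose: a depth limit and an association budget.

```python
# Toy sketch: a bot "meditating" on new input, with explicit
# halting conditions. All names here are hypothetical.

def explore(kb, topic, max_depth=2, max_associations=50):
    """Gather associations for a new topic, stopping at a fixed
    depth and a fixed budget: reasonable depth, not infinity."""
    seen = set()
    frontier = [(topic, 0)]
    found = []
    while frontier and len(found) < max_associations:
        current, depth = frontier.pop()
        if current in seen or depth > max_depth:
            continue  # halting conditions: repeats and depth
        seen.add(current)
        for related in kb.get(current, []):
            found.append((current, related))
            frontier.append((related, depth + 1))
    return found

# A tiny knowledge base as adjacency lists.
kb = {
    "birds": ["fly", "feathers"],
    "fly": ["wings"],
    "feathers": ["keratin"],
    "keratin": ["protein"],
}
print(explore(kb, "birds"))  # stops on its own, no infinite meditation
```

Nothing deep about it: the same trick (a budget plus a depth cutoff) works no matter how vast the knowledge base is.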
Gary Dubuque - Aug 13, 2011:
I guess you can say “when it makes sense”.
I agree that we shouldn’t resolve the problem of one vague term (“understanding”) with another (“makes sense”).
Gary Dubuque - Aug 13, 2011:
Oh, I get it now. The understanding is in the knowledge base, not the input.
No, understanding is “in” both. That is, it requires both. What pieces of data support the new claim? What pieces contradict it? Which pieces of data do I trust more? Are there any gaps in a piece of knowledge that this new knowledge fills? Can I perform any new logic operations with known knowledge plus this new knowledge? Does my new logical inference make sense given what I know?
Etc, etc.
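In case it helps, here’s a rough sketch of what I mean, using the simplest representation I could think of. Everything in it (the names, the proposition-plus-True/False encoding, the toy modus ponens rules) is invented for the example; a real bot would be far richer. The point is that each of those questions is a finite check against the knowledge base:

```python
# Rough sketch of the "checklist" above. All names and the toy
# representation (proposition string paired with True/False) are
# hypothetical, chosen only to keep the example self-contained.

def evaluate_claim(claim, kb, trust, rules):
    """claim: (proposition, bool); kb: proposition -> bool;
    trust: proposition -> weight; rules: (premise, conclusion)."""
    prop, value = claim

    # What pieces of data support the new claim? What contradicts it?
    supported = prop in kb and kb[prop] == value
    contradicted = prop in kb and kb[prop] != value

    # Which piece of data do I trust more, the old belief or this one?
    old_trust = trust.get(prop, 0.0)

    # Does the new claim fill a gap in what I already know?
    fills_gap = prop not in kb

    # Can I perform new logic operations with known plus new knowledge?
    # (toy modus ponens over simple premise/conclusion rules)
    inferences = [concl for prem, concl in rules
                  if value and prem == prop]

    # Do the new inferences make sense given what I already know?
    consistent = all(kb.get(c, True) for c in inferences)

    return {"supported": supported, "contradicted": contradicted,
            "old_trust": old_trust, "fills_gap": fills_gap,
            "inferences": inferences, "consistent": consistent}

kb = {"sky is blue": True, "fish can fly": False}
trust = {"sky is blue": 0.9, "fish can fly": 0.7}
rules = [("grass is green", "lawns look green")]
print(evaluate_claim(("grass is green", True), kb, trust, rules))
print(evaluate_claim(("fish can fly", True), kb, trust, rules))
```

Each check terminates because the knowledge base is finite; no halting issue in sight.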