Experienced member
Total posts: 62
Joined: Jun 4, 2011
I’m still searching for terms, definitions and their relationships.
A WIKI view:
Intelligence is a property of the mind that encompasses many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.
That’s nice, but are the following part of intelligence? If not, what are they? :-S
Motivation is the reason for engaging in a particular behavior. Reasons include needs such as food or a desired object, hobbies, a goal, a state of being, an ideal, altruism, or morality. Motivation controls the initiation, direction, intensity, and persistence of human behavior.
Sensing is the ability to receive and interpret signals from receptors throughout the body.
Motion control is the ability to generate commands to control voluntary muscles.
Metacognition is the ability to think about one’s thoughts and modify one’s behavior. It is the essence of self-awareness and consciousness and a key component in the development of a Theory of Mind. It is the basis of free will.
Posted: Jul 28, 2011 [ # 1 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
I don’t think that any of those are truly a part of intelligence, and here’s why, from my personal perspective (so take it as you will):
Motivation: In most aspects of my life, I’m not very motivated at all. If “Type A” personalities are the “go-getters” and are assertive and confident, and “Type B” personalities are laid back, I would almost have to be classified as a “Type F” personality, since I’m closer to “comatose” than laid back. There are areas of my life where this is not the case, but not many; yet I like to consider myself to be intelligent, and most of the people who know me on a personal level say pretty much the same thing. I may be intelligent, but I’m lazy.
Sensing: I feel that this doesn’t apply because there are a lot of examples of intelligent people who either were born without, or somehow lost, one or more senses. A good case in point is Helen Keller. Considering how she overcame both blindness and deafness, and the achievements she went on to make in her life, you can’t deny that she was an intelligent individual.
Motion: This goes along with Sensing, above. Another case in point is Dr. Stephen Hawking. Now you may argue that Dr. Hawking was able to move, and quite well, prior to developing ALS, but I counter with the question, “Did he suddenly lose his intelligence after his affliction?” Of course, the answer is no, so it stands to reason that motion has no bearing on intelligence.
Metacognition: Now here you may have a valid claim for being a part of intelligence, though I think I have to disagree, personally. While growing up, I was constantly being berated for “having lots of smarts, but no common sense”. I always came up with creative, “off the wall” solutions to problems, but almost never saw the “easy way” to solve them. Granted, a great many of my ideas worked, but I usually spent twice the effort, and more often than not over twice the time to effect what should have been a simple task. Yet, when tested in 6th grade, my IQ was supposedly in the low 140’s, and I still test in the mid 130’s now.
As I said earlier, take this as you will, since this is only my opinion on the matter. But I’m interested in what others feel about this.
Posted: Aug 1, 2011 [ # 2 ]
Member
Total posts: 7
Joined: Jul 11, 2011
With the exception of metacognition, the terms you list are tools of intelligence, not subsets. Motivation spurs action. Sensing is input (receive the results of interaction). Motion is output (interaction).
Metacognition is the reflective part of intelligence. I’d prefer to leave terms like ‘free will’ out of the definition. Thanks to sci-fi, free will is one of those weepy, nostalgic terms that people like to use to separate themselves from thinking machines. Will is the desire to do something. Free will is the ability to act on a desire. Desire is motivation. I don’t think it is any more complex than that.
(Pardon the tangent.) I really dislike the term because free will is always served up as a counterpoint to slavery or fascism; the implication is that anything that restricts free will is inherently evil. I completely disagree. Example: if you want to kill someone, you can, but there are social implications of doing so (reprisal, prison, etc.), so society negatively affects free will, but to positive community effect.
Theory of mind is (in a nutshell) self-awareness and empathy. Empathy is the understanding of feelings in others - a community-building tool. Empathy is fascinating to me because when it is applied to non-humans, it leads to the development of “human” things like music, poetry, art and religion… all of the things that common knowledge says are beyond the reach of AI.
Posted: Aug 22, 2011 [ # 3 ]
Member
Total posts: 1
Joined: Aug 14, 2011
The first big problem is that an AI (algorithm) does not relate to the outside world. There is no need for survival and no interaction with other AIs, thus no evolution.
There must be a possibility to reproduce.
Second, there should be a possibility to make mistakes and learn from them.
The world is dynamic; the AI is static.
Posted: Sep 27, 2012 [ # 4 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
Zero - Aug 22, 2011:
The first big problem is that an AI (algorithm) does not relate to the outside world.
...
Second, there should be a possibility to make mistakes and learn from them.
Full agreement here!
I hate to keep promoting my own definition of “intelligence”, since I’ve found people tend to react negatively to being told how they must think of a term. But I did put a lot of research into my definition, research that spanned many psychology books and many AI books, and I think this topic is extremely important. So here’s my definition of (or, if you prefer, “necessary and sufficient conditions for”) “intelligence”:
“Intelligence” is efficient, adaptive, goal-directed processing of real-world data and its abstractions.
Note that two of the things you mentioned—“relate to the outside world” and “learn”—are included in this definition, termed here as “real-world data” and “adaptive”. That’s no accident, because I gleaned what I believed to be the essential components out of all the definitions I found in books, and those two characteristics were very commonly mentioned and were clearly critically important.
I could discuss my definition at length, since I wrote an article about it in 1990, though it was rejected by the one journal to which I submitted it. The reviewers gave the same negative feedback I’ve come to expect ever since: basically, that nobody is interested, and that nobody likes to be told what a term means. Ironically, only about one month after that rejection, AI Expert magazine came out with an article on exactly that topic of defining “intelligence”!
I believe defining intelligence is a critically important question because it is difficult to produce something if you don’t even know what it is that you’re trying to produce. One might even go so far as to conclude that the lack of a definition is exactly why the field of AI has languished for so many decades. Not surprisingly, this question keeps coming up in nearly every forum, magazine, and book on AI. With only one slight modification this year (I added the last three words), I’ve used this definition to good effect ever since I created it in 1990.
Here are some collections of definitions of “intelligence” that I haven’t completely read, because I’m confident the same important concepts mentioned repeatedly in these definitions are those that I already included in my definition, though maybe with different wording, such as “learn” versus “adapt” versus “acquire knowledge” versus “storing information”...
http://www.vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
http://en.wikipedia.org/wiki/Intelligence#Definitions
My definition is compressed as much as I could make it, so every word there is carefully selected. For example, the attribute of “efficient” can be broken into time efficiency (i.e., speed), space efficiency, energy efficiency, fabrication efficiency, and others. Here’s one quote from Kurzweil that corroborates the need for time efficiency…
Is the length of time required to solve a problem or create an intelligent design relevant to an evaluation of intelligence? The authors of our human intelligence-quotient tests seem to think so, which is why most IQ tests are timed. We regard solving a problem in a few seconds better than solving it in a few hours or years. Periodically, the timed aspect of IQ tests gives rise to controversy, but it shouldn’t. The speed of an intelligent process is a valid aspect of its evaluation.
(“The Age of Spiritual Machines: When Computers Exceed Human Intelligence”, Ray Kurzweil, 1999)
The word “processing” is also carefully selected in my definition to avoid undefined terms such as “thinking” or entity-specific terms like “computing” or “growing” or “evolving”. The definition is intended to be a constructive definition that aids in designing and evaluating architectures, rather than a recursive or indirect definition. The word “adaptive” is likewise carefully chosen to generalize to all types of learning, whether rote learning, habituation, or some other type…
http://en.wikipedia.org/wiki/Learning
The term “real-world data” was also carefully chosen to generalize to all sensory modalities, and means data as in the “D” of the DIKW spectrum…
http://en.wikipedia.org/wiki/DIKW
...and the term “and its abstractions” refers to the other components (IKW) of the DIKW spectrum. The need for a system to be able to interpret the real world, as opposed to a virtual world, should be clear. That’s why chess, math, databases, and similar computational chores were never particularly good demonstrations of intelligence.
The term “goal-directed” is also carefully considered, and applies to any goal, from winning a game to general survival to unrestricted self-improvement and awareness. It’s deliberately general: the more difficult the goal, the more intelligence is required, so the term does not limit the interpretation. It also implies that the domain is very important, in that intelligence might exist within one given domain but not outside of it, which generalizes to the statement that the more domains a system can handle successfully, the more intelligent the system is.
My view is that intelligence is the ability to use optimally limited resources—including time—to achieve such goals.
(“The Age of Spiritual Machines: When Computers Exceed Human Intelligence”, Ray Kurzweil, 1999)
In short, I believe this is a very solid, very general, and very useful definition. If anyone can think of another attribute that should be included, please let me know. I seriously doubt that any of these attributes can be omitted, especially adaptivity, but I’d be happy to discuss that possibility also. Enjoy.
Posted: Sep 27, 2012 [ # 5 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Wow, Mark! Nice “debut” post. Welcome to chatbots.org, and thanks for such an exhaustive and extensive outline for your “definition” of Intelligence. That particular definition has been a bit of a sore subject here, almost from the beginning, and I’m sure that this post will once again energize the conversation, and bring forth some more “lively discussion” on the subject.
You mentioned that the article that you wrote on the subject back in 1990 was rejected by the one journal that you submitted it to, and I’m wondering a couple of things about that. First, I’m curious as to why you didn’t (as your post implies) submit the article to other journals, as there were several at the time that might well have seriously considered publishing it. Secondly, would you be willing to consider posting your article here? I know that we’re not the epitome of academic excellence in the field of Artificial Intelligence, but from what I’ve gathered in my research so far, chatbots.org is a well-respected venue for the exchange of ideas on the subject, and I’m sure that your article would not only be welcome here, but would probably be well received, too. Something to consider, I’m sure.
Posted: Sep 27, 2012 [ # 6 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
“Intelligence” is efficient, adaptive, goal-directed processing of real-world data and its abstractions.
I believe defining intelligence is a critically important question because it is difficult to produce something if you don’t even know what it is that you’re trying to produce. One might even go so far as to conclude that the lack of a definition is exactly why the field of AI has languished for so many decades. Not surprisingly, this question keeps coming up in nearly every forum, magazine, and book on AI. With only one slight modification this year (I added the last three words), I’ve used this definition to good effect ever since I created it in 1990.
Excellent! Now that we have the definition, we can start building our AI entity.
By the way, have you finished yours yet?
Obviously, this definition is not intended to tell developers what mental functions need to be written, but it may serve as criteria for evaluating AI performance.
Thank you for your perspective. Definitions are important.
Posted: Sep 28, 2012 [ # 7 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
Dave Morton - Sep 27, 2012:
Nice “debut” post.
...
First, I’m curious as to why you didn’t (as your post implies) submit the article to other journals, as there were several at the time that might well have seriously considered publishing it. Secondly, would you be willing to consider posting your article here?
Thanks, Dave.
Actually, soon after I posted my above message, I did notice two small technical errors in my wording, plus one possible method of condensing my definition further. The technical errors were: (1) Since “data” is plural (“datum” is the singular form), I should have written “data and their abstractions” instead of “data and its abstractions”. (2) Intelligence isn’t really the processing itself, but the ability to do such processing, so I should have started the definition with wording something like “The ability to perform…”
The way I could have condensed the wording further is by a new term that I invented (in yet another unpublished article!) to describe the tiers of the DIKW hierarchy. If you call each tier in the DIKW hierarchy an “aggelia” (Greek for “message”/“precept”), maybe the plural could then be called “aggeliae”, meaning anything in that hierarchy of data-like things, whereupon the phrase “data and their abstractions” could be condensed to “aggeliae”. That means a better, updated version of this definition would read:
“Intelligence” is the ability to perform efficient, adaptive, goal-directed processing of real-world aggeliae.
“Aggeliae” is the set of all possible tiers in the DIKW Hierarchy.
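For what it’s worth, here’s a tiny sketch of how those tiers might be written down in code; the class and variable names just follow my coined terms, and the numeric values are arbitrary aside from their ordering:

from enum import IntEnum

class Aggelia(IntEnum):
    """One tier of the DIKW hierarchy; a larger value marks a higher
    level of abstraction."""
    DATA = 1
    INFORMATION = 2
    KNOWLEDGE = 3
    WISDOM = 4

# "Aggeliae" would then be the set of all tiers:
aggeliae = set(Aggelia)
print(sorted(a.name for a in aggeliae))
# ['DATA', 'INFORMATION', 'KNOWLEDGE', 'WISDOM']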
The reason I didn’t submit my article to another publisher was that I didn’t know of any other publication that was suitable, and after discussing the issue with some coworkers, we decided the underlying reason it wasn’t published was that I didn’t have a PhD at the time and was trying to publish in a fairly prestigious journal. So then I got a PhD, but I haven’t published anything since I earned that degree, due to ongoing problems of various kinds for years on end. :-(
I greatly appreciate the invitation to post that article here. The main things preventing that now are that it would take days of searching to locate an old paper copy, that by now I would be dissatisfied with my wording for the reasons I mentioned above, and that I went off on an unnecessary tangent at the end of the article which has since been covered by other authors with different, modern terminology. But maybe your comment will motivate me to rewrite the article and resubmit it, here or elsewhere.
Posted: Sep 28, 2012 [ # 8 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
Toborman - Sep 27, 2012:
By the way, have you finished yours, yet.
Obviously, this definition is not intended to tell developers what mental functions need to be written, but it may serve as criteria for evaluating AI performance.
I’m not working on a chatbot, but I am working on my own machine architecture that aims at AGI. I estimate its design is about 70% complete. It’s getting so large that I had to split it into three large pieces, essentially software + hardware + miscellaneous, and I’ll probably have to publish it in small increments with one article covering each hardware component. I’m hoping to finish the first article by the end of this year.
You’re exactly right in that my definition doesn’t say what to build into a system; it’s meant only to guide and evaluate the overall design. If a conventional digital computer algorithm can solve a problem just as fast as some other method (say an expert system, a neural network, or a genetic algorithm), with no clear drawback in size, energy requirements, learning ability, type of data it uses, etc., then by my definition it would be considered just as intelligent as the other method.
Posted: Sep 30, 2012 [ # 9 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hi Mark, welcome!
I agree that a definition of intelligence should be plainly stated anytime someone claims to be working toward “artificial intelligence”. And there’s nothing wrong with defining “intelligence” your own way, as long as you plainly and unambiguously state what you mean (and therefore what you intend to achieve).
In particular, I like that your definition avoids equally ambiguous words such as “understanding” and “thinking”. The term “processing” walks a careful line. It clearly implies data manipulation, but you leave ambiguous whether the “abstractions” of that data are fed to the intelligent agent in question, or whether they are produced by that agent as part of its “processing”.
Let me explain. You state that you included “abstractions” for the sake of tying data to the real world. And it’s true that much of our “thinking” about the world is done via processing our own internal abstractions (generalizations, interpretations, etc.) of that world. However, most of this abstraction isn’t performed by a part of the brain people consider particularly intelligent (in a conventional sense). So the question becomes, is the formation of abstractions part of intelligence?
For example, the eye’s ability to translate incoming light into identifiable objects relies on a subconscious process that we only truly become aware of when it goes wrong. (When shadows at night seem to form an object that isn’t there, or when we do a double take to truly comprehend something we saw at a glance.) I would argue that there is intelligent brain activity happening whenever our visual cortex takes a pattern of light and concludes it represents a particular object. (We aren’t even aware this process happens; we are simply struck with recognition after the fact.) However, this type of intelligence can be found even in insects.
Perhaps this bias in intuition about “what intelligence is” comes from the fact that there is a subset of intelligent processes that we are consciously aware of. (Though even those we are unaware of much of the time!) We are therefore reluctant to grant the same title to processes—governed by the same brain activity—that can’t be consciously tracked.
So I guess the question I have is, in your opinion, what role does the ability to make abstractions play in (1) whether or not a system is intelligent and (2) the degree of that intelligence?
Posted: Oct 1, 2012 [ # 10 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
> So I guess the question I have is, in your opinion, what role does the ability to make abstractions play in (1) whether or not a system is intelligent and (2) the degree of that intelligence?
I’m impressed. No sooner do I modify my definition slightly than you probe it and discover potential shortcomings. :-)
Honestly, I’ve only begun to think about the implications of including abstractions of data in my definition. My tentative feeling is that the higher the level of abstraction a system can handle, the more intelligent it is. By my criteria, processing systems like the retina or insect brains would also be considered intelligent, but in a very limited way and in a very limited domain, and most likely such systems would reach no higher than the knowledge level, maybe only the information level. Of course we would usually consider a “wise” person more intelligent than a “knowledgeable” person, and processing symbols likely involves the knowledge level if a system is to understand meaning rather than merely process information (as in Searle’s Chinese Room thought experiment). Such realizations strongly suggest that intelligence increases with the level of data abstraction. Wisdom is more conducive to survival, after all, and must involve more relationships and/or data than the knowledge level, so if nothing else, wisdom is “bigger” than knowledge.
A similar question I’ve been asking myself lately is how information or knowledge could even exist in the real world in the same way that data does. As far as I can tell, anything in the real world that is more abstract than data must exist only in symbolic form (such as in the text of books), and ultimately any sensor can only read data, nothing more abstract. The words of wisdom we read from books are stored as text, which is ultimately merely data, which implies that any conversion to higher levels of abstraction happens in the brain/processor. I suppose the wording of my definition doesn’t clearly rule out handling of data abstractions after the data is read, but maybe the definition should make that clear.
Do you have any suggestions for improvement to my definition’s wording? (This may be a case where the definition itself begins to take on a geometrical structure, as I discovered in my analysis of happiness. But that’s another story.) Thanks for your very insightful critique.
Posted: Oct 5, 2012 [ # 11 ]
Experienced member
Total posts: 48
Joined: Oct 5, 2012
I’m new here, but I’d like to throw in on this idea of intelligence. Terms like cognition, consciousness, intelligence, and idea are concepts (there is another) that arise from the functioning of an information processing system.
Inherent in the elements that underpin our universe are relationships that we traverse with our IPS (information processing systems), and in so doing we extract conceptual representations of already existing order. The power to read pre-existing order, though, might not suffice to fully define intelligence. The ability to re-order, or to incept order from an unordered state, may define it more completely.
We see much in the way of elemental accretion in the mature universe, and there seem to be at least three families of elements, slaved one atop the other: the most dependent being materials, or particles, which in turn depend on forces. Both of these “types” have components that are ordered with standardized parametric presets. Both are needed for either to manifest, but the forces exist apart from, and pre-necessitate, the materials.
The third family might be order itself. Order pre-necessitates both the forces and the materials. It is difficult to call this family of relational components simply order. Our closest appropriate word is “concept”, and yet concept carries connotations charged with controversial pre-human IPS innuendo.
A question arises, however, as to whether a concept could manifest apart from forces and materials, and it seems safe to say that it can, though in that case there would obviously be a vacuum of relationships from which to read and upon which to subsequently cognate. Here is where the idea of an IPS as an able engine for order that did not previously exist may play a deep role in the definition of intelligence.
I have a sense that without forces or materials, a lone information processor might be able to “start” with only its processing ability and the increments of time. Time seems to be another type of element that may pre-necessitate conceptuality, as might energy. Regarding time, though: our consciousness depends on a three-track temporal continuum wherein we cycle the past and update the present so as to realize context from which to project an inferred trend into the oncoming temporal increment. It seems at least plausible that, with an ability to discriminate and the comparisons that three flowing, incremental time registries would provide, intelligence might be able to fight for a foothold and burgeon from there.
Posted: Oct 10, 2012 [ # 12 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
As far as I can tell, anything in the real world that is more abstract than data must exist only in symbolic form (such as in the text of books), and ultimately any sensor can only read data, nothing more abstract. The words of wisdom we read from books are stored as text, which is ultimately merely data, which implies that any conversion to higher levels of abstraction happens in the brain/processor.
Exactly—all abstractions of the physical world must happen in our heads. In a sense, words that we hear or read are abstractions of physical phenomena. But just as the writer must make abstractions to choose words that designate a particular physical event, so must the reader’s mind use abstraction to transform those words into a mental reconstruction of the event they represent.
I suppose the wording of my definition doesn’t clearly rule out handling of data abstractions after the data is read, but maybe the definition should make that clear.
I would argue that the only thing that can be truly input into our minds is data (via our senses), and everything after that is internal abstraction. Seeing black and white in front of you is the data level. Grouping those contrasting blobs into objects like letters and words is one layer of abstraction. Mapping those words and word strings onto a mental construction of physical events is another.
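To caricature that layering in code (a toy sketch: every function and the sample string here are made up, not a model of real perception):

# Toy caricature of successive abstraction layers; each function is a
# made-up stand-in consuming the output of the layer below it.

raw = "THE CAT SAT"  # stands in for raw sensory data (the "data level")

def to_letters(data: str) -> list[str]:
    """First abstraction: group contrast blobs into letter symbols."""
    return [ch for ch in data if not ch.isspace()]

def to_words(data: str) -> list[str]:
    """Next abstraction: group letters into words."""
    return data.split()

def to_event(words: list[str]) -> dict:
    """Next abstraction: map word strings onto a crude reconstruction
    of the physical event they describe."""
    return {"agent": words[1].lower(), "action": words[2].lower()}

print(to_letters(raw))           # ['T', 'H', 'E', 'C', 'A', 'T', 'S', 'A', 'T']
print(to_words(raw))             # ['THE', 'CAT', 'SAT']
print(to_event(to_words(raw)))   # {'agent': 'cat', 'action': 'sat'}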
Only when those initial layers of abstraction are taken for granted can we say that the input (someone else’s text, in this case) was an abstraction in the first place. Really, the only difference between data formed by another person and data from a physical event is what types of patterns (abstractions) we assume can be made from the data at all.
If the data input comes from a person, we can safely take for granted that there is some pattern to be found. A pattern, moreover, that is natural for our brains to comprehend, since it was produced by a similar type of intelligence. Natural phenomena, too, are consistent enough that we can derive patterns from what we experience. Even when we cannot, we are still eager to assume those experiences contain some human-like intentionality (...God? Are you there?)
I guess in the end, what I’m getting at is that a definition of intelligence should make clear that the “abstraction” of data happens within the intelligent entity itself. Whether or not those abstractions are actually consistent with the data is equivalent to asking whether or not a given theory is consistent with reality. An intelligent entity may make incorrect abstractions—people do all the time. But an entity’s ability to form, modify, and discard those abstractions to accomplish its goal truly marks its level of intelligence, in my opinion.
Posted: Oct 14, 2012 [ # 13 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
You really have me thinking a lot about all this, CR, more than I have in years.
A pretty good analogy hit me a couple of days ago: an intelligent processor is like the vehicle one takes to get to a destination. The faster one gets there (time efficiency), the better the transportation, unless it involves prohibitively large amounts of fuel (energy efficiency) or an excessively large/costly vehicle (space efficiency). The more types of terrain the vehicle can handle (adaptivity), the more assured one can be of getting there, whether hindered by bodies of water, steep grades, or whatever.
The destination (goal) itself can be very specific, like a particular address; more general, like anywhere within a certain county; or completely general, like anywhere in any direction. (I don’t believe intelligence can be specified except relative to a given goal/destination, just as one must specify the variable with respect to which a mathematical integration is performed, and just as one must specify the line of reflection when flipping a 2D geometrical figure. That’s why many formal definitions include the phrase “with respect to…” after the term being defined, especially in math.) Part of the goal may also involve the trip itself, like not destroying the environment as one travels along, say in a bulldozer that would otherwise be fairly adaptable to terrain, or like not smashing an egg when placing it into an egg carton for sale.
Also, the rougher or more natural the terrain (the real world), in contrast to artificial environments (like Microworlds and laboratories, with level, paved roads on which small wheels can be effective), the more difficult it is to reach the destination, since the path and obstacles will be largely unpredictable. Some destinations are farther than others, just as some problems inherently require more steps to solve than others.
This analogy seems to make intelligence measurable in that all its components are measurable with continuous variables, especially when merging the laboratory world with the real world into a general spectrum of “navigability”. This suggests that vehicle desirability could be characterized as something like…
Vehicle desirability with respect to a given destination and given vehicle is proportional to the product of efficiency of time, efficiency of vehicle size, efficiency of fuel, degree of adaptability to terrain, variety of the destinations reachable, and difficulty of terrain that vehicle can handle, when traveling to that destination with that vehicle.
...and intelligence impressiveness could be characterized as something like…
Processor desirability (= intelligence) with respect to a given goal and given processor is proportional to the product of efficiency of time, efficiency of space, efficiency of energy, degree of adaptability to unexpected problems, breadth of the goal, and complexity of the input data that the processor can handle, when attempting to reach that goal state with that processor.
Then we would just need to come up with metrics for these components. For example, for complexity of the input data, estimates might be something like:
stereoscopic view of a complicated landscape: 0.99
photograph of a complicated landscape: 0.80
Captcha image: 0.70
blueprint of a room: 0.60
chess board: 0.40
bit string: 0.01
Such metrics would necessarily be as crude as the “utility functions” of goals, but at least there exists a definition in terms of mathematical operations and numerical values.
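To make the arithmetic concrete, here’s a minimal sketch of that product; every numeric score below is a hypothetical placeholder on a 0-to-1 scale, not a measured metric:

def processor_desirability(time_eff, space_eff, energy_eff,
                           adaptability, goal_breadth, input_complexity):
    """Product of the six components from the working definition above."""
    return (time_eff * space_eff * energy_eff *
            adaptability * goal_breadth * input_complexity)

# Hypothetical example: a chess engine (fast and lean, but with a narrow
# goal; input_complexity uses the 0.40 chess-board estimate above).
chess_engine = processor_desirability(
    time_eff=0.95, space_eff=0.90, energy_eff=0.85,
    adaptability=0.10, goal_breadth=0.05, input_complexity=0.40)

# Hypothetical example: a human (slower and costlier, but adaptive and
# general, with rich stereoscopic input).
human = processor_desirability(
    time_eff=0.30, space_eff=0.50, energy_eff=0.60,
    adaptability=0.95, goal_breadth=0.99, input_complexity=0.99)

print(f"chess engine: {chess_engine:.4f}  human: {human:.4f}")

On numbers like these, the product rewards breadth and adaptivity far more than raw speed, which matches the intent of the analogy.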
No Intelligence without Goals. Most, if not all known facets of intelligence can be formulated as goal driven or, more generally, as maximizing some utility function. It is, therefore, sufficient to study goal driven AI. E.g. the (biological) goal of animals and humans is to survive and spread. The goal of AI systems should be to be useful to humans. The problem is that, except for special cases, we know neither the utility function, nor the environment in which the system will operate, in advance.
http://www.hutter1.net/ai/uaibook.htm
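In the spirit of that quote, here’s a bare-bones goal-driven loop; both the utility function and the environment dynamics are invented placeholders, since (as Hutter notes) we generally know neither in advance:

def utility(state: int) -> int:
    """Placeholder utility function: the goal is to reach state 42."""
    return -abs(state - 42)

def step(state: int, action: int) -> int:
    """Placeholder environment dynamics."""
    return state + action

state = 0
for _ in range(100):
    # Greedy, goal-driven choice: one-step lookahead over a tiny action set.
    action = max((-1, 0, 1), key=lambda a: utility(step(state, a)))
    state = step(state, action)

print(state)  # 42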
Posted: Oct 14, 2012 [ # 14 ]
Senior member
Total posts: 147
Joined: Oct 30, 2010
“No Intelligence without Goals.”
Who sets the goals? I think we each get to. No matter what the environment, or what others tell us, each can choose an individualized goal. In the vehicle analogy, I can choose to park the car and go for a hike. Some may think “the (biological) goal of animals and humans” is to “survive and spread”; but Turing killed himself, Newton on his deathbed was proudest of remaining a virgin, etc.
For an AI, first I think we define goals for it; and it can provide us with useful tools to help us reach our self-defined goals. When the AI starts to determine its own goals, then it’s becoming much more intelligent…
Posted: Oct 14, 2012 [ # 15 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
Robert Mitchell - Oct 14, 2012:
Who sets the goals? I think we each get to. No matter what the environment, or what others tell us, each can choose an individualized goal.
Exactly. As humans with a sophisticated neocortex, we can begin to override the genetically provided default goals to which animals are largely limited. Our neocortex gives us freedom in that regard, so we can at least help to steer our own destinies, as some of the great scientists did in order to advance the species in general. Just as a saw is a tool that we can use to saw whatever we want, AI would be a tool that we could use to compute whatever we want.
Robert Mitchell - Oct 14, 2012:
When the AI starts to determine its own goals, then it’s becoming much more intelligent…
I’ll assume you mean grandiose goals like survival, versus smaller subgoals created during the solution of a clear-cut computational problem.
That opens up a whole new topic: volition. Would a machine have any inherent goals regarding survival if we didn’t program such goals into it in some way, even indirectly, such as via artificial pain and artificial pleasure? As open-minded as I am, I don’t believe so. That’s where our genetic cage becomes most apparent: there comes a point where we must eventually cater to the avoidance of pain, which a machine would never need to do. That’s the irony of a postbiological world: what started as lifeless minerals will come full circle back to lifeless minerals, with the main difference being that those minerals will then be assembled into a thinking but biologically lifeless organism. That realization also forces us to face a possibly unpleasant thought: that life of any kind never had any meaning other than that it happened to exist, because its genetic cage forces it to survive. An intelligent enough machine would realize that fact out of sheer logic, and likely wouldn’t see any point in continuing the survival game for itself. That suggests that being biologically alive and human is something that can’t ever be overcome from a survival perspective, which may or may not be an appealing fate to contemplate.
A postbiological world dominated by self-improving, thinking machines would be as different from our own world of living things as this world is different from the lifeless chemistry that preceded it.
(“Mind Children: The Future of Robot and Human Intelligence”, Hans Moravec, 1988)