Senior member
Total posts: 473
Joined: Aug 28, 2010
Forgive me if this topic has already been raised somewhere else in the forum but I was unable to find any references to it, so here goes…
There is obviously a wide variety of approaches to the development of conversational software being tried by the members of this forum. I’m curious to know how everyone would categorise their efforts in the broad terms defined by the long-running “Neats versus Scruffies” debate. You could say this is the artificial intelligence equivalent of the “Nature versus Nurture” debate that has raged in biology for the past century.
http://en.wikipedia.org/wiki/Neats_vs._scruffies
The essence of it is that Neats believe that artificial intelligence can be achieved through the application of rigorous proofs and algorithms, whereas Scruffies believe that it can only be achieved by harnessing some kind of chaotic natural process that is too complex to be understood.
I’ll start the ball rolling by putting it out there: I’m a Neat and proud of it!
As for the rest of you, you know who you are, so why not tell the forum?
Posted: Jul 24, 2011
[ # 1 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Not to throw a spanner into the works before the discussion is fairly started, but why can’t we have both? After all, the entire universe is a finely tuned balance between order and chaos. Personally, I don’t think we’ll see the “big picture” with either discipline alone. The “Neats” will come the closest, in the shortest time, but neither can “cross the finish line” without the other.
Posted: Jul 24, 2011
[ # 2 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Dave, that may be true (and the referenced article even says as much), but what I’m curious about here is the individual approach used by each of the members of this forum. Neats tend to borrow from Scruffies and vice versa in order to actually get results, but each individual person tends to be just one or the other.
Posted: Jul 24, 2011
[ # 3 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Well, then I’ll have to toss in my $0.02 worth.
Upon just reading your first post, the term “Scruffy” jumped out at me. It seemed a more comfortable fit for my personality, and looking at the room in which I write this post, it most certainly fits my lifestyle (at least, in a general sort of way). Then I read the Wiki page, which, by the way, I found engaging and entertaining. After reading the explanation, history, and examples of the terms, I find that the term “Scruffy” still fits. I’m also mindful of several posts that I’ve either made or replied to which seem to bear this out. I firmly believe that logic and order have a definite place, not only in AI but in life in general. Logic all by itself, though, without that chaotic “spark” of creativity, is a sterile, barren thing, with no joy and no love, and what will that bring? The T900 series. Personally, I prefer Bicentennial Man. Now THERE’S an AI entity to look up to, and to emulate!
Posted: Jul 24, 2011
[ # 4 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
As a cognitive scientist, my focus is on modeling the functions of the human mind. For me, AI is an incidental method I use to test the model. Because of this, my approach is definitely scruffy: sometimes I prototype with procedures, other times I use rule-based propositional logic. Of course, when I integrate these functions, I am sometimes faced with restructuring portions of the code. That is the price I pay for incremental rapid prototyping; the benefit is quick proof-of-concept results.
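To give a rough flavor of the rule-based side, here is a toy forward-chaining sketch in Python. The facts and rule names are invented purely for illustration and are not taken from my actual model:

    # Toy forward chaining over propositional rules: keep firing any rule
    # whose premises all hold until no new conclusions appear.
    rules = [
        ({"sees_object", "object_is_food"}, "wants_object"),
        ({"wants_object", "object_reachable"}, "grasps_object"),
    ]
    facts = {"sees_object", "object_is_food", "object_reachable"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains 'wants_object' and 'grasps_object'

The formal machinery is tidy enough on its own; the scruffiness comes from bolting pieces like this onto procedural prototypes and restructuring as the model grows.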
Posted: Jul 24, 2011
[ # 5 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
If it were up to my room, I think I’d be in the ‘neat’ section (as long as you don’t open the closet doors).
That said, I think it’s a bit of both. I try to start out with chaos: get as much data into my head in as short a time as possible (i.e. for a day or two, read everything I can find), then I just let my subconscious take over, and a couple of days later a result will pop out. Sometimes it’s good; other times the process needs some more fuel.
Don’t know how you’d describe that, maybe ‘ordered chaos’?
Posted: Jul 24, 2011
[ # 6 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Cool topic, Andrew.
At heart, I’m a Scruffy believer. But I think it’s important to point out that Scruffy does not necessarily mean lacking rigor or formalism. Nor does it necessarily imply that the algorithms used are ad hoc. The salient feature of emergent phenomena is that their governing equations, while potentially simple in themselves, do not directly capture the collective behavior that results from their execution. That is, the behavior of the system derives from the way its components interact rather than from the actions of any individual component.
To determine the collective behavior of an emergent system from its governing equations, often the only method available is numerical simulation. (Hey, how convenient for AI!) But often even this is impractical, due to the daunting computing time required to accurately describe large systems as well as the compounding errors introduced by numerical approximations.
But here is the beautiful part of emergence, which makes it so appealing to the Scruffies: if one can characterize the emergent behavior itself, the governing equations aren’t so important. Consider thermodynamics. Developed in a (somewhat) ad hoc manner and rigorously tested, thermo still stands as the ultimate description of how forces influence the energetic properties (heat/work) of bulk materials, providing our definitions of such basic properties as pressure, temperature, entropy, etc.
It wasn’t until later, when the rigorous microscopic methods of atom counting and quantum mechanics gave rise to statistical mechanics, that a microscopic explanation for the laws of thermodynamics was found. But statistical mechanics is wholly unnecessary to apply thermodynamic equations. And skipping the mess of directly simulating the atomic interactions that give rise to the bulk behavior we’re interested in does not make the thermodynamic approach incomplete or lacking in physical meaning.
The same applies to the development of an artificial brain. If the emergent behavior of a brain is captured, the individual modeling of neurons is unnecessary. The real question is: is the brain an emergent system, and if so, what are the fundamental quantities we should express its behavior with?
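To make the “simple local rules, emergent collective behavior” point concrete, here is a toy numerical simulation in Python: a one-dimensional majority-vote cellular automaton. It is purely an illustrative sketch (nothing brain-specific about it). The local rule never mentions “domains”, yet ordered patches emerge from a random start and freeze in:

    import random

    # Each cell copies the majority of itself and its two neighbours.
    # The emergent result (stable ordered domains) is not stated anywhere
    # in this one-line local rule.
    N, steps = 60, 20
    state = [random.choice([0, 1]) for _ in range(N)]

    for _ in range(steps):
        state = [
            1 if state[(i - 1) % N] + state[i] + state[(i + 1) % N] >= 2 else 0
            for i in range(N)
        ]
        print("".join("#" if c else "." for c in state))

Characterizing the domains themselves (their typical size, how quickly they stop changing) is the thermodynamics-style move: once you have that description, you rarely need to go back to the update rule.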
Posted: Jul 24, 2011
[ # 7 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Andrew Smith - Jul 24, 2011: Dave that may be true (and the referenced article even says as much) but what I’m curious about here is the individual approach used by each of the members of this forum. Neats tend to borrow from Scruffies and vice versa in order to actually get results, but each individual person tends to be just one or the other.
Okay, I realize I didn’t address this at all, lol. My project right now is focused on improved English parsing, harnessing statistical methods as well as information from a knowledge base. In principle, one should be able to rigorously show whether or not the specific recipes for calculating confidences will converge on a specific parse solution as you feed it sentences of a particular construction. But in practice this seems tedious and subject to wide variability depending on the size and scope of the knowledge base. I guess this puts me squarely in the Scruffy camp.
The only attempt I’ve ever made at formally testing my confidence methods was a “Scruffy” approach as well: for a while, I kept track of how high the program ranked each correct parse and plotted out how it improved with more and more exposure to similar sentence constructions. I’ve put this aside while I’m overhauling certain aspects of the parser. Perhaps I’ll take it up again once my latest version is put together.
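In outline, the bookkeeping was something like the Python sketch below. The “parser” here is a fake stand-in whose confidence in the correct parse simply grows with exposure, just to show the shape of the measurement; it is not my actual parser or its real confidence recipe:

    # Record where the correct parse lands in the confidence-ranked candidate
    # list after each sentence, then watch that rank improve with exposure.
    def rank_of_correct_parse(scored_candidates, correct):
        ordered = sorted(scored_candidates, key=lambda pc: pc[1], reverse=True)
        return [parse for parse, _ in ordered].index(correct) + 1  # 1 = ranked best

    # Stand-in parser: confidence in the correct parse grows with exposure.
    def fake_parse(sentence, exposure):
        return [("correct_parse", 0.2 + 0.1 * exposure),
                ("wrong_parse_a", 0.5),
                ("wrong_parse_b", 0.4)]

    history = []
    for exposure, sentence in enumerate(["s1", "s2", "s3", "s4", "s5"]):
        candidates = fake_parse(sentence, exposure)
        history.append(rank_of_correct_parse(candidates, "correct_parse"))

    print(history)  # [3, 3, 2, 1, 1]: the kind of trend I was plotting

Plotting that rank (or the fraction of sentences where it reaches 1) against exposure count is the whole test: scruffy, but it tells me whether the confidences are converging.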
Posted: Jul 24, 2011
[ # 8 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
Andrew, thanks for that pointer!
My basic position is that the Turing test is a red herring.
Successful AI will be domain specific, augmentative and enhancing.
In terms of imaginary AGI, agents may exceed human capacities in distinct verticals, but human beings will not be “copied” in this century.
Certainly, IBM Watson’s success can be attributed to the “everything but the kitchen sink” approach.
Posted: Jul 24, 2011
[ # 9 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
My framework is ‘Neat’ but my implementation is ‘Scruffy’. I built the framework so that it could scale and be used in a variety of fashions. The actual AI, though, evolves rapidly through prototyping and feedback rather than specification followed by development.
Posted: Jul 25, 2011
[ # 10 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Once again I find myself learning something new and interesting from some surprising answers to my question, and especially since reading Cassandra’s posts, which turned my notions upside down, I’ve been doing a lot of thinking.
When I think “Neat” I think of mathematical precision, but of course ever since Sir Isaac Newton first demonstrated that events in the physical world could be described and predicted with mathematics, we’ve been attempting to model everything with mathematical tools which are really just approximations of what is “really” happening when studied at some greater level of detail. Viewed that way, it would seem that we Neats are the ones who are just guessing and Scruffies are the ones who are rigorous, making Neats the new Scruffies.
However, the recently published “Grand Unified Theory of Intelligence” suggests that intelligence is itself an approximation, rather like “lossy compression” (or a mathematical formula), which would make intelligence inherently Neat. Go figure.
http://web.mit.edu/newsoffice/2010/ai-unification.html
Posted: Jul 25, 2011
[ # 11 ]
Guru
Total posts: 2372
Joined: Jan 12, 2010
I am also a Scruffy. Logic and provability don’t interest me that much. On the other hand, I am currently working on a rule-based parser using the rules of English grammar that I find, so that is more on the “Neat” side. But personality in a bot is not a Neat behavior.
Posted: Jul 25, 2011
[ # 12 ]
Guru
Total posts: 2372
Joined: Jan 12, 2010
Also, I disagree with Dave: I don’t think Neats will come the closest the fastest. I’m a firm believer in approximation and heuristics.
Posted: Jul 25, 2011
[ # 13 ]
Member
Total posts: 7
Joined: Jun 18, 2009
It seems to me that if we are to design a near-human-like device, we must overcome the ‘structured thinking’ paradox. In design, the human brain works on a quantum level, one that both is and is not restricted to a mathematical, binary level constricted by the precision of mathematical constructs. Therefore I suggest that the best we can do with algorithms is to mimic certain predictable functions of humans with present hardware and software, until of course quantum machines become available.
The way we think is not always structured; as De Bono indicates, there are alternatives to logic or sequenced problem solving, such as asynchronous thought models. My point is that algorithms will always be structured: they all work on the basis of logical steps, a then b then c, d, e, f to reach x, whereas the human brain can skip logical steps; in pattern recognition, for example, our brains can fill in missing information.
Therefore the reference to Newton is about the physical world and the innate properties of laws and order, whereas we are dealing with the human brain, which has the properties of the metaphysical: thought and imagination.
This paradox will therefore confine us, so long as we base our work on current binary-based technology, to always being ‘Neat’, even if we try to make it look Scruffy. Even the use of heuristics, with their pattern-based logic, is still confined to the paradox.
Posted: Jul 25, 2011
[ # 14 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Michael Stoddart - Jul 25, 2011: It seems to me that if we are to design a near-human-like device, we must overcome the ‘structured thinking’ paradox. In design, the human brain works on a quantum level, one that both is and is not restricted to a mathematical, binary level constricted by the precision of mathematical constructs.
Just want to point something out here. I get that your main point is that the brain has non-binary states. For example, neurons are not simply firing or not firing—they can be partially activated, but below the firing threshold.
However, this does NOT mean that “the human brain works on a quantum level”. The human brain is a classical system. At the temperatures at which it operates, all brain activity happens on timescales far longer than that of any quantum decoherence process you could conjure up.
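For what it’s worth, the graded, sub-threshold behavior is easy to capture classically. A crude leaky-accumulator sketch in Python (the numbers are invented and it is not meant as a serious neuron model):

    # Inputs accumulate and leak; the graded potential is continuous the whole
    # time, and only the firing event is binary. Entirely classical.
    potential, threshold, leak = 0.0, 1.0, 0.9

    for t, stimulus in enumerate([0.3, 0.3, 0.0, 0.4, 0.5]):
        potential = potential * leak + stimulus
        fired = potential >= threshold
        if fired:
            potential = 0.0  # reset after the spike
        print(f"t={t} potential={potential:.2f} fired={fired}")

Non-binary states, in other words, don’t require anything beyond ordinary classical dynamics.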
Posted: Jul 25, 2011
[ # 15 ]
Senior member
Total posts: 107
Joined: Sep 23, 2010
C R Hunt - Jul 25, 2011: ... At the temperatures at which it operates, all brain activity happens on timescales far longer than that of any quantum decoherence process you could conjure up.
That’s why in http://www.scn.org/~mentifex/AiMind.html I model brain activity with discrete on-or-off quasi-neurons.