Posted: Mar 8, 2011
[ # 16 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Hans Peter Willems - Mar 8, 2011:
Again, there is not much use in emulating instinct ‘exactly’ like it works in humans. It’s much more useful to look at the function of instinct (i.e. preprogrammed behavior that can bootstrap other stuff) and emulate that. Ask yourself ‘what does it do in a human brain’ and then think about ‘how can we create that effect in a software program’.
I agree, and like others I am constantly mulling over what foundational concepts we should build on. I still go back and forth on what should be a “core process” and what should be an outside data set. One of the problems is that I don’t know what I “don’t know” about how the human brain works.
Posted: Mar 8, 2011
[ # 17 ]
Senior member
Total posts: 147
Joined: Oct 30, 2010
Hans Peter Willems - Mar 8, 2011:
this is not ‘code’ that needs to be rewritten and tested, it is data that is being evaluated and either validated or discarded, or something in between like ‘not sure yet’.
Code is data.
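A toy illustration in Python, just to make the point concrete (the rule here is invented for the example):

    # A rule stored as text is plain data: it can be saved, edited, or validated.
    rule = "lambda x: x + 1"
    # The moment we evaluate it, that same data becomes runnable code.
    increment = eval(rule)
    print(increment(41))  # prints 42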
Posted: Mar 8, 2011
[ # 18 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Robert Mitchell - Mar 8, 2011:
Hans Peter Willems - Mar 8, 2011:
this is not ‘code’ that needs to be rewritten and tested, it is data that is being evaluated and either validated or discarded, or something in between like ‘not sure yet’.
Code is data.
Robert, let’s not drown this debate in semantics.
For me (as I made clear before with my OS analogy) the code contains the ‘logic’ needed to process the ‘data’. The data-model does not contain code; it contains concepts (or knowledge, or information, or whatever else you want to call it).
I’ve been developing (business) software for the last 30 years, so from a software developer’s point of view I think I do know the difference between code and data.
Besides that, the idea that ‘code is data’ is one of the pitfalls in AI; it gave birth to the model of software that can rewrite itself, and with it the notion that the ‘data in our brain’ is actually ‘code’ (as in ‘logic constructs’). The LISP programming language was developed from this idea, to be able to handle ‘logic code’ as if it were data. We know now that this model didn’t bring the solution we hoped for. I think it’s important for AI development to keep a clear distinction between the ‘logic plumbing’ of a brain and the ‘data’ that describes knowledge, experience, feelings, and so on.
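To show the distinction I mean, here is a minimal Python sketch (the claims, the evidence scores, and the thresholds are all invented for illustration):

    # The 'logic plumbing': fixed engine code that never rewrites itself.
    def evaluate(item):
        if item["evidence"] >= 8:
            item["status"] = "validated"
        elif item["evidence"] <= 2:
            item["status"] = "discarded"
        else:
            item["status"] = "not sure yet"

    # The data-model: concepts being evaluated, never executed as code.
    knowledge = [
        {"claim": "fire is hot",  "evidence": 9, "status": None},
        {"claim": "cats can fly", "evidence": 1, "status": None},
    ]

    for item in knowledge:
        evaluate(item)
        print(item["claim"], "->", item["status"])

The engine stays the same no matter what knowledge is loaded; only the data changes as it is validated, discarded, or left undecided.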
Posted: Mar 8, 2011
[ # 19 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Merlin - Mar 8, 2011:
One of the problems is that I don’t know what I “don’t know” about how the human brain works.
Fair enough. So let’s start with what we do know, and fill in the blanks as we go.
Posted: Mar 8, 2011
[ # 20 ]
Senior member
Total posts: 147
Joined: Oct 30, 2010
Hans Peter Willems - Mar 8, 2011:
The LISP programming language was developed from this idea, to be able to handle ‘logic code’ as if it were data. We know now that this model didn’t bring the solution we hoped for.
It’s part of the solution. For example, I’ve been working recently on Daniel Bobrow’s 1964 program STUDENT, which does word problems. It’s not perfect, but it’s a start. What it’s missing, I think, is the ability to learn: to write new code by itself, or at least to accept code at runtime from the user to teach it. A lot of classic AI programs may be written in languages that allow code to be treated as data, but they don’t use that capability. (Does Eliza? Does SHRDLU? Does STUDENT?)
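A hypothetical sketch of that runtime-teaching idea in Python (this is not how STUDENT actually works; the names are invented):

    # Skills taught at runtime: the user supplies source text (data),
    # and the program turns it into callable code.
    skills = {}

    def teach(name, source):
        skills[name] = eval(source)  # data becomes code

    teach("area", "lambda w, h: w * h")  # taught by the user, not hard-wired
    print(skills["area"](3, 4))          # prints 12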
Somewhere in the 1980s AI lost its focus on natural language, and that’s what I blame for our not having met Turing’s 50-year deadline for true AI. Watson is good for bringing us back to that problem.
Posted: Mar 8, 2011
[ # 21 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Robert, I do agree that learning ability is very important. But what is even more important is the ability to reflect on knowledge that has already been learned: to re-assess assumptions made earlier and change perception accordingly.
As for NLP, I still fail to see its importance for AI consciousness or even intelligence; we know for a fact that people who do not possess language traits (for various reasons) are still conscious by every definition of the concept. To me, language is patterns: representations assigned to concepts. Nothing more, just that. Humanity hasn’t even settled on one uniform vocabulary, and still we can translate from one language to another without losing (much of) the meaning of the message. Yet when we translate into some computer representation, we suddenly insist that it is no longer understanding, and when we translate back to natural language, the message somehow magically regains its ability to feed ‘understanding’.
I’m convinced that in the future, conscious AI minds will communicate with each other in some digital language that we humans are completely unable to understand, but that won’t make us humans, in turn, less conscious than we are now.
Posted: Mar 8, 2011
[ # 22 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Hans Peter Willems - Mar 8, 2011:
Merlin - Mar 8, 2011:
One of the problems is that I don’t know what I “don’t know” about how the human brain works.
Fair enough. So let’s start with what we do know, and fill in the blanks as we go.
Sounds good to me; it is one of the reasons I have focused on math as the next step for Skynet-AI.
Robert Mitchell - Mar 8, 2011:
Hans Peter Willems - Mar 8, 2011:
The LISP programming language was developed from this idea, to be able to handle ‘logic code’ as if it were data. We know now that this model didn’t bring the solution we hoped for.
It’s part of the solution. For example, I’ve been working recently on Daniel Bobrow’s 1964 program STUDENT, which does word problems. It’s not perfect, but it’s a start. What it’s missing, I think, is the ability to learn: to write new code by itself, or at least to accept code at runtime from the user to teach it. A lot of classic AI programs may be written in languages that allow code to be treated as data, but they don’t use that capability. (Does Eliza? Does SHRDLU? Does STUDENT?)
Somewhere in the 1980s AI lost its focus on natural language, and that’s what I blame for our not having met Turing’s 50-year deadline for true AI. Watson is good for bringing us back to that problem.
Robert, like you, I found STUDENT fascinating.
(I posted a couple of references earlier in this thread.)
I don’t necessarily envision a user being able to add new “code” to extend a bot’s math capabilities. But the ability to use the foundation functionality to perform new tasks should be very doable. Someday I would like to be able to describe a problem and have the bot work out how to solve it.
Maybe this is part of the gray area in between Hans Peter’s viewpoint and yours. When Skynet-AI does a math problem it converts back and forth between code and data. It is hard to say exactly where the code ends and the data starts. In some segments it writes and then runs temporary code to solve the problem, as sketched below. The core concept is “math”, and the ability to manipulate math objects and concepts is what makes it able to “understand”.
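A hypothetical Python sketch of that write-then-run step (invented names, not the bot’s real pipeline):

    # A parsed word problem becomes temporary code, which runs once
    # to produce the answer and is then thrown away.
    problem = "what is 7 plus 5"

    ops = {"plus": "+", "minus": "-", "times": "*"}
    tokens = [ops.get(word, word) for word in problem.split()]
    expression = " ".join(t for t in tokens if t.isdigit() or t in ops.values())

    print(expression)        # 7 + 5  (the temporary 'code')
    print(eval(expression))  # 12     (the result of running it)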
Posted: Mar 9, 2011
[ # 23 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
When we teach a child the basics of math, we don’t teach logic (i.e. code); we teach concepts (i.e. data). We start with the concept of ‘one’ and then move on to the concept of ‘two’ being ‘two ones together’. Visualizing (as with an abacus) helps a lot to build the conceptual perception. Then we go from there. Only much later can we introduce the concept of ‘logic’ and translate the spatial knowledge and experience of ‘counting’ things into a formulaic representation. And again, a formulaic representation is just that: a representation, and therefore ‘data’. We translate that representation into smaller parts that we can handle in our brain, again on a conceptual level.
This is why computers are so much better at math than humans: they have the logic built in. However, if the human brain ran a similar system to do math, we would be able to calculate as easily as computers do.
From my standpoint, math is something we ‘learn’, and therefore it must be conceptual data and not part of the operating system (i.e. the code). So in developing my conceptual data-model I’m looking at math as one of the things that needs to be described in the model.
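A toy Python sketch of what I mean, with the number concepts as data and only a generic merge operation in the engine (all names are invented for illustration):

    # Number concepts stored as data: 'two' literally is two 'ones' together.
    concepts = {
        "one":   ["one"],
        "two":   ["one", "one"],
        "three": ["one", "one", "one"],
    }

    def add(a, b):
        # Generic engine: merge the units, then look up the resulting concept.
        merged = concepts[a] + concepts[b]
        for name, units in concepts.items():
            if units == merged:
                return name
        return "not yet learned"

    print(add("one", "two"))  # prints 'three'

Note that the engine knows nothing about arithmetic; everything it can ‘calculate’ comes from the concepts it has been taught.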
Posted: Mar 9, 2011
[ # 24 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Hans, I suppose, if you were trying to make an aircraft, you’d also only accept a design that has flapping wings with feathers?
Posted: Mar 9, 2011
[ # 25 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Actually, studies have shown that many animals, as well as human babies, have an ability to count (that is, to add) and to recognize when one group of objects contains more than another. We can even identify when our ability to distinguish numbers visually becomes impaired. (i.e., can you tell the difference between a pile of 17 apples and a pile of 20?) Animal intelligence has been ranked using just this metric. Hans, would you call this behavior learned?
Jan, I think the point is that Hans doesn’t want to build an aircraft. He wants to build a bird. Or at least, something that quacks and walks like a duck. I admire the desire, but I think he is severely underestimating the scope of the undertaking, and most certainly the complexity of the parts that make up a whole, intelligent creature. We are emergent beings, so to speak: the interaction of hundreds of thousands of biological processes makes us what we are, in ways one would not expect from picking apart each piece of the puzzle.
I’m not saying it’s impossible to mimic the innate processes from which our intelligence emerges; I’m saying it is in no way a matter of breaking the problem down into a handful of “core concepts.”
Edited to add: Of course, I’d love to be proven wrong.
Posted: Mar 9, 2011
[ # 26 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
In order to prevent any possible unpleasantness, I’d like to point out that, no matter how improbable we feel Hans Peter’s assertions are, it is still possible for him to be right. I’m hoping, Jan, that your “airplane” comment was meant in a humorous vein, and not intended to cause offense. Personally, I think that we can all become a bit thin-skinned when we’re talking about our respective projects, but maybe that’s just me.
When I was growing up, both my Dad and my Papa (Grandfather) used to talk about the “KISS Principle”, which simply stands for “Keep It Simple, Stupid!”, and I see Hans Peter’s pursuit as being in keeping with the spirit of the KISS Principle. If it turns out that his ideas are too simplistic, then so be it. But if it turns out that his approach is right?
Posted: Mar 9, 2011
[ # 27 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
I had an elaborate reply written to answer Jan’s post but I’ll leave it. It’s a pity that some people see the need to start posting things like that, instead of engaging constructively in the discussion like many others do.
Having said that, this discussion was not started to simply throw some things around; I’m building my project on top of dozens of research papers from recognized scholars in the field. Several researchers see NLP the same way I see it, and the same goes for AI consciousness (I urge anyone interested to read up on David Chalmers). I also point again to ‘On Intelligence’: Jeff Hawkins makes a pretty solid argument for several things I mentioned, like the fact that the brain is actually not the super-powerful processing machine we thought it was (the maximum firing rate of neurons is not that high), and that the perception mechanism (Jeff calls this the HTM: Hierarchical Temporal Memory) is based on gradually simplifying the perceived information down to a basic concept. Hawkins also talks about ‘invariant representations’, which are roughly in the same domain.
So my ideas are anything but simplistic. There’s a lot of research (by me) going into this. I’m fully aware of the magnitude of the task at hand, and I’m convinced it’s going to be hard to prove all this. In the end I might be wrong, but so far I mostly find corroboration for my views in the research papers I’m reading. The only difference is that, as far as I can see for now, nobody has put all these parts together the way I’m doing.
Posted: Mar 9, 2011
[ # 28 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
I’m hoping, Jan, that your “airplane” comment was meant in a humorous vein, and not intended to cause offense.
Yes, of course, whenever I make a comment like that, it’s always intended to be with a wink, even though I sometimes forget to add it.
Personally, I think that we can all become a bit thin-skinned when we’re talking about our respective projects, but maybe that’s just me.
Very true, and mine is already so thin.
What I was trying to say was this:
I get the impression that Hans is rejecting every technique known to us for handling the initial step: the transformation from integers to letters, words, and so on. Pattern matching, regular expressions, and grammars are all thrown out the door. The only thing Hans puts in place, as far as I have been able to understand, is a system of tagging and learning. He proposes to create an initial ‘diagram’ or whatever, with some initial ‘values’. From there, the system is supposed to be able to learn how to transform this input into something useful. OK, that can still be accepted as possible. The reasonable thing to presume next is some kind of self-modifying or generative code which can emerge from these initial states and which is able to parse and interpret the input stream. But then he rejects this as well!? Why?
Posted: Mar 9, 2011
[ # 29 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Jan, you are trying to match my ideas to your view of things. As long as you hold on to how you see things, you won’t be able to grasp what I’m talking about. I’ve read a few dozen research papers over the last few weeks, and by accepting that certain views might have merit I have discovered a wealth of insights into the matter at hand.
I’ll try to make one thing clear to you: I’m not throwing anything out the door; instead, I’m trying to map as many of the researched concepts I come across as possible onto a maximally simplified model of ‘reality’. I’m searching for the one governing model for our conscious perception of that reality.
Having said that, it seems obvious to me that both numbers and words are ‘representations’ that we have mapped onto concepts. We have different languages that describe different representations mapped onto the same concepts. Going from there, it isn’t hard to see that a ‘word’ is again just a pattern: it’s a combination of figures that represent characters, and together they represent a word. Looking at all this together, it is easy to recognize the layers of ‘patterns’ that are stacked on top of each other, with the sole result of representing one concept (e.g. a ‘thing’) as another concept (the word or sentence that represents that ‘thing’). So to make clear to someone what I mean, I can use the word that describes it, but it’s equally valid (or even more so) to just point at the ‘thing’ so you can see it. Both representations (the visual pattern and the word that describes it) are translated inside your brain into the same concept.
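A minimal Python sketch of that mapping (the entries are invented for illustration):

    # Several surface patterns, one underlying concept: the English word,
    # the Dutch word, and a stand-in for the visual pattern all resolve
    # to the same internal representation.
    patterns = {
        "dog":            "concept:dog",  # English word
        "hond":           "concept:dog",  # Dutch word
        "<image-of-dog>": "concept:dog",  # visual pattern, stand-in
    }

    def perceive(pattern):
        return patterns.get(pattern, "concept:unknown")

    print(perceive("hond") == perceive("dog"))  # True: the same concept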
So to me it seems logical (and this is what Jeff’s On Intelligence is about) that the complex representations (i.e. the complex patterns) are dumped by our brain in favor of much simpler representations of the same thing.
Posted: Mar 9, 2011
[ # 30 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Going from there, it isn’t hard to see that a ‘word’ is again just a pattern: it’s a combination of figures that represent characters, and together they represent a word. Looking at all this together, it is easy to recognize the layers of ‘patterns’ that are stacked on top of each other, with the sole result of representing one concept (e.g. a ‘thing’) as another concept (the word or sentence that represents that ‘thing’).
In software design, this concept is generally described with grammar definitions. I have found the ‘Coco/R’ compiler generator and its accompanying paper a very good introduction to the subject. The paper in particular, for which the author earned his doctorate, is a very good read for anyone interested in creating parsers (+ generators).
Edit: I think the paper is of a later date than the one I originally read (the doctoral thesis), but it’s a good read anyhow.
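To make the layering concrete, a toy parser in Python (this has nothing to do with Coco/R’s generated output; it just shows the general shape of matching input against a grammar):

    # Toy grammar:  greeting = "hello" name ;  name = letters only
    # Characters form tokens, and tokens form a phrase matching the grammar.
    def parse_greeting(text):
        tokens = text.split()
        if len(tokens) == 2 and tokens[0] == "hello" and tokens[1].isalpha():
            return ("greeting", tokens[1])
        raise ValueError("input does not match the grammar")

    print(parse_greeting("hello world"))  # ('greeting', 'world')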