Posted: Mar 18, 2011
[ # 76 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Gary Dubuque - Mar 18, 2011: To put language in perspective with computer programs: so far I’ve heard of systems that save parse trees, systems that save kernel facts derived from inputs, or systems that change other kinds of data that represent what the text they are given is, but I have yet to find an approach that saves only the user’s utterances as its data (except one I made about 25 years ago). Why is it that what a person says has to be translated into something symbolic (and more meaningful) in the computer before it is used in reasoning and other computations, if language itself is the message (or “the medium is the message,” as the quote goes)? In other words, why do we almost always make other abstractions when we already have “the” abstraction directly given to us?
A sentence communicates one complete thought (abstraction), but it also incorporates many smaller abstractions. The advantage of generating parse trees (or, at least the type of knowledge base that I generate) is that each of these smaller abstractions can be considered separately. For example, the sentence,
“I think Gary’s point is a subtle one.”
contains many important ideas besides the central message. It indicates, for one, that I am capable of thinking. And that you are capable of possessing a point (this in itself is an abstract way of communicating that you have a message or idea). Points can be subtle. I can think about subtle points. Gary can have them. By virtue of other knowledge that I have stored, I can decide that “Gary and I are both people” and “People can think”, and therefore Gary can probably think about subtle points as well. And I can probably have them too. If I wasn’t sure, I could ask if all people are capable of thinking about subtle points.* Or if people who have subtle points are capable of thinking about subtle points.
Okay, enough of this. I think my point has been made. By breaking down a complex abstraction into its constituent parts, more basic ideas can be learned as well. For you or me, these ideas may seem obvious, but for a bot that has no understanding of the objects the sentence is concerned with, every scrap of information helps!
* The answer to this question depresses me a bit.
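To make the idea a bit more concrete, here is a rough sketch of what storing and reasoning over those smaller abstractions might look like. This is purely illustrative, plain Python with made-up relation names, not the actual code behind my knowledge base:

# Rough sketch: pulling smaller "abstractions" (facts) out of one sentence
# and doing a trivial bit of reasoning over them. The relation names and
# the single inference rule are invented for the example.

facts = set()

def learn(subject, relation, obj):
    """Store one small abstraction as a (subject, relation, object) triple."""
    facts.add((subject, relation, obj))

# From "I think Gary's point is a subtle one." we might record, among others:
learn("CR", "capable_of", "thinking")
learn("Gary", "has", "point")
learn("point", "can_be", "subtle")

# Background knowledge already in the store:
learn("CR", "is_a", "person")
learn("Gary", "is_a", "person")
learn("person", "capable_of", "thinking")

def infer_capabilities(entity):
    """If entity is_a X and X is capable_of Y, guess that entity is too."""
    inferred = set()
    for (s, r, o) in facts:
        if s == entity and r == "is_a":
            for (s2, r2, o2) in facts:
                if s2 == o and r2 == "capable_of":
                    inferred.add((entity, "capable_of", o2))
    return inferred

print(infer_capabilities("Gary"))   # {('Gary', 'capable_of', 'thinking')}

A real system obviously has to get the triples out of the parse tree first, and has to cope with exceptions to rules like this one, but the principle is the same: many small, separately usable facts per sentence.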
Posted: Mar 18, 2011
[ # 77 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
In my design, I store the original user input string along with the parse tree, because, yes, sometimes the actual literal input needs to be consulted, or becomes part of the equation.
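Something along these lines (just a toy illustration of the idea, not my actual data structures):

# Toy illustration of keeping the literal input next to its parse tree(s),
# so either form can be consulted later. Field names are invented for the example.
from dataclasses import dataclass, field

@dataclass
class ParseNode:
    label: str                      # e.g. "NP", "VP", or a word
    children: list = field(default_factory=list)

@dataclass
class Utterance:
    raw_text: str                   # the user's exact words, untouched
    parse_trees: list = field(default_factory=list)  # one or more candidate parses

u = Utterance(raw_text="I think Gary's point is a subtle one.")
u.parse_trees.append(ParseNode("S", [ParseNode("NP", [ParseNode("I")]),
                                     ParseNode("VP")]))

# Later, a rule can look at the structure *or* fall back to the literal string:
if "subtle" in u.raw_text:
    print("literal match on the raw input")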
Posted: Mar 18, 2011
[ # 78 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
A sentence communicates one complete thought (abstraction), but it also incorporates many smaller abstractions. The advantage of generating parse trees (or, at least the type of knowledge base that I generate) is that each of these smaller abstractions can be considered separately.
Exactly my idea.
Also, a sentence can contain information not contained in the words themselves, like things that are ‘not said’. This needs to be ‘extracted’ somehow. And symbolic representations are easy to use in binary form, usually easier than a list of chars or words.
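To illustrate the ‘binary form’ point (only a sketch, not how my system actually stores things): once every word is mapped to an integer id, everything downstream works on plain numbers instead of character strings.

# Sketch of why a symbolic (binary) form beats raw character lists for speed:
# intern each word once, then everything downstream compares plain integers.
# The table and ids here are invented for the example.

symbol_table = {}          # word -> integer id
def intern(word):
    return symbol_table.setdefault(word.lower(), len(symbol_table))

sentence = "I think Gary's point is a subtle one"
encoded = [intern(w) for w in sentence.split()]
print(encoded)             # [0, 1, 2, 3, 4, 5, 6, 7]

# Comparing two tokens is now a single integer comparison,
# not a character-by-character string comparison:
print(intern("point") == intern("Point"))   # True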
Posted: Mar 18, 2011
[ # 79 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Jan Bogaerts - Mar 18, 2011: And symbolic representations are easy to use in binary form, usually easier than a list of chars or words.
Exactly, this is why I was able to create CLUES ver 3, which pre-compiles the PTSGL (parse tree semantics generation language), ASCII source code, into binary (simply 32-bit integers). That gave me a speed boost of literally about 500x!
Jan, your design also generates trees?
CR: Yes, CLUES also does this. Basically the ‘top level’, or entire user input, has any number of parse trees, all of which are built on parse trees of substrings of that input, and those in turn on trees of substrings of those substrings. This makes it possible for the system to consider every conceivable semantic interpretation… which is exactly why I need extremely fast execution time, considering the depth of analysis going on.
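For the curious, the flavor of it looks something like this toy, chart-style sketch. The grammar, labels and words are invented purely for illustration; the real thing is far more involved (and is exactly what needs the fast, pre-compiled binary form mentioned above):

# Toy sketch: every substring (span) of the input gets its own set of parses,
# and larger spans are built out of the parses of their sub-spans.
# Grammar and labels are made up for the example.

grammar = {("DET", "NOM"): "NP", ("ADJ", "N"): "NOM"}   # toy binary rules
lexicon = {"the": "DET", "subtle": "ADJ", "point": "N"}

words = "the subtle point".split()
n = len(words)

# chart[(i, j)] holds the labels that can cover words[i:j]
chart = {(i, i + 1): {lexicon[w]} for i, w in enumerate(words)}

for length in range(2, n + 1):                # grow spans bottom-up
    for i in range(0, n - length + 1):
        j = i + length
        labels = set()
        for k in range(i + 1, j):             # every split point reuses sub-span parses
            for left in chart.get((i, k), ()):
                for right in chart.get((k, j), ()):
                    if (left, right) in grammar:
                        labels.add(grammar[(left, right)])
        chart[(i, j)] = labels

print(chart[(0, n)])   # {'NP'} -- the full input, built from parses of its substrings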
Posted: Mar 18, 2011
[ # 80 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
C R Hunt - Mar 18, 2011: If I wasn’t sure, I could ask if all people are capable of thinking about subtle points.* Or if people who have subtle points are capable of thinking about subtle points…
* The answer to this question depresses me a bit.
If it makes you feel any better, CR, I believe that the potential is there, in everyone. It just seems that some individuals don’t measure up to their potential. Ok, after typing that out, I’m now depressed! Never mind.
@Jan: The ability to “read between the lines” is probably going to be one of the most difficult challenges to creating “True AI”/“Strong AI”/“Synthetic Intelligence” (that last is my personal favorite, as it connotes created, rather than imitation intelligence).
Posted: Mar 18, 2011
[ # 81 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
C R Hunt - Mar 18, 2011: By breaking down a complex abstraction into its constituent parts, more basic ideas can be learned as well.
You are very close to my model with this, but the other way around; in my model I start with very basic ideas (or ‘concepts’, yes, there they are again), and then build more complex (and even VERY complex) concepts on top of those.
Our approaches come at the problem from opposite sides, but the resulting models seem (at least to me) pretty close to each other.
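A crude sketch of the direction I mean (concept names invented for the illustration, nothing like the actual model): basic concepts are defined first, and a complex concept is just a composition of the concepts below it.

# Crude sketch: complex concepts are built bottom-up out of simpler ones.
# All concept names here are made up for the illustration.

concepts = {}

def define(name, *parts):
    """A concept is primitive (no parts) or composed of previously defined concepts."""
    concepts[name] = list(parts)

define("surface")                            # very basic concepts first
define("legs")
define("flat")
define("work")
define("table", "surface", "legs", "flat")   # more complex...
define("desk", "table", "work")              # ...and VERY complex, built on top

def expand(name):
    """Unfold a complex concept down to its primitive building blocks."""
    parts = concepts.get(name, [])
    if not parts:
        return {name}
    result = set()
    for part in parts:
        result |= expand(part)
    return result

print(expand("desk"))   # {'surface', 'legs', 'flat', 'work'}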
Posted: Mar 18, 2011
[ # 82 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Jan, your design also generates trees?
Yes, sort of. I’m building a screencast that shows an entire parse. I’ve got 1 screencast ready, 1 in production, and 2 scripts (I’ve been playing Spielberg today).
@Jan: The ability to “read between the lines” is probably going to be one of the most difficult challenges to creating “True AI”/“Strong AI”/“Synthetic Intelligence” (that last is my personal favorite, as it connotes created, rather than imitation intelligence).
No doubt. Stuff for shrinks.
Posted: Mar 30, 2011
[ # 83 ]
Senior member
Total posts: 257
Joined: Jan 2, 2010
Hi,
I’m really late in this discussion.
Let’s say you had a computer program, never mind the details, treat it as a black box. You do not know or have any idea how it works.
One more condition to this question. We don’t know it’s a computer program. In fact, this program has been masquerading, unknown to any of us, as “Jan Bogaerts” here at chatbots.org (sorry Jan…I figured you out buddy!).
Question: would you admit it was understanding, and thinking, and learning?
Absolutely yes!!
Making something appear as being intelligent/conscious does not validate it as being intelligent/conscious.
I’m kinda thinking this whole discussion on intelligence and consciousness is a bit irrelevant. Aren’t all of these philosophical considerations a bit moot… if all along we have believed and accepted that Jan was in fact a real person?
Regards,
Chuck
Posted: Mar 30, 2011
[ # 84 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Chuck Bolin - Mar 30, 2011: I’m kinda thinking this whole discussion on intelligence and consciousness is a bit irrelevant. Aren’t all of these philosophical considerations a bit moot… if all along we have believed and accepted that Jan was in fact a real person?
Regards,
Chuck
Thanks, Chuck, you’ve proven my point: it is results that matter, not the philosophy.
Don’t get me wrong, philosophy has its purpose. You just have to know when to apply it.
Posted: Mar 30, 2011
[ # 85 ]
Senior member
Total posts: 153
Joined: Jan 4, 2010
Chuck, you don’t believe Jan is (not) human? What results do you base this on? Where’s your proof?
Beware the wolf in sheep’s clothing. Nature is full of camouflage. And yes, it makes all the difference in the world if the box is without a soul.
Perhaps reading the book, The Little Prince, can help. At times I feel this is like the king in the little prince’s journey who commands everything. The whole universe follows the king’s decrees. The little prince asks the king if he rules the sun. The king replies he commands it every morning to rise and he warns that a great king knows the best time to make his demands. It wouldn’t do to order the sun to rise in the evening. And thus all the king’s wishes are obeyed.
Posted: Mar 30, 2011
[ # 86 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
One more condition to this question. We don’t know it’s a computer program. In fact, this program has been masquerading, unknown to any of us, as “Jan Bogaerts” here at chatbots.org (sorry Jan…I figured you out buddy!)
That’s a first. Someone once asked me if I was an alien. I said no, but he didn’t believe me. So I guess there’s no point in denying it. Though I’m not gonna send you a piece of me to prove the point.