ASTRID: Analysing Symbolic Tags Relates to Intelligent Dynamics
 
 
  [ # 16 ]

Considering the number of times we all hijacked threads in the past (especially Chuck’s thread about Walter), this is a minuscule infraction. raspberry I’ll look into splitting the thread later today, if I can. My Grandma has been quite sick this morning, and I just don’t have the time for more than this short post right now.

 

 
  [ # 17 ]

Dave, only if Victor would like to have those posts in a separate thread should you go ahead. Other than that, I’m fine with such small intrusions as long as they don’t completely derail the original discussion.

Tonight I’ll post some info myself to get the thread back on topic smile

 

 
  [ # 18 ]

@Jan For examples of numerous projects incorporating this work, see http://www.cs.rochester.edu/~james/

“Most recently, we have been focusing on task/workflow learning systems in which the system learns a task model from a dialogue with the user that includes a single demonstration of the task. By combining deep language understanding, reasoning, learning and dialog, we can learn robust task models in a matter of minutes.”

You may recall some months ago I posted links to videos of PLOW in action. Here’s the link again in case you missed it.

http://www.cs.rochester.edu/research/cisd/projects/plow/

 

 
  [ # 19 ]

This clearly shows the sheer power and importance of NLP, especially when combined with other algorithms.  PLOW/TRIPS should have its own thread for sure!

 

 
  [ # 20 ]

OK, so back on (my) topic:

About handling grammar:

As I said before, my model has as little coded logic as possible. Nevertheless, there must be some sort of parser that handles input. The parser in my model does NOT parse input into ‘grammatical constructs’; instead it parses into a ‘concept pool’. This concept pool is a collection of concepts that can be deduced from the input. The model then weighs the concepts in this collection against each other, and against previously found ‘connections’ between these and other concepts (= experience), to perceive the meaning of the input. This in itself is a ‘fuzzy’ approach that can handle misspellings, wrongly constructed sentences, slang, and other languages mixed into a sentence.

Based on this model it IS possible (and actually simple) to introduce ‘proper grammar’ into the system, because that would just be another ‘concept’ that the AI can learn, like it learns every other ‘concept’. It will then be able to recognise ‘correct grammar’ and ‘incorrect grammar’, as those are again concepts that the parser maps input to.
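
To make this a little more concrete, here is a rough Python sketch of how such a concept-pool parse might look. The knowledge base, function names and threshold are purely illustrative; this is not my actual implementation, just the general idea:

```python
# Minimal sketch (hypothetical names) of parsing input into a weighted
# 'concept pool' instead of a grammatical parse tree.
from difflib import SequenceMatcher

# A tiny stand-in knowledge base: concept -> previously learned phrasings.
KNOWLEDGE_BASE = {
    "greeting": ["hello", "hi", "good morning"],
    "weather": ["rain", "sunny", "cold"],
    "bicycle": ["ride", "balance", "wheels"],
}

def similarity(a: str, b: str) -> float:
    """Fuzzy string match, so misspellings and slang still score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_concept_pool(text: str, threshold: float = 0.6) -> dict:
    """Collect every concept the input can be (fuzzily) deduced to touch,
    weighted by how strongly its known phrasings match the input words."""
    pool = {}
    for word in text.split():
        for concept, phrasings in KNOWLEDGE_BASE.items():
            score = max(similarity(word, p) for p in phrasings)
            if score >= threshold:
                pool[concept] = max(pool.get(concept, 0.0), score)
    return pool

print(build_concept_pool("helo, nice wether today"))
# -> {'greeting': ..., 'weather': ...} despite the misspellings
```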

 

 
  [ # 21 ]

Hans, can you provide a simple example of how a given input is parsed into this “concept pool”? Try as I might, I’m having a bit of difficulty in visualizing the process. With grammar parsing, I can see the relationships within a given input, as it relates to how a computer would analyze and “understand” it. But for me, a concept is too nebulous to define within the context of a computer, and thus my struggle. I’m not doubting you, mind. I’m just failing to grasp what’s going on, is all. smile

[off topic]Happy Birthday to Charles Edward O’Neal II, wherever you are![/off topic]

 

 
  [ # 22 ]
Dave Morton - Feb 15, 2011:

Try as I might, I’m having a bit of difficulty in visualizing the process.

That is understandable, because we are talking about ‘catching’ the things that go on beyond ‘just grammar’. This is what is missing from the NLP focus: having the model be able to store ‘stuff’ that goes beyond ‘naming things’.

Dave Morton - Feb 15, 2011:

Hans, can you provide a simple example of how a given input is parsed into this “concept pool”?


When something is put into the system, let’s say for the sake of argument as text (language), two processes are involved. First there is comprehension (understanding) and, in most cases, learning (building experience). Second, if required/requested, there is formulation of the reply based on the ‘experience’ that is built up in step one.

Now back to my parser: the sentence is broken up into parts that match ‘concepts’ in the knowledge base. These ‘concepts’ are words, sentence parts or even full sentences that describe a concept. This is where ‘symbolic tags’ come in; these tags add ‘context’ and ‘experience’ to a concept. A tag can be a link to another concept (or many), but it can also be non-verbal information (like sensor readings, for example). Based on the combination of concepts in an input (things like ‘semantic proximity’ come into play here) and other non-verbal information at that instant, the AI will add more symbolic tags to those concepts, i.e. add ‘experience’ to the knowledge base. Then, in the second step (formulating the reply), the AI builds a table of related ‘concepts’ based on the symbolic tags of the concepts that are current.
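
A very rough Python sketch of what such tagged concepts could look like as a data structure; all the names here (SymbolicTag, ‘SEEN WITH’, ‘OBSERVED AT’) are placeholders for illustration, not the actual model:

```python
# Rough sketch (illustrative names only) of 'concepts' carrying 'symbolic
# tags', where a tag can link to another concept or hold non-verbal
# information such as a sensor reading.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SymbolicTag:
    relation: str                         # e.g. "SEEN WITH", "OBSERVED AT"
    target: Optional[str] = None          # another concept, if the tag is a link
    sensor_value: Optional[float] = None  # non-verbal information, if any

@dataclass
class Concept:
    label: str
    tags: List[SymbolicTag] = field(default_factory=list)

knowledge = {}  # label -> Concept

def comprehend(parts: List[str], sensor_value: Optional[float] = None) -> None:
    """Step one: map the input parts to concepts and add 'experience' (tags)."""
    for part in parts:
        concept = knowledge.setdefault(part, Concept(part))
        for other in parts:  # every co-occurring concept becomes context
            if other != part:
                concept.tags.append(SymbolicTag("SEEN WITH", target=other))
        if sensor_value is not None:
            concept.tags.append(SymbolicTag("OBSERVED AT", sensor_value=sensor_value))

def related(label: str) -> List[str]:
    """Step two: build the table of related concepts from the current tags."""
    concept = knowledge.get(label)
    return [t.target for t in concept.tags if t.target] if concept else []

comprehend(["coffee", "hot"], sensor_value=65.0)  # e.g. a temperature reading
print(related("coffee"))                          # -> ['hot']
```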

This also has the effect that the AI is actually ‘rethinking’ information that is already in the knowledge base (adding new symbolic tags) before forming the response. This means the AI can come to a different ‘perspective’ than the last time the same concepts were debated with it. From my perspective this comes very close to how a real intelligence (i.e. humans) relates and reacts to the knowledge and experience it possesses. We are constantly rewriting our own ‘database’ in reaction to all the input we get. I think this is one of the core concepts of real learning: not just storing information (how to train a monkey), but forming new insights based on experience.

Now bear with me; most of this stuff I’m formulating as we go. I still have to work out a base vocabulary just to communicate my ideas. Also, the technical implementation of these ideas is yet another part of the research I have to do.

 

 
  [ # 23 ]

A little more on ‘concepts’ and ‘symbolic tagging’:

As I said earlier, a concept is a (short) description of an object or entity. Some examples of concepts:

- Hans Peter
- a male
- a human

Now we link those concepts to build context, using symbolic tags:

- Hans Peter IS a male
- a male CAN BE a human
- Hans Peter IS a human
- a human CAN BE a male
- a human CAN BE Hans Peter

The tags are called ‘symbolic’ because they add symbolic value (in this case ‘contextual’) to the concepts that are tagged with them. The contexts ‘IS’ and ‘CAN BE’ are ‘concepts’ in their own right, and can themselves be tagged to add more (deeper) symbolic value.

Building an AI’s reality is done by introducing more concepts into the system over time and having the AI ‘learn’ or ‘experience’ (i.e. using sensory input) by adding more and more symbolic tags to the ‘known’ concepts. Introducing more concepts will automatically lead to ‘deeper tagging’, as the symbolic value of a tag comes from the available concepts themselves. So as soon as a new concept is linked to another concept by means of a ‘known’ symbolic tag, the newly linked concept becomes a possible ‘value’ with which to load new and existing tags in relation to that concept.
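
To illustrate, a toy sketch could store those links as simple triples. The ‘IMPLIES’ relation below is just an invented example of tagging a relation itself; the rest follows the examples above:

```python
# Illustrative sketch of the links above, stored as simple triples;
# the relation names ("IS", "CAN BE") are concepts in their own right.
links = []  # (concept, relation, concept)

def tag(concept, relation, target):
    links.append((concept, relation, target))

tag("Hans Peter", "IS", "a male")
tag("a male", "CAN BE", "a human")
tag("Hans Peter", "IS", "a human")
tag("a human", "CAN BE", "a male")
tag("a human", "CAN BE", "Hans Peter")

# A relation can itself be tagged, adding deeper symbolic value:
tag("IS", "IMPLIES", "CAN BE")

def context_of(concept):
    """Everything currently 'known' about a concept."""
    return [(rel, target) for c, rel, target in links if c == concept]

print(context_of("Hans Peter"))  # -> [('IS', 'a male'), ('IS', 'a human')]
```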

 

 
  [ # 24 ]

The only thing that could distinguish symbolic tags from just more text would be if there were specific algorithms that the tags triggered. Like an external stimulus inducing an instinctual reaction, this behavior—or the rules that govern its evolution—would have to be defined by you. As more and more tags are added to the bot’s “response vocabulary”, if you will, the code to handle them must grow as well.
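
For instance, I could imagine something like a dispatch table, where each tag name is bound to a handler routine; the names below are purely hypothetical:

```python
# Hypothetical sketch of tags triggering specific algorithms: each tag name
# is bound to a handler, so each new kind of tag needs new handling code.
def handle_pain(value):
    return "withdraw"          # instinct-like reaction to the stimulus

def handle_balance(value):
    return "adjust posture"

TAG_HANDLERS = {
    "PAIN": handle_pain,
    "BALANCE": handle_balance,
    # adding a new tag type means writing another handler here
}

def react(tag_name, value):
    handler = TAG_HANDLERS.get(tag_name)
    return handler(value) if handler else None

print(react("PAIN", 0.9))      # -> 'withdraw'
```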

Cyc uses a tagging system and I’m not sure that it adds any depth to the knowledge base to simply replace a word with the same word, but with a symbol or two in front. Don’t get me wrong, distinguishing word senses is important (haven’t delved into this yet myself), but tagging words and calling them something special sounds more like anthropomorphism than artificial intelligence.

 

 
  [ # 25 ]

Thanks for your reply; you are keeping me on my toes, so to speak smile

C R Hunt - Feb 19, 2011:

The only thing that could distinguish symbolic tags from just more text would be if there were specific algorithms that the tags triggered. Like an external stimulus inducing an instinctual reaction, this behavior—or the rules that govern its evolution—would have to be defined by you. As more and more tags are added to the bot’s “response vocabulary”, if you will, the code to handle them must grow as well.

External stimuli equal ‘input’, especially when we are talking about sensors. The ‘rules’ are defined at first as the core concepts (AI instinct) I spoke of before. The code does not have to grow, simply because it doesn’t work that way in the human brain. The human brain does processing, but adding knowledge does not change the way we process that knowledge. Still, somehow the human brain is capable of growing its reasoning capacity based on learning new things.

This is paramount to my view that ‘writing more code’ does NOT lead to more intelligent systems. The answer has to be found in the data model, because the human brain does not ‘write new algorithms’; instead it constantly rewrites its database. When we learn something new, we just add new perspectives to a concept. It changes how we relate to that concept and therefore how we reason about it, but it doesn’t rewrite our base reasoning algorithm. If it did, we would apply that new perspective to ALL other, unrelated knowledge as well (as would be dictated by the changed algorithm). We know that this is not what happens in humans; it simply doesn’t work that way. A simple thought experiment on yourself will reveal this.
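
In code terms, the idea could be sketched like this (a toy illustration of the principle only, not my actual system):

```python
# Sketch of the claim above: the reasoning routine stays fixed, while
# learning only rewrites the data; no new code per new perspective.
knowledge = {}  # concept -> set of 'perspectives' on that concept

def learn(concept, perspective):
    """Learning rewrites the 'database', never the reasoning code below."""
    knowledge.setdefault(concept, set()).add(perspective)

def reason(concept):
    """The same fixed procedure, whatever has been learned so far."""
    return sorted(knowledge.get(concept, set()))

learn("fire", "gives warmth")
print(reason("fire"))  # -> ['gives warmth']
learn("fire", "can burn you")
print(reason("fire"))  # -> ['can burn you', 'gives warmth'] -- same code, new perspective
```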

C R Hunt - Feb 19, 2011:

Cyc uses a tagging system and I’m not sure that it adds any depth to the knowledge base to simply replace a word with the same word, but with a symbol or two in front. Don’t get me wrong, distinguishing word senses is important (haven’t delved into this yet myself), but tagging words and calling them something special sounds more like anthropomorphism than artificial intelligence.

What Cyc does is not exactly where I’m going. Adding a ‘symbol’ in front of text (or a ‘concept’) is not ‘symbolic tagging’ the way it is in my model. The difference is that in my model the tags are loaded with symbolic information that can link to other related concepts, but can also carry related sensor values, for example. To put it in other words: I’m building a much deeper contextual model.

And as I said before: experience = concepts + context.

Projects like Cyc and NELL are building a rich ‘conceptual space’ but their ‘contextual model’ is rather basic and leads to a very shallow ‘experience model’.

 

 
  [ # 26 ]
Hans Peter Willems - Feb 19, 2011:

The answer has to be found in the data model, because the human brain does not ‘write new algorithms’; instead it constantly rewrites its database. When we learn something new, we just add new perspectives to a concept. It changes how we relate to that concept and therefore how we reason about it, but it doesn’t rewrite our base reasoning algorithm.

I have to respectfully disagree, Hans Peter. This may be true when we learn new information, but when we learn to perform a new task (e.g. when one first learns to ride a bicycle, or goes fishing for the first time) our bodies have no reference point with which to perform these new tasks, so our brains have to “write new code” to provide a set of parameters that control our bodies and instruct them in the proper motions required to complete these new tasks successfully. That “new code” is constantly refined as we continue the learning process, making us more proficient at these new tasks. This is the case from the moment we’re born until the time we die, and it is a very important aspect of the learning process.

 

 
  [ # 27 ]
Dave Morton - Feb 19, 2011:

I have to respectfully disagree, Hans Peter.

That’s a good thing; it keeps the discussion going smile

Dave Morton - Feb 19, 2011:

but when we learn to perform a new task (e.g. when one first learns to ride a bicycle, or goes fishing for the first time) our bodies have no reference point with which to perform these new tasks, so our brains have to “write new code” to provide a set of parameters that control our bodies and instruct them in the proper motions…

This is indeed what many people think, and of course you may be right, but I contest this view. In my perspective, those new tasks are not ‘writing new code’; instead they are remapping similar experiences, through new ‘tags’, onto new concepts and experiences.

Example: riding a bicycle is not an exercise in ‘new movements’; instead it is an exercise in mapping our ‘sense of balance’ onto new ‘senses of feedback’ (sensory input). There is no ‘new code’, as we are not changing the way we handle our feeling of balance; instead we are adding new ‘experiences’ to the ‘concept’ of balance. In the same way, fishing is nothing other than experiencing the same sensory input as, for example, drinking from a cup, but in a totally different context, and therefore mapping that sensory input (lifting something, like a cup) onto new experiences (fishing), in the process creating new ‘experiences’.

I pointed to the importance of analogies in learning before; try to envision that in relation to the above examples.

 

 
  [ # 28 ]

Alright. I can accept (to a limited degree) your premise with regards to riding a bicycle, but how about a baby, as it progresses from lying on its back, to learning to roll over, to crawling, and then to taking its first steps? To my way of thinking, there’s no way to avoid “new code” in this scenario, since we humans weren’t born with the instinct to walk, as some animals are. Thoughts?

 

 
  [ # 29 ]

First of all, I don’t say there is no ‘coding’; of course there are algorithms that do the processing. What I believe is that the ‘coding’ is there when we are born (formed through evolution and instated during the embryonic stage). This is mainly what we call ‘instinct’. From my perspective, ‘instinct’ is made up of the ‘coding’ and ‘core concepts’: stuff we don’t have to learn but know by ‘instinct’. Things like pain, our perception of balance based on gravity, and so on. I think there is a possibility that this ‘code’ is refined during the early years of infancy, but I’m not sure and/or convinced of that yet.

We are actually born with the instinct to walk; if that wasn’t the case, we would not feel the urge to master it. The difference with those animals is that their system is hard-wired to start walking directly after birth (it has to do with survival), while in humans walking is linked to mastering our balance. What is also forgotten in this context is that most (if not all) animals that walk directly after birth are four-legged and have no toes. Humans, of course, are special in this regard as well, as we walk upright and use our toes as sensors to control our balance, which takes some fine-tuning during the learning stage. Our feet, in their linkage to our balance system, are far more complex than the hooves of a horse. This makes us capable of walking and running (just like, e.g., a horse), but also of climbing trees and doing a whole host of other things that a horse simply can’t.

Bottom line: our system is much more complex than that of such an animal, and just takes longer to master. And again, this whole walking thing is built on top of earlier experiences (like crawling) and instincts.

So I stay with my premise that there is no new code involved; when we are born we already have the code to use our feeling of balance, the sensors like those in our toes, etc. We only add experiences to learn how to handle it all.

 

 
  [ # 30 ]

Even with such a system, Hans, it is very useful to have self-modifying code. The ‘instinct’ versus ‘data’ paradigm that you describe is exactly how I am building my neural network. Yet I have already found a use for self-generating code: all the ‘+’, ‘*’, ... statements are transformed into executable neurons, which saves me writing yet another interpreter.
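
Roughly like this (a simplified, made-up sketch of the idea, not my actual code):

```python
# Simplified, made-up sketch of turning operator statements into executable
# 'neurons' (closures here), instead of writing a separate interpreter.
import operator

OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def make_neuron(op_symbol):
    """Generate an executable neuron for the given operator statement."""
    op = OPS[op_symbol]
    def neuron(inputs):
        result = inputs[0]
        for value in inputs[1:]:
            result = op(result, value)
        return result
    return neuron

add_neuron = make_neuron("+")
mul_neuron = make_neuron("*")
print(add_neuron([1, 2, 3]))  # -> 6
print(mul_neuron([2, 3, 4]))  # -> 24
```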

 
