

Influential AI Talks
 
 

Let’s start with this guy… he’s good smile  He has influenced my bot’s development!

http://www.youtube.com/watch?v=y4y8mTRqXAo&playnext=1&videos=PpBx2ir9Sa8&feature=grec_index

 

 
  [ # 1 ]

Interesting topic, but I simply don’t believe him. He states that the intelligence of a human brain, as calculated by AI researchers, is actually to be found in one single neuron, pushing the objective forward for AI developers by a factor of 10^15 (100 billion neurons × 10,000 connections per neuron). This assumes that the brain is highly efficient and that neurons are highly efficient, and they aren’t.

I would look at the external behavior of intelligence. I don’t believe the next breakthrough in AI will come from natural language research, but from pattern recognition.

Currently, we are able to distinguish objects in vision: which pixels in my view belong to which object?
We are developing technology to assign attributes: size, shape, color, and even sound and smell: the multi-modal approach.
Next, we will be able to label all possible objects in our observations, simply by combining databases. Computers will recognize more objects than people can, simply because we haven’t seen all the objects on earth in our lives.

Then we will be able to create abstractions: objects that look alike. Like cars, like bikes, like houses, like dogs. Computers will cluster objects without even labeling them with words.
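That clustering idea can be sketched without any labels at all: group hypothetical feature vectors by similarity and never attach a word. The feature encoding, the values, and the threshold below are all invented for illustration.

```python
import math

# Hypothetical feature vectors (size_m, wheel_count, leg_count) for observed
# objects; none of them carries a word label.
observations = [
    (4.5, 4, 0),  # something car-like
    (4.2, 4, 0),  # something car-like
    (1.8, 2, 0),  # something bike-like
    (0.6, 0, 4),  # something dog-like
    (0.5, 0, 4),  # something dog-like
]

def cluster(points, threshold=1.5):
    """Greedy threshold clustering: each point joins the first cluster
    whose first member is close enough, else starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if math.dist(p, c[0]) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

groups = cluster(observations)
print(len(groups))  # the five observations fall into three unlabeled groups
```

The machine ends up with three groups it has never named; a word like “car” could be attached later, which is exactly the point about language coming at the end.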

The next level of abstraction: add time. Add velocity, add location, add acceleration. Objects can then be compared over time. This level also tells us which objects regularly go together.

And that’s how we continue: it’s about creating abstractions all the time. Eventually, it can be expressed in words. In any words. Language is at the end of intelligence, not the beginning.

I believe we’ll certainly approach and surpass the singularity point in this way, and computers will make human intelligence feel very very tiny.

 

 
  [ # 2 ]
Erwin Van Lun - Aug 6, 2010:

I would look at the external behavior of intelligence. I don’t believe the next breakthrough in AI will come from natural language research, but from pattern recognition.

I believe both are fruitful avenues of research.  For visual/audio recognition, yes, pattern recognition has to be the start. 

I believe absolutely that NLP research should be done in parallel and will contribute significantly.  Whether you are starting from NLP itself or continuing from pattern recognition algorithms, you must still study the nature of language and how it is used; only then can you determine how your pattern recognition algorithms, or perhaps completely different algorithms, can be used to learn that skill.

Only time will tell whether low-level pattern recognition is a prerequisite for starting on NLP, but I don’t think it is.  Again, for making sense of visual images, I agree 100%.

Pattern recognition, and all these mathematical and statistical approaches to NLP, have been worked on for decades, with next to nothing to show for it.

 

 
  [ # 3 ]
Erwin Van Lun - Aug 6, 2010:

Eventually, it can be expressed in words. In any words.

Of course.  With my bot, English words and even complex phrases are equated with ‘internal standard representations’.  One of the other ‘first jobs’ of my bot will be as a translator.

Erwin Van Lun - Aug 6, 2010:

Language is the end of intelligence, not at the beginning.

Correct; the algorithms to process natural language are closer to the start.  The information contained in complex natural language sentences, and the understanding and processing of that, is the ultimate goal for NLP.  I can’t imagine learning something like calculus being limited to simple nouns, like stick and rock, and simple sentences like ‘The dog jumped’.  No, you need algorithms that can understand free-form grammar and complex sentences involving adverbs, auxiliary verbs, and prepositional phrases, and you need to combine the meaning of all the sentences in a paragraph that explains how to do a complex abstract task.

Another piece of functionality I am currently researching with my bot is self-consumed and maintained commented code.  Basically I will have one or more Perl scripts, each with a purpose, and the actual comments (Perl comments, starting with #) will be in natural language.  Each comment will basically state: if you need to know this information, this script generates it, in this format, and you need these required facts before you run it.  When it has an objective, the NLP engine will be able to read the comment block of each script and determine which one is the right one to use (or which combination of them to call, basically assembling them into a tree structure or ‘program’).  If, for example, the format of the input to one script is different from what comes out of another script, it will know to reformat it.

So the ultimate goal is:

Input : 
  objective, find this information

Process:
  Read the documentation for each script (in natural language), find the one which provides it.
  Does that script directly give the result, or does that script, in turn, require some
  facts?  If so, go through the docs and read the purpose and requirements of the other scripts
  until you find one that provides the facts required by the first.

Output:
  result of script, or entire tree structure of scripts, that give answer

This, I believe, is the ultimate goal!  A system that, using NLP, can truly understand the goal it is given and, with the information (metadata) it has about its database, and the information it has about scripts/external programs/APIs/whatever, derive a ‘plan’ for how to find it.
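As a rough sketch of that input/process/output loop (not Victor’s actual implementation), here is a minimal version in Python, assuming each script’s natural-language comment block has already been reduced to simple “provides”/“requires” facts — that reduction step, and the script names, are invented for the example:

```python
# Catalog of scripts, as if their comment headers had already been parsed
# into structured facts (the hard NLP step is skipped here).
scripts = {
    "get_tax_rate.pl":  {"provides": "tax_rate", "requires": []},
    "get_income.pl":    {"provides": "income",   "requires": []},
    "compute_taxes.pl": {"provides": "tax_owed", "requires": ["income", "tax_rate"]},
}

def plan(objective, catalog):
    """Return a call tree (script, [sub-plans]) whose root yields `objective`.
    Each required fact is resolved recursively through the catalog."""
    for name, meta in catalog.items():
        if meta["provides"] == objective:
            subplans = [plan(fact, catalog) for fact in meta["requires"]]
            return (name, subplans)
    raise LookupError(f"no script provides {objective!r}")

tree = plan("tax_owed", scripts)
print(tree)
# ('compute_taxes.pl', [('get_income.pl', []), ('get_tax_rate.pl', [])])
```

The output is exactly the “entire tree structure of scripts” the post describes: run the leaves first, feed their results upward.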

 

 
  [ # 4 ]

When you get right down to the core of it, natural language processing is just a much higher level of pattern recognition, where the input isn’t matched against specific words, but the meanings of the words are matched against possible patterns of type. Knowing that the word “run” is a noun, based on the specific context of a phrase (“King is in the dog run”), comes about because that word was matched against a pattern of meaning, rather than a pattern of exact words.
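That point can be sketched as a toy tagger: the ambiguous word is resolved by the pattern of types around it, not by the literal words. The lexicon and the disambiguation rules below are invented for the example.

```python
# Tiny lexicon: each word maps to its possible types. "run" is ambiguous.
lexicon = {
    "king": {"noun"}, "is": {"verb"}, "in": {"prep"},
    "the": {"det"}, "dog": {"noun"}, "run": {"noun", "verb"},
}

def tag(words):
    tags = []
    for w in words:
        options = lexicon[w.lower()]
        if len(options) == 1:
            tags.append(next(iter(options)))
        # Ambiguous word: decide from the preceding pattern of types,
        # not from the exact words.
        elif tags and tags[-1] in ("det", "noun"):
            tags.append("noun")   # "the dog run" -> this "run" is a noun
        else:
            tags.append("verb")
    return tags

print(tag("King is in the dog run".split()))
# ['noun', 'verb', 'prep', 'det', 'noun', 'noun']
```

Swap “dog run” for “dogs run” and the same rule, applied to a different pattern of types, would call “run” a verb; the words themselves never appear in the rule.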

And I have to agree with Erwin that Dr. Hameroff, while obviously an intelligent individual with an interesting outlook on intelligence, is overstating the nature of human intelligence and its root cause. Dr. Hameroff may well be right about the fundamental functions of neurons and synapses, but I think that, if that’s the case, it’s pretty much got to be a type of built-in redundancy, like a RAID array, where data is stored in multiple locations to avoid data loss or corruption. After all, brain and nerve cells die every day, throughout our entire lifespans, not just in old age, and yet the memories we lose (or seem to) are far fewer than can be accounted for by this loss of cells. This notion makes much more sense to me. Of course, I’m just an old guy who’s trying to “think above his station”, but that’s ok. I’m still entitled to my opinions, and have the right to be wrong. smile

 

 
  [ # 5 ]

Absolutely, this is all great feedback.  I realize that some of his ideas are a bit far-fetched! smile

He is similar to another individual by the name of Hans Moravec; I read his book “Mind Children: The Future of Robot and Human Intelligence” - a fascinating book!

http://www.amazon.ca/Mind-Children-Future-Robot-Intelligence/dp/0674576187

And yes, I agree Dave, NLP is about a ‘higher level of abstraction’ of pattern matching, but I think those algorithms can be developed without first requiring the much lower-level ones that are required by the visual/audio problem.

 

 
  [ # 6 ]

Victor, it’s my considered opinion that not only can NLP algorithms be developed separately from basic pattern matching, I believe that they should be. I feel that each discipline will have its own specific uses with regard to the creation of the next generation of Synthetic Entities, where the NLP module will determine the context of the input phrase, and basic pattern matching will help guide the response toward more relevance to the topic, and provide a means to guide the direction of the topic as well. If NLP will be the heart of the system, then basic pattern matching will be the lungs. Or maybe the liver. The spleen? I don’t know; but it, too, will be equally important.

 

 
  [ # 7 ]

Well put !  I have decided to split up the two levels of matching in my bot… what I am calling

            Closest Spell Matching, CSM (not concerned with now, but a nice to have perhaps next year)
  -and-    Closest Tree Matching, CTM

CTM I will develop now.  For example, the words are first converted to an ‘internal standard representation’ (ISR): ‘great’, ‘awesome’, ‘wonderful’ will all boil down to ‘great’ (I don’t know, I’ll pick a word, or perhaps a unique integer).
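A minimal sketch of that ISR mapping (the table and the choice of canonical word are, of course, placeholders):

```python
# Synonyms boil down to one canonical token; a unique integer id
# would work equally well as the ISR.
ISR = {
    "great": "great", "awesome": "great", "wonderful": "great",
}

def to_isr(word):
    # Unknown words fall back to their lowercased form.
    return ISR.get(word.lower(), word.lower())

print([to_isr(w) for w in "That was AWESOME and wonderful".split()])
# ['that', 'was', 'great', 'and', 'great']
```

Once every node of the parse tree holds an ISR instead of a surface word, ‘That was awesome’ and ‘That was wonderful’ produce identical trees, which is what makes the tree matching below possible.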

Once the user input is parsed into a tree, where each node is not the original word but the ISR of each word, then a question input string will also be parsed into a tree. THEN, instead of a ‘spell’ checker, it will be a ‘tree structure’ checker: that is, finding the closest match between the tree structure of the original statement and that of the question, to determine which earlier fact it was told most likely contains the data that the question is asking for.

Example: input the original fact, “Bob’s cell phone number is xxx-xxxx”.  That will produce a parse tree where “number” is the subject of the sentence, and it is being modified by “Bob’s”, “cell” and “phone”.

A question later could be “What’s Bob’s number?”. Well, the parse trees won’t match up 100%, because your question is only asking about “number” modified by “Bob’s”, NOT also “cell” and “phone”.  Thus, if that fact is the closest match, it will ask you (and I have this working now to some degree), “Well, do you mean his *cell phone* number?”  If you answer yes, it gives the data in the original fact; if you answer no, it says “Well, I’m not sure what number you’re talking about then.”
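Here is a rough sketch of that matching behaviour, with the parse trees simplified down to a head word plus a set of modifiers (the real system presumably keeps full trees; this flat data structure, and the stored fact, are my simplification):

```python
# Stored facts: head word of the subject, its modifiers, the value,
# and properties of the owner (used for pronoun choice, see below).
facts = [
    {"head": "number", "modifiers": {"bob's", "cell", "phone"},
     "value": "xxx-xxxx", "owner_props": {"firstname", "malename"}},
]

def closest_match(q_head, q_mods):
    # Candidates: same head, and every modifier in the question
    # also appears in the fact.
    candidates = [f for f in facts
                  if f["head"] == q_head and q_mods <= f["modifiers"]]
    if not candidates:
        return "I'm not sure what number you're talking about."
    best = max(candidates, key=lambda f: len(q_mods & f["modifiers"]))
    # Modifiers the fact has but the question lacks (owner excluded).
    missing = best["modifiers"] - q_mods - {"bob's"}
    pronoun = "his" if "malename" in best["owner_props"] else "her"
    if missing:  # partial tree match: ask a clarifying question
        return f"Do you mean {pronoun} *{' '.join(sorted(missing))}* number?"
    return best["value"]

print(closest_match("number", {"bob's"}))
# Do you mean his *cell phone* number?
```

Ask again with all three modifiers and the trees match fully, so the stored value comes back directly instead of a clarifying question.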

Yes, yes, of course, instead of ‘yes’ and ‘no’ it can be any other word, or even phrase, that means the same.  And yes, it will say “his” in the sentence above because it already knows the word “bob” has the properties…

            firstname = true
            malename = true

Thus it says “his”, else (if it had property femalename=true), it would say “her”.

Oh, and before you start thinking “well, you can just handle that with templates”… no.  Because if you say

  “Bob’s phone number is xxx-xxxx, please remember that”

it will know that his phone number is NOT

            “xxx-xxxx, please remember that”

but instead just “xxx-xxxx”, because it will parse the entire sentence and know that the number is followed by the text “, please remember that”, which it knows is a predicate and thus probably not part of a phone number…. And here is where a bit of low-level matching comes in, because another hint it will use is whether xxx-xxxx matches the REGEX (I know you hate them) of a phone number, \d\d\d-\d\d\d\d.  And yes, I will have other regexes for when an area code is included and also for when letters are used in the number.
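That regex hint might look something like this (the exact pattern list is a guess at what it could contain, covering the three cases mentioned; the digits are sample values, since \d obviously can’t match the elided ‘xxx-xxxx’):

```python
import re

# Most specific patterns first, so "(613) 555-1234" isn't truncated
# by the plain 7-digit pattern matching its tail.
PHONE_PATTERNS = [
    re.compile(r"\(\d{3}\)\s?\d{3}-\d{4}"),  # with area code: (613) 555-1234
    re.compile(r"1-800-[A-Z]{7}"),           # lettered: 1-800-FLOWERS style
    re.compile(r"\d{3}-\d{4}"),              # plain 7-digit: 555-1234
]

def extract_phone(text):
    """Keep only the part of `text` that matches a phone pattern,
    stripping trailing sentence text like ', please remember that'."""
    for pat in PHONE_PATTERNS:
        m = pat.search(text)
        if m:
            return m.group(0)
    return None

print(extract_phone("555-1234, please remember that"))  # 555-1234
```

The parser’s predicate analysis and the regex each give a hint; when both agree on where the number ends, the bot can store the value with some confidence.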

There is also CCM - closest case matching.  Right now my bot is case-insensitive, but CCM will consider, based on context, and ask itself, “Does case matter?”  For example, if a word is in all UPPERCASE, it will also consider that perhaps it is an acronym.
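The CCM idea could start as simply as a case check before lowercasing (a sketch; the real heuristic would presumably weigh surrounding context too):

```python
def classify_case(token):
    """Flag an all-uppercase token of 2+ letters as a possible acronym
    before the engine lowercases everything."""
    if len(token) >= 2 and token.isalpha() and token.isupper():
        return "possible-acronym"
    return "plain"

print(classify_case("NASA"))  # possible-acronym
print(classify_case("Nasa"))  # plain
```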

 

 
  [ # 8 ]
Victor Shulist - Aug 6, 2010:

Whether you are starting from NLP itself, or continuing from pattern recognition algorithms, you must still study the nature of language, how it is used, and only then can you apply and try to determine how your pattern recognition algorithms, or perhaps completely different algorithms, can be used to learn that skill.

mmm, I think I don’t agree cool cheese

I think language is the sum of abstractions + a simulation of the consciousness of a human entity in a global universe with its own intentions and ‘survival’ mechanisms. Once we’ve defined models, algorithms and abstractions for this, it’s ‘just’ a matter of finding sayings to express emotions, intentions, abstractions. And as we know from our normal world, the largest part of our communication is expressed in our body language, and not in what we say.

So we first need to go back to intention, and then use language as a tool to express ourselves (as robots).

 

 
  [ # 9 ]
Erwin Van Lun - Aug 6, 2010:

I think language is the sum of abstractions + a simulation of the consciousness of a human entity in a global universe with its own intentions and ‘survival’ mechanisms.

I actually don’t believe we will need consciousness in a bot in order for it to have intelligence or be useful.  The main reason is that we don’t even know what consciousness really is; in fact, some researchers believe it could be merely an illusion, similar to the old free-will question.

Also, I do not believe a useful, intelligent bot will need ‘survival mechanisms’.  Just because we know humans - biological, with emotions, etc. - are intelligent and ‘survival’ is something we care about, it doesn’t mean that non-biological, in fact non-living, machines with intelligent behaviour will need to care about ‘survival’.

My bot will care about one thing - finding the most applicable reply to a statement or question, and favouring the reply that correlates the most information, which will give the most meaningful response.

So the ‘intentions’ you mention will, for my bot, be its current goal, whether that is helping you find what you are looking for or figuring out your taxes.  The intentions, or goals, will be the subroutines that it calls, based on what it determines the objective to be from the natural language input.
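That ‘goals as subroutines’ idea might be sketched like this, with the objective keywords and the handler routines invented for the example (the real bot would derive the objective from full NLP, not a keyword):

```python
# Hypothetical goal routines; in the real bot these would be the
# subroutines mentioned above.
def find_information(topic):
    return f"searching knowledge base for {topic}"

def do_taxes(topic):
    return f"computing taxes for {topic}"

# Dispatch table: objective keyword -> subroutine.
GOALS = {
    "find": find_information,
    "taxes": do_taxes,
}

def dispatch(objective, topic):
    handler = GOALS.get(objective)
    if handler is None:
        return "no goal matches that objective"
    return handler(topic)

print(dispatch("taxes", "2010"))  # computing taxes for 2010
```

The bot’s ‘intention’ at any moment is simply whichever routine is currently active, no survival drive required.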

Erwin Van Lun - Aug 6, 2010:

Once we’ve defined models, algorithms and abstractions for this, it’s ‘just’ a matter of finding sayings to express emotions, intentions, abstractions. And as we know from our normal world, the largest part of our communication is expressed in our body language, and not in what we say.

That ‘just’ is a HUGE ‘just’ smile  That is where it needs to be able to translate complex natural language to and from its intentions - and that, I believe, can be developed in parallel.

I’m taking a much more pragmatic approach, not so much concerned with falling into a pointless philosophical debate, and designing a system that will actually DO what people do…. in the true spirit of the Turing test.  It won’t matter what order the work is done in, or even how, or whether the bot has emotions or ‘survival mechanisms’, or whether the NLP is designed before or after, as long as the net result is a machine that can pass the Turing test smile

When IBM’s Deep Blue became the world chess champion in 1997, no one cared about how it did it; it won, bottom line. smile  It didn’t have consciousness, ‘survival mechanisms’, or intentions, except of course for the rules of chess and the objective.  For a chatbot, the rules of chess are grammar rules, and the objective is combining everything that has been said, its knowledge base, and rules to produce a valid response.

 

 
  [ # 10 ]

95% of our behaviour is unconscious. From a computer intelligence perspective, 95% of behaviour is automated, based on previous experiences and repeating what has been done before (breathing, automatic reactions); only 5% is processed by the conscious brain, meaning a process that actively records our behaviour and takes it into account in immediate future behaviour. Noticing that you lift your right foot when learning dance steps, for example.

In order to behave naturally, like human beings, computers should be able to distinguish simulated conscious from unconscious processes. Although everything is computed, and thus conscious by definition, unconscious behaviour should be simulated.

At the same time, humans learn to get in touch again with their unconscious behaviour, through meditation for example, but that’s a separate discussion.

Goals/intentions: interesting topics as well. What’s the difference? Goals in life? Find a life partner? Experience freedom? Build a better life for your kids? Versus short-term tasks like completing your tax forms. Intention would be more physical in my definition; for example, if you play football, your muscles tense before they really start to move. That’s the intention. The same thing occurs when you want to speak: breath changes, eye movements, and then you say something.

Chess: did you see my other movie, by any chance?
http://www.youtube.com/watch?v=BDl15GfqvFs

During the AI conference I saw a presentation on general game playing: a technique for teaching computers new games, and how to win them. So instead of programming a computer specifically for one game, general game playing is designed to play every game. Quite impressive.

 

 
  [ # 11 ]

In order to behave naturally, like human beings, computers should be able to distinguish simulated conscious from unconscious processes

 
I suppose, to make them ‘appear’ natural.  But I doubt it is a hard prerequisite.

General game playing… fascinating concept.  Yes, I will check out that link for sure!

 

 
  [ # 12 ]

1. I agree 100% that the key method of NLP is pattern recognition.
2. The computing speed of the human brain is not as fast as we believed before. The human brain is not massively parallel. Let us think: how many problems can we solve at the same time? Only a few things can be done in parallel.

 

 
  [ # 13 ]
Nathan Hu - Aug 18, 2010:

1. I agree 100% that the key method of NLP is pattern recognition.
2. The computing speed of the human brain is not as fast as we believed before. The human brain is not massively parallel. Let us think: how many problems can we solve at the same time? Only a few things can be done in parallel.

I have to disagree with you about both the “computing speed” of the human brain, and the human brain not being “massively parallel”. On a conscious level, this may appear to hold true, but consider the simple act of walking:

1.) weight is transferred from the left foot to the right
2.) left foot is lifted off the ground
3.) left foot is moved forward a small distance
4.) weight is transferred from the right foot to the left
5.) right foot is lifted off the ground
6.) right foot is moved forward a small distance
7.) go to step 1

That, in and of itself is rather simplistic, and seems to be a series of steps, but now consider the following sub-steps:

1.) weight is transferred from the left foot to the right
  a.) several dozen muscle groups in the inner thigh of the right leg, each controlling hundreds of muscle cells, contract
  b.) several dozen muscle groups in the outer thigh of the left leg, each controlling hundreds of muscle cells, contract
  c.) muscle groups within widely scattered areas of the torso, arms, neck and other locations perform the necessary minor contractions required to maintain balance
  d.) blood vessels all over the body act to disperse the necessary nutrients to the various muscle groups that are in action, allowing for the extra exertion
  e.) thousands of nerves transmit positional data to the brain, to allow for precise placement of the various joints, assuring no anomalous motions
  f.) the heart speeds up slightly, to provide the needed extra blood flow
  g.) the diaphragm contracts more tightly than previously, causing the lungs to fill more fully, providing more oxygen to the system
  h.) etc., etc.

And still, this list is very simplistic, because for every simultaneous step I’ve listed here, there are myriad more not listed. Now granted, this all occurs at the subconscious level, but it’s still processed asynchronously, rather than serially.

On a conscious level, I think that environment and education/training are the biggest factors involved in our seeming lack of ability to handle multiple concurrent tasks efficiently, rather than the brain’s lack of ability to provide the needed resources to do so. There are other factors, as well, I’m sure, that affect this; but I really don’t think that this limit is specific to a lack of overall brain functionality.

 

 
  [ # 14 ]

In addition to Dave’s comment: while your brain is doing all those things in parallel to enable you to walk, you can also be talking on your cell phone at the same time, and observing traffic as you cross the road, which in itself requires processing of visual images coming from the eyes to the brain - in itself an enormously complex process.

The best visual processing algorithms require a machine tens of thousands of times faster than the human brain, in order to compensate for the fact that it is a serial machine.  The human brain only ‘switches’ at about 10^3 times per second; how would it be possible for our brains to do speech recognition, visual recognition, natural language processing, abstract thought, motor control, etc., with only 10^3 switches per second, if the human mind were a serial machine like a computer? It makes no sense.

If that were the case, we would have had strong-A.I.-capable machines in the 1950s!

I think the human brain is massively parallel.

 

 
  [ # 15 ]

As for NLP being done by pattern matching: perhaps a small amount of pattern matching may *help*, but pattern matching only works by simple stimulus/response.  When I read a complex email, a typical one I get at work, there is no simple “if-then” pattern that I follow to respond to it!!  I have to read it, sometimes re-read it, and figure out what the sentences mean; then, when I know what they mean, that is only the beginning: what do I do with it? How do I combine that with knowledge and determine a response?  Simple pattern matching won’t give you a truly AI chatbot, sorry.

For things like visual image processing and audio waveform analysis, yes, I think pattern matching is key.  But for NLP and abstract thought, no.

A typical neuron has about 1,000 to 10,000 synapses (that is, it communicates with 1,000-10,000 other neurons, muscle cells, glands, etc.).  There are about 100 billion neurons in the brain.  The human brain is serial??  All those neurons, with all those connections, and you are saying it is not massively parallel?!

 
