Influential AI Talks
 
 
  [ # 16 ]

Lest there be any misconceptions here, it should be pointed out that the human brain is not *actually* “massively parallel”, but rather, “massively asynchronous”. There’s a vast difference between the two terms, and using the wrong term can give the wrong impression.
{Dave steps off his soap box, and quietly goes to the corner, to watch}

 

 
  [ # 17 ]

Ok, maybe not ‘massively’, but researchers determined long ago that the brain is no “von Neumann machine”.  It was discovered that there is no single, main ‘CPU’ with an ‘address bus’ that fetches data between memory and the CPU.  It’s not a serial machine like that.

 

 
  [ # 18 ]
Dave Morton - Aug 18, 2010:

“massively parallel”, but rather, “massively asynchronous”. There’s a vast difference between the two terms, and using the wrong term can give the wrong impression.
{Dave steps off his soap box, and quietly goes to the corner, to watch}

{Erwin steps on Dave’s soap box}

Extremely parallel, extremely asynchronous, and extremely sensitive, selective, and subtle.

 

 
  [ # 19 ]

Ok, I see I’m out of *this* conversation!!!  I’m confused!

 

 
  [ # 20 ]

I’m sorry, Victor. The intent isn’t to confuse. What’s the cause of the confusion, that I may help you to understand?

 

 
  [ # 21 ]

Thank you for your comments; I like discussing these things with you guys.

Actually, when I talk about patterns, I don’t mean simple text patterns like AIML’s “I AM *”. The knowledge will be kept as networks, and the pattern matching is between the networks.

My chatbot development is based on pattern matching, and I will explore how far pattern matching can take us toward true AI. The knowledge patterns for both natural language understanding and graph recognition will be integrated in the future.
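For contrast, the AIML-style wildcard matching mentioned above can be sketched in a few lines. This is only an illustration of the “simple text pattern” approach being set aside here, not real AIML syntax or any actual bot’s code; the category table and responses are made up:

```python
# A minimal sketch of "I AM *"-style stimulus/response matching.
# Patterns and responses here are illustrative, not real AIML categories.
import re

categories = {
    "I AM *": "Nice to meet you, {0}.",
    "HELLO":  "Hello there!",
}

def respond(user_input):
    text = user_input.upper().strip()
    for pattern, template in categories.items():
        # Translate the '*' wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*m.groups())
    return "I don't understand."

print(respond("I am Victor"))  # -> Nice to meet you, VICTOR.
```

Network-to-network matching, as described above, would replace the flat pattern table with structured knowledge, which this toy cannot express.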

Victor Shulist - Aug 18, 2010:

As for NLP being done by pattern matching, perhaps a small portion of pattern matching may *help*, but pattern matching only works by simple stimulus/response.  When I read a complex email, a typical one I get at work, there is no simple “if-then” pattern that I follow to respond to it!  I have to read it, sometimes re-read it, and figure out what the sentences mean; then, when I know what they mean, that is only the beginning. What do I do with it? How do I combine that with knowledge and determine a response?  Simple pattern matching won’t give you a truly AI chatbot, sorry.

For things like visual image processing, audio waveform analysis, yes, I think pattern matching is key.  But NLP and abstract thought, no.

A typical neuron has about 1,000 to 10,000 synapses (that is, it communicates with 1,000-10,000 other neurons, muscle cells, glands, etc.).  There are about 100 billion neurons in the brain.  The human brain is serial??  All those neurons, with all those connections, and you are saying it is not massively parallel?!

 

 
  [ # 22 ]

First, we need to clarify what we each mean by “massively parallel”. If there are 12G nerve cells in a brain, how many of them are working at the same time?

The human brain does work in parallel. But we also need to consider what percentage of the nerve cells are active at any particular second. I suspect no more than a few thousand nerve cells are active at any given second. Let us think: why can we only consider one thing at a time?

By “massively parallel” I mean that most of the nerve cells are active when the brain is working, and that is definitely not the case.

The main reason a computer cannot do many things a human can is not that it is too slow; it is because we still have not found the correct knowledge model.  The human brain may be faster than a current personal computer, but not by much, I believe. Maybe only 100 or 1,000 times.

Victor Shulist - Aug 18, 2010:

In addition to Dave’s comment: while your brain is doing all those things in parallel to enable you to walk, you can also be talking on your cell phone at the same time, and observing traffic as you cross the road. That alone requires processing of visual images coming from the eyes to the brain, which is in itself an enormously complex process.

The best visual processing algorithms require a machine tens of thousands of times faster than the human brain, in order to compensate for the fact that it is a serial machine.    The human brain only ‘switches’ at about 10^3 times per second; how would it be possible for our brains to do speech recognition, visual recognition, natural language processing, abstract thought, motor control, etc., with only 10^3 switches per second if the human mind were a serial machine like a computer? It makes no sense.

If that were the case, we would have had strong-AI-capable machines in the 1950s!

I think the human brain is massively parallel.

 

 
  [ # 23 ]

What we need to do is use a computer to simulate a brain. Indeed, the “address bus” is a bottleneck of the computer. I also agree that a faster address bus is more important than CPU speed for NLP work.

 

Victor Shulist - Aug 18, 2010:

Ok, maybe not ‘massively’, but researchers determined long ago that the brain is no “von Neumann machine”.  It was discovered that there is no single, main ‘CPU’ with an ‘address bus’ that fetches data between memory and the CPU.  It’s not a serial machine like that.

 

 
  [ # 24 ]

In 1993 my thesis was about artificial neural networks. At that time I studied various neural network models and experimented with a random model, optimizing ‘weights’ by trial and error: simply finding higher hills in the landscape. That was quite hard.

What I later learned is that the neural network models I used were too simple. The Rumelhart model was parallel indeed, but that’s not how our brain works. We receive ‘waves’. We are extremely sensitive to changes, and to the speed of change: acceleration.

I’ve never seen a chatbot responding to change or acceleration. In real life, however, robots do anticipate change and acceleration. I believe learning should start with physical objects: start to understand the external world, including movement, speed and acceleration; in vision, but also in sound (what do you actually hear when someone starts to talk faster?). NLP is the final part.

So, massively parallel: yes and no. Actually, if we applied a Fourier transform to the massively parallel input, I might agree.

Just a braindump; it’s early. Hopefully I’m being clear.
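The Fourier-transform idea above can be sketched as a toy experiment: treat the massively parallel input as many channels changing over time, and look at each channel in the frequency domain, where rate of change becomes explicit. All channel counts, rates, and sample sizes here are invented for illustration:

```python
# Toy sketch: many parallel "channels", each carrying its own rate of change.
# An FFT per channel recovers that rate as a dominant frequency.
import numpy as np

n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)

# 8 parallel input channels, each oscillating at its own rate (Hz).
rates = [2, 5, 5, 9, 9, 9, 13, 2]
channels = np.stack([np.sin(2 * np.pi * r * t) for r in rates])

# Magnitude spectrum of each channel; the peak bin is its rate of change.
spectra = np.abs(np.fft.rfft(channels, axis=1))
freqs = np.fft.rfftfreq(n, d=1.0 / n)
dominant = freqs[np.argmax(spectra, axis=1)]
print(dominant)  # one dominant frequency per channel: 2, 5, 5, 9, 9, 9, 13, 2
```

In this view, “sensitivity to change and acceleration” corresponds to reading off which frequencies carry energy, rather than looking at the raw parallel values.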

 

 
  [ # 25 ]

@Dave - Nathan and Erwin have since clarified, thanks though.

Nathan, agreed regarding the address bus being the bottleneck.

Also, knowledge model and algorithms are central.  I don’t like it when researchers say, “Ok, so we see here that, at this rate of increase of CPU speed, we should see human-like intelligence in machines by… year X”.

That assumes it is only raw speed.  If the correct knowledge representation and algorithms don’t come, then a CPU could be a trillion times faster than the human brain, and STILL just be a glorified calculator!

Oh, and I agree, you guys are great to discuss this stuff with!  I’m not always in 100% agreement, but that’s ok; as a friend of mine says, “healthy debates”. :)    I am starting to learn what your definition of “pattern matching” is, and the more I do, the more I find I actually agree. :)

 

 
  [ # 26 ]

That’s why I work on a virtual world. Entities, movements, speed and acceleration are easy to recognize in a virtual world, so I can focus on NLP.

Erwin Van Lun - Aug 19, 2010:

In 1993 my thesis was about artificial neural networks. At that time I studied various neural network models and experimented with a random model, optimizing ‘weights’ by trial and error: simply finding higher hills in the landscape. That was quite hard.

What I later learned is that the neural network models I used were too simple. The Rumelhart model was parallel indeed, but that’s not how our brain works. We receive ‘waves’. We are extremely sensitive to changes, and to the speed of change: acceleration.

I’ve never seen a chatbot responding to change or acceleration. In real life, however, robots do anticipate change and acceleration. I believe learning should start with physical objects: start to understand the external world, including movement, speed and acceleration; in vision, but also in sound (what do you actually hear when someone starts to talk faster?). NLP is the final part.

So, massively parallel: yes and no. Actually, if we applied a Fourier transform to the massively parallel input, I might agree.

Just a braindump; it’s early. Hopefully I’m being clear.

 

 
  [ # 27 ]

Similar to what Chuck is doing.

Also, when your core NLP stuff is completed, you can disconnect it from the virtual world and connect it to the real physical world via microphone, video camera, pressure sensors, GPS, etc.

Not to mention being able to freeze a virtual world and play events back.

 

 
  [ # 28 ]

Erwin,

“I would look at the external behavior of intelligence. I don’t believe the next breakthrough in AI will come from natural language research, but from pattern recognition.”

I don’t quite agree. When a chat bot is developed that can chat or converse in a forum such as this…that will be quite a significant breakthrough. I’m not convinced that CPUs must be any faster than they are now. I’m thinking that the missing ingredient consists of a blend of techniques, algorithms, data mining, etc. More speed and parallel processing could certainly help.

However, to your point: when we decide to ‘port’ that awesome chat bot program into a physical robot with sensory inputs, the bot will just stand there.  The ‘pattern recognition’ system must be designed and developed so the bot can interact and react as quickly as a human, in terms of vision, sound, and tactile senses.  I should include ‘smell’ too; that technology is probably pretty far out there.

So, I predict a chat bot breakthrough long before the pattern-recognition breakthrough for smell, sound, and vision occurs.

Of course, Erwin, that’s just the output of an old guy with 12G of neuron processing… with most of them misfiring 90% of the time. =)

Excellent thread and discussion!!

Regards,
Chuck

 

 
  [ # 29 ]
Chuck Bolin - Aug 19, 2010:

Erwin,
I would look at the external behavior of intelligence. I don’t believe the next breakthrough in AI will come from natural language research, but from pattern recognition.

We’re all going to attack Erwin’s quote… lol. Just kidding, Erwin. :)

Actually, I would like to find out what you really meant though.

Do you mean:

(a) that natural language research won’t bring about a significant breakthrough in AI at all,

or

(b) it will, but it won’t be the next significant breakthrough; maybe the one after that, but not the next one?

I think a system that can converse in NL is probably one of the highest forms of strong AI.

I mean, a machine that, using statistical analysis, finds out:

red ball, bin 1
red ball, bin 1
blue ball, bin 2
blue ball, bin 2
red ball, bin 1

where does red ball go?
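That toy learner amounts to a couple of lines of counting; a minimal sketch (the data and names are just the example above, not any real system):

```python
# Count colour/bin co-occurrences, then predict the majority bin per colour.
from collections import Counter, defaultdict

observations = [
    ("red ball", "bin 1"),
    ("red ball", "bin 1"),
    ("blue ball", "bin 2"),
    ("blue ball", "bin 2"),
    ("red ball", "bin 1"),
]

counts = defaultdict(Counter)
for item, bin_ in observations:
    counts[item][bin_] += 1

def predict(item):
    # Most frequently observed bin for this item.
    return counts[item].most_common(1)[0][0]

print(predict("red ball"))  # -> bin 1
```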

that’s not much compared to NL, where free-form ideas and abstract thought are involved, with very complex ambiguity: following the statements in a conversation, understanding what they mean as a whole, and figuring out how to utilize existing knowledge in order to form a response.

I therefore strongly disagree; NL research should be top priority.

We already have systems that, using statistical analysis and other types of algorithms, can predict things.  No one really considers that intelligence, though; I know I don’t.  It’s just another form of conventional ‘number crunching’, really.  The path to strong AI is semantic computing & NLP. :)

Now, pattern matching, fuzzy logic, prediction… those things are well suited, and pretty much a requirement, for visual image and speech recognition.

But speech recognition and visual image recognition are the start, not the end… they will provide the input.  Higher reasoning algorithms then need to be developed that take that information in.

When the patterns of audio waveforms have been analyzed and the words extracted, then the higher logic of NLP comes into play.

 

 
  [ # 30 ]

Well, I hate to add to Erwin’s woes, but I, too, must take minor exception with that quote.

Now, I may be wrong, but it’s my considered opinion that NLP IS pattern recognition, though it processes the recognized patterns in a far different manner than “standard” pattern recognition does.

At the very core of things, NLP recognizes much smaller patterns, consisting of individual words rather than phrases. The difference between NLP and “phrase recognition” (a much more suitable description than “pattern recognition”, IMHO) is that phrase recognition simply finds the best match among all stored phrase patterns and returns the response associated with that “best match” phrase, sometimes building a response from bits and pieces of “best match” phrases, depending on the complexity of the algorithms used.

NLP, on the other hand, takes each individual pattern of words entered, derives a “best match” meaning for each word, arranges them, if necessary, into something it understands, and then builds a response, using the known meanings of all of the words stored in its database/word lists as a guide. Obviously NLP is vastly more complex than phrase recognition, at least in principle, but at its very core it still shares a type of pattern recognition.
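The contrast drawn above can be sketched side by side. Both halves are toy illustrations under invented data (the stored phrases, word-meaning table, and function names are all hypothetical), not any particular bot’s design:

```python
# Phrase recognition: pick the single stored phrase closest to the input.
# Word-level lookup: derive a "best match" meaning for each word instead.
import difflib

stored_phrases = {
    "what is your name": "My name is Bot.",
    "how are you today": "I'm fine, thanks.",
}

def phrase_recognition(user_input):
    # Best match over whole stored phrases; return its canned response.
    match = difflib.get_close_matches(
        user_input.lower(), list(stored_phrases), n=1, cutoff=0.0)
    return stored_phrases[match[0]]

word_meanings = {"how": "manner-query", "are": "state-verb",
                 "you": "listener", "today": "time:now"}

def word_level_parse(user_input):
    # Per-word meanings, in input order; a real NLP system would go on
    # to arrange and reason over these.
    return [word_meanings.get(w, "unknown") for w in user_input.lower().split()]

print(phrase_recognition("how are you"))  # -> I'm fine, thanks.
print(word_level_parse("how are you"))    # -> ['manner-query', 'state-verb', 'listener']
```

The phrase recognizer stops at the canned response; the word-level parse is only the first step of the vastly longer pipeline described above.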

I know that you all know this stuff already, but it lays out the reasoning behind my statements, so… :)

 
