
AI Philosophy: When does an AI ‘Understand’?
 
 
  [ # 31 ]

Hans,

interesting!  I will have the equivalent of what you refer to as “core concepts”.  But in my case core concepts are mapped to semantic trees.  (So [user input] becomes [1 or more parse trees] becomes [semantic tree] becomes [concept].)

Then concepts are sliced and diced, compared, the hierarchy of their relationships figured out, etc., and then a conclusion may be reached, which is basically a semantic tree, which ends up being converted to an output parse tree and finally a string of text returned to the user.

The system will be able to connect the semantic trees of facts you entered before with other facts, and with your question (or statement).  This connection is made via the concepts; the concepts are the glue that connect the semantic trees together.  Semantic trees are the result of applying semantic rules about the world to the chosen grammar tree.
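To sketch that flow in toy Python (every name and structure here is invented for illustration; my real implementation is nothing this simple):

```python
# Hypothetical sketch of the pipeline described above: user input -> parse
# trees -> semantic tree -> concept. All names are illustrative.

def parse(user_input):
    """Produce one or more candidate parse trees (stubbed as nested tuples)."""
    return [("S", ("NP", "snow"), ("VP", "is", ("ADJ", "white")))]

def to_semantic_tree(parse_tree):
    """Apply semantic rules about the world to a chosen grammar tree."""
    _, (_, subj), (_, verb, (_, attr)) = parse_tree
    return {"subject": subj, "relation": verb, "attribute": attr}

def to_concept(semantic_tree):
    """Concepts are the glue that connect semantic trees together."""
    return (semantic_tree["subject"],
            semantic_tree["relation"],
            semantic_tree["attribute"])

trees = parse("snow is white")
sem = to_semantic_tree(trees[0])
print(to_concept(sem))  # ('snow', 'is', 'white')
```

The real work, of course, is in the semantic rules; this just shows the shape of the hand-offs between stages.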

You will undoubtedly be adding the concept of simultaneity to your core concepts?

 

 
  [ # 32 ]
Victor Shulist - Feb 11, 2011:

The system will be able to connect the semantic trees of facts you entered before with other facts, and with your question (or statement).  This connection is made via the concepts; the concepts are the glue that connect the semantic trees together.  Semantic trees are the result of applying semantic rules about the world to the chosen grammar tree.

I think our goals are pretty close to each other in this respect, although our implementations differ like night and day. We’ll see which direction gives the best result, or maybe more ‘dramatic’: which works and which won’t.

Victor Shulist - Feb 11, 2011:

You will undoubtedly be adding the concept of simultaneity to your core concepts?

I don’t know what you mean by that. I looked at http://en.wikipedia.org/wiki/Simultaneity but didn’t see anything there that might relate to building a mind-model. Maybe you can elaborate.

 

 
  [ # 33 ]
Hans Peter Willems - Feb 11, 2011:

I think our goals are pretty close to each other in this respect, although our implementations differ like night and day. We’ll see which direction gives the best result, or maybe more ‘dramatic’: which works and which won’t.

Exactly, “stay tuned” as they say!

Hans Peter Willems - Feb 11, 2011:
Victor Shulist - Feb 11, 2011:

You will undoubtedly be adding the concept of simultaneity to your core concepts?

I don’t know what you mean by that. I looked at http://en.wikipedia.org/wiki/Simultaneity but didn’t see anything there that might relate to building a mind-model. Maybe you can elaborate.

By that I mean: since you are including the idea of time, and undoubtedly the idea of an event, then the idea of more than one event occurring at the same time would be one of your “core concepts”.  Or perhaps this is a combination of core concepts?

You gave me an idea for another thread, a poll actually.

 

 
  [ # 34 ]
Hans Peter Willems - Feb 11, 2011:

‘Time’ is actually the first ‘core-concept’ in my AI-mind model; a computer has a built-in ‘sense of time’. A human has a ‘biological clock’ that gives us a sense of time; this is one thing we don’t have to create in software. A computer already has its own clock, which is actually more accurate than a human’s.

The next step for me is to map this concept of time to other time-related concepts, like morning, evening, sooner, later, yesterday, tomorrow, etc.

So I agree this might be the first step towards strong AI. And in my case this is exactly where I’m starting to build my model.

I agree that for an AI, time is a fundamental concept. I would submit that “Math” is also a core concept.

Hans Peter Willems - Feb 11, 2011:
Merlin - Feb 11, 2011:

I am hoping that in the near term, my bot will be able to “understand” in the same way that a spreadsheet knows math or a GPS system knows location and directions.

I’m not sure what you are trying to say here, because those examples are the exact opposite of ‘understanding’ and perfect examples of pre-programmed or ‘canned’ behaviour.

My premise is that “understanding” is built like a pyramid. It starts with the ability to recognize and manipulate symbols. For a computer AI, that foundation is clock ticks (Time) and Math (add, subtract, multiply, divide, store & retrieve numbers). Adding new domains of knowledge requires more units of measure, but the basic manipulation is the same. Making those base concepts bridge to natural language is a first step for an AI. The ability to recognize when a user is asking a math or time question, and to respond appropriately, would show understanding.

I would say that a spreadsheet knows what a math problem is and how to solve it (with a limited vocabulary). A GPS knows your location and how to give you directions. Do you believe it understands less than if you stopped at a gas station and asked a human for directions?
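A toy sketch of that first step of the pyramid, recognizing a math or time question and responding appropriately (purely illustrative, not actual Skynet-AI code; the patterns are deliberately crude):

```python
# Illustrative only: classify an utterance as a math question, a time
# question, or unknown, and respond to the first two.
import operator
import re
from datetime import datetime

def classify(utterance):
    """Crude pattern-based classifier."""
    if re.search(r"\d+\s*[-+*/]\s*\d+", utterance):
        return "math"
    if re.search(r"\btime\b|\bdate\b|\btoday\b", utterance, re.IGNORECASE):
        return "time"
    return "unknown"

def respond(utterance):
    kind = classify(utterance)
    if kind == "math":
        m = re.search(r"(\d+)\s*([-+*/])\s*(\d+)", utterance)
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        ops = {"+": operator.add, "-": operator.sub,
               "*": operator.mul, "/": operator.truediv}
        return str(ops[op](a, b))
    if kind == "time":
        return datetime.now().strftime("%H:%M")
    return "I don't understand."

print(respond("what is 2 + 3 ?"))  # 5
```

Whether producing “5” here counts as understanding is, of course, exactly the question we are debating.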

 

 
  [ # 35 ]

A GPS knows your location and how to give you directions. Do you believe it understands less than if you stopped at a gas station and asked a human for directions?

Excellent question.  It doesn’t know what part of the country or world you are from, so a GPS can’t tailor its answer to make it more understandable.  People call land formations different things in different parts of the world.

 

 
  [ # 36 ]

Ignore that last post.

 

 
  [ # 37 ]
Merlin - Feb 12, 2011:

I would say that a spreadsheet knows what a math problem is and how to solve it (with a limited vocabulary).

I think the difference in our view here comes from our different perspectives:

Premise: knowledge = to know

You: knowledge = information (a spreadsheet has ‘information’ on how to process math), therefore a spreadsheet ‘knows’ how to use math.

Me: knowledge = comprehension/understanding (a spreadsheet doesn’t ‘comprehend’ math at all), therefore a spreadsheet doesn’t ‘know’ how to use math.

From my perspective a spreadsheet doesn’t ‘know’ anything more about math than a bicycle ‘knows’ about transportation or moving (but that is what it does, like a spreadsheet does math), or a clock ‘knows’ about time. They are all just machines consisting of encoded rules that are executed.

 

 
  [ # 38 ]

Thinking some more about this…

In knowledge-management we work with the concept of ‘tacit knowledge’: information that is pretty hard to capture in strict rules or models but is paramount in actually ‘understanding’ things. I think in AI this might be the missing link, and thinking about tacit knowledge I realise that the model I’m designing is very much tailored to capturing ‘artificial tacit knowledge’. I’ve stated several times here on the board that ‘knowledge comes from experience’, and this is very true for tacit knowledge in humans.

So thanks for your perspective Merlin, as it gave me another pointer that I’m moving in the right direction with my model.

 

 
  [ # 39 ]
Hans Peter Willems - Feb 12, 2011:

Thinking some more about this…

In knowledge-management we work with the concept of ‘tacit knowledge’, information that is pretty hard to capture in strict rules or models but is paramount in actually ‘understanding’ things.

Could you give an example of tacit knowledge—I don’t think I’ve seen that before.

 

 
  [ # 40 ]

http://en.wikipedia.org/wiki/Tacit_knowledge gives a good example—riding a bike.

The explicit knowledge is the concept of balance; the tacit knowledge is the skill: the mind reacting to your movement under gravity, to the left and right, and making the required adjustments, very delicately, by sending the required signals to your muscles to maintain balance.

so,

task: bike riding

explicit knowledge: 

If (your weight is leaning too much to the left) Then (shift weight more to the right)
If (your weight is leaning too much to the right) Then (shift weight more to the left)

Pretty simple explicit rules.  But it is pretty difficult to communicate them in such a way that your brain has the skill to react: feeling weight changes and sending signals to your muscles at just the right time, in just the right order, with just the right amount of force.

To me, this is just very low-level coding: very complex, very *flexible*, and adaptable to change.
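Those two rules really amount to a feedback loop.  As a toy illustration (the gain and numbers are invented; real balance involves far richer sensing and timing than one tuned constant):

```python
# Illustrative only: the two explicit rules above, expressed as a tiny
# proportional feedback loop. The "tacit" part of real balance -- timing,
# force, anticipation -- is compressed here into a single tuned gain.

def balance(lean, gain=0.5, steps=20):
    """Repeatedly shift weight opposite to the current lean.

    A negative lean means leaning left, positive means leaning right;
    each step applies a correction proportional to the error.
    """
    for _ in range(steps):
        lean -= gain * lean  # shift weight against the lean
    return lean

print(abs(balance(10.0)) < 0.01)  # True: the lean decays toward upright
```

The explicit rules fit on two lines; the tacit part is knowing the gain, and applying it fast enough not to fall over.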

 

 
  [ # 41 ]
Hans Peter Willems - Feb 12, 2011:

Thinking some more about this…

In knowledge-management we work with the concept of ‘tacit knowledge’: information that is pretty hard to capture in strict rules or models but is paramount in actually ‘understanding’ things. I think in AI this might be the missing link, and thinking about tacit knowledge I realise that the model I’m designing is very much tailored to capturing ‘artificial tacit knowledge’. I’ve stated several times here on the board that ‘knowledge comes from experience’, and this is very true for tacit knowledge in humans.

So thanks for your perspective Merlin, as it gave me another pointer that I’m moving in the right direction with my model.

You are welcome, Hans. This thread has also helped me solidify some of my own thoughts.
My goal in Skynet-AI is to continue to convert human knowledge into explicit knowledge, which forms the basis of the bot’s “understanding”.
(Would this be the same as your “artificial tacit knowledge”?)

Some view tacit and explicit knowledge as completely different things. Others view it as a continuum. I believe in knowledge conversion (the ability to convert back and forth between the tacit and the explicit).


 

 
  [ # 42 ]
Toby Graves - Feb 12, 2011:

Could you give an example of tacit knowledge—I don’t think I’ve seen that before.

Victor gave a great rundown of the basics of tacit knowledge, I couldn’t have given a better introduction to it :D

Merlin - Feb 12, 2011:

Some view tacit and explicit knowledge as completely different things. Others view it as a continuum. I believe in knowledge conversion (the ability to convert back and forth between the tacit and the explicit).

I’m also not completely convinced that explicit knowledge and tacit knowledge are strictly separated. In knowledge management this is an issue that has been debated for a long time. However, developing my AI-mind model gives me some interesting insights into this as I get deeper into the specifics of my model; it’s becoming clear to me that current knowledge-management implementations are seriously flawed in trying to handle ‘tacit knowledge’.

So I think there are overlaps between explicit and tacit, but I don’t think everything is convertible between the two knowledge types. I rather think that certain knowledge is maintained on the ‘border’ of the two types; knowledge that has both explicit and tacit markers. The AI-mind model should cater for this as well.

 

 
  [ # 43 ]

Do you think, Hans, that perhaps tacit knowledge may only apply to humans?  I say this because it is a rather subjective definition.  I mean, one definition I saw had words to the effect of “knowledge that is difficult to communicate to others”.  For me, a bot can be given knowledge in two ways: facts, like in a database or what have you, and then executable code.  That code can be high-level, or right down to assembly language or machine code for very, very precise timing.

I think it is actually easier to give a computer more detailed instructions than a human!  Perhaps after 20 years in software development, I’m biased. :)

 

 
  [ # 44 ]
Victor Shulist - Feb 12, 2011:

Do you think, Hans, that perhaps tacit knowledge may only apply to humans?

To the contrary, I’m convinced that to attain ‘strong AI’ we need to implement ‘something’ that emulates human tacit knowledge. This is actually what I’m working on.

Victor Shulist - Feb 12, 2011:

For me, a bot can be given knowledge in two ways, Facts, like in a database or what have you, and then executable code.

We are getting to the core now: in my model the database (or data-model) doesn’t ONLY contain ‘facts’. My data-model is aimed at storing much more than just facts, and especially at storing ‘experience’ in relation to ‘facts’ and ‘contexts’. The challenge is to design a unified data-model that can handle any description of ‘experience’ in relation to facts and contexts.
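To make that concrete, a minimal sketch of such a record (every field name here is invented for illustration; my actual data-model is considerably richer than this):

```python
# Hypothetical sketch: a fact that carries its own contexts and the
# experiences that support it, rather than standing alone as a bare triple.
from dataclasses import dataclass, field

@dataclass
class Experience:
    source: str        # how the knowledge was acquired ("told", "observed", ...)
    confidence: float  # how strongly the experience supports the fact

@dataclass
class Fact:
    subject: str
    relation: str
    value: str
    contexts: list = field(default_factory=list)
    experiences: list = field(default_factory=list)

f = Fact("stove", "is", "hot", contexts=["kitchen"])
f.experiences.append(Experience(source="observed", confidence=0.9))
print(f.experiences[0].source)  # observed
```

The point of the sketch is only the shape: ‘experience’ and ‘context’ live alongside the fact, not in a separate silo.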

 

 
  [ # 45 ]
Victor Shulist - Feb 12, 2011:

Do you think, Hans, that perhaps tacit knowledge may only apply to humans?

Your reasoning suggests that it might only apply to humans. While telling humans exactly how to maneuver themselves to perform an action does not suffice for them to be able to do it, this is because of our limited ability to translate “frontal lobe” knowledge into cerebellum-controlled actions. Only when we turn that conscious knowledge into “muscle memory”, or tacit knowledge, can we perform the actions.

We are getting to the core now: in my model the database (or data-model) doesn’t ONLY contain ‘facts’. My data-model is aimed at storing much more than just facts, and especially at storing ‘experience’ in relation to ‘facts’ and ‘contexts’. The challenge is to design a unified data-model that can handle any description of ‘experience’ in relation to facts and contexts.

That is the big challenge. Handling “experiences” is what I hope to accomplish in my “story” level of memory storage. It incorporates knowledge-base facts along with temporal, spatial, etc. elements of an experience. But I have not settled on the exact layout, so good luck to us all.
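Just to make the idea tangible, one speculative layout (every field here is invented on the spot; as I said, the real layout is unsettled):

```python
# Speculative sketch only: a "story" record bundling knowledge-base facts
# with the temporal and spatial elements of an experience.
from datetime import datetime

story = {
    "facts": ["Victor asked about tacit knowledge"],  # knowledge-base facts
    "when": datetime(2011, 2, 12, 14, 30),            # temporal element
    "where": "Chatbots.org forum",                    # spatial element
    "participants": ["Victor", "Hans", "Merlin"],
}

# A later query can recall the episode by any of its elements:
stories = [story]
matches = [s for s in stories if "Hans" in s["participants"]]
print(len(matches))  # 1
```

The hard part is not storing such records but deciding which elements of an experience are worth capturing in the first place.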

 
