ASTRID: Analysing Symbolic Tags Relates to Intelligent Dynamics
 
 

OK, time to give my project its own topic :)

What’s in a name:

The name of my project reflects the core idea that is driving my research: the human mind uses what I call ‘symbolic tagging’. This is not to be confused with ‘part-of-speech tagging’ (in AI research), as that is purely lexically based. In my ‘symbolic-tag’ model the tags are ‘loaded’ with other things than (or besides) lexical values: context, perspective, point-in-time, feeling (sensors), interaction (conversation), and probably more when I get to it.
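
To make that a bit more concrete, here is a minimal sketch (in Python) of what such a ‘loaded’ tag might look like as a data record. Every name and field below is an illustrative assumption on my part, not a fixed part of the model:

```python
# Hypothetical sketch of a 'symbolic tag'; all names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SymbolicTag:
    symbol: str                        # the tagged concept, e.g. "warm"
    lexical: str | None = None         # optional lexical value (word form)
    context: list[str] = field(default_factory=list)   # surrounding concepts
    perspective: str = "self"          # whose point of view the tag encodes
    moment: datetime = field(default_factory=datetime.now)   # point-in-time
    feeling: dict[str, float] = field(default_factory=dict)  # sensor readings
    interaction: str | None = None     # conversation turn the tag came from

# Example: tagging the concept "warm" from a temperature sensor during a chat
tag = SymbolicTag(
    symbol="warm",
    context=["room", "afternoon"],
    feeling={"temperature": 0.8},
    interaction="user: it feels cozy in here",
)
```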

So this is actually my thesis, and I’m hoping to write a book about it somewhere in the not too distant future.

Some basic assumptions I’m working from:

1. The model will have minimal logic in hard code. The only stuff that will be coded into algorithms is the handling of the ‘core-concepts’. I described this in other topics before; these core-concepts are like the ‘instinct’ of the AI.

2. The ‘logic’ that will be operating the AI-mind will be constructed from information in the data-model. So the data-model being able to describe logic (instead of hard-coding such logic into static algorithms) is paramount to creating an evolving AI. The only way for an AI to have ‘evolving understanding’ is if it’s capable of building its own rules (like we humans do); see the sketch after this list.

3. I’m working towards a ‘conversational’ AI. Not because I want a ‘chatbot’ (although there is ‘fun’ in that) but because conversation is a proven psychological method for teaching AND testing a ‘conscious entity’. In addition to the conversational interface I’m adding ‘sensors’ to the model to give the AI more ‘concepts’ to relate to, creating a richer conversational environment that way. The sensors will add to the vocabulary for interaction with the AI and will be linked to the core-concepts.

4. Because of the previous points, my current ‘testing environment’ is not a programming language but a graphic modelling tool; I’m building a mind-map (using XMind) to describe the data-model that will be the driving force in the AI that I hope to develop later, based on this. I also use another schematics application (yEd) for developing process descriptions for the core-concepts reasoning engine. NOTE: both applications are Open Source and free to download and use. Just ask and I’ll post links, or google for them.
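
To illustrate point 2, here is a toy sketch of ‘logic as data’: the only hard-coded part is a tiny interpreter, and the rules themselves are plain data records that the AI could extend at runtime. The rule format and the engine are purely hypothetical:

```python
# Illustrative sketch only: rules live in the data-model as plain data,
# and a minimal hard-coded engine interprets them.

# A rule is data: a condition over working memory plus a concept to assert.
rules = [
    {"if": {"sensor": "temperature", "above": 0.7}, "then": "warm"},
    {"if": {"concept": "warm"}, "then": "comfortable"},
]

def run_engine(memory: dict, rules: list) -> dict:
    """Apply data-defined rules until nothing new can be derived."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            cond = rule["if"]
            if "sensor" in cond:
                value = memory["sensors"].get(cond["sensor"], 0.0)
                fired = value > cond["above"]
            else:
                fired = cond["concept"] in memory["concepts"]
            if fired and rule["then"] not in memory["concepts"]:
                memory["concepts"].add(rule["then"])
                changed = True
    return memory

memory = {"sensors": {"temperature": 0.8}, "concepts": set()}
print(run_engine(memory, rules)["concepts"])  # {'warm', 'comfortable'}

# Because rules are data, new logic can be added without touching the engine:
rules.append({"if": {"concept": "comfortable"}, "then": "stay"})
```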

I’ll leave it here for now; more to come as I get further into my research.

 

 
  [ # 1 ]

I just found a paper that describes ‘Emergent Semantics’; although it is not specifically aimed at AI research, it describes some important basics that I’m using in my model.

Emergent semantics refers to a set of principles and techniques analyzing the evolution of decentralized semantic structures in large scale distributed information systems. Emergent semantics approaches model the semantics of a distributed system as an ensemble of relationships between syntactic structures.
They consider both the representation of semantics and the discovery of the proper interpretation of symbols as the result of a self-organizing process performed by distributed agents exchanging symbols and having utilities dependent on the proper interpretation of the symbols. This is a complex systems perspective on the problem of dealing with semantics.

The paper is here: http://people.csail.mit.edu/pcm/papers/EmergentSemantics.pdf
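
A toy illustration of that self-organizing process is the ‘naming game’ (after Luc Steels): agents start with private symbols for an object and converge on a shared one purely through pairwise exchanges. Everything in this sketch is made up for illustration:

```python
# Minimal 'naming game': a population with no shared vocabulary ends up
# agreeing on a single word through local exchanges only.
import random

random.seed(1)
# each agent starts with its own private word for the object "ball"
agents = [{"ball": f"w{random.randint(0, 999)}"} for _ in range(10)]

def play_round() -> None:
    speaker, hearer = random.sample(agents, 2)
    if hearer["ball"] != speaker["ball"]:
        hearer["ball"] = speaker["ball"]  # on failure, hearer aligns

for _ in range(300):
    play_round()

# after enough exchanges the population usually shares one word
print({agent["ball"] for agent in agents})
```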

 

 

 
  [ # 2 ]

Another thing I want to implement in my model is the handling of ‘Analogy’ (http://en.wikipedia.org/wiki/Analogy). This seems pretty important to me, as analogies are used in teaching to broaden the perception and understanding of concepts. It will also give the AI more flexibility in interpreting input. I first thought that synonyms were most important, and they are still important, but analogies go much further.
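
As a first stab at what ‘handling analogy’ could mean in a data-model, here is a hypothetical sketch that treats an analogy as a shared relation between concept pairs, so ‘a is to b as c is to ?’ becomes a relation lookup. All names and relations are made up; synonymy would just be one relation among many here, which is why analogy goes further:

```python
# Sketch: analogies as shared relations between concept pairs.
relations = {
    ("puppy", "dog"): "young_of",
    ("kitten", "cat"): "young_of",
    ("wheel", "car"): "part_of",
    ("wing", "plane"): "part_of",
}

def solve_analogy(a: str, b: str, c: str) -> str | None:
    """puppy : dog :: kitten : ?  ->  cat"""
    rel = relations.get((a, b))
    if rel is None:
        return None
    for (x, y), r in relations.items():
        if r == rel and x == c:
            return y
    return None

print(solve_analogy("puppy", "dog", "kitten"))  # cat
print(solve_analogy("wheel", "car", "wing"))    # plane
```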

 

 
  [ # 3 ]
Hans Peter Willems - Feb 13, 2011:

Another thing I want to implement into my model is the handling of ‘Analogy’

I also place a high importance on analogy in my bot.

 

 
  [ # 4 ]

Are you planning to make something similar to this guy’s mind map then?

 

 
  [ # 5 ]
Jan Bogaerts - Feb 13, 2011:

Are you planning to make something similar like this guy’s mind map then?

Wow, nice find Jan. Thanks for that.

My current mindmap is very different from that schematic, but equally complex right now. However, I’ve already started to extract a simplified version that will eventually be the design for the software version. Btw, your link is more of a ‘concept-map’ than a ‘mind-map’, and I use concept-mapping as well (that’s what I also use yEd for).

Nevertheless, I downloaded it. There’s a high probability that there are some things in there that I can use to shape my ideas further.

 

 
  [ # 6 ]

Hi Hans,

James Allen’s research group at the University of Rochester was exploring this notion about ten years ago, so you are in good company. One of his students published a very detailed thesis on this topic, which not only provides a framework for this kind of language processing but also algorithms for discovering the properties and relationships needed to use it.

“A Practical Semantic Representation For Natural Language Parsing”
by Myroslava O. Dzikovska

http://www.cs.rochester.edu/u/myros/thesis.pdf

It makes for fascinating reading and given the tremendous success that has been demonstrated by this group using these ideas, I would say that you are definitely on the right track.

Cheers,
Andrew

 

 
  [ # 7 ]

Andrew, thanks for the heads-up and the linked document. More knowledge to digest for me :)

I skimmed the document quickly and there is definitely a lot in there that I need to take into account and/or can use in my own research. However, it also strengthens my feeling that I’m on to something new. I haven’t found any previous research yet that points to ‘information tagging’ as a means to build ‘experience’ (as opposed to ‘knowledge’). But I still have a whole lot of information to work through before I can even begin writing my own thesis.

So currently I’m still in discovery-mode. If anyone else has links to documents, articles and the like in relation to my ideas, then please post them here. Any input is very much appreciated.

 

 
  [ # 8 ]

I found a short video of a presentation that comes very close, on several points, to the direction that I’m taking:

http://vimeo.com/7977427

 

 
  [ # 9 ]

@Andrew: Did they ever build the parser, or is it just a theory?

 

 
  [ # 10 ]
Andrew Smith - Feb 14, 2011:

Hi Hans,
“A Practical Semantic Representation For Natural Language Parsing”
by Myroslava O. Dzikovska

http://www.cs.rochester.edu/u/myros/thesis.pdf

It makes for fascinating reading and given the tremendous success that has been demonstrated by this group using these ideas, I would say that you are definitely on the right track.

Cheers,
Andrew

Agreed. Incredible document! I read about half of it last night and will finish this evening. My implementation has much in common with this. I’m trying to stay away as much as possible from selectional restrictions though, even though I know they increase performance, especially when there is a permutation explosion of parse trees. Experimentation will show how much of it I use and how much I can ‘get away with’. I think there will be a balance of exhaustive search and selectional restrictions, and that balance will be a direct result of some bench testing.

I think how well an algorithm is coded, as well as things like taking advantage of parallel processing and especially the space/time trade-off (lower CPU cost paid for by higher memory usage), can help. I myself intend on pushing the absolute limits of exhaustive search (so that selectional restrictions don’t end up limiting its parsing abilities). When I reach that limit, then I will look at selectional restrictions, heuristics, statistics, etc.

Reading a document is one thing, but I think the only way to get an idea of how well it performs is to use it.  Like Jan asked,  have they coded an implementation of this yet (the doc is dated 2004, so I’m assuming yes)?  If so, is it open source and has anyone here used it? 

I’m very interested in knowing the amount of effort it takes to create and maintain the selectional restrictions logic—and more importantly how much flexibility is lost when limiting the parsing scope like that.

Now a bot that autonomously learns to figure out its own selectional restrictions… boy we’d have it made, especially if it can learn that via NL conversations!
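
To pin down what I mean by a selectional restriction acting as a hard filter, here’s a throwaway sketch (vocabulary and parse candidates all made up): the restriction kills a grammatical but nonsensical reading before it can feed the permutation explosion:

```python
# A selectional restriction as a hard filter on parse candidates:
# 'eat' requires an edible object, so only one reading survives.
EDIBLE = {"apple", "sandwich"}

candidates = [
    {"verb": "eat", "object": "apple"},
    {"verb": "eat", "object": "idea"},  # grammatical, but violates 'eat'
]

def satisfies_restrictions(parse: dict) -> bool:
    if parse["verb"] == "eat":
        return parse["object"] in EDIBLE
    return True

surviving = [p for p in candidates if satisfies_restrictions(p)]
print(surviving)  # [{'verb': 'eat', 'object': 'apple'}]
```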

 

 
  [ # 11 ]

Damn… the edit button has expired ... I was just going to add to the above that it makes no sense to NOT take advantage of the space/time trade-off (http://en.wikipedia.org/wiki/Space-time_tradeoff) in a time when a machine with 4 GB of RAM costs 300 dollars, and for that matter, systems with two CPUs of 4 cores each, for a total of 8 cores. With computing power being so cheap, I think it makes sense to push searching as much as possible, then look at logic to restrict the searches.
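
For illustration, the classic way to spend that cheap RAM is memoization: cache sub-results so repeated work costs memory instead of CPU. A minimal sketch; the cost function is a made-up stand-in for an expensive sub-parse:

```python
# Space/time trade-off via memoization: each sub-span is computed once
# and cached, turning an exponential recursion into a polynomial one.
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: spend RAM to save CPU
def parse_cost(i: int, j: int) -> int:
    """Stand-in for an expensive sub-parse over the span [i, j)."""
    if j - i <= 1:
        return 1
    # without the cache this recursion blows up combinatorially
    return min(parse_cost(i, k) + parse_cost(k, j) for k in range(i + 1, j))

print(parse_cost(0, 20))  # instant with the cache; try removing @lru_cache
```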

 

 
  [ # 12 ]

Hans, I’m hijacking again, right? Sorry. Dave, perhaps move this to a new thread called TRIPS.

 

 
  [ # 13 ]

Victor Shulist - Feb 14, 2011:

I’m very interested in knowing the amount of effort it takes to create and maintain the selectional restrictions logic—and more importantly how much flexibility is lost when limiting the parsing scope like that.

I did this for the scanner and the grammar parsers, in the form of filters. It’s a huge pain in the xxx to get right at first, but it pays hugely. Since I’ve got the filters defined locally on the data objects, maintenance has been OK; filters tend to be of the same structure, with minor variations, so it’s easy to understand once you know the principle.

It turns out you don’t really lose flexibility: many paths can be deemed invalid based on some look-backs and look-aheads. Also, it allows you to do tricks which are otherwise impossible using regular parsing techniques.

There are, by the way, multiple ways that you can use this restrictive selection: as a hard filter (allowed/not allowed), or by using weights (some situations are less likely, others more likely). Once you have a result with a weight that’s bigger than the results still being calculated, you can stop calculating those. It can speed things up incredibly.
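
Roughly, the weighted variant with early stopping looks like this (a simple branch-and-bound; the scores, labels and the expand step are all made up for illustration):

```python
# Weighted selection with early stopping: each partial parse carries an
# optimistic upper bound on its final weight; once a finished parse beats
# every remaining bound, the rest are never expanded.
import heapq

def expand(label):
    """Stand-in for one parsing step: returns (bound, complete?, label)."""
    children = {
        "partial A": [(0.7, True, "parse A1"), (0.4, False, "partial A2")],
        "partial C": [(0.5, True, "parse C1")],
    }
    return children.get(label, [])

# heapq is a min-heap, so scores are negated to pop the best bound first
frontier = [(-0.9, False, "partial A"), (-0.8, True, "parse B"),
            (-0.6, False, "partial C")]
heapq.heapify(frontier)

best = None
while frontier:
    neg, complete, label = heapq.heappop(frontier)
    score = -neg
    if best is not None and score <= best[0]:
        break  # nothing left can beat the finished parse: stop early
    if complete:
        best = (score, label)
    else:
        for bound, done, child in expand(label):
            heapq.heappush(frontier, (-bound, done, child))

print(best)  # (0.8, 'parse B'); 'partial C' is pruned, never expanded
```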

 

 
  [ # 14 ]
Jan Bogaerts - Feb 14, 2011:

@Andrew: Did they ever built the parser, or is it just a theory?

Jan, I think he already answered that:

Andrew Smith - Feb 14, 2011:

It makes for fascinating reading and given the tremendous success that has been demonstrated by this group using these ideas, I would say that you are definitely on the right track.

 

 
  [ # 15 ]
Victor Shulist - Feb 14, 2011:

Hans - I’m hijacking again right?  sorry, Dave perhaps move this to new thread called TRIPS

Affirmative.

 
