Posted: May 28, 2012 | [ # 16 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.meta-guide.com/home/bibliography/google-scholar/concept-mapping-dialog-systems
Concept Mapping & Dialog Systems
Uchiha, I’ve decided this is an intriguing premise…. Could chatbots read mind maps, and if so, would it be useful?
I didn’t find a specification for the MM file format, but I know it’s an XML variant.
For instance, AIML is also an XML variant, and we know chatbots are fine with AIML.
It’s relatively straightforward to translate between XML variants with XSLT.
So, if we had an MM interpreter for AIML chatbots, what would it tell us?
Responses are coded into AIML, but mind maps usually contain only concept relations.
Mind maps are basically tree structures, and lots of NLP programs traverse tree structures.
So, how would specific AIML responses actually attach to the concepts in the mind map tree?
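For what it’s worth, here’s a rough sketch of what such an MM-to-AIML conversion might look like. It assumes a FreeMind-style .mm file, i.e. XML with nested <node TEXT="..."> elements, and the “parent text becomes the pattern, child texts become the template” rule is just one made-up convention, not something defined by either format:
```python
# Hypothetical sketch: turn a FreeMind-style .mm tree into AIML categories.
# Assumes the .mm file is XML with nested <node TEXT="..."> elements, e.g.
#   <map><node TEXT="Animal"><node TEXT="Dog"/><node TEXT="Cat"/></node></map>
# The pattern/template mapping below is just one illustrative convention.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

def mm_to_aiml(mm_path):
    tree = ET.parse(mm_path)
    categories = []
    for node in tree.iter("node"):                      # walk every node in the map
        text = node.get("TEXT", "")
        children = [c.get("TEXT", "") for c in node.findall("node")]
        if text and children:                           # only nodes that have sub-concepts
            pattern = escape(text.upper())              # AIML patterns are conventionally uppercase
            template = escape(text + " is related to: " + ", ".join(children))
            categories.append(
                "<category><pattern>%s</pattern>"
                "<template>%s</template></category>" % (pattern, template)
            )
    return '<?xml version="1.0"?>\n<aiml version="1.0">\n%s\n</aiml>' % "\n".join(categories)

if __name__ == "__main__":
    print(mm_to_aiml("example.mm"))   # "example.mm" is a placeholder filename
```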
Posted: May 30, 2012 | [ # 17 ]
Member
Total posts: 16
Joined: Mar 10, 2012
I need a database for my C++ associations; LibXL isn’t very efficient…
And maybe UML for mind mapping?
http://www.youtube.com/watch?v=RMuMz5hQMf4&feature=fvwrel
Posted: May 30, 2012 | [ # 18 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Instead of ‘mind mapping’ I would take a look at ‘concept maps’. First of all they are designed to draw ‘world views’ (i.e. a personal representation of perceived reality), and secondly they allow for the modeling of predicate logic.
http://en.wikipedia.org/wiki/Concept_map
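To make that concrete: where a mind map is a parent/child tree, a concept map is essentially a set of propositions (concept, linking phrase, concept). A minimal sketch, with made-up data, of how such propositions might be stored and queried:
```python
# Minimal sketch: a concept map as a set of propositions (concept, link, concept),
# rather than a pure parent/child tree like a mind map. All data here is made up.
propositions = [
    ("dog", "is a", "mammal"),
    ("mammal", "has", "fur"),
    ("dog", "chases", "cat"),
]

def facts_about(concept, triples):
    """Return every proposition whose subject or object matches the concept."""
    return [t for t in triples if concept in (t[0], t[2])]

for subj, link, obj in facts_about("dog", propositions):
    print(subj, link, obj)   # e.g. "dog is a mammal"
```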
Posted: May 30, 2012 | [ # 19 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.meta-guide.com/home/bibliography/google-scholar/uml-unified-modeling-language-dialog-systems
UML (Unified Modeling Language) & Dialog Systems
> http://www.meta-guide.com/home/bibliography/google-scholar/language-modeling-dialog-systems-2011
Language Modeling & Dialog Systems 2011
= = =
Uchiha, it does seem that UML is perhaps associated with dialog systems even more than concept maps, or mind maps for that matter. UML would also seem to be more in line with Wolfram SystemModeler.
However, this brings us from “visualizing” to “modeling”. There seem to be at least two different kinds of modeling associated with dialog systems: UML would represent a kind of physical modeling, whereas there is also statistical or probabilistic language modeling. And there seem to be quite a few tools associated with language modeling and dialog systems.
This is an interesting tangent for me, as it relates to my theories about the “clutch” or transducer mechanisms involved in the conversion of words to images, and images to words, in the human brain/mind. Might there not be some connection between this “physical” modeling and the linguistic modeling?
And how might sitemaps and/or mindmaps assist with this physical/visual or linguistic modeling?
Modeling => Language
Language => Modeling
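To make the “statistical or probabilistic language modeling” side concrete, in its simplest form it just means estimating from counts how likely one word is to follow another. A toy bigram sketch (the corpus is made up):
```python
# Toy illustration of statistical language modeling: bigram counts with
# maximum-likelihood probabilities. The corpus is made up for the example.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of words that have a successor

def p_next(word, nxt):
    """P(nxt | word) estimated from the counts."""
    return bigrams[(word, nxt)] / unigrams[word] if unigrams[word] else 0.0

print(p_next("the", "cat"))   # 2/3: "the" is followed by "cat" twice and "mat" once
```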
Posted: May 30, 2012 | [ # 20 ]
Member
Total posts: 16
Joined: Mar 10, 2012
So does anyone have any ideas on how I should start programming? Or should I continue with design?
Posted: May 30, 2012 | [ # 21 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.google.com/search?q=modeling&tbs=dfn:1
The art or activity of making three-dimensional models
The devising or use of abstract or mathematical models
> http://www.meta-guide.com/home/bibliography/google-scholar/statistical-package-dialog-systems
Statistical Packages & Dialog Systems
> http://www.meta-guide.com/home/bibliography/google-scholar/mathematica-dialog-systems
Mathematica & Dialog Systems
= = =
According to Google’s definition of modeling, math ought to mediate between the visual and the linguistic….
visual <=> math <=> linguistic
This would be in line with the Mathematica software behind most Wolfram products.
Basically, we may be looking at math mediated conversions between words and images, and vice versa….
Uchiha, I’m a practical person. I actually don’t like the fact that so many people entering this arena try to reinvent the wheel from scratch…. I recommend thinking about practical tools. I believe the most practical tools today are web APIs, at least for me. So, think about which piece of the puzzle most interests you, then try to provide a solution by making a tool (API) for that process, which everyone else might be able to build upon, and which could even make you some money…. See my recent videos on “Open Chatbot Standards for a Modular Chatbot Framework” => http://www.meta-guide.com/home/open-chatbot-standards .
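To show how small such a tool can be, here’s a minimal sketch of exposing one “module” as a web API using only the Python standard library; the reverse-text module, the /respond path and port 8080 are all placeholders for whichever piece of the puzzle you pick:
```python
# Minimal sketch of exposing one chatbot "module" as a tiny web API,
# using only the Python standard library. The reverse-text "module",
# the /respond path and port 8080 are all placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def my_module(text):
    """Stand-in for whatever piece of the puzzle you pick (NLP step, lookup, etc.)."""
    return text[::-1]   # placeholder behaviour: reverse the input

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/respond":
            self.send_error(404)
            return
        text = parse_qs(url.query).get("input", [""])[0]
        body = json.dumps({"output": my_module(text)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # GET http://localhost:8080/respond?input=hello  ->  {"output": "olleh"}
    HTTPServer(("", 8080), Handler).serve_forever()
```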
Posted: May 30, 2012 | [ # 22 ]
Member
Total posts: 16
Joined: Mar 10, 2012
So I should make it modular and get help from people to make modules?
Sounds like a fine idea, but how would I make it modular and get all the modules to work together?
Posted: May 30, 2012 | [ # 23 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.meta-guide.com/home/application-programming-interfaces
Application Programming Interfaces (APIs)
> http://www.meta-guide.com/home/artificial-intelligence-middleware
Artificial Intelligence Middleware
> http://www.meta-guide.com/home/bibliography/google-scholar/protege-ontology-editor-dialog-systems
Protégé Ontology Editor & Dialog Systems
> http://www.meta-guide.com/home/bibliography/google-scholar/eclipse-ide-dialog-systems
Eclipse IDE & Dialog Systems
= = =
There should always be a balance between the macro and micro levels; don’t get lost in either. Just work on a few modules that you care most about. Then mix and match with other people’s modules (APIs) to achieve your desired functionality. The way to integrate APIs is with middleware, IDEs, or frameworks. Myself, I’m focusing on cloud-based middleware, IDEs, and frameworks (PaaS and IaaS) for distributed multi-agent systems based on grid computing.
Protégé and Eclipse are probably the two most popular high-level frameworks for dialog systems, with Protégé roughly twice as popular as Eclipse. I have yet to find any cloud services offering either Protégé or Eclipse, much less any reasonable substitutes in the cloud….
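As a sketch of what “mix and match” can mean in code: give every module the same text-in/text-out interface and chain them. The modules below are made-up stand-ins; real ones could just as well be thin wrappers around remote web APIs:
```python
# Minimal sketch of "mix and match" modules behind one common interface.
# Each module takes text in and gives text out; the pipeline just chains them.
# The spell-corrector and responder below are made-up stand-ins.
from typing import Callable, List

Module = Callable[[str], str]

def spelling_module(text: str) -> str:
    return text.replace("teh", "the")        # toy "correction" step

def responder_module(text: str) -> str:
    return "You said: " + text               # toy "dialog" step

def run_pipeline(modules: List[Module], text: str) -> str:
    for module in modules:
        text = module(text)
    return text

print(run_pipeline([spelling_module, responder_module], "teh cat"))
# -> "You said: the cat"
```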
Posted: May 30, 2012 | [ # 24 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.meta-guide.com/home/bibliography/google-scholar/geclipse
g-Eclipse Project - Tools for Cloud and Grid Computing
> http://www.meta-guide.com/home/bibliography/google-scholar/emi-european-middleware-initiative-glite
EMI (European Middleware Initiative) & gLite (Lightweight Middleware for Grid Computing)
= = =
Uchiha, you got me motivated to dig a little deeper into this particular mystery…. There seem to be two European-funded projects here, g-Eclipse and EMI. g-Eclipse “aims to build an integrated workbench framework to access the power of existing Grid infrastructures”. And the European Middleware Initiative “aims to deliver a consolidated set of middleware products based on the four major middleware providers in Europe”, and now incorporates gLite into its EMI 2 Matterhorn product. EMI has videos online at => http://www.youtube.com/user/emieurope/videos .
Posted: May 30, 2012 | [ # 25 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Marcus, I don’t want to discourage anything, but throwing loads of modeling tools, middleware, APIs and development tools on one big pile might actually slow a project down. I suggest it’s better to work from the problem domain and find the right tools to tackle the things you come across.
As an example, at the MIND|CONSTRUCT project we are very carefully building our toolset as we go, taking one step at a time and evaluating only the tools that are available for that particular step. This gives a good fundamental development platform that should scale into the anticipated complexity along the way.
BTW, we are adding the next part to our toolset: the STOMP messaging framework. We are going to use it to build the virtual nervous system, and in addition we will use it to develop a messaging API for external stuff (at first mainly sensors) to connect and communicate with our ‘mind’. I’m going to do a blog post on the STOMP framework in the coming days.
Posted: May 30, 2012 | [ # 26 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.meta-guide.com/home/bibliography/google-scholar/stomp-simple-text-oriented-message-protocol
STOMP (Simple Text Oriented Message Protocol)
= = =
Hans Peter, I’m definitely interested in learning more about what you are doing with your ‘strong-AI engine’ http://mindconstruct.com , as well as how you are doing it. I’m a strong believer that there should always be a balance between the macro and micro levels, and that one shouldn’t get lost in either. I think you’re on the right track developing for the “Internet of Things”.
STOMP is used in message-oriented middleware for event-driven architecture. Would you say it’s comparable to OpenJMS (Java Message Service) used in UIMA (Unstructured Information Management Architecture) for IBM Watson? How did you finally decide on STOMP?
Posted: May 31, 2012 | [ # 27 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
> http://www.youtube.com/watch?v=VgNk-VLVPvc
Henry Markram: The Origins of the Human Mind
= = =
This is a cool new Henry Markram video, where he talks about the geometries of modeling within the human brain (starting at 3:40), in the context of the European Human Brain Project http://humanbrainproject.eu .
Posted: May 31, 2012 | [ # 28 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Marcus Endicott - May 30, 2012:
STOMP is used in message-oriented middleware for event-driven architecture. Would you say it’s comparable to OpenJMS (Java Message Service) used in UIMA (Unstructured Information Management Architecture) for IBM Watson? How did you finally decide on STOMP?
OK, this is a little primer on the blog post I’m working on.
First of all, because we chose Python as our main language for development, I started looking around for messaging solutions. The first stop was AMQP, as this seems to be somewhat of an accepted standard:
http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
From there I went looking for implementations of AMQP in Python and came across RabbitMQ:
http://www.rabbitmq.com/
... which in turn has support for STOMP, so that’s how I stumbled upon it.
So far, the main reasons to go for STOMP are:
- there are several Python projects for implementing STOMP
- STOMP seems to be VERY fast, it scores pretty high in several shootouts
- STOMP is modelled on HTTP-style framing, so it uses a simple text-based message format
Especially the last point is important: because building AI at the level we are working on is incredibly complex, any simplification in the tools and middleware is very welcome. Besides that, “simplicity is always more scalable than complexity” (I have to claim that quote, now that I’ve made it).
EDIT: I forgot to answer your question: indeed STOMP is comparable in functionality and application to OpenJMS. Thanks for the pointer btw, I wasn’t aware that they use that in Watson.
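To illustrate just how simple those text frames are, here’s a sketch that speaks raw STOMP 1.0 over a plain socket; the broker address, credentials and queue name are placeholders, and in practice one of the Python STOMP client libraries would handle all of this for you:
```python
# Sketch of how simple STOMP's text frames are: a frame is a command line,
# header lines, a blank line, an optional body, and a NUL byte.
# Broker host/port, credentials and the queue name are placeholders;
# in practice a Python STOMP client library would do this for you.
import socket

def frame(command, headers, body=""):
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body + "\x00").encode("utf-8")

with socket.create_connection(("localhost", 61613)) as sock:     # default STOMP port
    sock.sendall(frame("CONNECT", {"login": "guest", "passcode": "guest"}))
    print(sock.recv(1024).decode("utf-8"))                        # expect a CONNECTED frame back
    sock.sendall(frame("SEND", {"destination": "/queue/test"},
                       "hello from the virtual nervous system"))
    sock.sendall(frame("DISCONNECT", {}))
```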
Posted: May 31, 2012 | [ # 29 ]
Member
Total posts: 16
Joined: Mar 10, 2012
I’m lost, guys. I’ve got a lot of reading to do, and I don’t think I’m going to be good at this project without much experience in programming. I agree with Marcus that everything would be smoother if the project were modular, but I’m not exactly clear on how I should go about this. Bear in mind, I’m just 16, so don’t expect me to understand too much.
Posted: May 31, 2012 | [ # 30 ]
Senior member
Total posts: 498
Joined: Oct 3, 2008
Uchiha, you’ve been doing okay so far, and you’ve given me some good ideas! Nobody knows it all, otherwise it would have already been done. We’re also reaching the limits of my knowledge; I had to do some new research to find out about g-Eclipse and EMI. What I’ve really gotten out of this thread is the idea of geometric mediation of the word-to-image and image-to-word conversions; and it seems Henry Markram, at least, was already aware of that.
C++ is a good language to proceed with. I’m working on a webpage of C++ resources in AI and NLP, and will let you know when that’s ready. I’m not a great programmer myself, which is why I’m always looking for tools to help me. If I were you, I would start by getting any C++ dialog system (chatbot) actually working. AIML is as good a place to start as any. At least by the time you reach the limits of AIML, you will have a much better idea of what’s involved. That said, there’s an awful lot that can be done with AIML; I’m not even sure its limits have been reached. I sincerely believe that for 95% of the people in the world, something like the AIML Superbot is all they will ever need in terms of a dialog system.
Hans Peter, that was a great reply about your STOMP journey, and I look forward to reading the whole blog post. Just settling on a programming language, let alone a messaging protocol, is already great progress. I love the way your website lists your hardware platform as well as the solution stack. And I’ll quote you on that: “simplicity is more scalable than complexity”.... ;^)