Senior member
Total posts: 473
Joined: Aug 28, 2010
http://phys.org/news/2013-01-biological-mystery-boost-artificial-intelligence.html
From brains to gene regulatory networks, many biological entities are organized into modules – dense clusters of interconnected parts within a complex network. For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds things modularly by building and combining distinct parts, but that does not explain how such modularity evolved in the first place. Renowned biologists Richard Dawkins, Günter P. Wagner, and the late Stephen Jay Gould identified the question of modularity as central to the debate over “the evolution of complexity.”
Although it’s a little removed from what any of us are trying to achieve here, I think this finding is extremely relevant to just about everything. I’m curious to know how different chatbot developers break their systems down into modules, if at all. Do you visualise your software as layers or chunks, and how are they interconnected?
Posted: Jan 30, 2013
[ # 1 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Personally, I think that taking a “modular approach” to things is a good idea, and in some respects it’s essential to many of the projects that I’ve been working on lately. With respect to my work on the Program O project, I’ve been focusing on making the structure of the platform’s next version more modular in design, rather than the often “messy” conglomeration of functions and methods that makes up the current framework. By separating the code into classes and modules, I’m seeing improvements in efficiency and performance, with the added benefit of having code flow that’s far easier to track and understand. By breaking the code into various modules, I can also make changes to one section of code without the ever-present risk of “breaking” code in some other function that the current framework carries. Now if I could just find a way to apply this to my personal life, as a whole…
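Getting back to the code, the shape of what I mean is something like this minimal sketch. It’s Python purely for illustration (Program O itself is PHP), and none of these class names are real Program O code; the point is that each module has one job and a narrow interface, so changing one can’t silently break another:

# Illustrative sketch only; hypothetical names, not Program O's classes.
class Normalizer:
    """Cleans up raw user input."""
    def normalize(self, text):
        return " ".join(text.lower().split())

class Matcher:
    """Maps a normalized input to a response template."""
    def __init__(self, patterns):
        self.patterns = patterns
    def match(self, text):
        return self.patterns.get(text, "i do not understand.")

class Responder:
    """Formats the final reply; knows nothing about matching."""
    def respond(self, template):
        return template.capitalize()

class ChatPipeline:
    """Wires the modules together; each part can be swapped independently."""
    def __init__(self, normalizer, matcher, responder):
        self.normalizer = normalizer
        self.matcher = matcher
        self.responder = responder
    def handle(self, raw):
        text = self.normalizer.normalize(raw)
        return self.responder.respond(self.matcher.match(text))

bot = ChatPipeline(Normalizer(), Matcher({"hello": "hello there!"}), Responder())
print(bot.handle("  Hello  "))  # -> "Hello there!"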
Posted: Jan 30, 2013
[ # 2 ]
Senior member
Total posts: 107
Joined: Sep 23, 2010
http://mind.sourceforge.net/aisteps.html is a list of the mind-modules in the Mentifex AI Minds:
http://www.scn.org/~mentifex/AiMind.html in English;
http://www.scn.org/~mentifex/Dushka.html in Russian;
http://www.scn.org/~mentifex/mindforth.txt in English;
http://www.scn.org/~mentifex/DeKi.txt in German.
http://en.wikipedia.org/wiki/Modularity_of_mind is the only Wikipedia article for which I can claim to have written the text of the very first version of the page.
http://code.google.com/p/mindforth/wiki/InFerence is the newest Mentifex True AI mind-module and it creates new ideas by reasoning from old ideas.
Posted: Jan 30, 2013
[ # 3 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Andrew Smith - Jan 30, 2013: http://phys.org/news/2013-01-biological-mystery-boost-artificial-intelligence.html
Although it’s a little removed from what any of us are trying to achieve here, I think this finding is extremely relevant to just about everything. I’m curious to know how different chatbot developers break their systems down into modules, if at all. Do you visualise your software as layers or chunks, and how are they interconnected?
Interesting article, Andrew.
For Skynet-AI, the system is very modular. I visualize it as a neural net, with neurons and bundles or clusters of neurons that are triggered by input in the conversation.
To quote from the JAIL (TM) (JavaScript Artificial Intelligence Language) Thread:
The best way to describe how it works is as a prioritized neural net (although different from traditional neural nets). A neuron can be crisp or fuzzy and has an analog priority. Neurons can be bundled together (like the math module). They can fire functions (like accessing a web site), fire and continue processing (similar to the AIML THINK tag), fire and return a response, or branch to a bundle (and then continue if need be).
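Very roughly, the idea reads something like this minimal sketch. It’s Python purely for illustration (JAIL itself is JavaScript), and none of these names come from JAIL:

# Illustrative sketch of prioritized, bundled neurons; not JAIL code.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Neuron:
    trigger: str                       # substring that fires the neuron (a "crisp" match)
    priority: float                    # analog priority; higher fires first
    response: Optional[str] = None     # reply to return when fired
    action: Optional[Callable[[], None]] = None  # side effect, e.g. fetch a web site

@dataclass
class Bundle:
    name: str
    neurons: list = field(default_factory=list)

    def fire(self, user_input):
        # Check neurons in priority order; first match wins.
        for n in sorted(self.neurons, key=lambda n: -n.priority):
            if n.trigger in user_input.lower():
                if n.action:
                    n.action()         # "fire and continue" style side effect
                if n.response:
                    return n.response  # "fire and return a response"
        return None

math_bundle = Bundle("math", [Neuron("2+2", priority=1.0, response="2+2 = 4")])
print(math_bundle.fire("what is 2+2?"))  # -> "2+2 = 4"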
Posted: Jan 31, 2013
[ # 4 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
InFerence is the newest Mentifex True AI mind-module and it creates new ideas by reasoning from old ideas.
Congratulations, Arthur. You finally started working on inference. Syllogisms are a good starting point. Only 20 more deductive rules to go. Then you can start on the inductive and abductive rules. Then you can address case-based reasoning and analogy to start on the 15 or so problem-solving procedures.
A nice beginning. Keep up the good work.
Posted: Jan 31, 2013
[ # 5 ]
Senior member
Total posts: 107
Joined: Sep 23, 2010
Thank you, Toborman. I wish I could learn what the “20 more deductive rules to go” are. Yesterday and today I have been working on porting the http://code.google.com/p/mindforth/wiki/InFerence module from the English MindForth into the German Wotan AI. As a premise for inferences, I recently appended some new ideas to the http://code.google.com/p/mindforth/wiki/DeBoot German bootstrap sequence, including “Frauen haben ein Kind” for “Women have a child.” Then the human user can type in that some person is a woman, as in “eva ist eine frau” (“Eva is a woman”), and the AI will create a silent inference that “Eva has a child”, which may or may not be true. The inference is fed into the http://code.google.com/p/mindforth/wiki/AskUser module so that the AI will pose a yes-or-no question for the human user to confirm the inference. The http://code.google.com/p/mindforth/wiki/KbRetro module retroactively adjusts the AI knowledge base with the confirmation or negation of the inference.
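In outline, that flow amounts to something like the following minimal sketch. It is Python purely for illustration; it is not the actual MindForth or Wotan code, and the rule encoding and function names are my own invention for the example:

# Sketch of premise -> silent inference -> yes/no confirmation -> KB adjustment.
knowledge_base = set()
rules = {("is_a", "woman"): ("has", "a child")}   # "Women have a child"

def infer_and_confirm(subject, relation, obj):
    rule = rules.get((relation, obj))
    if rule is None:
        return
    inference = (subject,) + rule                 # e.g. ("eva", "has", "a child")
    # AskUser step: pose a yes-or-no question to confirm the inference.
    answer = input("Does %s %s %s? (yes/no) " % (subject, rule[0], rule[1]))
    # KbRetro step: retroactively adjust the knowledge base.
    if answer.strip().lower() == "yes":
        knowledge_base.add(inference)
    else:
        knowledge_base.add((subject, "does not have", rule[1]))

infer_and_confirm("eva", "is_a", "woman")
print(knowledge_base)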
Meanwhile the Mentifex AI project has been branching out. One fellow has been translating the Russian AI User Manual into http://code.google.com/p/mindforth/wiki/RussMan in Russian. For some reason, the http://www.scn.org/~mentifex/Dushka.html Russian AI has been receiving more visits than the AI Minds in English or German. http://www.chatbots.org/ai_zone/viewthread/240/ is still the main AI Mind thread here on wondrous Chatbots.Borg!
Posted: Jan 31, 2013
[ # 6 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
As a premise for inferences, I recently appended some new ideas to the http://code.google.com/p/mindforth/wiki/DeBoot German bootstrap sequence, including “Frauen haben ein Kind” for “Women have a child.” Then the human user can type in that some person is a woman, as in “eva ist eine frau”, and the AI will create a silent inference that “Eva has a child”, which may or may not be true. The inference is fed into the http://code.google.com/p/mindforth/wiki/AskUser module so that the AI will pose a yes-or-no question for the human user to confirm the inference. The http://code.google.com/p/mindforth/wiki/KbRetro module retroactively adjusts the AI knowledge base with the confirmation or negation of the inference.
Using the “square of opposition” can also be useful in asserting the negation of an inference. A rough sketch of a few of these rules in code follows the list.
Inferences from the square of opposition
1. When a proposition of singular type is encountered, it is assumed to be true if neither refuted nor opposed.
2. When a proposition of singular type is encountered, it is assumed to be false if refuted or opposed.
3. When a classification proposition of singular type is encountered, it may be associated with a characteristic proposition to infer its particular proposition.
4. When a characteristic proposition of singular type is encountered, it may be associated with a classification proposition to infer its particular proposition.
5. When a proposition of particular type is encountered and its negation is unknown, then one may infer the probable universal proposition and remove the negative universal proposition.
6. When a proposition of particular type is encountered and its negation is known, then one may remove the universal proposition.
Singular: Tom is a person. Tom is alive.
Particular: Some persons are alive.
Universal: All persons are alive.
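Here is the promised sketch of rules 1, 2, 5 and 6, in Python for illustration only; representing propositions as plain strings is an assumption of the example, not a recommendation:

# Rough sketch of square-of-opposition rules 1, 2, 5 and 6.
from typing import Optional

refuted = set()   # singular propositions known to be refuted or opposed

def assume_singular(prop):
    # Rules 1 and 2: a singular proposition is assumed true unless it
    # has been refuted or opposed.
    return prop not in refuted

def generalize(particular, negation_known):
    # Rule 5: from a particular whose negation is unknown, infer the
    # probable universal. Rule 6: if the negation is known, no universal
    # may be inferred (any existing one would be removed).
    if negation_known:
        return None
    return particular.replace("some", "all", 1)

print(assume_singular("Tom is alive"))              # True  (rule 1)
refuted.add("Tom is a ghost")
print(assume_singular("Tom is a ghost"))            # False (rule 2)
print(generalize("some persons are alive", False))  # "all persons are alive" (rule 5)
print(generalize("some persons are alive", True))   # None (rule 6)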
Posted: Jan 31, 2013
[ # 7 ]
Experienced member
Total posts: 62
Joined: Jun 4, 2011
I wish I could learn what the “20 more deductive rules to go” are.
Deductive methods
For those of you working on AI engines, here is a starting set of deductive processes.
p, q, r, and s are propositions. > means implies, - means not, v means or, & means and.
To use these methods you must first convert your sentences into propositions.
These methods can be used to create or validate arguments, or to infer propositions. A sketch of a few of them in code follows the list.
Name… Method
Modus Ponens… p > q, p; therefore q
Modus Tollens… p > q, -q; therefore -p
Chain… p > q, q > r; therefore p > r
Disjunctive1… p v q, -p; therefore q
Disjunctive2… p v q, -q; therefore p
Addition1… p; therefore p v q
Addition2… q; therefore p v q
Conjunctive1… -(p & q), p; therefore -q
Conjunctive2… -(p & q), q; therefore -p
Simplification1… (p & q); therefore p
Simplification2… (p & q); therefore q
Adjunction… p, q; therefore p & q
Reductio1… p > -p; therefore -p
Reductio2… p > (q & -q); therefore -p
Complex constructive… p > q, r > s, p v r; therefore q v s
Complex destructive… p > q, r > s, -q v -s; therefore -p v -r
Simple constructive… p > q, r > q, p v r; therefore q
Simple destructive… p > q, p > r, -q v -r; therefore -p
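As a minimal sketch, the first three methods might be encoded like this (Python, illustrative only; the string encoding of propositions and the function names are assumptions of the example):

# Sketch of modus ponens, modus tollens, and the chain rule.
from typing import Optional, Tuple

Implication = Tuple[str, str]   # (p, q) encodes "p > q"

def modus_ponens(impl, fact):
    p, q = impl
    return q if fact == p else None               # p > q, p; therefore q

def modus_tollens(impl, fact):
    p, q = impl
    return "-" + p if fact == "-" + q else None   # p > q, -q; therefore -p

def chain(i1, i2):
    return (i1[0], i2[1]) if i1[1] == i2[0] else None  # p > q, q > r; therefore p > r

rule = ("it rains", "the street is wet")
print(modus_ponens(rule, "it rains"))              # -> "the street is wet"
print(modus_tollens(rule, "-the street is wet"))   # -> "-it rains"
print(chain(rule, ("the street is wet", "it is slippery")))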
Posted: Feb 12, 2013
[ # 8 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
Andrew Smith - Jan 30, 2013: For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds things modularly by building and combining distinct parts, but that does not explain how such modularity evolved in the first place.
Evolution simply uses strategies that work when it creates complex systems, especially biological systems. Modules work very well for designing any large system, but a few other strategies for handling complexity exist as well. There aren’t many such strategies altogether, at least not ones that are commonly mentioned. The only ones I know about are:
hierarchy
modularity (/ clustering / grouping / summarizing?)
locality
regularity
(compression / data reduction / dimensionality reduction?)
(focus of attention?)
These strategies seem to be best known to circuit designers, though programmers, data miners, programming language and paradigm designers, AI system designers, and semantic web designers commonly use them as well:
Modularity, hierarchy, and interaction locality are general approaches to reducing the complexity of any large system.
http://dl.acm.org/citation.cfm?id=1233417
strategies used:
hierarchy
regularity
modularity
locality
http://www.eet.bme.hu/~benedek/Vlsi_Design/Lectures/Automatic_Synthesis_of_Digital_Circuits.pdf
The hierarchical design approach reduces the design complexity by dividing the large system into several sub-modules. Usually, other design concepts and design approaches are also needed to simplify the process. Regularity means that the hierarchical decomposition of a large system should result in not only simple, but also similar blocks, as much as possible.
...
Modularity in design means that the various functional blocks which make up the larger system must have well-defined functions and interfaces. Modularity allows that each block or module can be designed relatively independently from each other, since there is no ambiguity about the function and the signal interface of these blocks. All of the blocks can be combined with ease at the end of the design process, to form the large system. The concept of modularity enables the parallelisation of the design process. It also allows the use of generic modules in various designs - the well-defined functionality and signal interface allow plug-and-play design.
...
By defining well-characterized interfaces for each module in the system, we effectively ensure that the internals of each module become unimportant to the exterior modules. Internal details remain at the local level. The concept of locality also ensures that connections are mostly between neighboring modules, avoiding long-distance connections as much as possible. This last point is extremely important for avoiding excessive interconnect delays.
http://lsmwww.epfl.ch/Education/former/2002-2003/VLSIDesign/ch01/ch01.html
Regularity controls the manner in which sub-modules are chosen. The strategy is to avoid replacing a complex system design with a complexity of sub-modules. The designer attempts to divide the hierarchy into a set of similar blocks.
http://users.ecs.soton.ac.uk/bim/notes/cad/guides/hierarchy.html
Scalability of open-ended evolutionary processes depends on their ability to exploit functional modularity, structural regularity and hierarchy. Functional modularity creates a separation of function into structural units, thereby reducing the amount of coupling between internal and external behaviour on those units and allowing evolution to reuse them as higher-level building blocks. Structural regularity is the correlation of patterns within an individual. Examples of regularity are repetition of units, symmetries, self-similarities, smoothness, and any other form of reduced information content. Regularity allows evolution to specify increasingly extensive structures while maintaining short description lengths. Hierarchy is the recursive composition of function and structure into increasingly larger and adapted units, allowing evolution to search efficiently increasingly complex spaces.
http://kmoddl.org/papers/JBPC08_Lipson.pdf
Posted: Feb 17, 2013
[ # 9 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Hierarchy, regularity, locality, generalisation, parametrisation, etc.
From a computer science perspective, these are all techniques that are useful for developing abstractions. Complexity never goes away, not if you want to solve the real problem instead of an imaginary one, and the development of abstract models is the most powerful means that we have of managing that complexity.
For example, I’ve been doing my language development using a context-free grammar (CFG, expressed in Chomsky Normal Form). Using this abstraction I can keep the code separate from the data. Without such a separation I would have to perform all the parsing operations using a (mostly hand-written) recursive descent parser. Not only are such parsers hard to write and difficult to modify, they are also slow and inefficient, even with the addition of memoization to partially emulate the operation of a true chart parser.
Instead I wrote an Earley parser (better) and a robust, industrial-strength generalised LR (GLR) parser (best) which can compile and rapidly execute CFGs containing millions of rules. This leaves me free to concentrate on the nature of the language that I am parsing rather than the code that does the parsing. Quite aside from the ease of use and efficiency this affords, it opens the way to even higher levels of abstraction which would be impossible to contemplate with a hand-coded parser.
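For anyone who hasn’t met a chart parser before, the general shape of an Earley recognizer is something like the following minimal sketch (Python, for illustration only; my own parsers are considerably more elaborate):

# Minimal Earley recognizer. A grammar maps each nonterminal to a list
# of right-hand sides, each a tuple of symbols; terminals are tokens.
def earley_recognize(grammar, start, tokens):
    # Each chart state is (lhs, rhs, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                       # run predictor/completer to a fixed point
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:          # predictor
                    for prod in grammar[rhs[dot]]:
                        if (rhs[dot], prod, 0, i) not in chart[i]:
                            chart[i].add((rhs[dot], prod, 0, i)); changed = True
                elif dot == len(rhs):                               # completer
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            if (l2, r2, d2 + 1, o2) not in chart[i]:
                                chart[i].add((l2, r2, d2 + 1, o2)); changed = True
        if i < len(tokens):                                         # scanner
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any((start, rhs, len(rhs), 0) in chart[len(tokens)]
               for rhs in grammar[start])

grammar = {"S": [("NP", "VP")], "NP": [("she",)], "VP": [("runs",)]}
print(earley_recognize(grammar, "S", ["she", "runs"]))   # -> True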
Some time ago I posted about using this method to handle more than nineteen thousand distinct use cases of English verbs (there are actually a lot more than that) with just a few hundred lines of Common Lisp to generate the rules for the CFG. This activity was analogous to hand-writing a parser for a language model referred to in the literature as Generalised Phrase Structure Grammar (GPSG).
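The rule-generation side is simple in outline: expand a verb lexicon against its subcategorization frames to emit many CFG rules from very little code. A toy sketch (Python rather than the Common Lisp I actually used, and the three-verb lexicon is made up):

# Sketch: generate CFG rules from a verb lexicon plus frame templates.
verbs = {
    "give":  ["NP NP", "NP PP_to"],   # give him a book / give a book to him
    "sleep": [""],                    # intransitive
    "put":   ["NP PP_loc"],           # put the cup on the table
}

def generate_rules(lexicon):
    rules = []
    for verb, frames in lexicon.items():
        for frame in frames:
            rhs = (verb + " " + frame).strip()
            rules.append("VP -> " + rhs)
    return rules

for rule in generate_rules(verbs):
    print(rule)
# VP -> give NP NP
# VP -> give NP PP_to
# VP -> sleep
# VP -> put NP PP_loc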
Unfortunately there don’t seem to have been any significant, workable improvements on GPSG that I’ve been able to find in the literature at this stage. I’ve looked at Lexical Conceptual Structure (LCS), Lexical Functional Grammar (LFG), Head-driven Phrase Structure Grammar (HPSG) and a bunch of others, but none of them stand out as being particularly practical or even an obviously correct improvement on GPSG.
At best they have been made to work by bolting on what is essentially still just GPSG but with awful syntax (e.g. English Resource Grammar, or ERG), or at worst by stochastic and statistical methods which may or may not be all that far removed from heuristics (better known as guesswork), such as Enju, which I posted a link to yesterday.
So I’ve decided to continue my research by developing yet another constraint-based language for GPSG in the hope of breaking some new ground with it. I think that this is what modularity is really all about. It’s about having a concept like algebra that you can use to explore more powerful and distant concepts like imaginary numbers and calculus.