Uchiha - Mar 10, 2012:
Looks like we need modules.
Purpose processing - multiple sub-modules - long-term and short-term purpose
Task list - long-term and short-term
Computer language - Prolog, C++, etc. integration
Visual - general tracking - advanced hardware and programming required
Audio - will be very hard
Text - direct input
Language processing - audio, visual and direct text
Logic processing - further sub-modules
Prediction
Knowledge & information database - further sub-modules
Creativity processing
Any other modules we need?
I’m working along these lines. I have a multi-agent system with a controller (subbot.org/controller/) and agents. The controller submits input (asynchronously) to each agent and selects the highest-scoring response.
Each agent assigns its response a score representing its confidence in that response; the user can adjust those scores at runtime while interacting with the bot.
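Roughly, the selection loop looks like this (a minimal Python sketch, not the actual subbot.org controller code; the agent classes, the respond() signature, and the user_weights dict are placeholders for illustration):

```python
# Minimal sketch of the controller idea: fan input out to agents
# concurrently, rescale their self-assigned confidences with user
# weights, and return the highest-scoring reply.
import concurrent.futures

class EchoAgent:
    name = "echo"
    def respond(self, text):
        # Each agent returns (confidence_score, response_text).
        return (0.1, "You said: " + text)

class GreetingAgent:
    name = "greeting"
    def respond(self, text):
        if "hello" in text.lower():
            return (0.9, "Hello there!")
        return (0.0, "")

def controller(agents, text, user_weights=None):
    """Submit input to every agent concurrently and pick the best reply."""
    user_weights = user_weights or {}
    results = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(a.respond, text): a for a in agents}
        for fut in concurrent.futures.as_completed(futures):
            agent = futures[fut]
            score, reply = fut.result()
            # The user can rescale an agent's confidence at runtime.
            score *= user_weights.get(agent.name, 1.0)
            results.append((score, agent.name, reply))
    # Highest adjusted score wins.
    return max(results)

if __name__ == "__main__":
    agents = [EchoAgent(), GreetingAgent()]
    print(controller(agents, "hello bot", user_weights={"echo": 0.5}))
```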
“Long-term purpose” is tricky with my system. For example, I have a Wolfram Alpha agent (subbot.org/waagent) that can take a while to respond (due to network latency, the speed of the Wolfram site, the complexity of the question, etc.). I want the bot to respond immediately with another agent’s response, and then, when the Wolfram Alpha agent finishes processing, return its response as well. (The “long-term purpose” is that the bot remembers a question across subsequent interactions and answers it when it can.) In practice I find I get a lot of extra responses that I want to hide, so I want to improve the user’s ability to customize and filter the bot’s output.
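A toy sketch of that deferred-answer behavior, using a background thread and a queue (the slow_lookup stand-in and the queue handoff are just my illustration here, not the waagent protocol):

```python
# Answer immediately with a fast reply, remember the open question,
# and surface the slow agent's answer on a later turn.
import threading, queue, time

deferred = queue.Queue()  # finished slow answers land here

def slow_lookup(question):
    time.sleep(2)  # stand-in for network latency / remote computation
    deferred.put((question, "42 (eventually computed answer)"))

def ask(question):
    # Kick off the slow agent in the background and answer right away.
    threading.Thread(target=slow_lookup, args=(question,), daemon=True).start()
    return "I'm looking that up; I'll tell you when I know."

def drain_deferred():
    # Called on each subsequent interaction: report any answers that arrived.
    while not deferred.empty():
        q, a = deferred.get()
        print(f"(earlier you asked {q!r}) -> {a}")

print(ask("what is 6 * 7?"))
time.sleep(3)          # the user keeps chatting meanwhile...
drain_deferred()       # the deferred answer surfaces on a later turn
```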
“Task list”: I have a TODO agent (subbot.org/todoagent).
“Computer Language”: the agents are standalone command-line programs, so they can be written in any language. I have agents in C, Java, Python, Ruby, PHP, and Perl; the StudentAgent (subbot.org/studentagent) is in Logo (but I don’t use that one anymore).
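For illustration, here is one way a controller could talk to a standalone agent over stdin/stdout; the “score, tab, response” line protocol below is an assumption for the example, not subbot.org’s actual protocol:

```python
# Run an external command-line agent, feed it one line of input,
# and parse "score<TAB>response" from its stdout (POSIX shell example).
import subprocess

def ask_agent(command, text, timeout=5):
    proc = subprocess.run(command, input=text + "\n", capture_output=True,
                          text=True, timeout=timeout)
    score_str, _, reply = proc.stdout.strip().partition("\t")
    return float(score_str), reply

# Example agent written as a one-line shell command; it could equally
# be a compiled C program or a Java, Ruby, PHP, or Perl script.
score, reply = ask_agent(
    ["sh", "-c", 'read line; printf "0.5\tYou said: %s" "$line"'],
    "hello")
print(score, reply)
```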
“Logic processing”: I have a logic agent (subbot.org/logicagent) that does simple Aristotelian syllogisms using natural-language input. I’ve also experimented with if-then statements, modus ponens, unification…
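A toy sketch of the syllogism idea (nothing like the real logicagent code; the sentence patterns and the regular plural assumption are mine):

```python
# "All Xs are Y" plus "Z is an X" yields "Z is Y" (Barbara syllogism),
# driven by simple natural-language patterns.
import re

rules = {}   # "human" -> "mortal", from "All humans are mortal"
facts = {}   # "Socrates" -> "human", from "Socrates is a human"

def tell(sentence):
    m = re.match(r"all (\w+?)s are (\w+)", sentence, re.I)
    if m:
        rules[m.group(1)] = m.group(2)
        return
    m = re.match(r"(\w+) is an? (\w+)", sentence, re.I)
    if m:
        facts[m.group(1)] = m.group(2)

def ask(name):
    category = facts.get(name)
    if category and category in rules:
        return f"{name} is {rules[category]}"   # apply the rule to the fact
    return "I don't know."

tell("All humans are mortal")
tell("Socrates is a human")
print(ask("Socrates"))   # -> Socrates is mortal
```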
My vision is for everyone to write standalone programs that each handle a specific domain. The agents could then easily attach to a controller that submits input to them and collects their responses (asynchronously). The user can then use feedback to promote the agents they like most.
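One simple way the feedback could work is a persistent per-agent weight that the controller multiplies into each agent’s confidence; the file name and the 10% step below are assumptions, just to show the shape of it:

```python
# A thumbs-up / thumbs-down on a reply nudges a stored per-agent weight
# that scales that agent's future confidence scores.
import json, os

WEIGHTS_FILE = "agent_weights.json"

def load_weights():
    if os.path.exists(WEIGHTS_FILE):
        with open(WEIGHTS_FILE) as f:
            return json.load(f)
    return {}

def give_feedback(agent_name, liked):
    weights = load_weights()
    w = weights.get(agent_name, 1.0)
    # Promote agents the user likes, demote the ones they don't.
    weights[agent_name] = w * (1.1 if liked else 0.9)
    with open(WEIGHTS_FILE, "w") as f:
        json.dump(weights, f)
    return weights[agent_name]

print(give_feedback("logicagent", liked=True))    # weight rises
print(give_feedback("logicagent", liked=False))   # weight falls back
```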
If someone writes a general intelligence algorithm that can handle all domains, that agent could supersede all the others and become the only one necessary.