Genesis - Oct 17, 2011:
> You said your AI has reached AI-Complete, yet you are programming more things into it?
Yes, I am smoothing out the basic True AI functionality.
> And looking at the above, it’s “NOT” very intelligent, is it?
The transcript above looks like it came from the
http://www.scn.org/~mentifex/AIMind.html Tutorial AI,
which currently lags behind the MindForth AI.
When the JavaScript artificial intelligence (JSAI) says,
“ROBOTS REPAIR WHAT DO I REPAIR”
two things are going wrong. The direct object “satellites”
is not activated highly enough, so the AI switches to the
WhatAuxSVerb module that asks a question with a specific
verb in the verb-slot. The other thing going wrong is that
the subject “robots” has gone astray and “I” took over.
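To make the failure concrete, here is a rough sketch, in TypeScript rather than the actual JSAI code, of how activation-based selection of a direct object can fall through to a question module. The names, the threshold, and the fallback string are illustrative assumptions, not the real AIMind.html internals.

```typescript
interface Concept {
  word: string;        // e.g. "SATELLITES"
  activation: number;  // current spreading-activation level
}

const OBJECT_THRESHOLD = 20; // assumed cutoff, not the real JSAI value

// Pick the most active candidate as the direct object of the verb.
function selectDirectObject(candidates: Concept[]): Concept | null {
  let best: Concept | null = null;
  for (const c of candidates) {
    if (best === null || c.activation > best.activation) {
      best = c;
    }
  }
  return best !== null && best.activation >= OBJECT_THRESHOLD ? best : null;
}

// When no object is active enough, fall back to asking a question with the
// specific verb in the verb-slot, as the WhatAuxSVerb module is meant to do.
// (The transcript's second bug, the subject drifting from "ROBOTS" to "I",
// would happen upstream of this sketch, in subject selection.)
function generate(subject: string, verb: string, candidates: Concept[]): string {
  const obj = selectDirectObject(candidates);
  return obj === null
    ? `WHAT DO ${subject} ${verb}`
    : `${subject} ${verb} ${obj.word}`;
}

// "SATELLITES" is under-activated, so the question path fires:
console.log(generate("ROBOTS", "REPAIR", [{ word: "SATELLITES", activation: 12 }]));
// -> "WHAT DO ROBOTS REPAIR"
```

In the transcript the subject slot also slipped to “I”, which is why the output came out as “WHAT DO I REPAIR” instead.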
> Quite frankly, take a look at this photo. I took it
> before you changed your site. You said your AI
> was AI-complete in 2008. So which is it?
Nice photo. In January 2008, the Mentifex AI
was “AI-Complete” in that it was finally able to
think with “spreading activation”, but the thinking
had a tendency to derail into erroneous assertions.
Now, as of 9 October 2011, the AI is “AI-Complete”
with a more solid ability to think without derailing
its train of thought. Meanwhile I
am going through the free AI source code and
clearing out any element that has a disruptive or
corruptive influence on the new thought process.
> There are parts of your theory of the mind that are
> somewhat true. But your implementation is what’s lacking.
> For example, your AI is based on language. But humans
> are not based on language; language is only how we
> express known concepts and is a learned concept in itself.
> And since we use language a lot, it’s become so dominant in our
> memory that when we see an object, the first thing that
> activates in our mind is its “name”.
Sounds good to me.
> Secondly, grammar is a learned thing as well, and the reason
> your AI has such a problem conjuring up a coherent thought is
> that it has a flawed grammatical system. Any grammar system
> that has predefined boundaries and isn’t dynamic is flawed.
MindForth and the JSAI have internally “predefined boundaries”
in the form of the inherent English grammar rules, but these AI
Minds are free to absorb tens of thousands of “triples” or
facts in the SPO (Subject-Predicate-Object) format.
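For anyone curious what absorbing such triples might look like, here is a minimal sketch of an SPO fact store, written in TypeScript for illustration; the KnowledgeBase class and its methods are hypothetical stand-ins, not the actual MindForth or JSAI memory arrays.

```typescript
interface Triple {
  subject: string;
  predicate: string;
  object: string;
}

// Hypothetical array-backed fact store; the real MindForth and JSAI keep
// their knowledge in their own memory arrays, so this is only a stand-in.
class KnowledgeBase {
  private facts: Triple[] = [];

  // Absorb a new fact such as "robots repair satellites".
  assert(subject: string, predicate: string, object: string): void {
    this.facts.push({ subject, predicate, object });
  }

  // Retrieve what is known about a subject, optionally filtered by verb,
  // e.g. to answer "what do robots repair?".
  query(subject: string, predicate?: string): Triple[] {
    return this.facts.filter(
      (f) => f.subject === subject && (!predicate || f.predicate === predicate)
    );
  }
}

const kb = new KnowledgeBase();
kb.assert("robots", "repair", "satellites");
console.log(kb.query("robots", "repair"));
// -> [{ subject: "robots", predicate: "repair", object: "satellites" }]
```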
> Thirdly, your AI “cannot” understand because there is
> no meaning to “text” without a grounded concept.
The text that the AI Mind understands is grounded in
the knowledge that the text conveys. Of course, as in the
http://code.google.com/p/mindforth/wiki/VisRecog
module, roboticists are invited to ground the AI in
sensory experience beyond the boundaries of text.
> Lastly, if you keep your AI running, all it’s regurgitating is
> “I am a person or I am Arnold or who is God.” There is nothing
> of substance here, yet you want this to be a course in universities?
http://cyborg.blogspot.com/2009/09/sciencemuseum.html
is about the Science Museums that might host the AI Minds,
not just university AI labs. And the AI does not just “regurgitate”
the concepts pre-coded into it. By asking questions, the AI
builds up a knowledge base for free-ranging discussion.
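As a toy illustration of that learning loop, the following TypeScript sketch extends the SPO store above: when the AI lacks a fact, it asks a question and keeps the user's answer as a new triple. Again, the function and variable names are hypothetical, not taken from the real source code.

```typescript
interface Triple { subject: string; predicate: string; object: string }

const facts: Triple[] = [];

// When the AI lacks an object for a known subject and verb, it asks a
// question; the user's answer comes back and is stored as a new SPO fact.
function askAndLearn(subject: string, verb: string, userAnswer: string): void {
  console.log(`WHAT DO ${subject.toUpperCase()} ${verb.toUpperCase()}`);
  facts.push({ subject, predicate: verb, object: userAnswer });
}

askAndLearn("robots", "repair", "satellites");
// The answered fact is now available for free-ranging discussion:
console.log(facts);
// -> [{ subject: "robots", predicate: "repair", object: "satellites" }]
```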
> You have to do a little better than that. All I see, and as you
> have stated, is “18 years of programming.”
Once the disruptive, corruptive elements have been
cleared out from the free AI source code, and the MindForth
advances have been ported into the JavaScript AI,
the goal of “self-referential thought” will be achieved.
> You are now programming how to ask questions, but
> “asking questions” is a LEARNED thing! You say come
> back later, but that just means “come back when
> I program more things into it”
I mean, when I have made the AI function flawlessly.
> Everything we know since birth is learned. From what
> letters are, to words, to sentences, to a statement,
> to a question, to how to answer the questions appropriately,
> so on and so forth.
The AI is like the mind of a baby, but not “tabula rasa”.
> There are many questions you should ask yourself.
> When is your AI (without the “G”) going to be scaled up,
Thought is thought and does not need to “be scaled up”.
> when is it going to carry out an intelligent, coherent conversation?
When I have removed the elements that were disruptive.
> when is it going to do anything intelligent at all?
You and the other users will be the judge of its intelligence.