MindForth Programming Journal (MFPJ) 2010 November 7
Sun.7.NOV.2010—Improving the Tutorial Messages
Today in the AI Tutorial display mode
we have replaced the rather tentative
and hesitant message
“EnCog starts to think a sentence.”
with the more forceful idea
“EnCog thinks a thought in English.”
Since the wiki-word “EnCog” stands for
“English thinking” (cogitation), the new
line above is more explanatory. Likewise we
have replaced the reasonably informative message
“Noun & verb activation must slosh over onto logical direct objects.”
with a more explicitly descriptive statement of what must happen:
“Disparate verb-node activations slosh over onto candidate objects.”
Users will then be able to discern that
different time-nodes of the same verb are
sending different levels of activation to
candidate direct objects, because thinking
about a given subject for the verb causes
extra activation to build up on the verb-node
associated in memory with the particular
subject and the particular object. The word
“disparate” perfectly describes the different
activation “spikes” moving from a verb to its
candidate objects.
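The slosh-over idea can be illustrated with a rough sketch in Python (MindForth itself is written in Forth); the node structure and the numeric activation values here are hypothetical, chosen only to show how disparate verb-node activations would favor one candidate object over another.

```python
# Hypothetical sketch: different time-nodes of the same verb carry
# different activation levels, and each node "sloshes" its activation
# onto the direct object stored with it in memory.

# (verb_node_activation, associated_object) -- values are illustrative
verb_nodes = [
    (38, "FISH"),   # node laid down when a subject + verb + FISH was learned
    (22, "BREAD"),  # node laid down when a subject + verb + BREAD was learned
]

def slosh_over(nodes):
    """Accumulate each verb-node's activation onto its candidate object."""
    candidates = {}
    for activation, obj in nodes:
        candidates[obj] = candidates.get(obj, 0) + activation
    return candidates

candidates = slosh_over(verb_nodes)
# The object fed by the more strongly activated verb-node wins selection.
winner = max(candidates, key=candidates.get)
```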
Mon.8.NOV.2010—PsiDecay Helps Not Much
We still have the problem of too much activation
building up on predicate nominatives during a
KB-query using be-verbs. Perhaps we should try
to repeat a variation of the also-ran knock-out
from the 5nov2010 MFPJ, knocking out predicate
nominatives that fail to win selection during
the response to a KB-query.
In the original also-ran knock-out, it was easy
to identify the also-rans, because they were
trying to win selection in the NounPhrase module.
When we have a be-verb KB-query such as “what
are you”, the selections are going on at the end
of a SpreadAct transfer of activation-spikes,
and the activations have already been accumulating
as multiple, identical queries are made to the
knowledge base. We would not mind if the
activations were merely helping to identify a
logically valid predicate nominative, but the
accumulating predicate-nominative activations
are eventually interfering with subject-selection
in the response-phase. We could perhaps knock the
activations down with a lot of PsiDecay calls,
and we could perhaps further target the judicious
use of PsiDecay by linking multiple PsiDecay calls
together with the phenomenon of the inhibition
of a selected predicate nominative. In other words,
not only would a winning predicate nominative be
subjected to neural inhibition, but at the same
time the non-winning candidates would have some
of their built-up activation taken away by means
of a flock of calls to PsiDecay.
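The combined inhibit-and-decay idea might be sketched as follows in Python (illustrative only; the real PsiDecay and inhibition code live in Forth, and the decrement and inhibition values here are invented for the example).

```python
# Invented values: the winning predicate nominative is driven into
# neural inhibition, while a "flock" of PsiDecay calls erodes the
# built-up activation of the also-ran candidates.

activations = {"ROBOT": 50, "ANDRU": 44, "I": 48}  # hypothetical levels

def psi_decay(acts, step=2):
    """One decay pass: lower every positive activation by a small step."""
    for concept in acts:
        if acts[concept] > 0:
            acts[concept] = max(acts[concept] - step, 0)

def select_and_inhibit(acts, winner, decay_calls=6):
    """Inhibit the winner, then run a flock of PsiDecay calls on the rest."""
    acts[winner] = -48           # neural inhibition: strongly negative
    for _ in range(decay_calls):
        psi_decay(acts)          # also-rans lose activation on each pass

select_and_inhibit(activations, "ROBOT")
```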
When we install a flock of six calls to PsiDecay
inside a NounPhrase conditional clause that inhibits
a predicate nominative, the gap between correct and
incorrect query-response subjects narrows to a
disparity of only two points, between activations
of 48 for “I” and 50 for “ROBOT”. The subject
“ROBOT” is still erroneously selected, because of
its slightly higher activation. Let us see what
happens when we increase the flock of calls to
PsiDecay from six calls to nine calls, so as to
attempt to eliminate the two-point gap. Oops!
Now we have an even larger gap, between 48 for
“I” and 64 for “ROBOT”, but it seems to be
a displaced gap, that is, the AI manages to
make one or two more responses to “what are you”
before derailing into the selection of “ROBOT”
as an erroneous subject of the response.
When we add about seven more PsiDecay calls,
we no longer get the erroneous subject, but
the AI gradually loses its ability to recall
predicate-nominatives from its knowledge base.
Tues.9.NOV.2010—Identifying Sources of Stray Activation
As the query-response also-rans build up too
high an activation, we must see whether the high
activation comes from EnParser, or from ReActivate,
or from SpreadAct. These three modules have the
power to impose an activation upon a Psi concept,
and so do NounAct and VerbAct.
When we input a KB-query to the AI, the input words
cause new concept nodes to form. EnParser sets a
basic level of activation on the new node, and
ReActivate puts activation onto previous nodes
of the concept. There are perhaps a lot of
adjustments to be considered here.
We have been letting ReActivate additively impose
incremental activation on the old nodes. Perhaps
we should switch to an absolute activation occurring
in ReActivate. It was perhaps not good practice to
rely on a lowering-test for excessive activation
towards the end of a loop in ReActivate. There
may need to be additive incremental activation
only when SpreadAct lets subject-noun and verb
activation additively combine on verb-nodes
immediately prior to slosh-over onto candidate
direct objects. So let us first change ReActivate
to stop imposing incremental activation. When we
do so, tests of SVO KB-retrieval still work.
However, we still get a high build-up of also-ran
activations during be-verb KB-queries. Perhaps we
should use EnParser and ReActivate to lower
MindGrid activations in general.
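The difference between additive and absolute reactivation can be sketched in Python (an illustration of the idea only, not the actual Forth code; the spike and level values are invented).

```python
# Two hypothetical policies for re-activating old nodes of a concept.

def reactivate_additive(old_activation, spike=16):
    """Additive: each pass piles activation on top of the residue, so
    repeated identical queries let activation build up without bound."""
    return old_activation + spike

def reactivate_absolute(old_activation, level=16):
    """Absolute: each pass sets the node to a fixed level, so repeated
    queries cannot accumulate stray activation."""
    return level

act = 0
for _ in range(3):               # three identical KB-queries in a row
    act = reactivate_additive(act)
additive_result = act            # residue keeps building

act = 0
for _ in range(3):
    act = reactivate_absolute(act)
absolute_result = act            # stays at the fixed level
```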
Since we still get stray activations too high
during be-verb KB-queries, we may have to tone down
the high activations being imparted by the SpreadAct
module. Or maybe we could try NounAct first. Hmm,
we tried VerbAct rather than NounAct, and we lowered
the arbitrary “verbval” value from 20 to 15. Suddenly
our stray activations were only four points out of
line, not nineteen points. Let us try lowering
“verbval” even further. No, that did not work.
We tried using one more PsiDecay call from within
NounPhrase, and the ploy seemed to work. However,
it was like pushing the appearance of stray
activation down one more rung on a ladder.
In our attempt to get exhaustive KB query-responses,
we obtained one more valid KB-response, but stray
activation kept us from obtaining an exhaustive
series of logically valid responses. Instead of
“I AM AN ANDRU”, we obtained “ANDRU IS AN ANDRU”,
apparently because “I” had only 48 points and
“ANDRU” had 75 points of built-up activation.
However, since we are letting only SpreadAct use
incremental activation, we now know better
where the problem most likely lies. Let us try
using one more PsiDecay in NounPhrase, and see
how large a gap results. No, adding just one more
PsiDecay call in NounPhrase caused a lot of trouble.
Maybe we should adjust SpreadAct values instead.
Tues.9.NOV.2010—Calling PsiDamp From BeVerb
We obtained some good results when we went into
VerbAct and started reducing the high-level “spike”
values going from VerbAct into SpreadAct. The gap
narrowed between the illegitimate subject and the
preferred subject. Therefore let us try reducing
even more high-end “spike” values in VerbAct. No,
it was counterproductive.
Here is an idea. Because we now (since 5 October 2010)
have neural inhibition as a mechanism which lets a
KB-query rotate exhaustively through available responses,
we need a kind of sine-wave of deep thought in response
to be-verb KB-queries. As the query activates possible
answers and one is not only selected but also goes into
inhibition, we need a way for the simultaneously
activated but non-winning also-rans to subside
immediately, so that little or no residual
activation remains. It is enough if each repeated or
lingering query activates candidate responses de novo,
so that the stray activations do not build up in such
a way as to overwhelm the query-response system.
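The desired behavior might look like this in a toy Python model (the candidate names, spike values, and inhibition mechanics are all invented for illustration; the real rotation runs in the Forth NounPhrase and inhibition code).

```python
# Toy model: each be-verb query activates candidates afresh ("de novo"),
# the winner goes into inhibition, and the also-rans subside to zero so
# that no stray activation survives into the next query.

INHIBITION = -48   # hypothetical inhibition level for a selected answer
DECAY_STEP = 16    # hypothetical rate at which inhibition wears off

def answer_query(candidates, state):
    """Pick the most active non-inhibited candidate, then reset the rest."""
    fresh = {c: spike + state.get(c, 0) for c, spike in candidates.items()}
    winner = max(fresh, key=fresh.get)
    # also-rans subside completely; old inhibitions decay toward zero
    new_state = {c: min(state.get(c, 0) + DECAY_STEP, 0) for c in fresh}
    new_state[winner] = INHIBITION
    return winner, new_state

# Repeating "what are you" rotates exhaustively through the answers.
candidates = {"ROBOT": 30, "PERSON": 28, "ANDRU": 26}
state = {}
answers = []
for _ in range(3):
    winner, state = answer_query(candidates, state)
    answers.append(winner)
```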
Now somewhat later, we have not gotten the idea to
work, but still we have made immense progress.
In our attempt to isolate the cause of the build-up
of stray activations on be-verb KB-query also-ran
predicate nominatives, we discovered that the be-verb
form “AM” was not being psi-damped after each use and
was therefore building up considerable residual
activation. Since SpreadAct lets subject-noun and
verb-node activations combine cumulatively, too much
activation was apparently passing from the un-psi-damped
“AM” verb to the “ANDRU” predicate nominative.
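The diagnosis can be sketched with a toy Python model (the numbers are invented, and the real SpreadAct and PsiDamp modules are Forth code): without a PsiDamp call, each use of “AM” leaves residue that SpreadAct then adds into the spike reaching the predicate nominative.

```python
# Hypothetical illustration: residual activation on an un-psi-damped
# be-verb inflates the spike that SpreadAct sends to the predicate
# nominative on every repeated query.

def spread_act(subject_act, verb_act):
    """SpreadAct lets subject-noun and verb-node activations combine
    cumulatively on the way to the predicate nominative."""
    return subject_act + verb_act

def query_without_psidamp(n_queries, subject_act=20, verb_spike=15):
    """The verb "AM" is never psi-damped, so its residue accumulates."""
    am_activation = 0
    spikes = []
    for _ in range(n_queries):
        am_activation += verb_spike            # residue keeps building
        spikes.append(spread_act(subject_act, am_activation))
    return spikes

def query_with_psidamp(n_queries, subject_act=20, verb_spike=15):
    """PsiDamp knocks "AM" back down after each use, so every query
    delivers the same-sized spike."""
    am_activation = 0
    spikes = []
    for _ in range(n_queries):
        am_activation += verb_spike
        spikes.append(spread_act(subject_act, am_activation))
        am_activation = 0                      # PsiDamp after each use
    return spikes
```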
We put a call from the end of the BeVerb module to
the PsiDamp module so that verbs of being would be
psi-damped, and immediately we began to get better
results. We no longer got erroneous subjects in
response to our KB-queries. However, not everything
was working perfectly. We sometimes had to repeat a
query multiple times to draw out the exhaustive KB
answers. We had been unaware of a major defect in
the AI, namely the lack of a call from BeVerb to
PsiDamp, and so perhaps minor glitches crept into
the AI codebase.
Since we now have our best ever working AI, we may
hope to eradicate more and more glitches while
improving the overall performance of the AI Mind.