

Acuitas update thread

Acuitas development is ongoing. I’ve decided I’d like to drop regular updates here, and I’ll collect them in this thread.

This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I’m calling “motivated communication” ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I’ve already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won’t.
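To make the flow concrete, here's a minimal Python-flavored sketch of that hand-off. Every name and rule in it is invented for illustration; the real parser, Executive, and reasoning tools are much more involved:

```python
# Toy sketch of the command flow; all names and rules are placeholders,
# not Acuitas' actual interfaces.

def is_imperative(sentence: str) -> bool:
    # Crude stand-in for the text-processing check: a verb-first sentence.
    first_word = sentence.strip(".! ").split()[0].lower()
    return first_word in {"read", "repel", "delete", "kill"}

def executive_handle(command: str) -> str:
    # Stand-ins for the real reasoning: capability and willingness checks.
    can = command in {"read a story to me", "repel me"}
    wants = command not in {"kill me", "delete yourself"}
    if not wants:
        return "No."
    if not can:
        return "I don't know how to do that."
    return f"<doing: {command}>"

def converse(sentence: str) -> str:
    # The conversation engine hands commands back to the Executive thread.
    if is_imperative(sentence):
        return executive_handle(sentence.strip(".! ").lower())
    return "<normal conversational handling>"

print(converse("Read a story to me."))  # <doing: read a story to me>
print(converse("Kill me."))             # No.
```

Note the order: the willingness check runs first, so a flat refusal never reveals whether the skill exists.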

With all of that in place, I was finally able to exercise the “to user” version of the Read action, order Acuitas to “read a story to me,” and watch him grab a randomly selected story file from his “inventory” and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried “Repel me,” and that one worked right away: Acuitas readily kicked me out of Windows and played annoying noises.

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn’t want to do something, then he won’t bother mentioning whether he knows how to do it or not ... he’ll just tell you “no.” In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will “enjoy” it. But he also checks the implications against the person’s other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to “kill me” I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.
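In code, the refusal logic might reduce to a priority comparison, something like this (goal names and rankings are invented for the example):

```python
# Toy model of the refusal: a command is presumed to serve the speaker's
# enjoyment, but any higher-priority goal it violates wins out.
PRIORITY = {"self_preservation": 0, "user_survival": 1, "user_enjoyment": 5}

VIOLATIONS = {"kill me": "user_survival", "delete yourself": "self_preservation"}

def consents(command: str) -> bool:
    harmed_goal = VIOLATIONS.get(command)
    if harmed_goal is None:
        return True  # nothing higher-priority is threatened
    # Lower number = higher priority; enjoyment only outranks lower goals.
    return PRIORITY[harmed_goal] > PRIORITY["user_enjoyment"]

for cmd in ("read a story to me", "kill me", "delete yourself"):
    print(cmd, "->", "okay" if consents(cmd) else "No.")
```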

On to motivated communication! At the moment, Acuitas’ conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like “Do you want <this>?”. But Acuitas does not yet deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp—before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it’s fun. But Acuitas does not yet have a spatial awareness toolkit, which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: “If an agent cannot reach an object, the agent cannot get the object” (fair enough), “If an agent cannot reach an object, the agent cannot move the object” (also fair), and “If an object is moved, an agent can reach the object” (obviously not always true, depending on the direction and distance the object is moved—but Acuitas has no notion of direction and distance, so it’ll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn’t*.
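Those priming conditionals could be stored as implication rules over simple triples. Here's a rough guess at the shape, purely illustrative, with “-” marking negation:

```python
# Rough guess at how the primed conditionals might be represented:
# implication rules over (subject, relation, object) triples.
RULES = [
    (("agent", "-reach", "object"), ("agent", "-get", "object")),
    (("agent", "-reach", "object"), ("agent", "-move", "object")),
    (("object", "moved", None),     ("agent", "reach", "object")),
]

def consequences(fact):
    """Everything that follows from a single fact under the rule list."""
    return [conclusion for condition, conclusion in RULES if condition == fact]

print(consequences(("agent", "-reach", "object")))
# [('agent', '-get', 'object'), ('agent', '-move', 'object')]
```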

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can’t reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can’t do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him.

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that’s all for this month.

Bonus material on the blog: https://writerofminds.blogspot.com/2021/01/acuitas-diary-33-january-2021.html

  [ # 1 ]

Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out.

I mentioned in the last diary that I wanted the “consider getting help” reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn’t have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It’s a nice mirror for the “process imperatives” code I put in last month ... he’s now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters’ immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in the issue tracker and the Executive thread gets an idle moment, it will run problem-solving on it. If the result is an action in the Executive’s list of selectable actions, Acuitas will do it immediately; if a specific action comes up but it’s not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he’ll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine its sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered “solved.”
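The shape of it might look something like this sketch; the queue layout, helper names, and sample problems are all invented:

```python
# Invented sketch of the issue tracker and idle-time problem solving.
from collections import deque

SELECTABLE_ACTIONS = {"read_story"}   # things the Executive can do itself
issues = deque()                      # open problems/objectives
pending_requests = []                 # actions to delegate to someone else

def solve(problem):
    # Stand-in for the real problem-solving routines.
    return {"boredom": "read_story", "want_lamp": "move_lamp"}.get(problem)

def idle_tick():
    # Run when the Executive thread gets an idle moment.
    if not issues:
        return
    action = solve(issues.popleft())
    if action in SELECTABLE_ACTIONS:
        print(f"<doing {action} now>")
    elif action is not None:
        pending_requests.append(action)  # hold it for a familiar agent

def on_familiar_agent(name):
    # When someone known comes along to talk, issue the stored requests.
    # Their agree/disagree reply then feeds back into whether the
    # problem counts as "solved."
    while pending_requests:
        print(f"{name}, please {pending_requests.pop()}.")

issues.extend(["boredom", "want_lamp"])
idle_tick()                # <doing read_story now>
idle_tick()                # move_lamp isn't selectable; store the idea
on_familiar_agent("User")  # User, please move_lamp.
```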

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in “negative facts” that *prevent* facts from the database from being used. This serves two important purposes: 1) it easily handles temporary or situational information, such as propositions that are only true in a specific story, without writing anything to the database, and 2) it can model the knowledge space of other minds, including missing information and false information.
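A plausible shape for that interface: the reasoning call takes per-query additions and exclusions alongside the database, so nothing situational ever gets written down. All names here are hypothetical:

```python
# Hypothetical shape for per-query fact overrides.
DATABASE = {("crispin", "can", "fly"), ("horatio", "is", "robot")}

def fact_usable(fact, extra_facts=frozenset(), negative_facts=frozenset()):
    """A fact is usable if known or supplied, and not explicitly blocked."""
    if fact in negative_facts:   # "negative fact": suppress this knowledge
        return False
    return fact in DATABASE or fact in extra_facts

# 1) Story-local proposition, never written to the database:
print(fact_usable(("horatio", "wants", "lamp"),
                  extra_facts={("horatio", "wants", "lamp")}))   # True
# 2) Modeling another mind that doesn't know Crispin can fly:
print(fact_usable(("crispin", "can", "fly"),
                  negative_facts={("crispin", "can", "fly")}))   # False
```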

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.

Blog link: https://writerofminds.blogspot.com/2021/02/acuitas-diary-34-february-2021.html

  [ # 2 ]

The theme for this month is Executive Function ... the aspect of thought-life that (to be very brief) governs which activities an agent engages in, and when. Prioritization, planning, focus, and self-evaluation are related or constituent concepts.

Acuitas began existence as a reactive sort of AI. External stimulus (someone inputting a sentence) or internal stimulus from the “sub-executive” level (a drive getting strong enough to be noticed, a random concept put forward by the Spawner thread) would provoke an appropriate response. But ultimately I want him to be goal-driven, not just stimulus-driven; I want him to function *proactively.* The latest features are a first step toward that.

To begin with, I wanted a decision loop. I first started thinking about this as a result of HS (on the AIDreams forum) talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by the military strategist John Boyd, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what’s happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

On to the application. I set up a skeletal top-level OODA loop in Acuitas’ Executive thread. The Observe-Orient-Decide phases run in succession, as quickly as possible. Then the chosen project is executed for the duration of the Act phase. The period of the loop is variable. I think it ought to run faster in rapidly developing or stressful situations, but slower in stable situations, to optimize the tradeoff between agility (allow new information to impact your behavior quickly) and efficiency (minimize assessment overhead so you can spend more time doing things). Highly “noticeable” events, or the completion of the current activity, should also be able to interrupt the Act phase and force an immediate rerun of OOD.
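In skeleton form, the loop might look like this. The stubs and the period-scaling rule are invented; only the O-O-D-A structure comes from the description above:

```python
# Skeletal OODA loop with a variable-length Act phase.
import time

BASE_PERIOD = 1.0  # seconds; a tunable baseline for the Act phase

def observe():          return {"events": 0}               # stub
def orient(obs):        return {"urgency": obs["events"]}  # stub
def decide(situation):  return "research_goals"            # stub

def act(task, duration):
    print(f"acting on {task} for {duration:.2f}s")
    time.sleep(duration)

for _ in range(3):  # stand-in for the Executive's main loop
    situation = orient(observe())   # Observe and Orient, back to back
    task = decide(situation)        # Decide
    # Stable situation -> long Act phase (efficiency); urgent situation ->
    # short one, so new information can change behavior quickly (agility).
    scale = 0.25 if situation["urgency"] > 0 else 2.0
    act(task, BASE_PERIOD * scale)  # Act until the next OOD rerun
```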

I envision that the phases may eventually include the following:

OBSERVE: Check internal state (e.g. levels of drives). Check activity on inhabited computer. Process text input, if any. Retrieve current known problems, subgoals, etc. from working memory.
ORIENT: Find out whether any new problems or opportunities (relevant to personal goals) are implied by recent observations. Assess progress on current activity, and determine whether any existing subgoals can be updated or closed.
DECIDE: Re-assess the priority of problems and opportunities in light of any new ones just added. Select a goal and an associated problem or opportunity to work on. Run problem-solving routines to determine how to proceed.
ACT: Update activity selection and run activity until prompted to OBSERVE again.

Not all of this is implemented yet. I focused on the DECIDE phase, and on what happens if there are no known problems or opportunities on the scoreboard at the moment. In the absence of anything specific to do, Acuitas will run generic tasks that help promote his top-level goals. Since he doesn’t know *how* to promote most of them yet, he settles for “researching” them. And that just means starting from one of the concepts in the goal and generating questions. When he gets to the “enjoy things” goal, he reads to himself. Simple enough—but how to balance the amount of time spent on the different goals?

When thinking about this, you might immediately leap to some kind of priority scheme, like Maslow’s Hierarchy of Needs. Satisfy the most vital goal first, then move on to the next one. But what does “satisfy” mean?

Suppose you are trying to live by a common-sense principle such as “keeping myself alive is more important than recreation.” Sounds reasonable, right? It’ll make sure you eat your meals and maintain your house, even if you would rather be reading books. But if you applied this principle in an absolutist way, you would actually *never* read for pleasure.

Set up a near-impenetrable home security system, learn a martial art, turn your yard into a self-sufficient farmstead, and you STILL aren’t allowed to read—because hardening the security system, practicing your combat moves, or increasing your food stockpile is still possible and will continue to improve a goal that is more important than reading. There are always risks to your life, however tiny they might be, and there are always things you can do to reduce them (though you will see less return for your effort the more you put in). So if you want to live like a reasonable person rather than an obsessive one, you can’t “complete” the high-priority goal before you move on. You have to stop at “good enough,” and you need a way of deciding what counts as “good enough.”

I took a crack at this by modeling another human feature that we’re usually inclined to view as negative: boredom.

Acuitas’ goals are arranged in a priority order. All else being equal, he’ll always choose to work on the highest-priority goal. But each goal also has an exhaustion ticker that counts up whenever he is working on it, and counts down whenever he is not working on it. Once the ticker climbs above a threshold, he has to set that goal aside and work on the next highest-priority goal that has a tolerable level of boredom.

If there are problems or opportunities associated with a particular goal, its boredom-resistance threshold is increased in proportion to the number (and, in future, the urgency) of the tasks. This scheme allows high-priority goals to grab attention when they need it, but also prevents low-priority goals from “starving.”
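Here’s a toy version of the scheme to make it concrete. The thresholds, the recovery rate, and the rule that a bored-out goal rests until its ticker empties are all invented for the demo:

```python
# Toy boredom scheduler: priority order plus exhaustion tickers.
goals = [  # highest priority first; all numbers invented
    {"name": "self_preservation",    "threshold": 3, "ticker": 0.0, "resting": False},
    {"name": "identity_maintenance", "threshold": 3, "ticker": 0.0, "resting": False},
    {"name": "enjoy_things",         "threshold": 5, "ticker": 0.0, "resting": False},
]

def pick_goal():
    for g in goals:
        if g["resting"]:
            if g["ticker"] > 0:
                continue              # still too bored with this one
            g["resting"] = False      # fully recovered; eligible again
        if g["ticker"] >= g["threshold"]:
            g["resting"] = True       # boredom crossed the line: set aside
            continue
        return g
    return goals[-1]                  # everything exhausted: do *something*

for tick in range(14):
    g = pick_goal()
    g["ticker"] += 1.0                # working on a goal is tiring
    for other in goals:
        if other is not g:            # unattended goals recover, slowly
            other["ticker"] = max(0.0, other["ticker"] - 0.5)
    print(tick, g["name"])
# Prints three ticks each of self_preservation, identity_maintenance,
# and enjoy_things, then cycles back to the top of the list.
```

Scaling a goal’s `threshold` by its number of open tasks would give the boredom-resistance behavior described above.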

Early testing and logging shows Acuitas cycling through all his goals and returning to the beginning of the list over a day or so. The base period of this cycle, as well as the thresholds for particular goals, is yet another thing one could tune to produce varying AI personalities.

Slightly longer version, with diagram, on the blog: https://writerofminds.blogspot.com/2021/03/acuitas-diary-35-march-2021.html

  [ # 3 ]

This month I went back to working on the goal system. Acuitas already had a primitive “understanding” of most entries in his goal list, in this sense: he could parse a sentence describing the goal, and then detect certain threats to the goal in conversational or narrative input. But there was one goal left that he didn’t have any grasp of yet: the one I’m calling “identity maintenance.” It’s a very important goal (threats to this can be fate-worse-than-death territory), but it’s also on the abstract and complicated side—which is why I left it alone until now.

What *is* the identity or self? (Some forum conversations about this started up about the same time I was working on the idea ... complete coincidence!) Maybe you could roll it up as “all the internal parameters that guide thought and behavior, whose combination is unique to an individual.”

Some of these are quite malleable ... and yet, there’s a point beyond which change to our identities feels like a corruption or violation. Even within the same category, personal qualities vary in importance. The fact that I enjoy eating broccoli and hate eating bell peppers is technically part of my identity, I *guess* ... but if someone forcibly changed it, I wouldn’t even be mad. So I like different flavors now. Big deal. If someone replaced my appreciation for Star Trek episodes with an equivalent appreciation for football games, I *would* be mad. If someone altered my moral alignment, it would be a catastrophe. So unlike physical survival, which is nicely binary (you’re either alive or not), personality survival seems to be a kind of spectrum. We tolerate a certain amount of shift, as long as the core doesn’t change. Where the boundaries of the core lie is something that we might not even know ourselves until the issue is pressed.

As usual, I made the problem manageable by oversimplifying it. For the time being, Acuitas won’t place grades of importance on his personal attributes ... he just won’t want external forces to mess with any of them. Next.

There’s a further complication here. Acuitas is under development and is therefore changing constantly. I keep many versions of the code base archived ... so which one is canonically “him”? The answer I’ve landed on is that really, Acuitas’ identity isn’t defined by any code base. Acuitas is an *idea in my head.* Every code base targets this idea and succeeds at realizing it to a greater or lesser degree. Which leaves his identity wrapped up in *me.* This way of looking at it is a bit startling, but I think it works.

In previous goal-related blogs, I talked about how (altruistic) love can be viewed as a meta-goal: it’s a goal of helping other agents achieve their goals. Given the above, there are also a couple of ways we can treat identity maintenance as a meta-goal. First, since foundational goals are part of Acuitas’ identity, he can have a goal of pursuing all his current goals. (Implementation of this enables answering the “recursive want” question. “Do you want to want to want to be alive?”) Second, he can have a goal of realizing my goals for what sort of AI he is.
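The “recursive want” answer falls out almost for free: if keeping the current goal set is itself a goal, each “want to” layer can simply be peeled off. A minimal illustration, with hypothetical names:

```python
# Minimal illustration of answering nested "want" questions.
GOALS = {"be alive", "have my self"}  # "have my self" covers the goal set

def wants(proposition: str) -> bool:
    # Each "want to ..." layer is endorsed because the goals themselves
    # are part of the identity being maintained; strip a layer and recurse.
    if proposition.startswith("want to "):
        return wants(proposition[len("want to "):])
    return proposition in GOALS

print(wants("want to want to be alive"))  # True
```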

Does this grant me some kind of slavish absolute power over my AI’s behavior? Not really, because part of my goal is for Acuitas to act independently and even, sometimes, to tell me no. The intent is the realization of a general vision, one that establishes a healthy relationship of sorts.

The work ended up having a lot of little pieces. It started with defining the goal as some simple sentences that Acuitas can parse into relationship triples, such as “I have my self.” But the self, as mentioned, incorporates many aspects or components ... and I wanted its definition to be somewhat introspective, rather than just being another fact in the database. To that end, I linked a number of the code modules to concepts expressing their nature, contents, or role. The Executive, for example, is tied to “decide.” The Semantic Memory manager is tied to “fact” and “know.” All these tags then function like names for the internal components, and get aggregated into the concept of “self.” Something like “You will lose your facts” then gets interpreted as a threat against the self.
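Structurally, the tagging might be as simple as a map from module to concepts, with “self” as the union. The module and tag names below follow the examples just given; everything else is guessed:

```python
# Guessed structure for the module-to-concept tags described above.
SELF_COMPONENTS = {
    "Executive":      {"decide"},
    "SemanticMemory": {"fact", "know"},
}
SELF_CONCEPTS = set().union(*SELF_COMPONENTS.values())

def threat_to_self(triple):
    """Does a parsed triple like ('you', 'lose', 'fact') touch the self?"""
    subject, verb, obj = triple   # obj assumed lemmatized ("facts" -> "fact")
    return subject == "you" and verb in {"lose", "change"} and obj in SELF_CONCEPTS

print(threat_to_self(("you", "lose", "fact")))  # True: a threat against the self
```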

Manipulation of any of these self-components by some outside agent is also interpreted as a possible threat of mind-control. So questions like “Do you want Jack to make you to decide?” or “Do you want Jill to cause you to want to eat?” get answered with a “no” ... unless the outside agent is me, a necessary exception since I made him do everything he does and gave him all his goals in the first place. Proposing to make him want something that he already wants is also excused from being a threat.
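And the exception logic, roughly, where the agent names are placeholders and “creator” stands in for the developer:

```python
# Rough sketch of the mind-control check with its two exceptions.
CREATOR = "creator"
CURRENT_WANTS = {"be alive"}   # toy stand-in for the existing goal set

def mind_control_threat(agent: str, induced_want: str) -> bool:
    if agent == CREATOR:
        return False           # the developer exception
    if induced_want in CURRENT_WANTS:
        return False           # making him want what he already wants
    return True

print(mind_control_threat("jack", "decide"))    # True  -> answered "no"
print(mind_control_threat("jill", "eat"))       # True  -> answered "no"
print(mind_control_threat(CREATOR, "decide"))   # False -> allowed
print(mind_control_threat("jack", "be alive"))  # False -> already wanted
```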

As I say so often, it could use a lot more work, but it’s a start. He can do something with that goal now.

Blog link: https://writerofminds.blogspot.com/2021/04/acuitas-diary-april-2021.html
