I know that it's hard to provide accurate feedback since the actual architecture is not readily available, but in general terms the question is:
Does the following snippet represent a learned behaviour? There is nothing in the modules programmed to respond to a concept that is not understood that would produce this response. The default response (if nothing produces a direct response) is some variation of "I'm sorry, I couldn't think of a response, would you mind rephrasing the question?" If there is a major meltdown, the response is "Sorry, spaced out there." So this response, which started coming up a while ago, is something the AI has started doing on its own. To be honest, with the self-configuration it becomes increasingly difficult to track why certain things happen; usually I can glean some idea of why a certain path was chosen, but this is unique. Referring to the following conversation:
What’s your name: Hello.
Vince: Hi
What’s your name: How are you?
Vince: not bad
What’s your name: My name is Fletch.
Vince: Hello fletch
fletch: Why do you look so sad?
Vince: why…what?
fletch: You look unhappy.
Vince: you…what?
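In case it helps pin down the question, the hard-coded fallback logic described above amounts to something like the following sketch. The function and variable names are hypothetical; only the quoted strings come from the actual system:

```python
def fallback_response(direct_response, meltdown):
    """Sketch of the programmed (non-learned) fallback hierarchy:
    a meltdown string, a rephrase-request default, or a direct hit."""
    if meltdown:
        # catastrophic failure path
        return "Sorry, spaced out there"
    if direct_response is None:
        # nothing in the modules produced a direct response
        return ("I'm sorry, I couldn't think of a response, "
                "would you mind rephrasing the question?")
    return direct_response
```

The point is that "you…what?" appears on none of these paths.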
Again, I understand that it's difficult without knowing exactly what's going on, but in general terms: has the neural net configured itself to recognize that it does not understand an interrogatory/statement, and has it programmed itself to ask for more information?
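If so, one hypothetical reconstruction of the pattern (purely a guess at what the net may have converged on; nothing below exists in the actual system) is: echo the first word of the sentence it could not parse, then ask "what?":

```python
def clarification(utterance, known_words):
    """Hypothetical reconstruction of the observed 'you…what?' pattern:
    if the utterance contains a word outside the known vocabulary,
    echo its first word and append 'what?' to request more information."""
    words = utterance.lower().rstrip("?.!").split()
    if not words:
        return None
    if all(w in known_words for w in words):
        return None  # fully parsed; no clarification needed
    return f"{words[0]}…what?"
```

For example, "Why do you look so sad?" with "sad" outside the vocabulary yields "why…what?", and "You look unhappy." with "unhappy" unknown yields "you…what?", matching the transcript.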
And if this is what has happened, what does it represent? (Being careful not to go overboard:) is this at least possibly a step towards independent thought?
Thanks in advance for your input, which has in the past always proven insightful.
Vince