And now for something totally different…
@Wakko, disregard these comments on your style, ideas, accomplishments, etc. In this part of the internet, most newcomers get this treatment. For example, Steve says, “prove it, can I have some?” Andy says, “that’s not what I’ve learned from my hard-earned efforts - this is better to try…” C R says, “Interesting, now scientifically you’re saying this… carefully examine what you’re proposing so I can assimilate it.” Dave says, “I may be mistaken, but really, to me, it is this…” Merlin says, “This is really that. This is not so… The community thinks this way (and so do I)…”
Of course, I, on my high horse, have been there too. I’m the old man who asks questions to convince you of what I believe…
Artificial life adapts to the environment in which it lives by using the “tools” it has, and you can see many constraints in defining how an entity could react. For a chatter bot, there are the rules of conversation. Content is much less defined in those rules, so we observe implementations that can do the things that are defined: parsing inputs for understanding, signalling topics and subject changes, and performing dialog like an actor in a play. Thinking (intelligence) is not defined yet; we may only know it when we see it, that is, in the behavior. An artificial life form that determines its own destiny and sets its own goals is next to impossible in a virtual world that has no grounding for such goals. That is the crux of the matter: what would a chat bot think about?
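To show what I mean by the things that are defined, here is a toy sketch of my own (the topics, keywords, and scripted lines are invented for illustration, not taken from anyone’s actual bot): it parses input, signals a subject change, and recites its lines like an actor, with no thinking anywhere in it.

```python
# A toy, invented sketch of the "defined" parts of a chatter bot:
# parse the input, signal a subject change, recite a scripted line.
# No thinking happens anywhere in here, only rules of conversation.

TOPIC_KEYWORDS = {
    "weather": {"rain", "sunny", "forecast", "cold"},
    "chess":   {"pawn", "knight", "checkmate", "opening"},
}

SCRIPTED_LINES = {
    "weather": "Yes, the weather. What is it like where you are?",
    "chess":   "Chess! Do you prefer open or closed positions?",
    None:      "Tell me more.",
}

def detect_topic(utterance):
    """Parse the input into words and match them against known topics."""
    words = set(utterance.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return None

def respond(utterance, current_topic):
    topic = detect_topic(utterance)
    if topic and topic != current_topic:
        # Signal the subject change, like an actor picking up a cue.
        cue = "You changed the subject. " if current_topic else ""
        return cue + SCRIPTED_LINES[topic], topic
    return SCRIPTED_LINES[topic], topic or current_topic

topic = None
for line in ["I think it will rain today", "My knight took his pawn"]:
    reply, topic = respond(line, topic)
    print(reply)
```

Everything it does well is written into the rules; everything interesting is exactly what is missing.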
If we had continued, instead of you bowing out, I would have suggested how survival is related to having “space”, that is, preserving the territory that sustains you. Threatening the loss of that territory leads to the need for social interaction, and thus language, and thus the understanding of motives, the comprehension of what others think, the modelling of how they work, and the negotiation of oneself (which is evident in these blogs), and thus intelligence. I would have suggested that by stressing your artificial creatures properly, you may eventually get them to talk (only don’t torture them until you know they can talk and think). The bottom line comes down to making creatures that deal with such training and conditioning by thinking. So we have come full circle: whether we use artificial life or not, we have to create the creature. We have to build into it the behaviors that develop a mind. Can you build this thing from some chemistry kit? Is it abstract enough to fit into the workings of a computer?
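To make “stressing” concrete, here is a toy sketch under my own invented setup (the world size, the creatures, and every number come from nowhere but my head): territory on a line shrinks each step, and I count the forced encounters that, in my argument, are the pressure that would eventually demand signalling and language.

```python
# A toy, invented setup: creatures hold territory on a line, the world
# shrinks each step, and shrinking space forces encounters. The claim
# is that such encounters are the pressure that would demand language.
import random

random.seed(1)

world = 40  # size of the world; it shrinks each step to apply stress
creatures = [{"name": "c%d" % i, "pos": random.randrange(40), "range": 4}
             for i in range(5)]

def overlapping(a, b):
    """Two creatures collide when their territories intersect."""
    return abs(a["pos"] - b["pos"]) < a["range"] + b["range"]

for step in range(5):
    world -= 6                                # the environment presses in
    for c in creatures:
        c["pos"] = min(c["pos"], world - 1)   # pushed into less space
    encounters = [(a["name"], b["name"])
                  for i, a in enumerate(creatures)
                  for b in creatures[i + 1:]
                  if overlapping(a, b)]
    print("world=%2d forced encounters=%d" % (world, len(encounters)))
```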
Very few, if any, here are working on general problem solving modules, belief networks that project into action, or genetic algorithms for functional programs. A few are doing neural nets, but mostly to filter noise out of their pattern matching. The operations are basically: search for a recorded response and format it pretty for output, or store some data and then report on that recorded information. Again, it is rare for someone to describe what a chat bot would “think.” The illusion on offer is that the deduction and inference in the pattern matching is the bot “thinking”. Mostly that is classification, though, not planning and strategy and “survival.” I wonder if any bot represented here calculates plot. I’m guessing that, at the very best, they identify plot.
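Here, as a deliberately bare-bones sketch of my own (the patterns and the stored “fact” are invented, not lifted from any bot represented here), is roughly that whole operation: match a pattern, fetch a recorded response, or store some data and report on it later.

```python
# A deliberately bare-bones bot: match a pattern, return a recorded
# response, or store a fact and report it back. Patterns are invented.
import re

memory = {}  # the stored "data" the bot can later report on

RULES = [
    (re.compile(r"my name is (\w+)", re.I),
     lambda m: memory.update(name=m.group(1))
               or "Nice to meet you, %s!" % m.group(1)),
    (re.compile(r"what is my name", re.I),
     lambda m: "You told me your name is %s." % memory.get("name", "unknown")),
]

def reply(text):
    for pattern, action in RULES:
        match = pattern.search(text)
        if match:
            return action(match)      # search for a recorded response
    return "I see. Go on."            # canned fallback; no thinking here

print(reply("My name is Wakko"))
print(reply("What is my name?"))
print(reply("Do you calculate plot?"))
```

There is classification in the matching, but no plan, no strategy, and certainly no plot.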
“The best way to predict the future is to invent it.”
— Alan Kay