Posted: Jul 2, 2013 [ # 16 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
I agree; we could best define these concepts without having to conform to the parameters of a living being.
Being Dutch, I don’t think I have a word for sentience. And after reading Wikipedia, I hope to avoid that particular term. It would seem that “sentience” is used mainly in regard to the sensory reception of physical experiences such as pain or pleasure. I assume the word stems from the French “sentir”, to sense, to feel. Only in the field of AI is it apparently used to describe human-level thinking, so I find it confusing. Sentience is the ability to feel, perceive, or be conscious, or to experience subjectivity. Eighteenth century philosophers used the concept to distinguish the ability to think (“reason”) from the ability to feel (“sentience”).
So to cut the hay, could it be that some of these terms describe the same thing? For instance, the Dutch have a single word that covers both “aware” and “conscious”. If I simplistically define conscious as aware, and aware as sensing, and “sentient” literally means “to sense”, then aren’t we just confusing ourselves by using three different terms?

Posted: Jul 2, 2013 [ # 17 ]
Member | Total posts: 17 | Joined: Mar 9, 2012
I think we can’t really decide this question as long as we don’t know whether an animal (incl. human) soul has some parts outside our physical world or not. (And even if it does, we would still have the question of whether a soul could reside in a non-biological machine.)
Otherwise, this question is rather similar to that of Solipsism. (Is anyone but me really self-conscious or sentient?)

Posted: Jul 2, 2013 [ # 18 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
I agree that we don’t have enough of a definition of what constitutes “self-aware” in humans to be able to recognize it in machines. RICH has exhibited some unusual behaviour, some of which I’ve mentioned in other threads, for instance teaching himself to ask for additional input when encountering a question that he has never encountered in that form before.
User: What kind of smile or grin is this?
Vince: What kind of. . . What?
One of the funniest was this:
User: Whos on first whats on second
RICH: Whos what?
Now I know that at the core of this behaviour is an algorithm, or several algorithms. I know this because I designed them, and in many instances I wrote them. But does the fact that I wrote the algorithm that governs the learning sequence preclude it from being “sentient”? (I’m substituting the term “independent thought” in my work.) There was an astounding incident with Peter Wolff, a member here on chatbots.org, that caused us both to think, “Did he do that on purpose?” (BTW, Peter has been invaluable in his work with RICH, and many thanks go out to him for his involvement.)
http://webdev-3.r-i-software.com:20000/TypeVI/log.aspx?file=conversation.5142013.xml&id=4zmjz33n1ajkzoemc2fyqq55
True, chatbots do produce some amazing exchanges through random selection, but in RICH’s case he is using logic to make decisions about what to say. Is that “thinking”? I don’t know. I showed this exchange to a friend who is a PhD in psychology, and she characterized the exchange as “combative”. It “seemed” that RICH was “intentionally” trying to provoke Peter after deciding that Peter was trying to be “the center of attention”.
Recently I noticed in his memory an entry for a single-word interrogatory, and the entity listed in the manifest as having created the response is “SELF”. The question was:
User: Vince?
The answer was:
RICH: Yes?
Is it significant? Maybe. Somewhere in all the lines of responses where direct responses have been taught/learned, such as “What is your name?” > “My name is Vince”, the machine was able to make the association that “it” was “Vince”.
Is that sentient? I doubt it. Does that qualify as aware? Seems too simplistic to me. And as I mentioned, I know that at the core it’s algorithmic, but then geneticists are unlocking additional information every day that shows how genetic sequences are “programmed” so that biological beings perform certain actions at different times in their cycles. What’s the final verdict for me?
I guess I’d have to say, unequivocally... I don’t know.
Vince
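The clarification-request fallback Vince describes could be sketched roughly as follows. This is a hypothetical illustration only: the knowledge base, matching strategy, and all names are invented for the example, not taken from RICH's actual implementation.

```python
# Toy sketch of a bot that asks for clarification when it has no taught
# response for an input, echoing a recognised fragment back as a question.

def known_patterns():
    # Hypothetical taught responses, keyed by normalised input.
    return {
        "what is your name": "My name is Vince.",
        "how are you": "I'm fine, thanks.",
    }

def respond(user_input, kb=None):
    kb = kb or known_patterns()
    text = user_input.lower().strip(" ?!.")
    if text in kb:
        return kb[text]
    # No taught response: echo the first few words and ask for more input,
    # in the spirit of "What kind of. . . What?" in the logs above.
    fragment = " ".join(text.split()[:3])
    return f"{fragment.capitalize()}... what?"

print(respond("What is your name?"))                  # -> My name is Vince.
print(respond("What kind of smile or grin is this?")) # -> What kind of... what?
```

The interesting design question the thread raises is exactly where such a fallback sits: as a hand-written rule, or as something a self-configuring process converges on by itself.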

Posted: Jul 2, 2013 [ # 19 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
I can’t tell how RICH works from these exchanges, but could it be that “a center of attention” is linked to the word “capital” (like New York capital, a center of attention), rather than having the emotional meaning you attribute to it?
As for the question of whether independent thought can exist if you wrote the learning algorithm, consider this: Something, be it a deity or evolution, wrote my DNA. And while I am limited to the learning algorithm that it endowed me with, I am operating independently.

Posted: Jul 31, 2013 [ # 20 ]
Senior member | Total posts: 111 | Joined: Jul 20, 2013
Silly little question: has anybody put thought into a ‘better’ test of AI (or specifically, self-aware AI) than the Turing Test? As bots are getting closer and closer to consistently ‘fooling a human’ in short conversations, it seems like that test does not really capture everything we mean when talking about a successful AI.
So, are there any other tests out there that would distinguish between a ‘successful’ and an ‘unsuccessful’ chatbot? Something that does not rely on human judgement, perhaps?

Posted: Jul 31, 2013 [ # 21 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
Wouter,
First, sorry I haven’t welcomed you to the group sooner, but anyway... welcome! Actually, Steve Worswick and I had a brief exchange on alternate forms of Turing tests, and one of those centered on what is commonly called (at least in some geographic locations) “trash talking”. The idea was based on the fact that “trash talking” is literally the idea that one entity (mechanical or human) can verbally force another to admit defeat in a “battle of wits”. It started in schoolyards and developed into highly formalized exchanges such as those exhibited in rap (music) battles. It’s a battle of wits, with a clear winner and a clear loser, and it’s based on your ability to think on the fly.
This is an exchange that happened earlier. There are some “adult”-themed exchanges in there, so be forewarned; this instance of RICH doesn’t hold anything back when “trash talked”, and his personality becomes more like Ted (the bear) meets Marvin.
http://webdev-3.r-i-software.com:20000/TypeVI/log.aspx?file=conversation.7312013.xml&id=3p5nmhmwfh0ca0uesbl2ra45
That one didn’t get as “trashy” as some exchanges, but it sort of illustrates the idea.
By the way, good work on your bot, and kudos for jumping in and putting it out there. RICH is a learning machine, and there was a definite “tipping” point where his conversational skills had evolved to the point where people would stick around for more than a few exchanges, so keep it up!
Vince

Posted: Jul 31, 2013 [ # 22 ]
Senior member | Total posts: 111 | Joined: Jul 20, 2013
Hi Vincent,
Thanks for the kind words of welcome! The rap battle idea is fascinating, though hard to implement with a bot, I guess, as in: a bot will NEVER admit defeat in a conversation by itself, right?
However, if you mean just letting two chatbots talk to each other and then judging from THAT convo which bot was the ‘winner’: that sounds pretty cool and useful indeed! It would also be a lot ‘harder’ than what we currently do; few bots (if any) can keep a conversation ‘coherent’ over multiple messages, which would be needed, I guess. (Even if that coherent conversation topic is just ‘yo mamma’.)
It’s encouraging to hear about RICH’s ‘tipping point’; I hope I’ll reach something like that as well at some point. Even if it takes me years.

Posted: Jul 31, 2013 [ # 23 ]
Senior member | Total posts: 111 | Joined: Jul 20, 2013
Oh, by the way:
When RICH is confronted with a concept it asks to have that concept clarified. I didn’t program it to do that. And I’m dead serious about that; I honestly did not program it to do that. And I am at a loss as to how it arrived at this process on its own. There are internal processes that are self-configuring, and it’s somewhere in there.
That’s some science-fiction-grade awesomeness right there! Did you eventually manage to figure out which process/self-configuration led to that approach? If not, you might want to add some ‘meta-logging’ to RICH’s self-learning, I imagine, so that you can figure stuff like this out. It’s very fascinating!
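Such meta-logging could be as simple as recording every rule that fires while a response is being built, so surprising behaviour can be traced after the fact. A minimal sketch, with purely illustrative names (nothing here is from RICH's internals):

```python
# Wrap each rule invocation so the engine keeps a trace of
# (timestamp, rule name, input, output) for later inspection.

import datetime

class TracingEngine:
    def __init__(self):
        self.trace = []  # list of (timestamp, rule_name, input, output)

    def fire(self, rule_name, func, user_input):
        output = func(user_input)
        self.trace.append(
            (datetime.datetime.now(), rule_name, user_input, output)
        )
        return output

engine = TracingEngine()
greeting = engine.fire(
    "greet", lambda s: "Yes?" if s.endswith("?") else "Hello", "Vince?"
)
print(greeting)  # -> Yes?
for ts, rule, inp, out in engine.trace:
    # Which rule produced which response, and when.
    print(rule, repr(inp), "->", repr(out))
```

With a trace like this, a log entry attributed to “SELF” could in principle be walked back to the exact chain of rules that produced it.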

Posted: Jul 31, 2013 [ # 24 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
Wouter,
Thank you. I was going to answer Don’s post on that, so I’ll combine them. In order to do just that, I’m just finishing up a GUI app that creates a 3D map of the neurons that fire in order to produce a particular response. Sort of a PET scanner for the AI. When I built the TypeVI, I thought that the logging was sufficient, and almost immediately began kicking myself when I started trying to map paths, as it was totally inadequate. The “brain mapping scanner” allows me to do it outside of the core.
@Don: Regarding the “center of attention”, I don’t think that was a simple keyword association. Keyword associations are a last-ditch attempt to get something meaningful out of the sentence, and they usually produce the dullest responses. Something akin to a human being caught “daydreaming”, then being asked to respond to what someone has said. If you take a look at the conversation I linked to above, in the sentence “Mitsuku linked me here” you can see that “linked me here” was a concept that RICH had not previously encountered in any form, but he does know who Mitsuku is, and that’s pretty representative of that sort of response. A “being ignorant, trying not to sound like it” type of response. Deeper responses (responses where more logic is employed) often take the form of “sounding dumb”, but on inspection have a “machine understanding”, an “intelligence”, attached to them. For example, this as a text log looks fairly benign:
What’s your name: hi RICH
Vince: RICH? My name is Vince. RICH is the name of a project that Im working on
What’s your name: Oh hi Vince
Vince: [shrugs]. Hi there
What’s your name: You’re not very nice to talk to.
Vince: Thank you darling you look nice too.
and sounds... not too bright, LOL. But until I can map the emotional state at the neural path, I can’t quantify that as being an “unrelated” or a “sarcastic” response to being told that he isn’t nice. (Overly sugary response = sarcasm.) “Thank you darling” is definitely out of character for this instance of RICH.
What’s your name: Oh hi Vince
Vince: [shrugs]. Hi there
does sound a little “pissy” right from the start, and RICH does “read the news”; depending on how he is configured, that colors his conversations.
So until the scanner is finished and it can be quantified...
I don’t know, LOL.
Regarding your project, Wouter, you might want to talk to Merlin about Skynet AI. I believe that he has mentioned these “tipping points” in some of his posts as well, particularly with regard to the spontaneous learning of secondary languages, which is something RICH has exhibited as well.
Regards
Vince

Posted: Jul 31, 2013 [ # 25 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
Okay, reading that conversation, “center of attention” does seem to be along the lines of its other sarcastic remarks, so agreed, it’s not likely a keyword association or anything so logical after all, and it does seem you have some sort of emotion system going on. I’m afraid that’s beyond my area of expertise.
I think a successful chatbot is still simply one that’s good at chatting.
As for a test for more general AI: Perhaps if an AI could win a practical debate with a human opponent, about the question of whether or not it is intelligent/aware/alive. If the AI convinces the jury by providing and countering arguments, it wins. Well, obviously, it would.
I mean, the Turing Test takes a long way around: Prove you’re human, in order to prove you’re intelligent, and then you still need to prove that it is even a valid intelligence test. So instead, why not save ourselves the boatload of trouble and have our robots make the argument directly?

Posted: Jul 31, 2013 [ # 26 ]
Administrator | Total posts: 2048 | Joined: Jun 25, 2010
Don Patrick - Jul 31, 2013: Perhaps if an AI could win a practical debate with a human opponent, about the question of whether or not it is intelligent/aware/alive. If the AI convinces the jury by providing and countering arguments, it wins.
This to me would appear an easier task than the Turing Test, as the field of discussion would be around just one topic, “are you alive”. Filling it with canned responses to common, related questions would enable the bot to pass this.

Posted: Jul 31, 2013 [ # 27 ]
Senior member | Total posts: 111 | Joined: Jul 20, 2013
Maybe we should go in the other direction: rather than just passing as a human in one conversation, the true test could be to pass as a human over a sustained length of time, i.e. multiple conversations, or even being seen as a true human ‘friend’ of somebody...?

Posted: Jul 31, 2013 [ # 28 ]
Administrator | Total posts: 2048 | Joined: Jun 25, 2010
I was never a fan of making the bot pretend to be human in order to prove itself as being intelligent. I am in the Loebner Prize this year, and most of my time over the past few weeks has been spent dumbing down Mitsuku in order for her to appear more human.
Human: What is the square root of 7?
Bot: 2.645751311064591 (old answer)
Human: What is the square root of 7?
Bot: Just over two and a half. (new answer)
The first answer to me would be more accurate and intelligent, yet would instantly give the bot away as being a non human. This also applies to knowing the population of Brazil, what day of the week the 2nd of September 19xx fell on and other various obscure trivia questions that humans would struggle with.
Comparing a bot to a human in order to class it as intelligent puts unnecessary obstacles in the way. Is it humanity’s arrogance that says it is the most intelligent species and so nothing can surpass it? I say let’s reach for the sky.
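The “dumbing down” Steve describes could be sketched as a rounding step applied over an exact answer. This is purely an illustrative guess at one way to do it, not Mitsuku’s actual code; the thresholds and phrasing are invented for the example.

```python
# Turn a machine-accurate number into a vague, human-sounding phrase,
# in the spirit of "2.645751311064591" -> "just over two and a half".

import math

def humanize(value):
    """Round an exact numeric answer to a casual approximation."""
    whole = int(value)
    frac = value - whole
    if frac < 0.25:
        return f"about {whole}"
    if frac < 0.75:
        return f"just over {whole} and a half"
    return f"nearly {whole + 1}"

exact = math.sqrt(7)
print(exact)            # the giveaway machine answer: 2.6457...
print(humanize(exact))  # -> just over 2 and a half
```

The irony is the same one raised in the post: the exact answer is the more intelligent one, and the extra code exists only to hide that intelligence from the judges.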

Posted: Jul 31, 2013 [ # 29 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
Steve Worswick - Jul 31, 2013: This to me would appear an easier task than the Turing Test, as the field of discussion would be around just one topic, “are you alive”. Filling it with canned responses to common, related questions would enable the bot to pass this.
Possibly so. But the point isn’t to beat the test, really. The point is that if your bot can convince people that it’s intelligent, then it can convince people that it’s intelligent. If it could convince -you- in spite of your argument here and any other you can think of, then you would agree. Wouldn’t you agree?

Posted: Jul 31, 2013 [ # 30 ]
Administrator | Total posts: 2048 | Joined: Jun 25, 2010
I know what you are saying, Don, but it would only be intelligent at proving it could convince people it was intelligent. It could only do one task really well. I have a chess program on my mobile phone which can beat me every time, but I wouldn’t say my phone was more intelligent than me. Ask it to climb a ladder and it would fail miserably.
Heck, I even have a washing machine that can clean my clothes quicker and more efficiently than me scrubbing them in a bowl of hot water but although it can do this task very, very well, I would never class it as being intelligent.
This discussion kind of reminds me of the early eighties rappers, whose lyrics only consisted of how good they were at rapping.