

An example of a thinking machine?
 
Poll
In this example, is Skynet-AI thinking?
Yes: 5
Yes, but... (explain below): 1
No, but if it did... (explain below), it would be: 6
No, machines can’t/don’t/will never think: 2
Total votes: 14
 
  [ # 91 ]

Also, I noticed you carefully evade obstacles that don’t fit your model. For example, how are you planning on handling sequences if everything has to be binary? (Merlin’s example of sequential number counting is one.)

 

 
  [ # 92 ]

Yet another question: if everything is reduced to a binary choice, how do you define the question for that choice? I mean, it’s one thing to answer questions, another to ask them.

 

 
  [ # 93 ]
Jonathan Charlton - Jul 18, 2012:

You can choose to make your answers hard to predict, but that begs the question of whether you are choosing to answer the question, or whether you are choosing to provide random responses.  If you choose to answer the question you are being asked, then your choices are in response to the objective of the question.

That would make it a test of knowledge, not a test of conversation. It shows that you totally misunderstand the aim and scope of the Turing test. The Turing test is NOT a test of knowledge; it’s a test of conversational skills. That is also why it is not just questions and answers; it’s about interaction (and therefore hard to predict, which is the whole point of the test). You try to scale the test down to something ‘predictable’, and that is exactly what Turing tried to avoid with his proposal. The test is specifically aimed at testing the capability of a system to mimic human interaction. So your test is definitely NOT a Turing test.

Jonathan Charlton - Jul 18, 2012:

Whether I am happy or sad is independent of whether or not I have a chemical response. I choose stress, and thereby produce the cortisol. It’s not the other way around. Choosing your emotional state is binary: it selects A versus not-A. Emotions are a binary choice.

It seems your view (and probably understanding) of human neurology is just as scaled down as your version of the Turing test. Your idea that we choose to have a certain emotional state, and that our chemical state then follows that decision, is total nonsense and downright ignorant. Mind you, I’ve been studying emotion intensively, both in humans and in AI, for several years now (my own system is being developed on top of this work), so I do have at least some knowledge in this area.

You make some bold statements regarding several pretty complex scientific fields, which raises the question: did you actually study any of these fields, and if so, to what extent? Your system is primarily modeled on financial markets, which have very little in common with human neurology. Also, judging from your perception of the Turing test, it seems that you didn’t actually read Alan Turing’s paper ‘Computing Machinery and Intelligence’ and/or any of the seminal papers debating his test proposal.

 

 
  [ # 94 ]

Am I correct in assuming your “Turing Test” is something like the following?
I will use the classic Loebner Prize question as a demo:

Judge: Would it hurt if I stabbed you with a towel?
Room1: Yes
Room2: No

Judge: The answer is “No”.
Judge: Would it hurt if I stabbed you with a towel?
Room1: No
Room2: No

This is simply NOT how a Turing test works. If this is all you claim to be doing, most bots can already be corrected on the fly, and I see nothing groundbreaking in your work. I certainly see no evidence of something like this being able to pass a Turing test.

 

 
  [ # 95 ]
Jonathan Charlton - Jul 18, 2012:

Regarding your emotions, I’m sorry, they are a choice.  Whether I am happy or sad is independent of whether or not I have a chemical response.  I choose stress, and thereby produce the cortisol.  It’s not the other way around.

You “choose” stress?! You should become a psychiatrist. Telling people with depression or stress-related illness to “choose” to be happy instead would surely be a major medical breakthrough and one worthy of a Nobel Prize.

 

 
  [ # 96 ]
Genesis - Jul 19, 2012:
Victor Shulist - Jul 19, 2012:
Hans Peter Willems - Jul 18, 2012:

I have said this before: a chatbot is actually a grammatical expert-system (of some sort), and like with all expert-systems, the ‘intelligence’ is brought in by the builders of the system. The bot just follows the predetermined rules. So I agree that it is (still) not thinking.

Yes, you HAVE said that before, and it was no more true then than it is now lol.  I don’t care what kind of software you come up with, anyone can ALWAYS argue that “the computer is still just following instructions”.

I think what he’s trying to say is that there is a difference between a system that can go beyond its programmed boundaries and one that is imprisoned by its own code. Whether or not they are following pre-set rules is, I think we can both conclude, irrelevant.

Thank you for that correct assessment.

Also, what Victor still doesn’t understand is that in my system there are NO predefined rules to handle specific functionality; there is no grammar in the system, and there is also no predefined ontology in the system. It consists purely of the basic building blocks for intelligence and consciousness. It can learn grammar (in any language) in the same way as it can learn robotic motor control to handle walking or whatever. It handles knowledge and experience on a conceptual level, and because of that it even understands the concept of ‘not understanding something’ (this is handled by ‘shallow conceptuality’). It learns pretty much exactly like a human does, but because we can ‘plug in’ large amounts of data into a computer, it doesn’t need the years of training a human does.

Bottom line: when my system does show a certain utility, that utility is NOT defined by the developer; it is not preprogrammed. The only predefined utility in the system is to exist and to be able to learn by experience (same as a human).

 

 
  [ # 97 ]

I guess he’s trying to make us choose to believe him. mmm.

 

 
  [ # 98 ]

You know what I just can’t understand: if you truly have a system that passes the Turing test 100% of the time (even by your narrow definition), and if it can really be used for multiple domains (other than predicting stock values, at the time you tested it), combined with the multiple military credentials that you have been throwing around, then how come you are still pitching this idea on a public forum? Hell, how come you are even allowed to acknowledge the existence of such technology?
Surely the US military must see the value of a system that can predict the outcome of a battle before it begins, given the large amounts of data that they have on previous battles? They must also realize the danger it poses if terrorists were able to predict what to do best.

A system such as the one you have been describing is in essence ‘an oracle’: given data on previous occurrences, it is able to predict with 100% certainty what the outcome of similar events will be.
Something smells….

 

 
  [ # 99 ]
Merlin - Jul 19, 2012:

Hey CR,
By the way, how did you vote on the original question? Does Skynet-AI think in the example given?

I have some more questions before I vote. smile

You said,

Merlin - Jul 19, 2012:

The numbers 21, 23, and 22 do not exist anywhere in Skynet-AI’s programming or database. Neither do the words “twenty one”, “twenty two” or “twenty three”.

and then you said,

Merlin - Jul 19, 2012:

The first thing the bot needs to do is determine if this is a “Math related” query.
Most of this input and response in this case happens in a module that focuses on math. Skynet knows the basic building blocks of word based numbers which allow it to understand numerical word input. It translates the words into digits and then translates strings of digits into what we think of as numbers.

(emphasis mine) How does it translate words into digits and so forth if those numbers aren’t actually anywhere in Skynet’s programming? Or are “twenty” and “one” stored, but “twenty one” is not? And if that is the case, how did it learn to add one plus twenty to get the value of “twenty one”? Did you teach it via language input or did you code this?
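
For instance, I could imagine something along these lines, where only the word building blocks are stored and larger numbers are composed from them. (This is purely my own hypothetical sketch in Python, not a claim about how Skynet-AI actually does it.)

# Purely hypothetical sketch: store only the word "building blocks" and
# compose larger numbers from them, so "twenty three" itself is never stored.
UNITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50}

def words_to_number(text):
    """Translate e.g. 'twenty three' into 23 by adding tens and units."""
    total = 0
    for word in text.lower().split():
        if word in TENS:
            total += TENS[word]
        elif word in UNITS:
            total += UNITS[word]
    return total

print(words_to_number("twenty three"))  # 23
print(words_to_number("twenty one"))    # 21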

(By the way, no single answer to any of these questions necessarily means Skynet is or isn’t thinking; I just want to understand what it’s doing better before forming an opinion.)

Merlin - Jul 19, 2012:

It can recognize the basic math concepts (add, subtract, multiply, divide). The concept of a number “between” 2 others when it relates to a math question has the input transformed internally into:
(23+21)/2?
(this surprised me a little when I saw this structure because 21 and 23 are in a different order than the text input, come to find out that is the way I taught it)

(emphasis mine) How did you teach this?
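
And just so it’s clear what I’m picturing when you say “transformed internally”, something roughly like the following, where the pattern and the rewrite template are purely my guesses and not Skynet-AI’s actual internals:

import re

# Hypothetical illustration only: rewrite "what number is between X and Y?"
# into the averaging form (Y+X)/2.
def rewrite_between(question):
    match = re.search(r"between\s+(\d+)\s+and\s+(\d+)", question.lower())
    if not match:
        return None
    x, y = match.groups()
    return "({}+{})/2".format(y, x), (int(y) + int(x)) / 2

print(rewrite_between("What number is between 21 and 23?"))
# ('(23+21)/2', 22.0)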

 

 
  [ # 100 ]
Hans Peter Willems - Jul 19, 2012:

Also, what Victor still doesn’t understand is that in my system there are NO predefined rules to handle specific functionality;

What Hans still doesn’t understand is that, if you give the computer any instructions at all, in any way, shape or form, it is executing your code. Whether you choose to have it learn its first language or provide the necessary information directly is irrelevant. That has no bearing on whether or not the system, later, when trained on new information, either from raw data or from natural language input and knowledge of English grammar, is able to use that to draw new conclusions, even create hypotheses, generate its own code from its rich NL-based understanding and powerful knowledge acquisition abilities, and be taught its 2nd, 3rd, etc. language in that same, direct way.

We can think of it this way… imagine a child who doesn’t need to learn its first language by being fed, over and over, the long, sluggish way humans do, but comes with a kind of ‘built-in’ language that allows it to be given, directly, the required information it needs to understand some natural language, English or whatever. Then it learns its first language 100 times faster than other children, THEN you move on to teach it about the world, and it goes on to make conclusions, generate its own programs, and learn other languages (since language is just more information about the world, same as basic things like ‘water is wet’). Hans seems to be saying that having this ‘bootstrap’ initial ability to be directly given this knowledge somehow invalidates or precludes whatever abilities it could develop afterwards - talk about narrow-mindedness.

Hans says “can learn”, yet uses the word “when…”. Is there any I/O proof of this functionality now, or is he just hoping it will work?

So, a question for the room/thread… take two systems, A and B. A is given raw data and determines how to understand language; B is provided, quickly and directly, with the information needed to understand a language. If both A and B are then able to, down the line, draw conclusions that the designer did not foresee, learn about the world via language, even write programs and create their own NL if-thens to generate their own theories… does it matter? I can’t see that it would. Hans is basically saying ‘no bootstrap language allowed’, which is a pretty silly requirement.

 

 
  [ # 101 ]
Steve Worswick - Jul 19, 2012:
Jonathan Charlton - Jul 18, 2012:

Regarding your emotions, I’m sorry, they are a choice.  Whether I am happy or sad is independent of whether or not I have a chemical response.  I choose stress, and thereby produce the cortisol.  It’s not the other way around.

You “choose” stress?! You should become a psychiatrist. Telling people with depression or stress-related illness to “choose” to be happy instead would surely be a major medical breakthrough and one worthy of a Nobel Prize.

Yep, that’s a bit insulting too. I have to take daily medication for depression and have suffered from it for most of my adult life. Telling me I chose for that to happen, well, it’s bullshit.

 

 
  [ # 102 ]
Jan Bogaerts - Jul 19, 2012:

how come you are still pitching this idea on a public forum? Hell, how come you are even allowed to acknowledge the existence of such technology?

Because it’s bullshit. (Roger, I stole your word… but it really fits here. Usually I don’t like to be so blunt, but I’m also calling bullshit on this one.) It’s an oversimplified version of the test: change the rules so you win. We have two people in here now who are claiming to build strong AI with ZERO proof, just vaporware built on old ideas and desperate attempts to understand human cognition, like emotion. smile

 

 
  [ # 103 ]
C R Hunt - Jul 19, 2012:

By the way, no single answer to any of these questions necessarily means Skynet is or isn’t thinking; I just want to understand what it’s doing better before forming an opinion.

Victor Shulist - Jul 19, 2012:

So, a question for the room/thread… take two systems, A and B. A is given raw data and determines how to understand language; B is provided, quickly and directly, with the information needed to understand a language. If both A and B are then able to, down the line, draw conclusions that the designer did not foresee, learn about the world via language, even write programs and create their own NL if-thens to generate their own theories… does it matter?

CR, I’ll give you more details a bit later,  but for now let me echo some of Victor’s comments and ask…

If I told you it was totally programmed in assembly language, would you vote one way?
or
If it was completely trained via Natural Language would you vote in another direction?

We don’t care if humans learn via English or French. Should it make a difference if an AI learns via the C++ language? Why wouldn’t C++ = French?

 

 
  [ # 104 ]
Victor Shulist - Jul 19, 2012:

We have two people in here now who are claiming to build strong AI with ZERO proof.

Victor, I think you are being a bit harsh. Many of the members of this board have never publicly demonstrated their theories with a proof-of-concept model that others could test.

We need to be able to allow people to come and openly discuss concepts even before they have coded the solutions. I like to think of it as a meeting of the old Greek scholars. smile

I remain an “Eager Skeptic”.

In Jonathan’s case he has given a time frame: “That’s what I’m hoping to demonstrate in an academic environment this August.”

In Hans’s case, he has an active project going on.

In the end the proof will be self-evident. It will prove (or disprove) their points.

I hope the discussions around projects like these help me think through some things, even if the results of those projects never see the light of day. For example:
While taking the AI courses, heavy emphasis was placed on probabilistic learning (Bayes’ rule). I even added it into Skynet-AI (although I have yet to use it for anything). It is an area I am currently exploring. Likewise, Hans’s “handling of analogies to fill in the holes” is similar to what I am currently doing within Skynet-AI.
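
(For anyone wondering what I mean by Bayes’ rule in this context, here is a toy illustration in Python of scoring whether an input is “math related”. The categories and numbers are completely made up for the example; this is not Skynet-AI’s actual code.)

# Toy illustration of Bayes' rule with made-up numbers:
# P(math | word) = P(word | math) * P(math) / P(word)
p_math = 0.2                # prior: fraction of inputs that are math related
p_word_given_math = 0.5     # "average" appears in half of the math inputs
p_word_given_other = 0.01   # ...and in 1% of everything else

p_word = p_word_given_math * p_math + p_word_given_other * (1 - p_math)
p_math_given_word = p_word_given_math * p_math / p_word

print(round(p_math_given_word, 3))  # 0.926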

My hope is we can all review and shed light on new/current concepts and continue to explore how we “think”. wink

 

 
  [ # 105 ]
Merlin - Jul 19, 2012:

If I told you it was totally programmed in assembly language, would you vote one way?
or
If it was completely trained via Natural Language would you vote in another direction?

The language with which the program manipulates information doesn’t matter. What matters (to me) is whether or not it can break down a new task, one that it has not encountered nor been programmed directly to handle, into pieces that are solvable with the tools it has. Given, of course, that it is ultimately capable of the task; no person save MacGyver could be expected to learn to fly with nothing but a paperclip and a rubber band, for example. wink And given that it has examples of the task from which to learn.

If it can—through trial and error or more sophisticated guessing—develop its own algorithm (combination of actions it knows how to take) to complete the new task, then that is one level of thinking. If it can further refine its algorithm based on more examples of the same type of task, even better.

You can see how this is different from programming a new ability directly. I like to think of the latter as more like giving your bot a new “limb” or new innate ability. It is combining those innate abilities to handle new or familiar situations that I consider the act of “thinking”.

I’m not asking you these questions about Skynet-AI, incidentally, because I think it is important to know the ins and outs of an algorithm before determining if it constitutes thinking. Rather, I’m asking as an alternative to testing Skynet-AI myself. I could instead develop new tasks that require the same “innate skills” as defining an average and see if Skynet-AI is capable of thinking up a new solution to the task. (Finding the median of a group of numbers, for example.) Often finding a point where the algorithm fails can give insight into how the bot is approaching the problem.

———————————————————————————————————-

Since this board is often fond of specific examples (hi Victor smile ), let me try to illustrate what I mean.

So let’s say your bot has an innate sense of numbers (fine and dandy—humans do too after all). It can add and subtract and multiply and divide. (Though in principle the latter two could also be taught using the first two.) You teach your bot that the average of 21 and 23 is 22. It comes up with the following algorithms:

x = [21,23]
x = x+[1,0] = [22,23]
average = x[0] = 22

x = [21,23]
x = add(x) = 44
average = x/2 = 22

Both of these algorithms are composed of a list of “innate” skills. Both give the right answer. Coming up with these hypotheses just from the input (no matter how you organized the input—assembly or Lithuanian) is a type of thinking.

Now, suppose I tell it that the average of 10 and 20 is 15. Then a higher-level thinker should be able to discard the first hypothesis. And if I tell it that the average of 25, 45, and 50 is 40, then it should discard the second as well. Then it should develop a new algorithm that encompasses all the examples, perhaps:

x = [25,45,50]
y = count(x) = 3
x = add(x) = 120
average = x/y = 40

Not being able to do one of these later steps doesn’t negate the fact that it was thinking when it developed the first set of hypotheses. Even the first hypothesis, though clearly false, demonstrates thinking (to me). And even if it doesn’t have the skills to develop the correct algorithm (say, it doesn’t know how to ‘count’), it can still show some level of thinking by developing one that is only partially correct. Again, this maps to humans and animals. I don’t have the ability, at a glance, to distinguish a pile of 1000 dollar bills from 1050, but I’ll still try to think of a way to pick the larger pile, for better or worse. smile
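
To make this concrete, here is a minimal sketch (in Python, just for illustration; the primitive skills and the naive prune-against-examples strategy are my own assumptions, not anyone’s actual bot) of how such hypothesis pruning might look:

def first_plus_one(xs):        # bogus hypothesis: add 1 to the first element
    return xs[0] + 1

def half_sum(xs):              # sum the list and divide by 2
    return sum(xs) / 2

def mean(xs):                  # sum the list and divide by how many there are
    return sum(xs) / len(xs)

# Candidate algorithms built from "innate" skills (index, add, divide, count).
hypotheses = {"first+1": first_plus_one, "sum/2": half_sum, "sum/count": mean}

# Worked examples arrive one at a time; each new example prunes the candidates.
examples = [([21, 23], 22), ([10, 20], 15), ([25, 45, 50], 40)]

for inputs, expected in examples:
    hypotheses = {name: fn for name, fn in hypotheses.items()
                  if fn(inputs) == expected}
    print(inputs, expected, sorted(hypotheses))
# [21, 23] 22 ['first+1', 'sum/2', 'sum/count']
# [10, 20] 15 ['sum/2', 'sum/count']
# [25, 45, 50] 40 ['sum/count']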

Edited to add: The opinions expressed in this post are of course just my own. Merlin asked what would prompt me to vote that yes, Skynet-AI was “thinking” when it answered a particular question. What constitutes the level or method of thinking for an animal is of course another question entirely, and certainly a more complicated one which I’m not equipped to address.

 

 