
An example of a thinking machine?
 
Poll
In this example, is Skynet-AI thinking?
Yes 5
Yes, but... (explain below) 1
No, but if it did... (explain below) it would be. 6
No, machines can’t/don’t/will never think. 2
Total Votes: 14
 
  [ # 106 ]

Hi Merlin, Victor, and Jan,

First Merlin,

Regarding {B}, it is assumed to have a probability of 1 (it represents a certainty within the Luce and Raiffa example).  So your choice is between a certain outcome {B} and uncertain outcomes {A, C}.  I should have stated that up front, and I hope that clears up the misunderstanding there.

Regarding the first input, you don’t have to forget about it.  I’ll demonstrate in just a second.  I just want to get to your other point first.

Let us first begin with the difference in applying a Turing Test (in your version - the Merlin Version) to you at two years old versus you now.  At two, you lack the ability to communicate, but you still have the ability to reason and learn; else whence did your ability to learn language derive?  This is why the “Terrible Twos” are called that.  Children at that age are frustrated in that they know what they want but cannot communicate it.  There is a difference between knowledge and intelligence.  Now consider your example of naming a number.  Ask a two-year-old to name a number; they can’t, and thus they fail the Turing Test.  That doesn’t follow correctly, because the two-year-old is still intelligent; he or she merely doesn’t have the knowledge of the words.

Also, pairwise comparison is a simple way to consider two sets such that A is something defined as, say, “Barack Obama,” and B is all that is not Barack Obama, so that these two sets are represented by {A} and {B}.  One may ask whether a slip drawn from a hat holding many slips of paper, each bearing the name of an item belonging to either {A} or {B}, was in fact drawn from set {A} or set {B}.  The computer and the human may then learn the prior probability that something is in either set by iteration.

So: “Give me a number.”  The computer responds, “Blue.”  The interrogator responds, “No, blue is not a number,” and Blue is assigned a 0.  “Give me a number.”  The computer responds, “Butterflies.”  The interrogator responds, “No, butterflies are not a number,” and Butterflies gets a 0.  Going through a list of everything (a single iteration) results in a 0 or a 1 for those things which are not, and which are, numbers respectively.  The second time around, the question is asked and the computer will provide an item from the set of those things which have a 1, and that will be a number, until someone says no, that thing is not a number.
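
Something like the following is a minimal JavaScript sketch of that 0/1 feedback loop (the candidate set and the stand-in interrogator function are invented purely for illustration; this is not anyone’s actual program):

// Illustrative sketch only: every candidate starts unknown; after one pass of
// interrogator feedback, only items scored 1 are ever offered again as "numbers".
var candidates = { "blue": null, "butterflies": null, "seven": null, "42": null };

// Stand-in for the interrogator: 1 means "yes, that is a number", 0 means "no".
function interrogatorSaysIsNumber(item) {
  return (/^[0-9]+$/.test(item) || item === "seven") ? 1 : 0;
}

// First iteration: try everything, record the feedback.
for (var item in candidates) {
  candidates[item] = interrogatorSaysIsNumber(item);
}

// Later iterations: only answer from the set that earned a 1.
function giveMeANumber() {
  for (var key in candidates) {
    if (candidates[key] === 1) return key;
  }
  return "I don't know any numbers yet.";
}

console.log(giveMeANumber()); // e.g. "seven"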

Getting to plurals, ask a 3-year old what plurals are; they won’t be able to tell you.  That’s not because they aren’t intelligent, or don’t think, but because they don’t know what a plural is. They haven’t learned it yet.  It is over many iterations of feedback that they learn what a plural is.

So the point, Merlin, is that to control the Test, you must begin with a question of which both computer and human are equally ignorant, and then see if they differ in their ability to learn the concept.  Questions like the position of an electron or the location of a moving target, or even putting humans in a maze just like the rat mazes with a piece of cheese, all work, because both human and computer begin from the same point in their knowledge.


Now Victor,

Victor, my version of the Turing Test is valid.  Your assessment that my version of the Turing Test is invalid is incorrect.  Strong AI has been achieved, whether you are aware of it or not; differentiating between Strong AI as the ability to learn and Strong AI as consciousness is a non sequitur, because you can’t define consciousness yourself.  Regarding feedback being disallowed by the Turing Test: Turing cites a conversation with a computer.  Conversation requires feedback.  When a child is in school, they listen to a teacher, and then they are tested on that which they have learned.  The teacher grades the test; the child receives feedback and then learns from the mistakes.  That is discourse; that is a conversation, whether the feedback from the children to the teacher is provided over the course of a semester or in the moment.

Regarding your objection to the feedback in which you state,

“You say allowing the interrogator to provide feedback wasn’t specifically disallowed by Turing.. I think common sense dictates that it is… perhaps the computer should be able to ask a human for the appropriate reply?  Well, that wasn’t specifically disallowed by Turing, was it?  Snap out of it.”

Again, your point is a complete red herring.  Here is why.  There is absolutely no reason why the computer should not be able to ask the human interrogator for an appropriate reply.  ABSOLUTELY NONE.  Were the computer to ask this of the interrogator, it would help the interrogator ascertain that the computer is in fact a computer.  The computer asking the interrogator for an appropriate response, however, is completely in line with the Turing Test; it just means that in doing so the computer is more likely to fail it, but of course the Turing Test allows for the computer to fail.  Furthermore, it does not follow that my program is asking the interrogator anything.  The interrogator is asking both the human and the computer for information; both provide some; the interrogator provides feedback.  This is distinctly different from the example you provide (which isn’t a violation of the Turing Test anyway - it’s a criterion for the computer failing).  So I don’t need to snap out of anything.  You need to formulate logical arguments as opposed to logical fallacies such as red herrings and non sequiturs.

Regarding prediction, no one is predicting - that is impossible for both human and computer, because prediction involves ascertaining the future, which is a set of information that has yet to occur or be created.  Thus what people are doing instead is calculating probabilities of events - that isn’t predicting the future, that is recognition of a preference.  As for your argument that a computer must understand the information it is asserting in order to be defined as asserting, that just isn’t necessarily true.  When you are incorrect, you must recognize that you are incorrect and incorporate that into what it was you previously thought you had asserted as fact.  Further applying this toward the next event asserts that you have an understanding of the application of the information you possess.  I just differ on what it is you define as thinking.  Many people confuse thinking with remembering - there is no such thing as thinking.  “Thinking” is accessing memory, observing that it doesn’t match the present, updating your memory with the new information, and repeating the same experience of being wrong.  Humans decide, they don’t think.

Hi Jan, regarding the Turing Test which is what I am rehashing, no, you are not correct in presuming that all I have is theory.  I have a program, as I have told you; it is called Mu, and it defeats the Turing Test.  Given I have stated this several times, am I correct in presuming that you haven’t read the several times I have told you that I have a program called Mu, and that I assert defeats the Turing Test?

I recognize that you don’t agree that what I have stated as the Turing Test is the Turing Test, but just because you don’t agree doesn’t mean that you are correct.  I recognize that on my end, but I’ve been spending most of my time defending my version of the Turing Test, and still no one has provided any strong argument against it for which I have not provided a viable defense.  Furthermore, I recognize that because I’m on a chatbot site, there may exist a prior leaning toward one particular definition of the Turing Test.  I don’t argue that an expansion of decisions requires an expansion of response, but at the core level of learning (which many seem to confuse with prediction, which is the equivalent of reading tea leaves), which is that of a single decision, the Turing Test is defeated, and AI at that level is very strong.

 

 

 

 
  [ # 107 ]

In the same way that I chose to suffer from depression, I also choose to ignore you from now on, Jonathan - you are clearly a machine and have failed to prove that you are human.  Good day to you, and have fun in August.  I only hope your programmer included a sense of humour so you can cope with that.

 

 
  [ # 108 ]

Regarding {B}, it is assumed to have a probability of 1 (it represents a certainty within the Luce and Raiffa example).  So your choice is between a certain outcome {B} and uncertain outcomes {A, C}.  I should have stated that up front, and I hope that clears up the misunderstanding there.

That makes more sense.

Also, pairwise comparison is a simple way to consider two sets…

This I agree with, A or NOT A.

I recognize that you don’t agree that what I have stated as the Turing Test is the Turing Test, but just because you don’t agree doesn’t mean that you are correct.

Of course, the corollary to that is that just because you say something is a Turing Test doesn’t mean that you are correct. You may be creating a very interesting test to see if there are differences in learning between a human and Mu, and Mu defeats this test, thereby proving that the learning is the same (although how/if it handles uncertainty the same way humans do is still an open issue). But, because of the restrictions, I wouldn’t consider this a Turing test in the spirit of the “Imitation game” or any of the typical descriptions of the test, in which there are no restrictions on the questions that can be asked.

So the point, Merlin, is that to control the Test, you must begin with a question which both computer and human are equally ignorant to, and see then if they differ in their ability to learn the concept.

If we are able to restrict the domain of the test (like the question at the start of this thread), then Skynet-AI passes. It will in fact pass virtually any math-related addition or subtraction question. The only problem is that it is infallible, where humans are not. To more closely emulate a human, I would need to inject errors into the responses.
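
As a rough illustration, error injection could be as simple as the following hypothetical JavaScript (not actual Skynet-AI/JAIL code):

// Hypothetical example, not Skynet-AI code: occasionally return an off-by-one
// answer so the bot "feels" more human at arithmetic.
function humanizedAdd(a, b, errorRate) {
  var exact = a + b;
  if (Math.random() < errorRate) {
    // Slip by one in either direction, the way a hurried human might.
    return exact + (Math.random() < 0.5 ? 1 : -1);
  }
  return exact;
}

console.log(humanizedAdd(21, 23, 0.1)); // usually 44, occasionally 43 or 45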

 

 
  [ # 109 ]

Hi Jan, regarding the Turing Test which is what I am rehashing, no, you are not correct in presuming that all I have is theory.  I have a program, as I have told you; it is called Mu, and it defeats the Turing Test.  Given I have stated this several times, am I correct in presuming that you haven’t read the several times I have told you that I have a program called Mu, and that I assert defeats the Turing Test?

The thing I was trying to get at: if you dangle some sweets before us, aren’t you going to let us take a bite? I would like to chat with it, and see how well it does in the Turing test.

 

 
  [ # 110 ]
Jonathan Charlton - Jul 19, 2012:

Victor my version of the Turing Test is valid.  Your assessment that my version of the Turing Test is invalid is incorrect.

Jonathan, as you didn’t answer my question as to your actual knowledge of the Turing-test, I assume you didn’t read Alan Turing’s paper.

Your perception of the Turing-test is wrong because Turing didn’t actually devise the test; he merely proposed to use a (then) existing test, called ‘the imitation game’, which around that time was played between humans, as a means to test the capability of a machine to replace a human in the said game.

First of all, nowhere in Turing’s paper does he mention that the answers to the questions are fed back to the players, and neither does the description of the ‘imitation game’ imply in any way that this is part of the game. So you just made that up, to make your hypothesis work.

Secondly, Turing explicitly describes that giving wrong answers, like I suggested I could do myself in such a test, is an actual part of the game:

It is A’s object in the game to try and cause C to make the wrong identification.

Your description of the test is NOT consistent with the test described by Turing in his paper, hence your test is not a Turing-test by any measure.

 

 
  [ # 111 ]

CR,
I created a lengthy response earlier, but the forum ate it.
I’ll try to provide answers in more digestible chunks.

C R Hunt - Jul 19, 2012:

The language with which the program manipulates information doesn’t matter. What matters (to me) is whether or not it can break down a new task that it has not encountered nor been programmed directly to handle into pieces that are solveable with the tools it has. Of course, given that it is ultimately capable of the task. No person save MacGyver could be expected to learn to fly with nothing but a paperclip and a rubberband, for example. wink And given that it has examples of the task from which to learn.

If it can—through trial and error or more sophisticated guessing—develop its own algorithm (combination of actions it knows how to take) to complete the new task, then that is one level of thinking. If it can further refine its algorithm based on more examples of the same type of task, even better.

Although Skynet-AI has these types of capabilities, they were not used in this case. The example you describe is more similar to “General Solvers”. That would be what Jonathan Charlton’s framework could be like with a data set and a feedback loop. Given enough input, the AI may be able to derive its own algorithms that best fit the model of the input. In fact, I think Chuck did upload a spreadsheet that attempted to do what you describe.

Skynet-AI was taught more like we teach our children. You know the best algorithm and you want the bot to apply it in the broadest correct way possible.
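
To make the “derive its own algorithms” idea concrete, here is a toy JavaScript sketch (entirely illustrative; the candidate rules and examples are invented, and no real general solver is this simple): given input/output examples, try a handful of candidate rules and keep whichever fits them all.

// Toy illustration only. Candidate rules the "solver" is allowed to try.
var candidateRules = [
  { name: "sum",        fn: function (a, b) { return a + b; } },
  { name: "difference", fn: function (a, b) { return Math.abs(a - b); } },
  { name: "average",    fn: function (a, b) { return (a + b) / 2; } }
];

// The examples play the role of the feedback loop: keep the rule that fits every one.
function learnRule(examples) {
  return candidateRules.find(function (rule) {
    return examples.every(function (ex) {
      return rule.fn(ex.a, ex.b) === ex.answer;
    });
  });
}

var examples = [ { a: 21, b: 23, answer: 22 }, { a: 10, b: 20, answer: 15 } ];
console.log(learnRule(examples).name); // "average"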

 

 
  [ # 112 ]
C R Hunt - Jul 19, 2012:
Merlin - Jul 19, 2012:

The numbers 21, 23, and 22 do not exist anywhere in Skynet-AI’s programming or database. Neither do the words; “twenty one”, “twenty two” or “twenty three”.

and then you said,

Merlin - Jul 19, 2012:

The first thing the bot needs to do is determine if this is a “Math related” query.
Most of this input and response in this case happens in a module that focuses on math. Skynet knows the basic building blocks of word based numbers which allow it to understand numerical word input. It translates the words into digits and then translates strings of digits into what we think of as numbers.

(emphasis mine) How does it translate words into digits and so forth if those numbers aren’t actually anywhere in Skynet’s programming? Or is “twenty” and “one” stored, but “twenty one” is not? And if that is the case, how did it learn to add one plus twenty to get the value of “twenty one”? Did you teach it via language input or did you code this?

This is where JAIL (JavaScript Artificial Intelligence Language) comes in. It was designed to make it easy to create conversational AI. In some cases it looks more like programming; in others it looks more like English.

JAIL focuses on concepts and how to transform and manipulate them. In some ways it is like Hans’ analogy. Skynet-AI understands that a “twenty” is the same thing as a “2” followed by a “0”, and a “one” is the same thing as a “1”. All of these are just strings. It also knows that two word numbers together can be added and represented as a single numeric. Teaching it the algorithm directly is much more efficient. It is just like how we teach our children:

  20
+  1
____
  21

The AI transforms the input in an effort to simplify and translate it into an internal representation that it can better understand. This allows it to process numbers that it has never seen before.
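
A rough JavaScript approximation of that word-to-number transformation might look like the following (the real JAIL rules aren’t shown in this thread, so the value table and function name here are guesses, for illustration only):

// Illustrative guess at the transformation: each number word maps to a value,
// and adjacent word numbers are added, so "twenty three" becomes 20 + 3 = 23.
var wordValues = { one: 1, two: 2, three: 3, twenty: 20, thirty: 30 };

function wordsToNumber(phrase) {
  return phrase.toLowerCase().split(/\s+/).reduce(function (total, word) {
    return total + (wordValues[word] || 0);
  }, 0);
}

console.log(wordsToNumber("twenty one"));   // 21
console.log(wordsToNumber("twenty three")); // 23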

You teach your bot that the average of 21 and 23 is 22.
Although Skynet-AI can handle this type of input:

USER: What is (twenty three plus twenty one) divided by two?
AI: Twenty two.

You don’t want to teach it that, and that is where JAIL/Skynet-AI differs from pattern-matching bots. The goal is to teach it the process to go through to arrive at an answer in the general case.

Given:
What is the number between twenty one and twenty three?

It needs to recognize it as a question, discriminate between it and similar questions, narrow the domain to arrive at an answer, and generate a result.

USER: What is the number between twenty one and twenty three?
AI: Twenty two.
USER: What is the difference between twenty one and twenty three?
AI: Two.

So what I taught it was:
The number between two numbers is (Number A plus Number B) divided by 2.

C R Hunt - Jul 19, 2012:
Merlin - Jul 19, 2012:

It can recognize the basic math concepts (add, subtract, multiply, divide). The concept of a number “between” 2 others when it relates to a math question has the input transformed internally into:
(23+21)/2?
(this surprised me a little when I saw this structure because 21 and 23 are in a different order than the text input, come to find out that is the way I taught it)

(emphasis mine) How did you teach this?

What I found I actually taught it was:
The number between two numbers is (Number B plus Number A) divided by 2.

It only took 1 line to express this concept in JAIL.
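
The actual JAIL syntax isn’t shown in the thread, but a JavaScript stand-in for that one rule, including the reversed operand order Merlin noticed, could be:

// Illustrative stand-in for the rule:
// "The number between two numbers is (Number B plus Number A) divided by 2."
// Note the (b + a) order, matching the (23+21)/2 trace mentioned earlier.
function numberBetween(a, b) {
  return (b + a) / 2;
}

// "What is the number between twenty one and twenty three?"
console.log(numberBetween(21, 23)); // 22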

 

 

 
  [ # 113 ]
Merlin - Jul 19, 2012:

JAIL focuses on concepts and how to transform and manipulate them. In some ways it is like Hans’ analogy. Skynet-AI understands that a “twenty” is the same thing as a “2” followed by a “0” and a “one” is the same thing as a “1”.

I would say that’s a bit more like a synonym instead of an analogy. In my perspective, analogies are more ‘loose’ and very much subject to interpretation (and therefore pretty much subjective in nature). Example:

- a ‘car’ is synonymous with an ‘automobile’ (so actually unambiguous).

but…

- a ‘shopping cart’ is analogous to a ‘F1 racing car’ ... IF the analogy fits the criteria of the context in which the analogy is used (like; they both have four wheels).

Handling analogies like these means you have to introduce a weighted model to describe the validity of a certain analogy in relation to a specific context. You also need some sort of conceptual knowledge representation that is capable of describing the ‘sub-concepts’ that might lead to ‘analogous parameters’ linked to a concept. This is specifically one of the things my system does.
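
One way such weighted scoring could look, sketched in JavaScript (the concepts, attributes and weights below are invented for illustration and are not taken from Hans’ system):

// Invented example data: sub-concepts (attributes) attached to each concept.
var concepts = {
  "shopping cart": { wheels: 4, engine: false, racesF1: false },
  "F1 racing car": { wheels: 4, engine: true,  racesF1: true  }
};

// The context decides which attributes matter, and how much.
function analogyScore(a, b, contextWeights) {
  var score = 0, total = 0;
  for (var attr in contextWeights) {
    total += contextWeights[attr];
    if (concepts[a][attr] === concepts[b][attr]) score += contextWeights[attr];
  }
  return score / total; // 1.0 = perfect analogy in this context
}

// In a "four wheels" context the analogy holds; in a racing context it mostly doesn't.
console.log(analogyScore("shopping cart", "F1 racing car", { wheels: 1 }));             // 1
console.log(analogyScore("shopping cart", "F1 racing car", { wheels: 1, racesF1: 2 })); // ~0.33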

 

 
  [ # 114 ]

Hi CR smile

Merlin - Jul 19, 2012:

We don’t care if humans learn via English or French. Should it make a difference if an AI learns via the C++ language. Why wouldn’t C++ = French?

Hey Merlin, that’s a pretty decent argument there.  I tend to agree that it shouldn’t make the slightest difference.

Merlin - Jul 19, 2012:

In Jonathan’s case he has given a time frame: “That’s what I’m hoping to demonstrate in an an academic environment this August.”
In Hans case he has an active project going on.

Ok, perhaps English is not his first language then, since he used present tense (Hans … grammar DOES matter, especially in a forum lol).

and Jonathan states he has software that passes the Turing test “100% of the time”!!  So, I disagree; both of these individuals stated they have achieved AI.  But perhaps their GRAMMAR was a bit off and they intended to use future tense rather than present.


Jonathan Charlton - let’s agree to disagree perhaps.  I’ll stick with the original rules of the test.  And even if you want to interpret it another way, to me anyway, if I can’t sit down with your program and have a discussion with it, one on one, in natural language, then it doesn’t pass the Turing Test.  Call that my own rules if you like smile  I too, then, can interpret the TT as I see fit, like you.  I judge a computer’s intelligence by how well it does at understanding, that is, making sense of the input, and one way is with natural language understanding, which most people believe to be an AI-Complete problem.  So, let’s wrap that up.  I’m sure your software is superb at stocks, but not at NLU; perhaps it’s a nifty mathematical module, but that’s not NLU.  But you have every right to interpret that as intelligence; everyone seems to have their own idea smile


. . . by the way, which 2 people voted “no computer will ever think”? . . they need a slap . . just kidding.

 

 

 
  [ # 115 ]

Jonathan Charlton, I shared your URL on facebook’s “Strong Artificial Intelligence” group… see what those lads have to say about it.


Hans wrote

      “This is specifically one of the things my system does.”

yes, this is an essential ingredient.. but we need examples Hans, examples.  In my project, GLI (General Language Intelligence - formerly CLUES; see my thread), GRACE handles this... except I just refer to it as “context sensitive synonyms”.  Grace, for example: if you state “Bob went to a party”, and then ask right away “Did Bob go to a big celebration?”, it will assume that since both are social gatherings they are the same, but it will also point out that initially you just said “party”, while now you are asking about a “BIG” party/celebration (yes, grammar matters, because perhaps you really want to know whether Bob went to a BIG (adjective-modified) party/celebration. . . grammar matters smile
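
A tiny JavaScript sketch of that “context sensitive synonyms” behaviour (the category table and wording are invented for illustration; this is not GRACE’s actual code):

// Invented category table: two nouns are treated as the same thing if they fall
// in the same category, but any extra modifiers in the question are called out.
var categoryOf = { party: "social gathering", celebration: "social gathering" };

function matchWithModifiers(statedNoun, askedNoun, askedModifiers) {
  if (categoryOf[statedNoun] !== categoryOf[askedNoun]) {
    return "No - those are different kinds of thing.";
  }
  if (askedModifiers.length > 0) {
    return "Probably, but you only said '" + statedNoun + "'; now you are asking about a '" +
           askedModifiers.join(" ") + " " + askedNoun + "'.";
  }
  return "Yes.";
}

// "Bob went to a party."  ->  "Did Bob go to a big celebration?"
console.log(matchWithModifiers("party", "celebration", ["big"]));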

 

 
  [ # 116 ]

Deputy Andy

“OK Boss”
sorry, just wanted to add some ‘comedy relief’... and something to look at for people who are just skimming this thread smile

URL : http://www.imdb.com/title/tt0796264/

Andy is a strong A.I. system with full NLU and NLU-based inference.

Image Attachments
deputy-andy.jpg
 

 
  [ # 117 ]
Hans Peter Willems - Jul 18, 2012:

Putting ‘rules’ into a system, so the system can infer such a difference without ‘understanding why there is a difference’, is not education, it’s cheating wink

..........

Also, if putting rules into a computer system is equal to teaching, then we teach a wheel to roll by making it round wink

 


Question - what is the meaning of putting the word rules in single quotation marks?

Do you mean they really aren’t rules??  That is what I always thought putting things in quote marks means..


Let’s get one thing straight… giving a computer rules *IS INFORMATION*... we give children information when we educate them.  There is no difference.

Whether you give that information in if-then rules of the “BASIC” programming language, or assembly language, or 1’s and 0’s or speech, or visual, or whatever other means…  *RULES ARE INFORMATION*.

Whether you give your program raw data and it infers rules, or whether you give it the rules directly, in either case YOU ARE GIVING IT INFORMATION.  Information in “IF / THEN” form is the same as information in raw-data form.

Information is information, whether it be “if/then” statements or a stream of numbers that are fed into an artificial neural network.


Feeding a raw corpus of language into a system, or feeding it directly with grammar rules: BOTH ARE INFORMATION.

People need information, and it is not called “cheating” to provide that info, so it shouldn’t matter whether it is being fed into a computer or a human.

Your wheel analogy is another one of your great over-simplifications.  With a computer, you are not changing the hardware.  When you bend a material into a wheel you are not inputting information into it, you are changing the hardware.  The analogy is ridiculous.

Changing the physical form of a material into another shape versus having a computer input electrical signals is quite different.

By the way, to say “without understanding” requires that you define the word understanding.  To me, understanding, at least in an AI context, will always be limited to relational understanding.  Probably, although I can’t say for sure, I’m not a neuroscientist, is most likely also simply relational.

 

 

 
  [ # 118 ]
Victor Shulist - Jul 20, 2012:

but we need examples Hans, examples.

You keep using the fact that I do not yet show my results in public as some ‘proof’ that there are (probably) no results. That still is a bit stupid; I don’t have any obligation to show you my system in order to earn some right to talk about it here.

Victor Shulist - Jul 20, 2012:

Let’s get one thing straight… giving a computer rules *IS INFORMATION*... we give children information when we eduate them.  There is no difference.

There is a BIG difference, in that humans have the capability to discard the rules when they see a necessity to do so. Did you see the movie I, Robot? The difference between Sonny and the other robots is exactly about that.

Victor Shulist - Jul 20, 2012:

Your wheel analogy is another one of your great over-simplifications.  With a computer, you are not changing the hardware.  When you bend a material into a wheel you are not inputting information into it, you are changing the hardware.  The analogy is ridiculous.

No it’s not. The roundness of a wheel is as much part of its functional design as IF-THEN rules are part of the functional design of a computer program.

Victor Shulist - Jul 20, 2012:

Changing the physical form of a material into another shape versus having a computer input electrical signals is quite different.

There is no difference, they are both part of the functional design.

Victor Shulist - Jul 20, 2012:

To me, understanding, at least in an AI context, will always be limited to relational understanding.

So you are saying that you believe that ‘concept grounding in reality’ is not possible in AI?

Victor Shulist - Jul 20, 2012:

Probably, although I can’t say for sure, I’m not a neuroscientist, is most likely also simply relational.

I have no idea what you are saying here. Maybe you should check your grammar.

 

 
  [ # 119 ]

I just came across an article by Margaret Boden that includes some information that might explain my position a bit better. It talks about (among other things) PDP systems (Parallel Distributed Processing), which my system (somewhat) falls under:

PDP systems lack the three advantages just mentioned but offer others in compensation. They “naturally” provide content-addressable memory, in which an input pattern automatically reactivates the relevant activity array across the network (as opposed to finding some specific memory address). They allow acceptable pattern-matching performance even if the input pattern is partly missing or accompanied by irrelevant input. And they enable learning by example, as opposed to learning by being explicitly programmed. All these useful capacities are very difficult to program using classical AI methods.

(emphasis is mine)

From http://www.aaai.org/ojs/index.php/aimagazine/article/view/1174 (full PDF can be downloaded).
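
To make the “content-addressable memory” idea in that quote concrete, here is a much-simplified JavaScript illustration: the stored pattern with the greatest overlap with the (possibly partial) input cue wins. A real PDP system distributes this over a trained network, so treat this only as a toy.

// Toy illustration: stored patterns play the role of memories.
var storedPatterns = [
  { name: "A", bits: [1, 1, 0, 0, 1, 0] },
  { name: "B", bits: [0, 0, 1, 1, 0, 1] }
];

// 'null' marks missing input bits, so a partial cue can still retrieve a memory.
function recall(cue) {
  var best = null, bestScore = -1;
  storedPatterns.forEach(function (p) {
    var score = 0;
    cue.forEach(function (bit, i) {
      if (bit !== null && bit === p.bits[i]) score++;
    });
    if (score > bestScore) { bestScore = score; best = p; }
  });
  return best.name;
}

console.log(recall([1, 1, null, null, null, 0])); // "A", despite missing bits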

 

 
  [ # 120 ]

Ben Goertzel on AGI:

On the other hand, some other researchers—including the author—believe that narrow AI and general AI are fundamentally different pursuits. From this perspective, if general intelligence is the objective, it is necessary for AI R&D to redirect itself toward the original goals of the field—transitioning away from the current focus on highly specialized narrow AI problem solving systems, back to confronting the more difficult issues of human level intelligence and ultimately intelligence beyond the human level. With this in mind, I and some other AI researchers have started using the term Artificial General Intelligence or AGI, to distinguish work on general thinking machines from work aimed at creating software solving various ‘narrow AI’ problems.

Some of the work done so far on narrow-AI can play an important role in general AI research—but in the AGI perspective, in order to be thus useful, this work will have to be considered from a different perspective. My own view, which I’ll elaborate here, is that the crux of intelligence mostly has to do with the emergent structures and dynamics that arise in a complex goal-achieving system, allowing this system to model and predict its own overall coordinated behavior patterns. These structures/dynamics include things we sloppily describe with words like “self”, “will” and “attention.”

In this view, thinking of a mind as a toolkit of specialized methods—like the ones developed by narrow-AI researchers—is misleading. A mind must contain a collection of specialized processes that synergize together so as to give rise to the appropriate high-level emergent structures and dynamics. The individual components of an AGI system might in some cases resemble algorithms created by narrow-AI researchers, but focusing on the individual and isolated functionality of various system components is not terribly productive in an AGI context. The main point is how the components work together.

(emphasis is mine)

From http://www.agiri.org/wiki/Artificial_General_Intelligence

 
