
An example of a thinking machine?
 
Poll
In this example, is Skynet-AI thinking?
Yes 5
Yes, but... (explain below) 1
No, but if it did... (explain below) it would be. 6
No, machines can’t/don’t/will never think. 2
Total Votes: 14
 
  [ # 136 ]
Hans Peter Willems - Jul 22, 2012:

Yesterday I came across this article, quite an interesting read:

http://secretinfogarden.blogspot.nl/2008/02/programmable-robot-of-ancient-greece.html

Thanks, Hans! That was a good article. It may be fun to try to build an axle with cogs, powered by a string and a weight, with its descent regulated by a homemade analogue device similar to an hourglass. Using a modern PVC pipe comes to mind.

 

 

 
  [ # 137 ]

Hans seems to think that providing the computer with an initial English grammar to “bootstrap it” somehow precludes the possibility of adding higher AI-type functionalities onto that system. Well, perhaps we should start off with no language at all, not even any scripting languages, assembly language, or even micro-code. Because after all, if you’re starting off by writing code in C/C++/Perl/Python or whatever, that should really be considered cheating according to Hans’ “rules of building an AI”. Apparently you can’t give anything to the system at all except raw data. When you do, the system is “clockwork”, only executing the instructions you provide it, no matter how many levels of indirection there are between the initial control logic and any generated logic (or generated action plans); since you initially provided that code, it is simply a 17th-century clock. Go figure. It’s a mystery to me how some minds work.

8PLA • NET - Jul 22, 2012:

The approach Victor is taking for natural language processing is of course perfectly valid in one field of artificial intelligence known as NLP for short. Yet, implementing a grammar parser on a computer for Subject, Verb, Object is a challenging feat, but the results are elegant because we may relate to that, as how we use grammar as humans.

Correct and correct. Actually, GLI is at the point now where it can go far beyond SVO to any depth of modifiers of any type, tacked on to any subject, verb, or object, recursively (the example I/O is old now, but have a look at the GRACE thread). It figures out which pieces of ‘English-parsing knowledge’ to apply along the way, based on information about the world, the context of the current conversation, and how relevant your input is to what it already knows (as opposed to Hans’ fixation that it is ONLY a grammar engine, but I’ve given up on that).

Hans Peter Willems - Jul 21, 2012:

Victor, you can go on and on repeating your views (which are still pretty narrow in my opinion),

Your interpretation of them is narrow. 

Hans Peter Willems - Jul 21, 2012:

In software, this means you either need a system that is able to rewrite its own software (which is a messy solution and very hard to control), or you need a system that can form new conceptual information BY ITSELF, based on the available conceptual information, that way creating richer conceptual models of reality.

It doesn’t have to rewrite itself. It can generate new programs. Programs are simply a series of action statements. Generating a program is really no different from deducing a new statement, or a conclusion from one or more premises. Now, I know what is going on inside your head when you’re reading that: “BUT YOU GIVE THE COMPUTER THE RULE TO GENERATE IT, right?” But what if it generated its own if/then rule? Well, you’d say, you still gave it the rule that allowed it to generate that rule. At some point the “spark” has to be somewhere; there must be SOME initial code that the CPU of a computer executes to get things rolling. What will your software be made from? Magic pixie dust? Or logic/programming code? You will have to give the CPU SOME code to start the ball rolling. And, if I were like you, I’d say, “no, no, no, that is a rule-based system; you are giving the computer code/if-then; whatever causes it to *DO SOMETHING* is cheating.”
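To make the point concrete, here is a minimal Python sketch of a system generating a new if/then rule by chaining two rules it was given. The rule names and the compose_rules helper are illustrative assumptions for this example only, not anyone’s actual system:

# A minimal, hypothetical sketch of a system that generates a NEW if/then
# rule by chaining two rules it was given. All names are illustrative.

rules = [
    ("is_a_man", "is_mortal"),      # if X is a man,  then X is mortal
    ("is_mortal", "will_die"),      # if X is mortal, then X will die
]

def compose_rules(rules):
    """Chain rules whose conclusion feeds another rule's condition,
    producing a rule the programmer never wrote explicitly."""
    generated = []
    for (cond_a, concl_a) in rules:
        for (cond_b, concl_b) in rules:
            if concl_a == cond_b and (cond_a, concl_b) not in rules:
                generated.append((cond_a, concl_b))
    return generated

print(compose_rules(rules))   # [('is_a_man', 'will_die')]

The generated rule (“if X is a man, then X will die”) was never typed in by the programmer; only the two premises and the chaining procedure were.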

How do you propose to :

            “form new conceptual information BY ITSELF, based on the available conceptual information”

BY ITSELF... hmm... as in, some magic rays from outer space are going to come into the CPU telling it what code to run in order to “form new conceptual information”?

The “BY ITSELF” has to be code. SOMETHING has to cause the CPU in the machine to ACT on data; nothing much will happen in a computer system with just raw data and no CPU code to run on it. (Can we AT LEAST agree on THAT?)

SO, question: what are the ways that CPU instructions can get into a computer for it to execute?

WELL... the user can write initial code, “C”.

That code can be executed directly to produce output “O”.

Now, pretty much everyone agrees that that is not A.I. And sure, it isn’t. Although I still count a computer’s ability to handle endless levels of complexity flawlessly, something humans can barely do, as some form of intelligence. Anyway, moving on.

Now, the system receives input from the “world”. Do we CARE if such input is raw sensory data, or English sentences, or the user entering information about English grammar?

So that new information is “I” (sensory data / English input / information about English grammar / information about quantum physics; jump in, CR :) ).

Now, say that new information was an English sentence that specified some complex relationship that exists in the world, or say, instead of one statement, an entire English conversation of statements I(1) .. I(n).

And say some of those are statements of fact, and some specify relationships of previous facts, to any level of indirection.

NOW, say it was able to generate new facts and new theories that the PROGRAMMER had no idea of (say the programmer knows nothing about QM), but goes and reads about it on Wikipedia or whatever.

If the programmer never even knew how to solve QM problems, but the system learned by LANGUAGE, from statements that the PROGRAMMER NEVER EVEN SAW, how the hell can that *not be* thinking?

OR, say a system was developed by a programmer who had no clue how to solve a Rubik’s cube, but it was given a) the problem, b) how to change or rotate the sides of the cube, and c) the criterion of success (of course, that all sides be the same color), and the system was able to take facts and bits and pieces of logic it learned earlier and combine them in a way that the programmer never even thought of, in order to deduce some Rubik’s-cube theorems. Would that be thinking? I think someone would almost have to be out of their mind NOT to call THAT thinking.
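As a rough illustration of that Rubik’s-cube point (a toy puzzle, not a real cube, and not anyone’s actual project), here is a hedged Python sketch: the programmer supplies only (a) a start state, (b) the legal moves, and (c) a goal test, and a plain breadth-first search combines the moves into a sequence the programmer never wrote down.

# Toy state-space search: the solution sequence is found, not programmed.
from collections import deque

def swap(s, i, j):
    t = list(s); t[i], t[j] = t[j], t[i]; return "".join(t)

moves = {                             # (b) how the state may be changed
    "swap01": lambda s: swap(s, 0, 1),
    "swap12": lambda s: swap(s, 1, 2),
}

def solve(start, goal_test):
    """Breadth-first search over the given moves."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if goal_test(state):          # (c) criterion of success
            return path
        for name, move in moves.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))

print(solve("bca", lambda s: s == "abc"))   # (a) start state; prints ['swap12', 'swap01']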

My point in all this is that you MUST START with some ‘control code’ (see previous post).  Or else nothing will happen.  And if your argument is that providing any code or initial set of instructions to a computer somehow invalidates it as thinking, you’re wrong.

That set of initial instructions can either be your base C/C++ (whatever) code, OR a combination of that and code that allows it to understand a natural language. And yes, yes, yes, it doesn’t have to be text-based/visual. I’m saying text-based/visual should be ONE OF THE ALLOWED ways.

So I don’t know where you get the idea that bootstrapping a system which can start its life off understanding a natural language precludes these other functionalities from happening later; there is absolutely no basis for that conclusion.

My first priority in my project is to accomplish NLU.  That’s my first goal.  Somehow from that your thinking is:

      1) he’s focusing on NLU
      2) oh no!! he’s providing some grammar information to the system !!!!!!!!!!!
      3) conclusion:  the system is ONLY a ‘grammar expert system’

The reason I focus on NLU first is that I see a great application for it. And regardless of all the philosophy you can use to “prove” me wrong, the fact of the matter is, as I said earlier, no one would CARE about the philosophy if we had a USEFUL system that, perhaps one day, could pass the TT.

Imagine showing an investor a “HAL 9000”, a Turing-Test-passable system that could replace 1,000 people in a call center and save MILLIONS, and he’d say, “Hmm... well, I don’t know, this Hans guy says he doesn’t like the architecture because he doesn’t consider it ‘thinking’; the philosophy doesn’t agree with him.” That’s pretty hilarious.

Once NLU is achieved, the system can be educated about the real world. This is necessary for AGI. Why? Because only natural language is RICH enough to explain the world (hence the name “natural” :) ).

Things like predicate logic are cool and all, but they are rigid, brittle systems. They can only describe ‘perfect worlds’. For AGI, we need free-form understanding, dealing with arbitrary depths of complexity, conflicting statements, incomplete statements, etc.

Once NLU is achieved, the next step would be to start providing “programs” to this system. I don’t mean a program that tells the computer exactly what to do, step by step (so I don’t mean “control logic”; see previous post). I mean “consider logic”.

Provide a program, as a list of instructions, in plain English.

NOW, give the system a completely different environment to do that task in. Or perhaps not completely different, but let’s say some things are new, some removed, some changed. Say the system can take that somewhat fuzzy English “program” and say, “well, step #5 says to use SSH into a Linux box, but the machine in this case is Windows”, to which the user would reply, “yes, those instructions were for a Linux-only network, sorry; use Remote Desktop”. So the system would go online and Google “how to remote desktop into windows”. Coming back, it would update its own English “program” and ask the user for the credentials. BUT then another unexpected thing happens: it can’t connect to the port. So the system may ask, “is the proper port open to allow Remote Desktop?”, and the user says, “don’t know, what port do you need?” The system realizes it doesn’t know that, so back to Google it goes, “port for remote desktop on windows”, finds it, and the discussion goes on like that.

This would be a real-world example of handling CHANGING ENVIRONMENTS, learning new information from natural language, and updating its own “program” (English or whatever language).
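One hedged sketch of what that “consider logic” might look like in Python; the lookup_alternative() function here is a made-up stand-in for “ask the user or go Google it”, not a real API. The task is stored as plain-English steps, and when a step’s assumption no longer fits the environment, the system rewrites its own “program”:

# Hypothetical sketch: an English "program" whose steps get revised
# when the environment they assumed has changed.

steps = [
    "Connect to the file server",
    "Use SSH to log in to the Linux box",    # assumes the target runs Linux
    "Copy the nightly backup",
]

def lookup_alternative(step, environment):
    # Stand-in for searching the web or asking the user for new instructions.
    if "SSH" in step and environment.get("os") == "windows":
        return "Use Remote Desktop to log in to the Windows machine"
    return step

def run(steps, environment):
    revised = []
    for step in steps:
        new_step = lookup_alternative(step, environment)
        if new_step != step:
            print(f'Step "{step}" does not fit this environment;')
            print(f'  rewriting it as: "{new_step}"')
        revised.append(new_step)
    return revised                # the updated English "program"

print(run(steps, {"os": "windows"}))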

I don’t care WHAT the philosophy is; a system like that would be the best thing since sliced bread, and frankly I won’t give a ***** what you *CALL* it!!! It is WHAT IT **DOES** that matters!! It is what it does that SAVES people money, or makes people money. It is the UTILITY. And you can argue your philosophy, the philosophy you use to “prove” you’re right, until you are blue in the face.


OK, sorry for the long rant, folks; there will be spelling errors in the above, I’m sure, but I don’t have the time to check. Sorry.

Keep those blinders on, Hans, and keep NOT giving one inch to anyone!!!

 

 
  [ # 138 ]

So, about the quotes from Ben Goertzel and Margaret Boden, of whom I am a fan, also Ray Kurzweil: these people want to see AGI happen, not win a match on chatbots.org about who is ‘right’. I have watched many video interviews with Ben G, Ray K, etc., and these people are open-minded, never really writing any idea off completely. Ben, whom I really get a kick out of, constantly points out his approach and then almost immediately says, “on the other hand... so-and-so’s approach to this...”. These people KNOW that the true path to A.I. is still a bit fuzzy. Anyone who says, “we’ve got it all figured out, we just have to hammer out the code now”, is a fool. I have yet to see you show any HINT of that kind of open-mindedness; it’s your way or the highway 100% of the time.

 

 
  [ # 139 ]
Victor Shulist - Jul 23, 2012:

Hans seems to think that providing the computer with an initial English grammar to “bootstrap it” somehow precludes the possibility of adding higher AI-type functionalities onto that system.

I’m not alone in that view. You declare yourself a fan of Ben Goertzel; well, he clearly states:

In this view, thinking of a mind as a toolkit of specialized methods—like the ones developed by narrow-AI researchers—is misleading. A mind must contain a collection of specialized processes that synergize together so as to give rise to the appropriate high-level emergent structures and dynamics.

So when Ben Goertzel makes such a statement, you are a fan of him, but when I make such a statement, I’m narrow minded.

Victor Shulist - Jul 23, 2012:

it’s your way, or the highway 100% of the time.

You really take criticism badly. I have really NEVER stated anything like that; it is YOU who is (almost every time) bringing up this figment of your imagination. I do believe that my system is indeed a possible solution to attaining AGI. I also believe that your chosen path will never lead there (and I’m not alone in that assumption; MANY scientists agree that NLP is no basis for AGI), but I never stated that there are no other projects with viable views. Yours just isn’t one of them. Ben Goertzel’s system seems to have a few very good ideas in it, although I think it’s a bit convoluted (again, I’m not alone in that view); brain emulation seems to have its merits but is hampered by the needed acceleration of hardware, which will take a few decades to arrive. I don’t discount these and several other efforts, but I do opine that there are also dead-end ideas still being pursued by certain people.

 

 
  [ # 140 ]

Well, we agree on something: we each believe the other will fail. :)
Keep the blinders on, Hans.

 

 
  [ # 141 ]
Hans Peter Willems - Jul 23, 2012:
Victor Shulist - Jul 23, 2012:

it’s your way, or the highway 100% of the time.

You really take criticism badly. I

Not at all.  I just get a kick out of your attitude is all.  No worries.

 

 
  [ # 142 ]
Victor Shulist - Jul 23, 2012:

Keep the blinders on Hans.

And yet again we’re here... so I’m done here... again. Too bad that while several others seem to engage with me in healthy debate, you again made me lose any will to put effort into debating things here.

Anyone who would like to stay in touch with me can find the social media links on mindconstruct.com, or drop me a personal message; I’m no longer reading the board here.

 

 
  [ # 143 ]

That was just a little ‘ribbing’, Hans. Come on now. Besides, I was just about to create a brand new thread which I think you’d find extremely interesting. But what the heck, maybe leaving is a good idea. I’m out too!!

 

 
  [ # 144 ]

I didn’t want to say anything initially, but I have to confess: previously, I held some of Hans’ hardcore views completely. I thought there was no way in hell an NLP-based system could achieve anything in terms of general intelligence. But since working on my own project, my views have definitively changed.

Is it possible that a language-based program can achieve GI? I think it’s plausible, since my project is mainly based on a self-memory conceptualization of language. Genuinely inspired.
Language can be self-conceptual.

Sally went to the store.
Where is Sally? At the store.

That’s a display of intelligence; whether it’s genuinely inspired or programmatically inputted is not the question.

The question is whether or not it can be scaled. If the AI can understand the question “where is Sally?”, can it also use that same concept to understand similar variations of the question?

So it depends on the structure it’s built on. If I posed that question with X number of variations, would the AI still understand its meaning?

ME: Where is sally?
SKYNET: If I remember correctly, you told me; Sally went to the store
ME: where is she?
SKYNET: where is she? Let me see what I have on file. Sometimes, it takes 10-20 seconds.
ME: sally, where is she?
SKYNET: sally, where is she? Let me see what I have on file.
ME: i don’t know where sally is
SKYNET: i don’t know where sally is Is it that you do not know, or that you do not care?
ME: do you know where sally is?
SKYNET: Where sally is? Thanks for asking.

The thing is, when you change the sentence structure from the most obvious, the AI completely breaks apart. In other words, Skynet is not general; therefore it can’t understand. Likewise, thinking is out of the realm of possibility.
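For what it’s worth, here is a minimal Python sketch (certainly not Skynet-AI’s actual code) of what handling those surface variations could look like: every variant is first normalized to one intent, (LOCATION, "sally"), and only then is the stored fact consulted. The regular expressions and the crude pronoun resolution are illustrative assumptions only.

# Hypothetical sketch: normalize several surface forms to one intent.
import re

memory = {"sally": "the store"}      # learned from "Sally went to the store."
last_female_mentioned = "sally"      # naive antecedent for "she"

def normalize(utterance):
    u = utterance.lower().strip("?!. ")
    u = re.sub(r"\bshe\b", last_female_mentioned, u)   # crude pronoun resolution
    for pattern in (r"where is (\w+)",
                    r"do you know where (\w+) is"):
        m = re.search(pattern, u)
        if m:
            return ("LOCATION", m.group(1))
    return ("UNKNOWN", u)

for q in ["Where is Sally?", "where is she?",
          "sally, where is she?", "do you know where sally is?"]:
    intent, who = normalize(q)
    if intent == "LOCATION" and who in memory:
        print(f"{q} -> {who.capitalize()} is at {memory[who]}.")
    else:
        print(f"{q} -> (not understood)")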

 

 
  [ # 145 ]
Genesis - Jul 29, 2012:

Sally went to the store.
Where is Sally? At the store.
That’s a display of intelligence; whether it’s genuinely inspired or programmatically inputted is not the question. The question is whether or not it can be scaled. If the AI can understand the question “where is Sally?”, can it also use that same concept to understand similar variations of the question?

So it depends on the structure it’s built on. If I posed that question with X number of variations, would the AI still understand its meaning?

ME: Where is sally?
SKYNET: If I remember correctly, you told me; Sally went to the store
ME: where is she?
SKYNET: where is she? Let me see what I have on file. Sometimes, it takes 10-20 seconds.
ME: sally, where is she?
SKYNET: sally, where is she? Let me see what I have on file.
ME: i don’t know where sally is
SKYNET: i don’t know where sally is Is it that you do not know, or that you do not care?
ME: do you know where sally is?
SKYNET: Where sally is? Thanks for asking.

The thing is, when you change the sentence structure from the most obvious, the AI completely breaks apart. In other words, Skynet is not general; therefore it can’t understand. Likewise, thinking is out of the realm of possibility.

Your comments contradict each other. On the one hand, Skynet thinks because it can answer the question; on the other, it doesn’t think because it is not general enough. Let’s assume for a second that it could be more general. How much more general would it need to be before it thinks?

In some ways this reminds me of Zeno’s Paradox.

In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.

Machines may always suffer from the “Wizard of Oz” effect (Don’t mind the man behind the curtain). As they accomplish more intelligent looking actions, they will always be thought of as using “clever programming”. Even if they write that program themselves.

The current version of Skynet-AI, by design, does not support general anaphoric or cataphoric references. These would require recognizing “Sally” as a “she”. Although this could be relatively easy to do by adding name lists, to date I have resisted adding large data lists in an effort to keep Skynet-AI’s size down. There are countless things that the AI currently does not do. But would you ever consider any of the things it does do to be thinking?
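For illustration, a minimal sketch of that name-list idea follows; the tiny name lists and the resolve_pronoun helper are assumptions made up for this example, not Skynet-AI’s implementation.

# Resolve "she"/"he" to the most recently mentioned name of matching gender.

FEMALE_NAMES = {"sally", "alice", "mary"}
MALE_NAMES   = {"john", "james", "joe"}

def resolve_pronoun(pronoun, conversation_history):
    """Scan the history backwards for the nearest name of the right gender."""
    wanted = FEMALE_NAMES if pronoun.lower() == "she" else MALE_NAMES
    for utterance in reversed(conversation_history):
        for word in reversed(utterance.lower().replace(".", "").split()):
            if word in wanted:
                return word
    return None   # no antecedent found

history = ["Sally went to the store.", "John stayed home."]
print(resolve_pronoun("she", history))   # sally
print(resolve_pronoun("he", history))    # john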

 

 
  [ # 146 ]

Well, my comments would be contradictory if I had said answering the question “where is Sally?” was a display of the process of thought. I didn’t. I said it was a display of “intelligence”, not “thinking”. You don’t need to think to act intelligently.

Thinking is the process of using information to form a composite meaning, or to alter or create new information.

Skynet doesn’t have the capacity to do that. It doesn’t think because it doesn’t have the capacity to. But there’s nothing wrong with that. It doesn’t mean it can’t act intelligently, or answer the question “where is Sally?” intelligently.

You have to understand, it’s all about the SYSTEM.

Now, how did I determine whether Skynet thinks or not? Well, one way is whether or not it can understand. If I asked you the question “where is Sally?” in many variations, you would know the meaning of all of them, and that it’s the same. Why? Because you can process information in an intelligent way to form a composite meaning.


The problem with programmatically inputting information is exactly what you just said: you need to add new information to recognize Sally as a “she”. To understand the variations of the same question, you need to keep adding and changing information in your database yourself. That’s a daunting task.

For example, if I said to Skynet:

My name is chris
Do you know my name?

will it understand my question?

SKYNET: Your name? I think it was chris.

Well, the response makes it look like it actually understood the question. But let’s probe more.


is my name john?
is my name james?
is my name joe?

SKYNET: Your name? I thought you told me it was chris.
SKYNET: Chris, are you going senile?
SKYNET: Chris, do you often suffer from memory loss?

Did Skynet understand my question? Absolutely not. I was asking a yes/no question. It was busy doing its own thing.

But you see, if you hypothetically asked my program “do you know my name?”, it would learn from this encounter and generate the necessary information to answer future yes/no questions, whether it’s “is my name john” (and respond with a “no”), “are oranges black”, or “do I have any unread emails”.
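A hedged sketch of that point (not Genesis’s actual project; the facts dictionary and answer() helper are assumptions for the example): first recognize that the input is a yes/no question, then check the stored fact and reply with a plain yes or no.

# Hypothetical sketch: classify the question type first, then answer.
import re

facts = {"user_name": "chris"}       # learned earlier from "My name is chris"

def answer(utterance):
    m = re.match(r"is my name (\w+)\??$", utterance.lower().strip())
    if m:                                       # it is a yes/no question
        return "Yes." if m.group(1) == facts["user_name"] else "No."
    if "do you know my name" in utterance.lower():
        return f"Yes, your name is {facts['user_name']}."
    return "I'm not sure what you are asking."

for q in ["Do you know my name?", "is my name john?",
          "is my name james?", "is my name chris?"]:
    print(q, "->", answer(q))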

Generality is not the question; Skynet is as general as any other program. The question is the system, because the system facilitates it all. Is it a self-developing, self-changing, self-organizing system?

Another descriptive word I use for thinking is simulation.

Does Skynet simulate, compare and contrast information in a way that gives rise to composite meaning? I say no.

Can you add a Thought module? Yes.

Does it need one? I don’t think so.

 

 
  [ # 147 ]

SKYNET: Your name? I thought you told me it was chris.
SKYNET: Chris, are you going senile?
SKYNET: Chris, do you often suffer from memory loss?

Did skynet understand my question? absolutely not. I was asking a yes/no question. It was busy doing its own thing.

I would submit that those answers are correct and give the maximum amount of information. Much more than a yes/no.

Does Skynet simulate, compare and contrast information in a way that gives rise to composite meaning?

In the original example question:

USER: What is the number between twenty one and twenty three?
AI: Twenty two

Might not the creation of the answer be thought of as some mix of simulation, comparing and contrasting?
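One hedged guess at how an answer like “Twenty two” could arise from a general “between” concept rather than a canned reply; Skynet-AI’s actual implementation may well differ, and the small word lists below are an assumption that only covers the numbers twenty through thirty-nine.

# Hypothetical sketch: parse number words, take the value in between,
# and render the result back as words.

UNITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}
UNITS_REV = {v: k for k, v in UNITS.items()}
TENS = {"twenty": 20, "thirty": 30}

def words_to_number(words):
    # "twenty one" -> 21 (tiny vocabulary, illustration only)
    return sum(TENS.get(w, UNITS.get(w, 0)) for w in words.split())

def number_to_words(n):
    tens, unit = divmod(n, 10)
    tens_word = {2: "twenty", 3: "thirty"}[tens]
    return tens_word if unit == 0 else f"{tens_word} {UNITS_REV[unit]}"

def number_between(a_words, b_words):
    a, b = words_to_number(a_words), words_to_number(b_words)
    return number_to_words((a + b) // 2)     # the value strictly in between

print(number_between("twenty one", "twenty three"))   # twenty two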

 

 

 
  [ # 148 ]
Merlin - Jul 30, 2012:

I would submit that those answers are correct and give the maximum amount of information. Much more than a yes/no.

The question at hand is: does Skynet understand? Not whether or not it gives the right answer.

If you were in a conversation with someone and they asked you, “is your name John?”, the first thought/impression that comes to your mind is “no... etc.”

That’s because you understand that to be a yes/no question. Giving additional information like “no, it’s Chris” is based on your preference and the situation at hand.

If you were in a situation that required your name, you would give it. If it were a bystander that asked you the question, you most likely would have said “no” and gone about your business.

What we really need is an AI that understands what we’re saying. Whether or not it thinks is beside the point. Does Skynet think? It doesn’t have the capacity to; otherwise, calculators would also be claiming the art of thinking.

Merlin - Jul 30, 2012:

Does Skynet simulate, compare and contrast information in a way that gives rise to composite meaning?

In the original example question:

USER: What is the number between twenty one and twenty three?
AI: Twenty two

Might not the creation of the answer be thought of as some mix of simulation, comparing and contrasting?

It depends on whether it’s using concepts (like numbers and “between”) to come up with the answer, or a calculator.

For example, if it were using concepts, you could peradventure use the same “between” concept to answer the question: “show me the emails between C and G”.

 

 
  [ # 149 ]
Genesis - Jul 30, 2012:
Merlin - Jul 30, 2012:

I would submit that those answers are correct and give the maximum amount of information. Much more than a yes/no.

The question at hand is: does Skynet understand? Not whether or not it gives the right answer.

If you were in a conversation with someone and they asked you, “is your name John?”, the first thought/impression that comes to your mind is “no... etc.”

That’s because you understand that to be a yes/no question. Giving additional information like “no, it’s Chris” is based on your preference and the situation at hand.

If you were in a situation that required your name, you would give it. If it were a bystander that asked you the question, you most likely would have said “no” and gone about your business.

Actually, in most social situations, you might answer “My name is ...”, dropping the “no” so as not to embarrass the other individual. Answering only no forces the other party into a guessing game or forces them to ask directly, “What is your name?”

The ability to give the correct answer demonstrates understanding of the input.

All of these statements are testing part of the same, general concept:
Do you know my name?
is my name john?
is my name james?
is my name joe?

Now you can argue whether or not responding with a boolean value is a better response than what it gave (a ‘no’ would often not be scored well in chatbot contests), but it does understand some basic sub-concepts about the input:
The input is a question.
The goal is to determine if the AI understands the name of the user.

It compares this question against its memory.
It then generates a new response, which is “the process of using information to form a composite meaning, or to alter or create new information.”

Genesis - Jul 30, 2012:

What we really need is an AI that understands what we’re saying. Whether or not it thinks is beside the point. Does skynet think? It doesn’t have the capacity to, else calculators would also be claiming the art of thinking.

Merlin - Jul 30, 2012:

Does Skynet simulate, compare and contrast information in a way that gives rise to composite meaning?

In the original example question:

USER: What is the number between twenty one and twenty three?
AI: Twenty two

Might not the creation of the answer be thought of as some mix of simulation, comparing and contrasting?

Depends whether or not its using concepts (like numbers & between) to come up with the answer or a calculator.

It does use concepts: both numbers and “a number between two others”.

You can peradventure use the same “between” concept to answer the question: “show me the emails between C and G”
To also answer that question, you would need to resolve a few other items.

Does “between” mean:
Emails that went back and forth between entity C and entity G
or
Emails that are listed between point C and point G (letters?, point in time?)

Of course, “Emails, C, and G” are not numbers. The more general concept is a set of items between two other items in an ordered superset. This also presupposes that, if the set is unordered, the AI can sort it in a logical manner.
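A small Python sketch of that more general “between” concept: the items strictly between two boundaries in an ordered superset, sorting first if the collection is unordered. The items_between helper is illustrative, not Skynet-AI code, but the same function covers numbers, letters, and (via a key) emails.

def items_between(collection, low, high, key=lambda x: x):
    """Return the members of `collection` strictly between `low` and `high`
    under the ordering given by `key`, sorting first if it is unordered."""
    return [x for x in sorted(collection, key=key) if low < key(x) < high]

print(items_between([21, 25, 22, 23], 21, 23))              # [22]
print(items_between(list("ZSUTQ"), "S", "U"))               # ['T']
emails = [("C", "lunch?"), ("E", "re: lunch"), ("G", "minutes")]
print(items_between(emails, "C", "G", key=lambda e: e[0]))  # [('E', 're: lunch')]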

 

 

 
  [ # 150 ]
Merlin - Jul 30, 2012:

Actually, in most social situations, you might answer “My name is ...”, dropping the “no” so as not to embarrass the other individual. Answering only no forces the other party into a guessing game or forces them to ask directly, “What is your name?”

The ability to give the correct answer demonstrates understanding of the input.

All of these statements are testing part of the same, general concept:
Do you know my name?
is my name john?
is my name james?
is my name joe?

Now you can argue whether or not responding with a boolean value is a better response than what it gave (a ‘no’ would often not be scored well in chatbot contests), but it does understand some basic sub-concepts about the input:
The input is a question.
The goal is to determine if the AI understands the name of the user.

The goal is first to understand that the statement is a yes/no question, and then to respond appropriately. Whether you know the name comes after you recognize the type of statement.

But whatever you say, bro!

Merlin - Jul 30, 2012:

It compares this question against its memory.
It then generates a new response which is the process of using information to form a composite meaning or to alter or create new information.

It doesn’t generate a new response. It uses a canned response (e.g., “?, are you senile?”).
The use of information I’m talking about is the use of concepts.

For example, you have a concept in your brain that the first letter of the first word of a sentence is capitalized. This concept was created by seeing patterns in sentence structure (a toy sketch of this idea appears a couple of paragraphs below).

You use this every day when you construct a sentence. Now, a chatbot has a grammar system in it, but it’s not a concept.

The canned response wasn’t formed from using existing information (concepts).

It doesn’t alter any existing information (concepts) during that process, nor does it create new ones.
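As a toy illustration of that “concept formed by seeing patterns” idea (Genesis’s project is surely more elaborate; the induce_capitalization_rule helper is an assumption made up for this example): the capitalization rule below is induced from example sentences rather than written in by the programmer.

# Induce the "sentences start with a capital letter" concept from examples.

examples = ["Sally went to the store.", "The cat sat on the mat.",
            "Where is she?", "My name is Chris."]

def induce_capitalization_rule(sentences, threshold=0.9):
    hits = sum(1 for s in sentences if s[:1].isupper())
    if hits / len(sentences) >= threshold:
        # The induced "concept": how to start any new sentence.
        return lambda s: s[:1].upper() + s[1:]
    return lambda s: s                       # no reliable pattern observed

capitalize = induce_capitalization_rule(examples)
print(capitalize("the store is closed."))    # The store is closed.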

Merlin - Jul 30, 2012:

It does use concepts. Both numbers & a number between 2 others.

Yes, but that concept is defined by you, in a way that is not general. You cannot use it for other situations, like the example I gave.

Merlin - Jul 30, 2012:

To also answer that question, you would need to resolve a few other items.

Does “between” mean:
Emails that went back and forth between entity C and entity G
or
Emails that are listed between point C and point G (letters?, point in time?)

Of course “Emails, C, and G” are not numbers. The more general concept is a set of items between two other items in an ordered superset. This also presupposes that if the set is unordered that the AI can sort it in a logical manner.

The meaning varies with the situation at hand. If I said “show me the emails between John Doe and Sally”, then it would understand I meant the relational “between”, not the boundary “between”.

If I said, “show me the emails between the YouTube announcement and the Google password reminder”, it would understand that I meant the ‘boundary’ “between”, and respond appropriately.

My point is that the concept of “between” you are claiming can’t do that. There are millions of ways we utilize the word/concept of “between”. The concept you claim Skynet uses is static and fixed; you can’t utilize it in any other situation.

A better and more simple example is if I said, “show me the pages between 30 and 40”, or, while looking at an alphabetically ordered list, I said, “show me the letters between S and U”.

 

 
