Hans seems to think that providing the computer with an initial English grammar to "bootstrap it" somehow precludes the possibility of adding higher AI-type functionality onto that system later. Well, perhaps we should start off with no language at all: no scripting languages, no assembly language, not even microcode. After all, you're starting off by writing code in C/C++/Perl/Python or whatever, and that should really be considered cheating according to Hans's "rules of building an AI". Apparently you can't give the system anything at all except raw data. When you do, the system is "clockwork", only executing the instructions you provide it, and no matter how many levels of indirection lie between the initial control logic and any generated logic (or generated action plans), since you initially provided that code, it is simply a 17th-century clock. Go figure; it's a mystery to me how some minds work.
8PLA • NET - Jul 22, 2012:
The approach Victor is taking for natural language processing is of course perfectly valid in one field of artificial intelligence known as NLP for short. Yet, implementing a grammar parser on a computer for Subject, Verb, Object is a challenging feat, but the results are elegant because we may relate to that, as how we use grammar as humans.
Correct and correct. Actually, GLI is at the point now where it can go far beyond SVO: any depth of modifiers of any type, tacked onto any subject, verb, or object, recursively (the example I/O is old now, but have a look at the GRACE thread). It figures out which pieces of English-parsing knowledge to apply along the way, based on information about the world, the context of the current conversation, and how relevant your input is to what it already knows (as opposed to Hans's fixation that it is ONLY a grammar engine, but I've given up on that).
Hans Peter Willems - Jul 21, 2012:
Victor, you can go on and on repeating your views (which are still pretty narrow to my opinion),
Your interpretation of them is narrow.
Hans Peter Willems - Jul 21, 2012:
In software, this means you either need a system to be able rewrite it’s own software (which is a messy solution and very hard to control), or you need a system that can form new conceptual information BY ITSELF, based on the available conceptual information, that way creating richer conceptual models of reality.
It doesn't have to rewrite itself. It can generate new programs. Programs are simply a series of action statements, and generating a program is really no different from deducing a new statement, or conclusion, from one or more premises. Now I know what is going on inside your head when you read that: BUT YOU GIVE THE COMPUTER THE RULE TO GENERATE IT, right? But what if it generated its own if/then rule? Well, you'd say, you still gave it the rule that allowed it to generate that rule. At some point the "spark" has to come from somewhere; there must be SOME INITIAL CODE that the CPU executes to get things rolling. What will your software be made from? Magic pixie dust? Or logic/programming code? You will have to give the CPU SOME CODE to start the ball rolling. And if I were like you, I'd say: no, no, no, that is a rule-based system; you are giving the computer code/if-then rules; whatever causes it to *DO SOMETHING* is cheating.
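To make that concrete, here is a toy sketch (my own invented example in Python; the rule names are made up and this is nothing like real internals, just the shape of the idea): two implications the system picked up at different times get composed into a NEW rule that no programmer ever typed in.

# Toy sketch: composing learned rules into a rule nobody wrote.
# Rules are (antecedent, consequent) pairs; the contents are invented.
def compose(r1, r2):
    """If r1 = (a -> b) and r2 = (b -> c), derive the new rule (a -> c)."""
    (a, b), (b2, c) = r1, r2
    return (a, c) if b == b2 else None

learned = [("it is raining", "the ground is wet"),
           ("the ground is wet", "the roads are slippery")]

new_rule = compose(learned[0], learned[1])
print(new_rule)  # ('it is raining', 'the roads are slippery')

Yes, I wrote compose(). But the rule it produced was never handed to the machine, any more than Euclid's teachers handed him every theorem he ever proved.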
How do you propose to:
“form new conceptual information BY ITSELF, based on the available conceptual information”
BY ITSELF... hmm... as in, some magic rays from outer space are going to come into the CPU, telling it what code to run in order to "form new conceptual information"?
The "BY ITSELF" has to be code. SOMETHING HAS TO CAUSE THE CPU in the machine to ACT on data; nothing much will happen in a computer system with just raw data and no CPU code to run. (Can we AT LEAST agree on THAT?)
SO, question: what are the ways that CPU instructions can get into a computer for it to execute?
Well, a user can write initial code, "C".
That code can be executed directly to produce output “O”.
Now, pretty much everyone agrees that that is not A.I. And sure, it isn't. Although I still count a computer's ability to handle endless levels of complexity flawlessly, something humans can barely do, as some form of intelligence. Anyway, moving on.
Now the system receives input from the "world". Do we CARE whether such input is raw sensory data, English sentences, or the user entering information about English grammar?
So that new information is "I" (sensory data / English input / information about English grammar / information about quantum physics; jump in, CR).
Now, suppose that new information was an English sentence specifying some complex relationship that exists in the world, or, instead of one statement, an entire English conversation of statements I(1) .. I(n).
And say some of those are statements of fact, and some specify relationships among previous facts, to any level of indirection.
NOW, say the system was able to generate new facts and new theories that the PROGRAMMER had no idea of (say the programmer knows nothing about QM), and the programmer could only check them by going and reading Wikipedia or whatever.
If the programmer never even knew how to solve QM problems, but the system learned by LANGUAGE, from statements that even the PROGRAMMER NEVER SAW, how the hell can that *not be* thinking?
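Here's the shape of what I mean, again as a toy Python sketch with invented facts (I'm not claiming this is how any real system's internals look): statements arrive as plain facts and as rules relating earlier facts, and simple forward chaining surfaces conclusions that nobody entered directly.

# Toy sketch: facts I(1)..I(n) plus rules relating earlier facts.
# Forward chaining derives conclusions the programmer never typed in.
facts = {"the electron is a particle", "the electron shows interference"}
rules = [  # (premises, conclusion), to any level of indirection
    ({"the electron is a particle", "the electron shows interference"},
     "the electron has wave-particle duality"),
    ({"the electron has wave-particle duality"},
     "classical mechanics cannot fully describe the electron"),
]

changed = True
while changed:               # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)    # a "fact" nobody entered directly
            changed = True

print(facts)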
OR: if a system was developed by a programmer who had no clue how to solve a Rubik's cube, but was given a) the problem, b) how to rotate the sides of the cube, and c) the criterion of success (that all sides be the same color, of course), and the system was able to combine facts and bits and pieces of logic it learned earlier, in a way the programmer never even thought of, in order to deduce some Rubik's-cube theorems, would that be thinking? I think someone would almost have to be out of their mind NOT to call THAT thinking.
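For illustration only (a three-tile toy puzzle standing in for the cube, since a real solver would run pages long): the programmer supplies just the start state, the legal moves, and the success test, and the search composes move sequences the programmer never wrote down.

from collections import deque

# Toy stand-in for the cube: sort three tiles using only adjacent swaps.
start = (2, 0, 1)                                    # (a) the problem

def goal(s):                                         # (c) criterion of success
    return s == (0, 1, 2)

moves = {                                            # (b) the legal operators
    "swap01": lambda s: (s[1], s[0], s[2]),
    "swap12": lambda s: (s[0], s[2], s[1]),
}

def solve(state):
    """Breadth-first search: composes moves the programmer never wrote down."""
    seen, queue = {state}, deque([(state, [])])
    while queue:
        s, path = queue.popleft()
        if goal(s):
            return path                              # the deduced move sequence
        for name, op in moves.items():
            nxt = op(s)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))

print(solve(start))  # ['swap01', 'swap12'] -- found, not hard-coded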
My point in all this is that you MUST START with some 'control code' (see previous post), or else nothing will happen. And if your argument is that providing any code or initial set of instructions to a computer somehow invalidates it as thinking, you're wrong.
That set of initial instructions can either be your base C/C++/whatever code, OR a combination of that and code that allows it to understand a natural language. And yes, yes, yes, it doesn't have to be text-based/visual. I'm saying text-based/visual should be ONE OF THE ALLOWED ways.
So I don't know where you get the idea that bootstrapping a system that starts its life off understanding a natural language precludes these other functionalities from happening later; there is absolutely no basis for that conclusion.
My first priority in my project is to accomplish NLU. Somehow from that, your thinking is:
1) he’s focusing on NLU
2) oh no!! he's providing some grammar information to the system!!
3) conclusion: the system is ONLY a ‘grammar expert system’
The reason I focus on NLU first is that I see a great application for it. And regardless of all the philosophy you can use to "prove" me wrong, the fact of the matter is, as I said earlier, no one would CARE about the philosophy if we had a USEFUL system that, perhaps one day, could pass the TT.
Imagine showing an investor a "HAL 9000", a Turing-Test-passable system that could replace 1,000 people in a call center and save MILLIONS, and he says: "Hmm, well, I don't know; this Hans guy says he doesn't like the architecture because he doesn't consider it 'thinking'; the philosophy doesn't agree with him." That's pretty hilarious.
Once NLU is achieved, the system can be educated about the real world. This is necessary for AGI. Why? Because only natural language is RICH enough to explain the world (hence the name natural).
Things like predicate logic are cool and all, but they are rigid, brittle systems. They can only describe 'perfect worlds'. For AGI, we need free-form understanding: dealing with arbitrary depth of complexity, conflicting statements, incomplete statements, etc.
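A quick toy contrast of what I mean by brittle (an invented example, nothing more): a strict store throws up its hands at the first contradiction, while a tolerant one keeps both claims around, with their sources, to be resolved later.

# Toy contrast: rigid vs. free-form handling of conflicting statements.
strict = {}
def assert_strict(pred, value):
    """A 'perfect world' store: any contradiction is fatal."""
    if pred in strict and strict[pred] != value:
        raise ValueError(f"contradiction on {pred}: cannot proceed")
    strict[pred] = value

tolerant = {}
def assert_tolerant(pred, value, source):
    """Keep every claim with its source; resolve conflicts later."""
    tolerant.setdefault(pred, []).append((value, source))

assert_tolerant("sky_color", "blue", "daytime report")
assert_tolerant("sky_color", "red", "sunset report")   # conflict is kept
print(tolerant["sky_color"])

assert_strict("sky_color", "blue")
try:
    assert_strict("sky_color", "red")                  # the rigid store gives up
except ValueError as e:
    print("strict store:", e)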
Once NLU is achieved, the next step would be to start providing "programs" to this system. I don't mean a program that tells the computer exactly what to do, step by step (so I don't mean "control logic"; see previous post). I mean "consider logic".
Provide a program, as a list of instructions, in plain English.
NOW, give the system a completely different environment to do that task in. Or perhaps not completely different; let's say some things are new, some removed, some changed. The system takes that somewhat fuzzy English "program" and says, "well, step #5 says to SSH into a Linux box, but the machine in this case is Windows", to which the user replies, "yes, those instructions were for a Linux-only network, sorry; use Remote Desktop". So the system goes online and googles "how to remote desktop into Windows". Coming back, it updates its own English "program" and asks the user for the credentials. BUT another unexpected thing happens: it can't connect to the port. So the system asks, "is the proper port open to allow Remote Desktop?" The user says, "don't know, what port do you need?" The system realizes it doesn't know that either, so back to Google it goes: "port for Remote Desktop on Windows". It finds it, and the discussion goes on like that.
This would be a real-world example of handling CHANGING ENVIRONMENTS, learning new information from natural language, and updating its own "program" (in English or whatever language).
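If I had to sketch the data structure (and this is purely a sketch; in the real thing the editing would hang off the NLU layer, not a hard-coded string match): the English "program" is just an editable list of steps, and revising it when the environment changes is an ordinary list edit.

# Toy sketch: an English "program" as an editable list of steps.
program = [
    "check that the target machine is reachable",
    "use SSH to log in to the Linux box",
    "restart the backup service",
]

def revise(program, failed_step, replacement):
    """Replace a step that no longer fits the current environment."""
    program[program.index(failed_step)] = replacement
    return program

# Environment changed: the box is Windows, so the SSH step fails and the
# system substitutes the step it just learned (from the user / a search).
revise(program,
       "use SSH to log in to the Linux box",
       "use Remote Desktop to log in to the Windows machine")

for step in program:
    print("-", step)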
I don't care WHAT the philosophy is: a system like that would be the best thing since sliced bread, and frankly I won't give a ***** what you *CALL* it! It is WHAT IT **DOES** that matters! It is what it does that SAVES people money, or MAKES people money. It is the UTILITY. And you can argue your philosophy, the philosophy you use to "prove" you're right, until you are blue in the face.
OK, sorry for the long rant, folks; I'm sure there are spelling errors in the above, but I don't have time to check. Sorry.
Keep those blinders on, Hans, and keep NOT giving one inch to anyone!