Is AI research looking for a ‘quick fix’?
 
 

While reading the topic on ChatScript it struck me how tempting it is to start building a chatbot with it. Having used AIML, and having had quite some fun with it, trying out another language with new possibilities is seriously tempting. But while thinking this, it dawned on me that the direction I chose for my current project is of course a long one: building a model, researching possibilities to find solutions for certain problems, and so on, all before I can write a single line of code to get ‘something’ working.

So it seems to me that maybe this is one of the barriers that AI research (in general) has created for itself; instead of looking at the grand design and taking on the interwoven problems of strong AI, researchers seem to go for the easy pickings, the quick fixes. Projects aiming at a conscious machine can be counted on one hand (at least the ones out in the open), while there are hundreds or more projects focused on NLP and similar things. The problem I see here is that those projects are not specifically aimed at fixing ‘one of the problems of strong AI’, but instead go for the quick win and try to build ‘something’ (certainly NOT real AI) that can be (commercially) used in one scenario or another.

I took NLP as an example because it is used in chatbots, but also in systems for training in negotiation and the like. And although those systems ‘look like’ they are capable of ‘some sort of’ reasoning, in reality they are expert systems: a big database of man-made rules, accessed through an NLP engine. Now, before someone posts that a chatbot can learn as well, let’s not forget that this ‘learning’ is also implemented in, and bound by, man-made rules written by the developer. But to stick to the point I want to make: there is nothing wrong with building such applications, and nothing wrong with the way they are built (e.g. AIML).
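To make ‘man-made rules’ concrete, here is a minimal, purely illustrative sketch in Python (the patterns and responses are invented; they are not taken from AIML or any real system). However clever the matching looks, the bot can only ever say what a developer wrote down, and its ‘learning’ would just be more hand-written rules of the same kind:

```python
import re

# Every pattern/response pair below is hand-written by the developer;
# the bot can never respond to anything outside this table.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bweather\b", re.I), "I have no live data; try a weather site."),
]
FALLBACK = "I'm sorry, I don't understand."

def reply(user_input: str) -> str:
    """Return the response of the first hand-written rule that matches."""
    for pattern, response in RULES:
        match = pattern.search(user_input)
        if match:
            return response.format(*match.groups())
    return FALLBACK

print(reply("Hi there"))               # Hello! How can I help you?
print(reply("My name is Hans"))        # Nice to meet you, Hans.
print(reply("Explain consciousness"))  # I'm sorry, I don't understand.
```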

The other thing that comes to mind is the philosophical argument that ‘maybe we don’t want conscious machines, but just smart machines that can do things for us’. Although there might be some validity in such an idea (I don’t want to debate that here), this argument seems to be used more and more as an excuse for why we don’t have strong AI yet, and even more so as a reason not to pursue it at all and to focus on ‘quick fixes’ instead.

----
This is a note that I copied verbatim from my research notes. Let’s not make this a ‘my AI is stronger than your AI’ discussion. I’m looking for insights and ideas on the implications for current and future AI research. Of course you are completely free to tell me I see it all wrong and point out the errors in my reasoning ;)

 

 
  [ # 1 ]

Of course I would have to read this less than an hour before I have to be away for a week or more. :P I think I’ll have to leave the notification for this post in my inbox, so that I can come back to this thread and give it the treatment it deserves when I get back. :)

 

 
  [ # 2 ]

It may be much easier to define what we think is a “smart machine” versus a “conscious machine”.

This lack of an agreed-upon definition or set of “system requirements” hinders progress. If we go through the thought exercise and assume we have created a “grand design” for a conscious system, then we are left with a number of questions:

How do we know the grand design is correct?
How do we know that each sub system of the grand design is correct?
If we implement the grand design will it have value (meet whatever requirements we originally set out)?

With humans, we have confidence that all humans meet the specification of the grand design, and we are willing to invest years of time, thousands of dollars, and hundreds of people in the task of educating and caring for the product (a human). Will anyone be willing to make the same investment of time and resources in an unproven product (a conscious machine)?

As I am sure you know, even with the best specification, some products are not successful. One alternative to the all-or-nothing approach is to design and test prototypes in a much more rapid fashion. I take inspiration from the work of Paul MacCready. MacCready was the first person to build a human-powered plane that could fly a figure eight around two markers set a half-mile apart.

MacCready’s insight was that everyone who was working on solving human-powered flight would spend upwards of a year building an airplane on conjecture and theory without a base of knowledge based on empirical tests. Triumphantly, they would complete their plane and wheel it out for a test flight. Minutes later, a year’s worth of work would smash into the ground.

The problem was the problem. MacCready realized that what needed to be solved was not, in fact, human-powered flight. That was a red herring. The problem was the process itself. Maybe conscious machines are similar.

For an overview on MacCready, read:

Wanna Solve Impossible Problems? Find Ways to Fail Quicker
A case study in how an intractable problem—creating a human-powered airplane—was solved by reframing the problem. http://www.fastcodesign.com/1663488/you-are-solving-the-wrong-problem

 

 
  [ # 3 ]
Hans Peter Willems - Mar 29, 2011:

So it seems to me that maybe this is one of the barriers that AI research (in general) has created for itself; instead of looking at the grand design and taking on the interwoven problems of strong AI, researchers seem to go for the easy pickings, the quick fixes.

Quick fix. Interesting. Real simple. Over 60 years of research and no program can pass a Turing Test, with consciousness not even in the equation. Either you’re right, Hans, or everyone else over the last 60 years has been an idiot, just unable to figure out how to do these very simple ‘quick fixes’.

 

 

 
  [ # 4 ]
Hans Peter Willems - Mar 29, 2011:

Of course you are completely free to tell me I see it all wrong and point out the errors in my reasoning ;)

Ok, you asked for it. You’re Wrong.

Just kidding. But seriously, I think what you are doing is downplaying the idea of the bottom-up approach to AI. “Just build” something may very well be the way to do it.

What you want to do is go from zero to a Stealth Fighter, and if someone is working their way up and has a “Wright brothers” airplane that at least gets off the ground, you are passing it off as ‘a quick fix’. Sure, the first airplane the Wright brothers built didn’t break the sound barrier, but it was a start. And I think writing off others’ progress as quick fixes is pretty arrogant.

Sorry Hans, you asked for opinions :)

 

 
  [ # 5 ]
Merlin - Mar 29, 2011:

MacCready’s insight was that everyone who was working on solving human-powered flight would spend upwards of a year building an airplane on conjecture and theory without a base of knowledge based on empirical tests. Triumphantly, they would complete their plane and wheel it out for a test flight. Minutes later, a year’s worth of work would smash into the ground.

<Applause><Applause>

Well said Merlin.

 

 
  [ # 6 ]

That is actually a direct quote from the article. But that is the approach I am taking: first I built tools, then I built a test jig (Skynet-AI) to test, fail, and try again. This is not the only viable approach, but it has allowed me to make rapid progress in an unconventional manner.

I view Skynet-AI as a prototype (hence the .003 version number). It is easily modified and refactored, and as I learn more I add tools/functions/approaches that seem interesting. Some work, some don’t. As I refine my “model of an AI mind” I try to keep the design flexible.
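To give an idea of how simple such a jig can be: a fixed table of inputs and expected responses, re-run after every change, shows at once what broke. A minimal sketch in Python (the reply stub and the test cases are placeholders, not Skynet-AI’s actual code):

```python
# A minimal regression "test jig": run every change against a fixed
# table of inputs and expected outputs and report what broke.
def reply(user_input: str) -> str:
    # Stand-in bot; swap in the real bot under test here.
    return "Hello! How can I help you?" if "hello" in user_input.lower() else "I don't know."

TEST_CASES = [
    ("Hello there", "Hello! How can I help you?"),
    ("What is 2 + 2?", "4"),  # expected to fail with the stub above
]

def run_jig() -> None:
    failures = 0
    for text, expected in TEST_CASES:
        actual = reply(text)
        if actual != expected:
            failures += 1
            print(f"FAIL: {text!r} -> {actual!r} (expected {expected!r})")
    print(f"{len(TEST_CASES) - failures}/{len(TEST_CASES)} passed")

run_jig()
```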

 

 
  [ # 7 ]

Merlin,

We’re on the same page: the scientific method. Successive approximation. Newtonian physics to quantum physics to, eventually, a grand unified theory, step by step. Always grounding our theories in experimentation that we learn from. But a “Grand Design” that ignores everything before it, I very much doubt the success of. I think early AI research was overly theoretical, and most of the time it was paralysis by analysis.

Like you, I have learned a lot in my experimentation travels.

 

 
  [ # 8 ]
Hans Peter Willems - Mar 29, 2011:

...instead of looking at the grand design and taking on the interwoven problems of strong AI, researchers seem to go for the easy pickings, the quick fixes. Projects aiming at a conscious machine can be counted on one hand (at least the ones out in the open), while there are hundreds or more projects focused on NLP and similar things. The problem I see here is that those projects are not specifically aimed at fixing ‘one of the problems of strong AI’, but instead go for the quick win and try to build ‘something’ (certainly NOT real AI) that can be (commercially) used in one scenario or another.

Strong AI will likely only be attained through extensive collaboration and standardization of the various components of AI (lexicon, NLP, memory, emotion, self, learning, reasoning, etc.).
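Purely as a hypothetical illustration, that standardization could look like a set of agreed interfaces that independent projects implement and swap. None of the contracts below exist as a real standard; they only sketch the idea in Python:

```python
from abc import ABC, abstractmethod

# Hypothetical shared contracts: a memory from one team and a reasoner
# from another could be composed behind interfaces like these.
class Memory(ABC):
    @abstractmethod
    def store(self, fact: str) -> None: ...
    @abstractmethod
    def recall(self, query: str) -> list: ...

class Reasoner(ABC):
    @abstractmethod
    def infer(self, facts: list, question: str) -> str: ...

class ListMemory(Memory):
    """Trivial stand-in: keyword lookup over stored facts."""
    def __init__(self):
        self.facts = []
    def store(self, fact: str) -> None:
        self.facts.append(fact)
    def recall(self, query: str) -> list:
        words = query.lower().split()
        return [f for f in self.facts if any(w in f.lower() for w in words)]

class FirstFactReasoner(Reasoner):
    """Trivial stand-in: answer with the first recalled fact, if any."""
    def infer(self, facts: list, question: str) -> str:
        return facts[0] if facts else "unknown"

class Agent:
    """Composes independently developed components behind the shared contracts."""
    def __init__(self, memory: Memory, reasoner: Reasoner):
        self.memory = memory
        self.reasoner = reasoner
    def answer(self, question: str) -> str:
        return self.reasoner.infer(self.memory.recall(question), question)

agent = Agent(ListMemory(), FirstFactReasoner())
agent.memory.store("water boils at 100 C")
print(agent.answer("When does water boil?"))  # water boils at 100 C
```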

The idea that a single person is so smart that only they can make the first strong AI is kind of old-fashioned in this age of cloud computing and social interconnectedness, which act as force (or brain) multipliers when applied to specific challenges.

Something like Google could be a good example of how it will eventually be achieved, imo: thousands of super-smart people focused on the individual processes and concepts, backed by Googilian $$$, but able to assemble and distribute it as something useful/practical to the general public.

So, back to my original thought: as individuals fiddling with isolated solutions, the little guy really benefits from the “easy” and open-source components for pseudo machine intelligence, but no one individual will likely ever be the unitary father/mother of strong AI.

 

 
  [ # 9 ]

Could not agree with you more Carl.

No one is going to dream up a conscious machine that will magically be strong AI. Like all other human endeavors, it will be gradual, over time, from the ground up. Assuming consciousness is required for true AI is itself a huge assumption; no one seems able even to define it, let alone build it, and it perhaps isn’t even necessary.

There is really no test for consciousness that everyone agrees on; the prevailing belief seems to be that the only way to tell whether <X> has consciousness is... TO BE <X>! Every other test for consciousness is a functional test (i.e. a behavior test), hence you’re back to the Chinese Room. It’s an endless, pointless discussion cycle.

 

 
  [ # 10 ]
Dave Morton - Mar 29, 2011:

Of course I would have to read this less than an hour before I have to be away for a week or more. :P I think I’ll have to leave the notification for this post in my inbox, so that I can come back to this thread and give it the treatment it deserves when I get back. :)

Dave, I’m looking forward to your insights on this issue :)

Merlin - Mar 29, 2011:

How do we know the grand design is correct?
How do we know that each sub system of the grand design is correct?
If we implement the grand design will it have value (meet whatever requirements we originally set out)?

Well, I think the answer to that is to build it. The point I tried (partly) to make is that what you describe here seems to be used as an excuse to stay away from ‘building it’ (‘it’ being strong AI) and instead go for small(er) results.

Merlin - Mar 29, 2011:

The problem was the problem. MacCready realized that what needed to be solved was not, in fact, human-powered flight. That was a red herring. The problem was the process itself. Maybe conscious machines are similar.

That is quite an interesting thought and something for me to read up on :)

Victor Shulist - Mar 29, 2011:

What you want to do is go from zero to a Stealth Fighter…

No Victor, this topic is NOT about what I want; maybe re-read the last part of my initial message here.

Strong AI is of course a big and complex project; that’s obvious. There is a lot of work needed to get to the final result, and I don’t disregard that at all. The point is that, as it seems to me, there are very few professional projects that are building a part of the solution with the intent of it being ‘a part of the solution’. Instead, most of what is done is actually aimed at quick monetization in a commercial setting. Take a look at most official AI and robotics-related projects and you will find very little that is aimed at being assimilated into a bigger goal.

Victor Shulist - Mar 29, 2011:

And I think writing off others’ progress as quick fixes is pretty arrogant.

Again, I think you missed the last part of my message. I’m not ‘writing off’ anything; I’m trying to have a serious discussion about something I (think I) see happening in the AI research field.

Carl B - Mar 29, 2011:

Strong AI will likely only be attained through extensive collaboration and standardization of the various components of AI (lexicon, NLP, memory, emotion, self, learning, reasoning, etc.).

Of course all these parts are important and, in my opinion, necessary for the ‘grand total’. What I’m doing myself is also just a part of the puzzle (but this topic is NOT about MY project). But as I answered above, it seems that most official projects are NOT aiming at integration but instead at specific applications that have little to do with strong AI. Most seem to be aimed at expert systems, robotic solutions that don’t need real AI, etc.

 

 
  [ # 11 ]

My theory is that the key to conscious artificial intelligence is conscientious artificial intelligence. One of the main challenges in building a conscious control system is that the artificial mind we are building it for rests on an unexplained feat of science in semiconductor physics. Shouldn’t we at least be able to explain that before making it conscious?

 

 
  [ # 12 ]

Let’s put another angle on this discussion: can anyone point me to any official projects that are working towards strong AI, general AI, AGI, or whatever you want to name it, either by trying to do it all or by integrating several other (smaller) projects together?

 

 
  [ # 13 ]
Hans Peter Willems - Mar 29, 2011:

Let’s put another angle on this discussion: can anyone point me to any official projects that are working towards strong AI, general AI, AGI, or whatever you want to name it, either by trying to do it all or by integrating several other (smaller) projects together?

chatbots.org is a good example of this.

 

 
  [ # 14 ]
Hans Peter Willems - Mar 29, 2011:

Strong AI is of course a big and complex project; that’s obvious. There is a lot of work needed to get to the final result, and I don’t disregard that at all. The point is that, as it seems to me, there are very few professional projects that are building a part of the solution with the intent of it being ‘a part of the solution’. Instead, most of what is done is actually aimed at quick monetization in a commercial setting. Take a look at most official AI and robotics-related projects and you will find very little that is aimed at being assimilated into a bigger goal.

I would agree with this. I think the reason isn’t necessarily that this generation of computer programmers/roboticists is not enthralled by the problem of strong AI, just that it isn’t a practical pursuit, both in terms of funding (the magic word) and the scope of the problem.

By scope I mean that even if you had a sound theoretical algorithm suite for strong AI, if it really is human-like, it could take years to train it and to prove that it actually is what you purport it to be. But we don’t have a sound theoretical algorithm; we have many different ideas, most of which are vaguely described. And without agreement on what strong AI even is, how can you claim that you’ve accomplished it?

So instead, researchers state clearly a project they want to accomplish, the terms that will qualify as accomplishment, and then work towards that. Nothing wrong with this approach, and I think it will take many more years of mucking around with practical, limited intelligence systems (as well as mucking around with our own neurology) before interest returns to the idea of a strong AI.

As a personal opinion, I do believe that “strong AI” will be the result of many different technologies, which give the bot the tools it needs to think intelligently at an abstract level.

 

 
  [ # 15 ]
Merlin - Mar 29, 2011:

With humans, we have confidence that all humans meet the specification of the grand design, and we are willing to invest years of time, thousands of dollars, and hundreds of people in the task of educating and caring for the product (a human). Will anyone be willing to make the same investment of time and resources in an unproven product (a conscious machine)?

Good point!

Hans Peter Willems - Mar 29, 2011:

Let’s put another angle on this discussion: can anyone point me to any official projects that are working towards strong AI, general AI, AGI, or whatever you want to name it, either by trying to do it all or by integrating several other (smaller) projects together?

Good question! :) The closest I’ve found is the MIT Computational Cognitive Science Group.

 
