

Is AI-research looking for a ‘quick fix’?
 
 
  [ # 46 ]
Merlin - Mar 31, 2011:

The current “mechanical” process is Google Translate. It takes “crowd sourced” data as its original translation template and then applies it to new material.

But it still doesn’t do semantic translation, and that’s the only kind of translation that is really valid; after translation the text still has to have the same meaning.
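To make the point concrete, here is a minimal Python sketch of the phrase-table substitution idea behind statistical MT: it reuses phrase pairs seen in crowd-sourced data but has no notion of what the sentence means. The phrase table and the idiom below are toy illustrations, not actual Google Translate internals.

```python
# Toy phrase-table lookup: the rough idea behind statistical MT.
# The table entries are an invented, tiny example.
PHRASE_TABLE = {
    ("dat", "zal"): "that will",
    ("je",): "you",
    ("geen",): "no",
    ("windeieren",): "wind-eggs",
    ("leggen",): "lay",
}

def translate(sentence: str) -> str:
    """Greedy longest-match substitution; no understanding of meaning."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for span in (2, 1):  # prefer longer phrase matches
            key = tuple(words[i:i + span])
            if len(key) == span and key in PHRASE_TABLE:
                out.append(PHRASE_TABLE[key])
                i += span
                break
        else:
            out.append(words[i])  # unknown words pass through untranslated
            i += 1
    return " ".join(out)

# The idiom comes out word-for-word, with the figurative meaning lost:
print(translate("dat zal je geen windeieren leggen"))
# -> "that will you no wind-eggs lay"
```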

 

 
  [ # 47 ]

Are you saying that after translation it does not mean the same thing? Wouldn’t translation be pointless (not working) if it didn’t convert it into an equivalent phrase in the new language?

 

 
  [ # 48 ]
Merlin - Mar 31, 2011:

Are you saying that after translation it does not mean the same thing? Wouldn’t translation be pointless (not working) if it didn’t convert it into an equivalent phrase in the new language?

That is indeed what I mean. Example: as soon as you start translating ‘sayings’, machine translation starts failing miserably. Sayings are of course all about semantics.

Here’s a Dutch saying that I translated into English with Google Translate:

I would not advice to the wind

I translated the same Dutch sentence into Icelandic and then from Icelandic to English:

I would not expect the wind

Machine translation still has a way to go before it can actually translate like humans can.
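For anyone who wants to reproduce that experiment, here is a rough round-trip test in Python. The translate(text, source_lang, target_lang) helper is hypothetical (plug in whatever MT service you use), and the Dutch sentence is left as a placeholder since the post doesn’t quote it.

```python
# Round-trip translation test: source -> pivot -> destination.
# translate() is a hypothetical stand-in for an MT service of your choice.

def translate(text: str, source_lang: str, target_lang: str) -> str:
    raise NotImplementedError("plug in your MT service here")

def round_trip(text: str, pivot: str, src: str = "nl", dst: str = "en") -> str:
    """Translate src -> pivot, then pivot -> dst; meaning drifts with each hop."""
    return translate(translate(text, src, pivot), pivot, dst)

# dutch_saying = "..."  # placeholder for the saying used in the post
# print(translate(dutch_saying, "nl", "en"))   # direct translation
# print(round_trip(dutch_saying, pivot="is"))  # via Icelandic ("is")
```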

 

 
  [ # 49 ]

Here’s another totally normal and valid Dutch saying translated. It’s so funny, I just had to add this one:

that you will not wind eggs


 

 
  [ # 50 ]

Even human translators have problems. But would you agree that most common language phrases can be automatically translated via software?

 

 
  [ # 51 ]

Jan, you don’t think reincarnation exists and revert to the argument I’ve heard so many times before in this community of “prove it”. I’m not going to, although the scientific study does exist and you can track it down for yourself. Also take into account that probably the majority of the people in the world believe in reincarnation, although western civilization in its infinite wisdom disclaims it. Western civilization also said acupuncture was bogus, but now not so much.

On the other hand, out-of-body experiences are more commonly accepted and even documented in our society. Details such as looking down at the body and seeing things that could never have been in sight of the body are evidence. Things can be experienced without using the body. Not just imagination, real-world facts. But let’s not go into a discussion of what is real and how much your mind plays tricks on you. Suffice it to say that some weak AI can be considered a trick. Like a magician, it appears to be more than it is; that magic only works on stage, where the conditions are prearranged.

So often I have found folks here who are ardent about discovering a working example of their vision and yet narrow-minded enough to be blind to the possibilities that might make it come true. Many have their one and only way they will accept as how it must be. Such down-to-earth thinking would never make a breakthrough like Einstein did. Warping space and time, that’s really thin ice.

Quotes (guess who):
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.”

“Imagination is more important than knowledge.”

“Science without religion is lame. Religion without science is blind.”

“Great spirits have often encountered violent opposition from weak minds.”

“We can’t solve problems by using the same kind of thinking we used when we created them.”

“No, this trick won’t work…How on earth are you ever going to explain in terms of chemistry and physics so important a biological phenomenon as first love?”

“Now he has departed from this strange world a little ahead of me. That means nothing. People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.”

“Not everything that counts can be counted, and not everything that can be counted counts.” (Sign hanging in Einstein’s office at Princeton)

 

 
  [ # 52 ]

Also take into account that probably the majority of the people in the world believe in reincarnation

If it is part of your belief system, then that’s a different story. My apologies.

 

 
  [ # 53 ]
Merlin - Mar 31, 2011:

Even human translators have problems. But would you agree that most common language phrases can be automatically translated via software?

Even with common phrases there are still mistakes, because of the lack of actual understanding of what a sentence means. Translation without intelligence (human or machine) that can decide to use different words for a better translation will always fail, because the complexity of matching different languages cannot be captured in standard computing algorithms.

Machine translation has the exact same problems as ‘machine conversation’. So my view is that as long as no computer can pass the Turing Test, no computer can give a ‘really good translation’, because the underlying technology is very much the same.

As for humans: bad translators give bad translations, really good translators… you get the point.

 

 
  [ # 54 ]

Glass half empty…
Glass half full…

Different perspectives. I think machine translation is currently in the “usable” stage, but could be solved independently of the Turing test. I liken it to computers playing Jeopardy, or playing chess.

In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time).

In May 1997, Deep Blue defeated Kasparov (the reigning world champion) 3½-2½ in a six-game match.

So, maybe in 30 years, all translation will be by machine.

By the way, what is the correct translation of “that you will not wind eggs”?

 

 
  [ # 55 ]
Merlin - Apr 1, 2011:

So, maybe in 30 years, all translation will be by machine.

I think in 30 years we will have Strong-AI (and probably quite a while before that, actually).

Merlin - Apr 1, 2011:

By the way, what is the correct translation for, “that you will not wind eggs”?

The Dutch saying is: ‘dat zal je zeker geen wind-eieren leggen’. Translated freely, without taking the ‘meaning’ into account, it becomes: ‘that will not bring you any wind-eggs’. So you can see that even straight syntactical translation fails in this case.

The real meaning of the saying is this: That (a certain endeavor) will not bring you any ill results. To explain it a bit further, ‘wind-eggs’ (in Dutch) are empty eggs filled with only gas. So getting NO wind-eggs means you’re getting the ‘good stuff’ as a result of what you are going to do.

 

 
  [ # 56 ]

I’ve appreciated Jan’s IQ-test question of listing as many uses of a sock as you can. So where in this discussion of AI shortcuts is the tool that can do that? I understand that is the class of strong-AI which inspired this thread.

Which brings me to strong-AI being creative. While we have computer programs to help compose original music and tools to help write novels and television scripts, most of this thread centers around the bits and pieces of current AI that don’t contribute to that part of strong-AI (leaving out the dead-end neural nets).

NLP holds no promise if you can’t solve the general problem, because there is no framework in which to pose the problem. So maybe you then decide to pick a representation like RST, or maybe one step back from that with tuples (triples) like RDF. Really, what difference does that make if you don’t have the rules for everything that your strong-AI machine requires for a general-purpose problem solver, if such a thing is necessary for resolving the problem of picking the problems to resolve?
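As a rough illustration of what a triple-based representation looks like, facts can be stored as (subject, predicate, object) tuples and queried mechanically, with no understanding attached. The vocabulary below is invented for the example, not an actual RDF or RST schema.

```python
# Toy triple store: facts as (subject, predicate, object) tuples.
facts = {
    ("sock", "is_a", "garment"),
    ("sock", "used_for", "keeping_feet_warm"),
    ("sock", "used_for", "hand_puppet"),
}

def uses_of(subject: str) -> list[str]:
    """Return every object linked to `subject` by the 'used_for' predicate."""
    return [obj for (subj, pred, obj) in facts
            if subj == subject and pred == "used_for"]

print(uses_of("sock"))  # e.g. ['keeping_feet_warm', 'hand_puppet'] (order may vary)
```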

So we are right back at the start again.  What are we trying to do? Wikipedia: Strong AI is artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can.

Which is to create a masterpiece.  Here we are.

NLP is not creating.  It can be translating though.  Logical reasoning is not creating.  It can be deducing something not clearly known though. Neural nets don’t create (that I know of), but they do learn.  Planning might be considered creating, if a plan both extends into new areas and is executed.  Only what’s a new area?  Something like the search for the best moves in chess?  How does that work in the real world?  Genetics create.  That is, if there is feedback to do the selection like evolution uses natural selection.  Genetics depends on random mutations.

Randomness isn’t exactly a strength of computers. If a machine were built with complete randomness, as quantum mechanics features, then might we entertain some of this “thin ice” drivel I’ve been blogging? Couldn’t stuff created by such randomness come from “other” experiences? Call it coincidence if you wish.

Research in analogies has used genetics. What if we had the resources to retrieve massive amounts of data, as Watson does, and used that as feedback for genetic mutations based on “pure” randomness (not the phony mathematical stuff computers use now, although that might be good enough to get the infrastructure in place, that is, a mock random object), applied to a combination of a FrameNet (for semantics, much better than WordNet) and a Bayesian belief network (for personality)? We could focus the mutations (as temporary projections at first) on nodes in the network activated through specific excitations, parametrized by analogy, to control the creativity.
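A minimal sketch of that mutate-and-select loop, assuming a placeholder fitness function in place of the Watson-style mass-data feedback imagined above; the “randomness” here is just Python’s pseudo-random generator, i.e. the “mock random object”.

```python
import random

def score(candidate: list[float]) -> float:
    """Placeholder fitness: in the idea above this would come from
    feedback over retrieved real-world data, not a hard-coded formula."""
    return -sum(abs(x - 0.5) for x in candidate)

def mutate(candidate: list[float], rate: float = 0.2) -> list[float]:
    """Randomly perturb some genes (pseudo-random, not quantum noise)."""
    return [x + random.uniform(-0.1, 0.1) if random.random() < rate else x
            for x in candidate]

best = [random.random() for _ in range(8)]
for _ in range(200):
    child = mutate(best)
    if score(child) > score(best):  # keep a mutation only when feedback improves
        best = child
```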

No, that is still incomplete, because we still need the vast catalog of how to do things so we can make new plans using our newly found imagination. What are the current advances in machines learning how to do things? This is not the same as learning what things are. ConceptNet has fostered some research into this area: common ‘how-tos’ extracted from its common-sense database. Since plans can be dynamically created, I don’t think the planning will need mutating for creativity.

Is this strong-AI? Becoming aware is not addressed in this model. It doesn’t deal with choosing what to retrieve from our large repository of text (Watson does text).

None of this helps, does it?

 

 
  [ # 57 ]

To me, Strong-AI or AI-consciousness is the threshold where the ‘machine’ exceeds its base-programming. And I think that this ‘base-programming’ needs only to provide the basic framework for the ‘machine’ to be able to evolve. As I’ve stated before in other topics: everything else is ‘data’.

The result of this is the ‘machine’ being able to give a response that cannot be traced back to its initial programming. That in itself is not ‘consciousness’ of course, there is more to it, but it is an important part of the whole solution. If the machine is not able to ‘decide by itself’, then it is just following its programming. And by ‘decide by itself’ we automatically bring ‘self-awareness’ into the equation.

I’m convinced that to realize ‘machine consciousness’, we need all the parts of the ‘whole’ to work in harmony. Only then will ‘consciousness’ emerge. Mind you, I’m not saying that consciousness is an ‘emergent property’ in the same vein as dualism tends to see it; consciousness is the result of correctly defined systems working together to realize the result, in the same way that all the parts of a combustion engine work together to produce the output that drives a car.

To me it’s an engineering issue that has been overlooked in AI-research. Most projects work on ‘something’ that might do ‘something’ when combined with ‘something else’. There is no overall plan to work towards that defines the ‘what’ we need as parts to realize the overall result. Instead, most projects work on ‘something’ that might be used to do ‘something else’ BESIDES building ‘machine consciousness’, and because building machine consciousness seems so hard and unattainable (at least for now), it seems more useful to focus on the ‘besides’ and go for the ‘quick fix’.

 

 
  [ # 58 ]

Do you think that as computers nibble at the cookie of creativity, they will be trying to reach a moving bar?
Each time a computer does something creative, it will be classified as “only” algorithmic.

Writing is often thought of as a creative process. If machines do it, is it just “text processing”?

http://mediadecoder.blogs.nytimes.com/2009/10/19/the-robots-are-coming-oh-theyre-here/

 

 
  [ # 59 ]
Merlin - Apr 3, 2011:

Each time a computer does something creative, it will be classified as “only” algorithmic.

That only goes as far as someone can point out the algorithm. Sometimes I do something that doesn’t seem logical at all, but it just felt right to do; the same will apply to a conscious machine.

The whole point in this perspective is that so far, we still don’t have machines that are doing anything more than algorithmic processing. Even more to the point: if a machine makes a mistake, we adjust the algorithm so it doesn’t happen again. In human education we know that we have to ‘learn from our mistakes’, but in computers we still ‘fix the bug’. So this brings us to my own view on strong-AI: there are no algorithms (other than the base operating system), processing (as in ‘thinking’) has to be learned by the AI, and it has to learn from its mistakes because THAT is experience.

Experience, in the human sense of the concept, is paramount to attaining strong-AI. So that automatically involves the whole shebang of feelings, emotions, personality and whatnot, because all these things are in some way connected to how we ‘experience’... everything!

 

 
  [ # 60 ]

Some might say that all machines can do is algorithmic processing. Whether it is written by a programmer or by a self-correcting program with a feedback loop, it all boils down to a set of instructions.
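That point can be shown in a few lines: a toy example in which the “self-correction” in the feedback loop is itself nothing more than another instruction (the target value and update rule are invented for the illustration).

```python
# A feedback loop that "corrects itself" is still just a set of instructions.
target = 42.0
guess = 0.0
for _ in range(50):
    error = target - guess   # feedback signal
    guess += 0.5 * error     # the "self-correction" step is itself an instruction
print(round(guess, 3))       # converges toward 42.0
```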

Experience is influenced by feelings, emotions, personality and sensory input, but not all of these are required for experience. The story of Helen Keller might hold an example for bots/AI. Maybe we just have not created the right language that would allow us to communicate what is needed for strong AI to be successful.

 

 