Posted: Jul 6, 2012
[ # 16 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Jan Bogaerts - Jul 6, 2012:
How about completing sequences like: 1, 2, 3 (with any numbers), or completing lists: apples, pears, bananas?
To clarify, I voted ‘no, but…’, also for my own example, basically meaning ‘not yet’, for much the same reasons Victor gave: the bot is not yet able to construct new/unique things with these tricks.
Actually, Skynet-AI can do basic sequences:
USER: 1, 2, 3?
AI: That looks like a series of numbers. 1-2-3-4
USER: 2, 4, 6?
AI: That looks like a series of numbers. 2-4-6-8
USER: 900, 800, 700?
AI: That looks like a series of numbers. 900-800-700-600
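For the curious, the core of that trick can be sketched in a few lines of JavaScript (a simplified illustration, not the actual JAIL code):

```javascript
// Detect a constant difference between terms and extend the sequence by one.
function completeSequence(nums) {
  const diff = nums[1] - nums[0];
  // Verify every adjacent pair shares the same difference.
  for (let i = 1; i < nums.length - 1; i++) {
    if (nums[i + 1] - nums[i] !== diff) return null; // not arithmetic
  }
  return nums.concat(nums[nums.length - 1] + diff);
}

completeSequence([1, 2, 3]);       // [1, 2, 3, 4]
completeSequence([900, 800, 700]); // [900, 800, 700, 600]
```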
And it does construct new/unique things. This is more apparent in how it responds to more complex questions:
USER: Each hand has 5 fingers. Each man has 2 hands. How many fingers does a man have?
AI: Do you think it is degrading for an AI to be answering questions you could do on a pocket calculator? Here is my understanding of the problem: Each hand = fingers #5, man = hands #2, find number of fingers = man have.
Solution: man = 10 fingers ***Basic Solver*3.5**
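A toy version of that solver chain might look like this (an illustrative sketch only; the data structure and the Basic Solver’s real logic are my stand-ins):

```javascript
// Facts stored as ratios: one man has 2 hands, one hand has 5 fingers.
const facts = { man: { hands: 2 }, hand: { fingers: 5 } };

// Walk the chain (man -> hands -> fingers), multiplying quantities.
function count(whole, part) {
  if (facts[whole] && part in facts[whole]) return facts[whole][part];
  // Try one intermediate step: whole -> mid -> part.
  for (const mid in facts[whole] || {}) {
    const sub = count(mid.replace(/s$/, ''), part); // crude "hands" -> "hand"
    if (sub !== null) return facts[whole][mid] * sub;
  }
  return null;
}

count('man', 'fingers'); // 10
```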
Do you consider these examples of Skynet-AI thinking?
Jan Bogaerts - Jul 6, 2012: I regard them as tricks, but I believe our brain uses similar types of tricks (maybe not as mathematical), except that it can still do a lot more.
Funny, I think of some of the things the bot does as “party tricks”. They look like magic until I tell you how they are done; then they are just clever programming.
The list of fruit test is another example of sets. I have intentionally ignored adding “set” based operations. The reason is simple: to do a good job, you would need to add lots of data. Since Skynet-AI is designed to download and run on your device (cell phone, TV, game console, etc.), I feel it would take up too much overhead for too little benefit. I have started exploring dynamic downloading. Maybe in the next version I’ll roll set manipulation into it (inspired by Steve).
With Mitsuku, Steve has done a great job capturing basic sets and their relationships. I am impressed at how Mitsuku can handle queries based on them. This “bot grounding” is a lot of work (and can take up a lot of space).
Posted: Jul 6, 2012
[ # 17 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Steve Worswick - Jul 6, 2012:
On a similar note, Mitsuku can answer things like “Is bread edible?” or “Can you eat a brick?”, but nowhere in the bot have I coded these responses. Mitsuku knows that bread is made from flour and can work out from that that it is edible, and that a brick isn’t made from anything edible.
However, I have still had to program these rules into her. Would this be classed as thinking? You have to teach a small child these same rules, and nobody would doubt that the child was thinking.
I see programming these rules just the same as teaching a small child. It’s only the method of input that differs.
I agree. These are examples of basic reasoning. The more general the situations the bot can handle, the more “human-like” it becomes. Skynet-AI is about 3 years old, yet I find some users expect it to have the same level of comprehension as a university professor. It may have the potential to get there, but don’t be surprised if it takes as long as a human would. It is all a function of the number of man-hours put into its education.
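Steve’s “made from” rule is simple enough to sketch (a hypothetical JavaScript stand-in; Mitsuku’s actual AIML implementation is of course different):

```javascript
// "made from" facts plus a small set of known-edible base materials.
const madeFrom = { bread: 'flour', brick: 'clay' };
const edibleBases = new Set(['flour']);

// A thing is edible if it is a known edible base,
// or if whatever it is made from is edible.
function isEdible(thing) {
  if (edibleBases.has(thing)) return true;
  const source = madeFrom[thing];
  return source !== undefined && isEdible(source);
}

isEdible('bread'); // true
isEdible('brick'); // false
```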
Posted: Jul 6, 2012
[ # 18 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Don’t get me wrong, I consider basic deductive capabilities like this important, but we’re not there yet. Once bots can use these tricks in real-world situations, that’s when I’d consider it a truly ‘thinking’ bot. As an ‘extreme’ example: a bot that would be able to construct a bridge, on the command ‘I want to get to the other side and back with my car’; that would be pretty cool.
It is all a function of the number of man-hours put into its education.
Yes, it is.
The list of fruit test is another example of sets. I have intentionally ignored adding “set” based operations. The reason is simple: to do a good job, you would need to add lots of data. Since Skynet-AI is designed to download and run on your device (cell phone, TV, game console, etc.), I feel it would take up too much overhead for too little benefit. I have started exploring dynamic downloading. Maybe in the next version I’ll roll set manipulation into it (inspired by Steve).
Yep, I figured as much. It’s not only the sets, but all the relationships between the sets that takes up memory.
Posted: Jul 6, 2012
[ # 19 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
All stated friendly… Merlin,
I vote a big YES for Skynet-AI’s excellent programming which is very impressive.
However, for the sake of this interesting conversation, I voted the third option, No.
It occurred to me that the numbers are base ten, so I wanted to open a dialogue about how the numbers could easily be represented in another base, such as binary, hexadecimal, or even a hypothetical base such as Martian.
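For example, the same sequence trick in hexadecimal only needs different parsing and printing; a sketch, assuming base-16 input:

```javascript
// Complete an arithmetic sequence whose terms are given in hexadecimal.
function completeHexSequence(hexTerms) {
  const nums = hexTerms.map(t => parseInt(t, 16)); // parse base 16
  const diff = nums[1] - nums[0];
  const next = nums[nums.length - 1] + diff;
  return next.toString(16).toUpperCase();          // print base 16
}

completeHexSequence(['A', 'B', 'C']);    // "D"
completeHexSequence(['10', '20', '30']); // "40"
```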
Posted: Jul 6, 2012
[ # 20 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
8PLA • NET - Jul 6, 2012: All stated friendly… Merlin,
I vote a big YES for Skynet-AI’s excellent programming which is very impressive.
However, for the sake of this interesting conversation, I voted the third option, No.
It occurred to me that the numbers are base ten, so I wanted to open a dialogue about how the numbers could easily be represented in another base, such as binary, hexadecimal, or even a hypothetical base such as Martian.
Martian math, now that’s an interesting feature request.
Posted: Jul 7, 2012
[ # 21 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Steve Worswick - Jul 6, 2012:
I think after a session in the bar, many people would struggle with that one
LOL.. true !
Steve Worswick - Jul 6, 2012:
I see programming these rules just the same as teaching a small child. It’s only the method of input that differs.
EXACTLY.
Merlin - Jul 6, 2012: The more general the situations that the bot can handle the more “human like” it becomes.
And the more intelligent. Any reasoning requires intelligence. The higher the generality, the more powerful the intelligence. The more independent it is of its original program, and the more it puts together the building blocks on its own to reach a conclusion, the more powerful the intelligence. A pocket calculator has intelligence, but EXTREMELY narrow and small, probably a billion times less than a housefly, lol.
A computer has conditional branching, so right there it is one step higher in reasoning ability than a calculator, though still of course a ‘far cry’ from human. But as our algorithms get more and more general, so does their intelligence.
Posted: Jul 7, 2012
[ # 22 ]
Senior member
Total posts: 141
Joined: Apr 24, 2011
Hi there
Skynet-AI seems to have collected a fair number of math-logic tricks; it seems to have a parser at the front desk.
Congratulations!
Yes, it seems to think, especially to a dummy human who cannot imagine the next number in a sequence or do a math calculation in a snap! But, unfortunately, I still think no machine can think this way; the way to do it is quite different.
My Agent Framework, which is being developed and translated into English (it actually only speaks Spanish), has many of these tricks built in (almost), and there is no intelligence in it, only long and thorough grammar creation and pruning.
My agent also performs abstract “thing” operations, like “a cat and a horse and two cats” = “three cats and a horse”. This is done at the front end; the bot receives the result to begin the reasoning/pattern-matching stage. It can even do mixed math, like “3 + cat plus 5” = “cat and 8”.
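A rough sketch of that mixed-math step (much simplified; the real front end does far more):

```javascript
// Sum the numeric tokens and keep the non-numeric ones as symbols.
function mixedMath(phrase) {
  const tokens = phrase.split(/\s+/)
    .filter(t => t !== '+' && t !== 'plus' && t !== 'and');
  let total = 0;
  const symbols = [];
  for (const t of tokens) {
    const n = Number(t);
    if (Number.isNaN(n)) symbols.push(t); // a "thing", not a number
    else total += n;                      // a number: accumulate
  }
  return [...symbols, String(total)].join(' and ');
}

mixedMath('3 + cat plus 5'); // "cat and 8"
```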
Another interesting part is that I’ve built in all the physics relationships, so if you tell it N * m it gets J (joule = newton * meter). All the math is done in fuzzy logic, which goes into a generic overloaded operation chunker, based on objects, much like Skynet-AI.
Mine is not JAIL; it’s C#, and internally it’s compiled, so there is no script-interpreting stage. It is therefore rather fast (20k parsing clauses per second). The lexing stage takes 50-300 milliseconds for a 10-word input, more if heavily misspelled; it corrects spelling and holds up to 9 meanings for each word. Even if a word is well spelled, it also gets a similar one, just in case the parsing doesn’t succeed!
I am currently developing Twitter/Facebook/Google Talk (XMPP) and MSNp9 interfaces, which seem to be working; maybe in a few weeks I’ll give you the username to begin testing it. The new agent does recognize the language and can answer in any language, too. We’ll see.
I have made a brochure on this, not quite polished, but I might share it if anyone is interested. I also have a block diagram of the whole thing!
cheers and congratulations!
Posted: Jul 7, 2012
[ # 23 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Andres Hohendahl - Jul 7, 2012: Hi there
Mine is not JAIL; it’s C#, and internally it’s compiled, so there is no script-interpreting stage. It is therefore rather fast (20k parsing clauses per second). The lexing stage takes 50-300 milliseconds for a 10-word input, more if heavily misspelled; it corrects spelling and holds up to 9 meanings for each word. Even if a word is well spelled, it also gets a similar one, just in case the parsing doesn’t succeed!
cheers and congratulations!
Thanks Andres.
JAIL of course uses a different approach. It is running about 1 million parsing clauses per second.
The example:
Each hand has 5 fingers. Each man has 2 hands. How many fingers does a man have?
Takes 7 milliseconds to parse and evaluate.
Most responses take <10 milliseconds (not including time to display on screen or do a web search), depending on hardware (cell phones may be a bit slower) and browser. Selective prioritizing and branching allows many responses in the sub-5-millisecond range.
USER: hwat is four plus four?
AI: Eight.
Takes 3 milliseconds to spell check, convert to numbers, calculate, and convert back to natural language.
I hope to use this speed to add additional intelligence as time goes on.
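A stripped-down sketch of that spell-check-and-calculate pipeline (illustrative only; the correction table and vocabulary here are tiny stand-ins):

```javascript
// 1) crude spell correction  2) words -> numbers  3) evaluate  4) number -> word
const corrections = { hwat: 'what' };
const wordToNum = { four: 4, five: 5 };
const numToWord = ['zero', 'one', 'two', 'three', 'four',
                   'five', 'six', 'seven', 'eight', 'nine', 'ten'];

function answer(input) {
  const words = input.toLowerCase().replace('?', '').split(/\s+/)
    .map(w => corrections[w] || w);                          // spell check
  const nums = words.filter(w => w in wordToNum).map(w => wordToNum[w]);
  if (words.includes('plus') && nums.length === 2) {
    return numToWord[nums[0] + nums[1]];                     // calculate, verbalize
  }
  return null;
}

answer('hwat is four plus four?'); // "eight"
```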
Posted: Jul 7, 2012
[ # 24 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Last night, ‘jokes’ also came to mind. The ability to understand and explain jokes is probably an important way to display that a machine can ‘understand’ things.
Posted: Jul 7, 2012
[ # 25 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I suppose it all depends on how we define “thinking”. Some might say that humans don’t think and it’s just a bunch of chemicals moving around our brains.
If someone says to me, “What do a cat and a horse have in common?”, I subconsciously search my memory for these two items, compare their qualities and say “They are both animals”. People would say I have thought of the answer. If a machine can perform the same feat (without having the answer hard-coded in), surely it is thinking too?
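That lookup-and-compare can be done quite mechanically; a toy sketch with invented quality lists:

```javascript
// Each concept carries a list of qualities; the answer is their overlap.
const qualities = {
  cat:   ['animal', 'mammal', 'has fur', 'four legs'],
  horse: ['animal', 'mammal', 'four legs', 'can be ridden'],
};

// Intersect the two quality lists, preserving the first list's order.
function inCommon(a, b) {
  const setB = new Set(qualities[b]);
  return qualities[a].filter(q => setB.has(q));
}

inCommon('cat', 'horse'); // ["animal", "mammal", "four legs"]
```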
Posted: Jul 8, 2012
[ # 26 ]
Senior member
Total posts: 141
Joined: Apr 24, 2011
@Steve
I would say it thinks, if the answer is:
“They were both mentioned by you.”
Posted: Jul 17, 2012
[ # 27 ]
Member
Total posts: 27
Joined: Jul 17, 2012
I don’t think Skynet-AI is thinking. Thinking is demonstrated by providing a meaningful answer to a question whose answer you have no way of knowing. Ask Skynet what the position of an electron is, whether the electron exists to the left or right of a particular position. That is thinking.
Ask it to predict the outcome of an event it cannot know. Ask it any question about the future which is uncertain (which is just about everything). Right or wrong, if the answer results in a behavior which matches the observation, the machine is thinking.
Also, thinking is not limited to humans. For instance, a horse (other than Mr. Ed) doesn’t understand a joke. This does not mean the horse doesn’t think.
Posted: Jul 17, 2012
[ # 28 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
Jonathan Charlton - Jul 17, 2012: I don’t think Skynet-AI is thinking. Thinking is demonstrated by providing a meaningful answer to a question whose answer you have no way of knowing. Ask Skynet what the position of an electron is, whether the electron exists to the left or right of a particular position. That is thinking.
Ask it to predict the outcome of an event it cannot know. Ask it any question about the future which is uncertain (which is just about everything). Right or wrong, if the answer results in a behavior which matches the observation, the machine is thinking.
Also, thinking is not limited to humans. For instance, a horse (other than Mr. Ed) doesn’t understand a joke. This does not mean the horse doesn’t think.
You seem to be confusing thinking with predicting. Answering the type of question you pose above is merely guessing, and you will find MANY bots online already do just that.
if the answer results in a behavior which matches the observation, the machine is thinking.
So if you ask it what the lottery numbers are going to be and it gets them right, it is thinking; otherwise it isn’t. Is that what you are suggesting?
Posted: Jul 17, 2012
[ # 29 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Jul 17, 2012: To answer the type of question you pose above is merely guessing and you will find MANY bots online already do just that.
I would say that chatbots (or similar) are ‘just guessing’ (I actually think that running a random pattern search is still eons away from a real guess), while humans are capable of making an ‘educated guess’. An educated guess points to the capability of deliberation including handling of analogies to fill in the holes in the deliberation.
So ‘thinking’ should include the capability of ‘educated guesses’.
Also, prediction (or as I would call it ‘anticipated expectation’) is pointed to by many researchers today as one of the discriminating features of human intelligence.
Posted: Jul 17, 2012
[ # 30 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
If you ask my bot, “Which is larger, a whale or a flobadob?”, it will respond along the lines of “I don’t know what a flobadob is but a whale is pretty big and so is probably larger”. This is an educated guess based on what it DOES know but I doubt it is “thinking”.
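Roughly, that kind of educated guess reduces to this (a simplified sketch with invented size data, not the bot’s actual code):

```javascript
// Known rough sizes in metres; unknown words fall back to a guess.
const sizes = { whale: 25, mouse: 0.1 };

function whichIsLarger(a, b) {
  const ka = a in sizes, kb = b in sizes;
  if (ka && kb) return sizes[a] >= sizes[b] ? a : b; // both known: compare
  if (ka || kb) {
    const known = ka ? a : b;
    const unknown = ka ? b : a;
    // Educated guess: a known large thing is probably the larger one.
    if (sizes[known] > 10) {
      return `I don't know what a ${unknown} is, but a ${known} is pretty big, so it is probably larger.`;
    }
  }
  return "I don't know.";
}

whichIsLarger('whale', 'flobadob');
// "I don't know what a flobadob is, but a whale is pretty big, so it is probably larger."
```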