Posted: Mar 24, 2014 [ # 16 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I’m tempted to see how the first year’s contest goes, and if it’s like the early days of the Loebner Prize, where the bots were, erm, not exactly brilliant, then yes, I’ll definitely have a go too, especially for that sort of money.
Posted: Mar 24, 2014 [ # 17 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
I’m not sure when the contest is going to happen… but I’m seriously considering entering with my mobile robot, if I have enough time to finish all the crap I need to finish. Right now my personal life leaves me little time to work on the brains of my robot, so unless the contest is at least 8 months away…
Posted: Mar 24, 2014 [ # 18 ]
Senior member
Total posts: 370
Joined: Oct 1, 2012
@Hans: All right! I’m thinking of giving it a go myself.
I’m not sure that I agree with the characterization of chatbots as “dumb”. Even though the RICH platform was not originally designed for that purpose, I have gotten quite a bit of insight from having it function in that capacity. And the author uses SIRI as a baseline for intelligence, when in fact published statistics (being used in a lawsuit against Apple, so we can assume that they are supportable) show SIRI giving a direct response (rather than what we might call an oblique or category response) less than 13% of the time. Of course a lot of that can be attributed to failures in the voice recognition (VR) rather than the actual AI architecture, but we don’t have access to a text interface for SIRI, so there is no way to make a direct comparison. Recently I was comparing some of the current chatbots that I would consider to be at the top against SIRI, and the results are decidedly in favor of the “hobbyists”. I wasn’t able to finish with Mitsuku as my connection went down, but at the point where I stopped she was way ahead on percentages.
When we are talking about what is intelligent, and whether a particular platform or instance of a platform is intelligent, the problem may be compared (in my opinion only) to that of the three blind men who try to describe an elephant. They all have differing opinions because none of them is looking at the whole. I’m going to use Mitsuku as an example because it is one of my favorites. Is Mitsuku intelligent? She can answer questions that most of the people visiting her cannot answer. Isn’t that a reflection of Steve’s intelligence, as he directly programmed the answers? I submit that the majority of humans who work at a job went to school where they had responses programmed into them, and in the capacity of their employment they regurgitate what they were programmed to do with very little cognitive reasoning, certainly less than Mitsuku shows when answering certain types of questions. Does that make Mitsuku intelligent? Does that make them dumb? Perhaps machine intelligence is already here and we just didn’t recognize it, and what we are really talking about is creating a machine da Vinci or Mozart. I don’t know, but I do know that for a million and up, I’d be willing to try dressing Dave Morton up in a C3PO outfit and see if he can pull it off.
(please note that the preceding C3PO reference was intended as humor and not an actual suggestion that subterfuge should or will be attempted, up to and including kidnapping Dave)
V
Posted: Mar 24, 2014 [ # 19 ]
Experienced member
Total posts: 84
Joined: Aug 10, 2013
Don Patrick - Mar 24, 2014: There are already examples of fairly dumb AI agents that can create news stories and summaries of articles. Answering questions on the topic is again a Watson-like task
I guess what they’re saying is that this contest would invite Watson-ish participants, and that these are somehow not advancements in AI because they don’t think in exactly the same way as a human. Not that anyone wants to confess what way that is.
But their main complaint is that this is a distraction? Who is even still after human-like intelligence these days, who hasn’t had their funding cut for lack of results?
I agree that something can be intelligent even if that intelligence is the result of a very different process to the human mind, but there are plenty of valid criticisms of any claims of intelligence from Watson more specific than ‘not how humans do it’. It cannot form abstract models of the world or of itself. It does not possess advanced reasoning abilities. It cannot comprehend the notion of a process (something that my own research and reflections in the pursuit of AGI have led me to believe to be crucial to any intelligent agent that has to actually function in the real world). Watson is probably closer to AGI than any chatbots that I’m aware of, but it still has a long way to go, and the difference between Watson and true AGI is qualitative, not quantitative.
@Vincent
Define ‘intelligent’. Better yet, let’s forget the word ‘intelligence’ and discuss the underlying matters. Why do we care about this thing we call intelligence? Well, intelligence is why we’re here today. Our ancestors looked at a rock and realized that, if thrown, that rock could be a dangerous weapon. This was the result of a detailed understanding of the way the world worked. Probably not an explicit understanding, but that’s ok. It seems to me that what really makes our intelligence powerful is that it lets us understand the processes underlying a domain and use that understanding to solve problems within that domain. To see the way the world is and the way we want the world to be, and find ways to make the former more like the latter. So let’s talk about that. Can Mitsuku do that? Can any chatbot that you know of?
I know that the derivative of ln(|x|) is 1/x. I learned that by rote in high school. But if you were to delete that knowledge from my brain, I could regenerate it from more basic principles in about twenty seconds. What chatbot can do that? Off the top of my head, many chatbots can perform arithmetic operations on arbitrary input, so there’s that. But that’s not much. It’s true that humans spend a lot of time on rote learning, but that’s not what makes us intelligent.
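(Spelling out that twenty-second regeneration as a worked example: it is just a two-case application of the chain rule.)

```latex
\frac{d}{dx}\ln|x| =
\begin{cases}
\frac{d}{dx}\ln(x) = \frac{1}{x}, & x > 0,\\
\frac{d}{dx}\ln(-x) = \frac{1}{-x}\cdot(-1) = \frac{1}{x}, & x < 0,
\end{cases}
\quad\text{so}\quad \frac{d}{dx}\ln|x| = \frac{1}{x}\ \text{for all } x \neq 0.
```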
Posted: Mar 25, 2014 [ # 20 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Cool, Hans.
Jarrod Torriero - Mar 24, 2014: It cannot form abstract models of the world or of itself. It does not possess advanced reasoning abilities. It cannot comprehend the notion of a process.
Watson is certainly not short on deficiencies. But Watson’s search process does resemble the main function of my right hemisphere: quickly trying many avenues of association until a probable option is found, without conferring with the reasoning processes of my other hemisphere. I don’t mean to imply that this is how anyone else’s mind works; only you yourselves can judge that, as only I can judge mine. But I see at least one intelligent process in Watson, and that’s a start.
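Very roughly, that “try many avenues of association until a probable option is found” style of search could be sketched as below. This is an illustrative sketch only; the function names and the toy scoring are assumptions, not Watson’s actual DeepQA pipeline.

```python
from typing import Callable, Iterable, List, Tuple

def associative_search(
    query: str,
    generators: List[Callable[[str], Iterable[str]]],  # each "avenue" proposes candidates
    score: Callable[[str, str], float],                 # evidence score for (query, candidate)
    good_enough: float = 0.7,
) -> Tuple[str, float]:
    """Try many avenues of association, keep the best-scoring candidate,
    and stop as soon as one looks probable enough."""
    best, best_score = "", 0.0
    for generate in generators:
        for candidate in generate(query):
            s = score(query, candidate)
            if s > best_score:
                best, best_score = candidate, s
            if best_score >= good_enough:
                return best, best_score  # a probable option was found
    return best, best_score

# Toy demo: word overlap as a stand-in for real evidence scoring.
def overlap(query: str, candidate: str) -> float:
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

if __name__ == "__main__":
    avenues = [
        lambda q: ["a city in France", "the capital of France"],
        lambda q: ["the capital of Italy"],
    ]
    print(associative_search("capital of France", avenues, overlap))
```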
I guess my stance on this contest is that I am not expecting AGI, but I am expecting one or two steps towards it. Whereas the Loebner Prize focuses on “human” banter that needn’t also be intelligent, it likely takes more to dynamically compose a well-argued, convincing speech, as that is what TED talks are about. It’s harder for something intelligent to be human than to be intelligent, and I think we’ll see that reflected in the nature of the contestants. If I’m wrong, then we’ll see that soon enough.
Posted: Mar 25, 2014 [ # 21 ]
Experienced member
Total posts: 84
Joined: Aug 10, 2013
Don Patrick - Mar 25, 2014: Cool, Hans.
Jarrod Torriero - Mar 24, 2014: It cannot form abstract models of the world or of itself. It does not possess advanced reasoning abilities. It cannot comprehend the notion of a process.
Watson is certainly not short on deficiencies. But Watson’s search process does resemble the main function of my right hemisphere: quickly trying many avenues of association until a probable option is found, without conferring with the reasoning processes of my other hemisphere. I don’t mean to imply that this is how anyone else’s mind works; only you yourselves can judge that, as only I can judge mine. But I see at least one intelligent process in Watson, and that’s a start.
I guess my stance on this contest is that I am not expecting AGI, but I am expecting one or two steps towards it. Whereas the Loebner Prize focuses on “human” banter that needn’t also be intelligent, it likely takes more to dynamically compose a well-argued, convincing speech, as that is what TED talks are about. It’s harder for something intelligent to be human than to be intelligent, and I think we’ll see that reflected in the nature of the contestants. If I’m wrong, then we’ll see that soon enough.
I agree with all this. Should be interesting. I don’t expect that much, but perhaps I’ll be surprised.
Posted: Mar 25, 2014 [ # 22 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 25, 2014: Cool, Hans.
I’m trying to stay away from the discussion about whether chatbots are capable (or not) of reaching the needed level for this challenge. My statements have spurred more than just ‘good discussion’ in the past here on the board.
However, just to give some insight into my statement about actually entering this XPrize: my project already has most of the capabilities needed for this challenge. It is capable of free association of concepts, building ‘personal’ preferences for certain topics, planning ahead towards goals and yes, even formulating its own goals (both short and long term). The announcement of this XPrize could not have come at a better moment, as we are about to start building a prototype of my model. The needed 0.5 million euro for this has been granted to my project and we are in the process of securing those funds. I’m currently working with two academic organizations, and a third is about to join the project. If all goes well I’ll be working on this full time over the next few years (and probably long after). We are planning to take on the Turing test in 2015, and I think we will be able to take on this XPrize (at the level of actually trying to win it) the year after that.
Posted: Mar 25, 2014 [ # 23 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Sounds pretty serious. (And holy smoke, 0.5 million euro! Are we talking about research in the Netherlands here? Can I ask which organizations one could apply to for funding?)
Anyway, nice to hear. I noticed “working on AGI” on your profile and was curious. Forgive my ignorance though: has the AI’s engine already been programmed to some extent, or is it still on paper? I ask since you mention a prototype, and I’ve noticed that people often use the word “project” before they make the “program”. Just curious.
Posted: Mar 25, 2014 [ # 24 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 25, 2014: Sounds pretty serious. (And holy smoke, 0.5 million euro! Are we talking about research in the Netherlands here? Can I ask which organizations one could apply to for funding?)
Anyway, nice to hear. I noticed “working on AGI” on your profile and was curious. Forgive my ignorance though: has the AI’s engine already been programmed to some extent, or is it still on paper? I ask since you mention a prototype, and I’ve noticed that people often use the word “project” before they make the “program”. Just curious.
Yep, pretty serious indeed
Nothing has been programmed, at least nothing substantial. However, it is a ‘project’ in that there is over four years of substantial R&D behind it, loads of generated documentation (including blueprints, data models, functional designs, etc.) and several people involved, including academic researchers. The point is (from my perspective) that developing AGI simply cannot be done by starting to code some idea and then refactoring it into ‘working AGI’. The scientific research needs to be solid (and extensive) before even thinking about actually building something. We (my team) are now at the point where we can actually start ‘building it’. The funding has been awarded based on the current body of work that is already in the project. At least one of the academic institutions is interested in getting additional research funding (EU money) so as to do ‘applied research’ in parallel with the development of the prototype.
The 0.5 million funding comes from a private investor. This funding has been granted, but we are awaiting the actual deposit of the money. As far as I know now, it will be available sometime in the next two months. Most of this money will go to hiring a few top-notch Python coders for about 6 to 8 months. Those guys are pretty expensive.
You can follow the project on our website mindconstruct.com and/or via all the related social media outlets. Although it has been silent over the last year, starting later this week we will regularly post news on the website and social media channels.
Posted: Mar 25, 2014 [ # 25 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 25, 2014: Can I ask which organizations one could apply to for funding?
Forgot to answer this one: well, as this is a very high-risk investment, my best bet is to get private funding. However, that means you need to find someone who believes in your plans and has ‘some money’.
Besides that, there is a substantial amount of EU funding available for AI-related projects. Look for the EU Horizon funding effort. You need a minimum of two other participants from other EU countries to be able to apply for Horizon funding, though.
Posted: Mar 25, 2014 [ # 26 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Oh yes, I remember someone once posting to join up and apply for chatbot funding that required at least two other international parties. Makes sense if it’s coming from the EU. Thanks for the references, I’ll keep an eye on your project.
Well, as long as the university isn’t returning my emails, it looks like I’ll just have to find myself an eccentric millionaire.
Posted: Mar 25, 2014 [ # 27 ]
Senior member
Total posts: 370
Joined: Oct 1, 2012
Looking forward to seeing your work, Hans! If it is approved, we will also be looking for outside funding. I have also been working on something that can be readily adapted to the challenge, specifically a neural bundle capable of taking a topic, and creating a short paper that would be classified by a grade 6 teacher as being representative of a child at that level, and graded accordingly. Three (3) minutes seems daunting, but in actuality, at the average human speed of 130 ± 10 words a minute, that is only about 390 ± 30 words, so it’s hovering around 400 words. That’s not a lot, and the real challenge may be in making the shorter dissertation more refined. That, and the voice recognition (VR) for the Q&A section. There will be major hurdles there, I think; hopefully I’m wrong. I am assuming this will take place in a large auditorium setting, with lots of ambient noise and an unknown speaker; given these parameters I’d guess that our current system would hit a success rate somewhere around 25%. Definitely not good enough. I’m hoping to have a proof-of-concept module online in a few days, so maybe we can crowdfund it. Best of luck!
Vince
Posted: Mar 25, 2014 [ # 28 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Vincent, not trying to diss your project…
Vincent Gilbert - Mar 25, 2014: I have also been working on something that can be readily adapted to the challenge, specifically a neural bundle capable of taking a topic, and creating a short paper that would be classified by a grade 6 teacher as being representative of a child at that level, and graded accordingly.
I’m somewhat wary of projects that aim at a certain lower grade of human development (like a ‘child’), as that seems to me a big shortcut that simply lowers the expectations. The biggest hurdle is exactly those things that children are still learning, or are not even on to yet, but that grown-ups can do: stuff like more complex lines of reasoning, layers of cause-effect chains, that sort of thing. I’m pretty sure a six-year-old can give a three-minute speech, but I don’t think that any six-year-old will elicit a standing ovation. If he or she does, it’s because they perform way above the perceived level of their actual age.
Vincent Gilbert - Mar 25, 2014: Three (3) minutes seems daunting, but in actuality, at the average human speed of 130 ± 10 words a minute, that is only about 390 ± 30 words, so it’s hovering around 400 words. That’s not a lot, and the real challenge may be in making the shorter dissertation more refined. That, and the voice recognition (VR) for the Q&A section.
I think the real challenge is that, to get a standing ovation, the talk needs to be (at least in some way) inspiring. I like the challenge because it asks for exactly that ‘human’ quality that cannot be ‘faked’ by simply reading out a nice story.
Posted: Mar 25, 2014 [ # 29 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Hans Peter, while I do not wish to dis(respect) your opinion, I do intend to dis(agree) with at least part of it.
I don’t see the notion of setting the project goal of an AI entity at a “developmental tier” similar to that of a 6-year-old child as a “shortcut”, but as a potentially necessary intermediate goal on the way toward something greater. To use a wildly imperfect analogy, everything that exists goes through growth and evolution processes. No human was ever born as a fully formed adult; no wine started its existence in the same state as it will be in when opened and consumed; and no AI project is ever instantly created in its fully mature state (unless you’re keeping secrets). Higher goals are great, but they’re made up of smaller ones, and this is just one of those smaller goals to be achieved along the way. I’m sure that this isn’t Vincent’s “end-all, be-all” goal.
Posted: Mar 25, 2014 [ # 30 ]
Senior member
Total posts: 370
Joined: Oct 1, 2012
No “dis” taken at all, Hans. In this case, grade 6 would approximate a 10-year-old child, and if there is anything published showing an AI even coming close to producing a topical paper approximating what even an average ten-year-old human child produces, I haven’t seen it yet, but I would be extremely keen on doing so! True, there are many algorithms that can take bulk data, parse it, and create a really good synopsis. In fact, I believe that this is what sites such as DBpedia are doing: taking a Wikipedia page and creating a synopsis. But that falls far short of a human child. For this experiment we broke it down into these “must do” areas (a rough sketch of the pipeline follows the list):
get the subject
get the focus
create a synopsis
create an emotional/personal view of the synopsis
explain why this personal view was reached
expanding on the synopsis, relate this topic to other topics from the derived personal perspective
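A minimal sketch of how those stages could fit together, purely as an illustration (the function names, placeholder logic and the PersonalView structure are assumptions, not Vincent’s actual neural bundle):

```python
from dataclasses import dataclass, field

@dataclass
class PersonalView:
    opinion: str                      # emotional/personal stance on the synopsis
    rationale: str                    # why that stance was reached
    related: list = field(default_factory=list)

def get_subject(topic: str) -> str:
    # Stage 1: get the subject (placeholder: the topic itself).
    return topic.strip()

def get_focus(topic: str, text: str) -> str:
    # Stage 2: get the focus (placeholder: first sentence mentioning the topic).
    for sentence in text.split("."):
        if topic.lower() in sentence.lower():
            return sentence.strip()
    return text.split(".")[0].strip()

def create_synopsis(text: str, max_sentences: int = 3) -> str:
    # Stage 3: create a synopsis (placeholder: keep the first few sentences).
    kept = [s.strip() for s in text.split(".") if s.strip()][:max_sentences]
    return ". ".join(kept) + "."

def form_personal_view(synopsis: str) -> PersonalView:
    # Stages 4 and 5: emotional/personal view of the synopsis, plus the "why".
    return PersonalView(
        opinion="I find this topic interesting.",
        rationale="It resembles topics I have stored 'memories' about.",
        related=["a related topic", "another related topic"],
    )

def compose_paper(topic: str, source_text: str) -> str:
    subject = get_subject(topic)
    focus = get_focus(topic, source_text)
    synopsis = create_synopsis(source_text)
    view = form_personal_view(synopsis)
    # Stage 6: expand on the synopsis from the derived personal perspective.
    return (
        f"On {subject} (focus: {focus}): {synopsis}\n"
        f"{view.opinion} {view.rationale}\n"
        f"This also relates to: {', '.join(view.related)}."
    )

if __name__ == "__main__":
    print(compose_paper("robots", "Robots are machines. Some robots can move. Some robots can talk."))
```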
We’ll see how close we get. Should have the basic engine up and available to play with in a few days.
In my view, there is another reason for having an AI develop in this fashion. I believe that for an AI to develop a true personality and a self-synthesized view, there has to be a temporal component. In other words, the AI creates papers (using rule-based logic) based on its learned emotions and world views at a certain stage; these, when stored and dated (not necessarily with actual dates, as we create artificial “memories”), become the basis for views created using a neural-net-based engine. The AI can then relate its “opinion” to its own history, and to a world-view history gathered later at any point.
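As a toy illustration of that dated “artificial memories” idea (the names and structure here are my own assumptions, not the actual system):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    stage: int      # artificial "date": the developmental stage when the view was formed
    topic: str
    view: str       # the opinion/paper produced at that stage

class MemoryStore:
    def __init__(self) -> None:
        self.memories = []

    def remember(self, stage: int, topic: str, view: str) -> None:
        self.memories.append(Memory(stage, topic, view))

    def history_for(self, topic: str) -> list:
        # Earlier views on a topic, oldest first, so a later engine can relate
        # its current "opinion" to the AI's own history.
        return sorted(
            (m for m in self.memories if m.topic == topic),
            key=lambda m: m.stage,
        )
```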
We will see how close we get. Could be interesting, could turn out to be ****
Vince