Posted: Apr 17, 2014
[ # 31 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
Hans,
I agree. Having a design on paper may prove the value of mathematical notation. Investors can have your design checked by their own mathematician. That’s an advantage that a running A.I. system with tens of thousands of lines of programming code and data ... may NOT have! You’re right about that.
Thanks for your good advice.
Posted: Apr 17, 2014
[ # 32 ]
Experienced member
Total posts: 40
Joined: Mar 31, 2014
Et al,
Currently, chatbots seem to be centralised and so can be easily linked to independent programs.
Let’s assume that every user/machine/switch has its own independent chatbot, as possibly they might. Am I right in thinking that such independent “chatbots” can’t ask themselves any question unless they are linked to:
- a third-party chat script language(?)
- a third-party dictionary
- a third-party database
- third-party SQL
and possibly other third-party modules?
This list can only grow when it comes to controlling machines, right down to switches (which of course Simplex caters for, and which in turn means that any type of machine can eventually be controlled by voice commands!).
THEN
- does this mean that every device needs to have a chatbot system plus all those independent programs as well?
- surely there must be associated time penalties due to going through all those independent programs?
- doesn’t this mean that the total RAM demands are excessively high?
- when things go wrong how on earth do you figure out who’s responsible?
- etc
Do remember that Simplex currently does not rely on third-party software but is an integrated solution. It even avoids making use of any Windows functions so as to be independent of operating systems, etc.
In evolution, the elegant solution tends to be the preferred choice. Is it different in computing?
With tongue in cheek, perhaps computing demands another piece of jargon to supplement “self-aware”. Perhaps “multi-aware”? Or ... hmm ... now I am hesitating ... oh bugger it ... how about “multiswear”?
It reminds me of a comment I heard to the effect that the Pentagon was built in a round shape to facilitate passing the buck!
What am I missing? Is there something I just don’t understand?
Incidentally, I had a stab at calculating how many different ways a user might ask the question under discussion and had to give up! Let’s just say there are a lot.
The way Simplex is designed means it should automatically deal with all such variations of the question. In the testing I have undertaken, Simplex seems to deal with each correctly.
Are other “chatbots” designed in the same manner? I.e. do they automatically deal with all such cases?
Jim.
Posted: Apr 17, 2014
[ # 33 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
Admins, please delete my account. I can’t stand any more of the egos or the obtuse grammar that is supposed to substitute for a real answer; not to mention the replies here that don’t have anything at all to do with the original post or question. I came from and believe in the world of open source, and most (not all) of you are the antithesis of that.
I was hoping this site would be a help to me, a newcomer to AI, to learn new things. But alas, with the exception of one or two people, it seems to be all about the best method of faking it. Or at least the best way to claim greatness without substantiating it. This is not what I came here for.
I wish you all the best in whatever your endeavors are, and thank you for the time that I’ve spent here.
John
Posted: Apr 17, 2014
[ # 34 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Ouch, John. I’d hate to see you leave on account of a topic such as this that comes around only twice a year (in every AI forum, I’m afraid).
I’m sure that if you keep your account, you will find that, e.g., the Chatscript development section contains only helpful comments by helpful people.
Posted: Apr 18, 2014
[ # 35 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
John, I’m afraid that I cannot help you with deleting your account, because there is no mechanism in the forum software for removing accounts. I can ban you, but that’s not the same thing, and wouldn’t be useful, really.
Look, I know that this community has its “warts”. All communities do. When you add to that the fact that we’re all somewhat passionate about the field (to varying degrees, of course), this can lead to friction, and it can also be frustrating at times when your ideas don’t seem to be given the respect or consideration that you feel they deserve. But it’s good to keep in mind that there are always more ways to achieve a goal than any one person can envision, and that not all goals are the same, either. My suggestion is to take what you can benefit from, help those who are genuinely trying to learn, and ignore the rest (especially if it upsets or frustrates you). I personally don’t want to lose you or your insights, but you should not have to endure that which frustrates or upsets you. So if you feel that you must go, nobody here will stop you, but you should know that you will be missed.
Posted: Apr 24, 2014
[ # 36 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
Folks,
I’d like to apologize for my little rant (above). Please rest assured that you guys here did nothing to cause it. I’ve had a difficult few months in real life and, sadly, you got the brunt of my frustrations.
My rant was not aimed at any of you personally, and I would like to make that clear.
James, Dave, and a few others have emailed me privately out of concern that they were somehow responsible for my little breakdown, so I am posting this in the hopes that no one here should take my venting personally. The only person responsible for my rudeness is myself.
It was out of character for me, but that’s no excuse. My apologies to anyone I may have offended. And my thanks to all of you.
I now return you to our regularly scheduled programming.
John
Posted: Apr 24, 2014
[ # 37 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Not to worry, John. We all have our off moments (days, weeks, months, etc.), and I know from personal experience just how much rough times can color one’s outlook. Just know that many of us here (well, at least me) stand willing to assist in whatever capacity, even if it’s just to be a “venting target” (though that sort of thing is best kept to email channels). You’ve got my email, so if you want to “talk”, please don’t hesitate to use it.
Posted: Apr 24, 2014
[ # 38 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
Thank you, Dave. I appreciate that more than I can tell you. I appreciate this board and the people here. I’m learning quite a bit.
My thanks and apologies to all of you. Now, since I don’t want to continue hijacking James’s thread… let’s get back on track.
How long have you been working on your project, James?
Posted: Apr 24, 2014
[ # 39 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Well, if we want to get back on track;
James Curran - Apr 17, 2014:
am I right in thinking that such independent “chatbots” can’t ask themselves any question unless they are linked to
- a third-party chat script language(?)
- a third-party dictionary
- a third-party database
- third-party SQL
and possibly other third-party modules?
Taking the most basic chatbot as an example: it doesn’t have a separate dictionary or database; it’s all combined into one large script of rules, run by an interpreter. It’s easy enough to research how these work.
Now that we’ve exited the realm of magic, I don’t doubt that you have created the equivalent of a chatbot from scratch with great effort. Some of your bot’s responses seem pre-written; others seem to incorporate data search results. This is what chatbots do, some better than others. As far as language flexibility goes, most chatbots are equipped with word patterns tied to responses and/or data searches. E.g. “What colour is a North-Korean sunset in mid-winter?” and any variation thereof would trigger the response tied to the built-in pattern “What colour is X”, as long as those three words match in the sentence.
How thoroughly the colour of “X” is actually investigated, and how broad the range of automated answers is, depends more on the creator’s scripting skills than on the system’s abilities.
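For anyone unfamiliar with such pattern-to-response rules, here is a minimal sketch of the idea in Python. It is not Simplex or any particular chatbot engine; the rule table, patterns, and responses are invented purely for illustration:

import re

# A toy rule table: each pattern captures a wildcard "X" and maps to a response template.
RULES = [
    (re.compile(r"\bwhat colour is (?P<x>.+?)[?.]?$", re.IGNORECASE),
     "I have no data on the colour of {x}, but I could look it up."),
    (re.compile(r"\bwho (?:is|was) (?P<x>.+?)[?.]?$", re.IGNORECASE),
     "Searching my database for a person called {x}..."),
]
FALLBACK = "I'm not sure how to answer that."

def respond(sentence):
    # Return the response of the first rule whose pattern matches the input.
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(x=match.group("x"))
    return FALLBACK

# Any phrasing that contains "what colour is X" triggers the same rule:
print(respond("What colour is a North-Korean sunset in mid-winter?"))
print(respond("Tell me, what colour is the sky over Mars?"))

Real engines such as AIML or ChatScript work along these lines, only with far richer pattern languages, wildcards, and synonym sets.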
Posted: Apr 24, 2014
[ # 40 ]
Experienced member
Total posts: 40
Joined: Mar 31, 2014
John, great to hear from you.
The original concept for dealing with words occurred to me some 25 years ago.
I had a break from computing for about 20 years, but during that time my mind constantly returned to the problem of how the mind manages the thinking process.
About 5 years ago I decided to develop my ideas on the subject and, amazing as it may sound, spent a year listening to BBC Radio 4 (a channel that specialised in “talk” rather than music). I would play around classifying often-repeated phrases, etc., and developing the underlying structure of my program. It was during that phase, and in dealing with many lists, that I fully appreciated how a computer program is essentially a list! A list of things to do.
After that came the intense business of coding, which continues.
Jim.
Posted: Apr 24, 2014
[ # 41 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
I debated about starting a new thread for this next question, so if anyone here thinks it should be moved, please feel free to do so. But this thread already deals with many conceptual issues, so I thought I’d throw one more into the mix. I’d be interested in opinions on how any of you have or have not addressed this.
I’ve been having a thought lately. I’ve got a relatively new border collie puppy. She’s one of the most intelligent dogs I’ve ever seen, and even after 8 months she continues to blow my mind almost daily. I won’t mention the fact that I want a bumper sticker that says “My Border Collie is smarter than your honor student” lol.
Anyhow… the thing I’ve noticed about her… is her curiosity about everything. I’m thinking that curiosity is an important facet of intelligence.
So my question is: how the heck would you program a sense of curiosity into your AI, or in my case, into my robot?
Posted: Apr 24, 2014
[ # 42 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
That’s a great question, John! And given that this thread has already been “winding down the garden path”, so to speak, I don’t think we need to worry about moving your post.
I’ve not even considered this aspect of chatbot “training”, let alone implemented it. I’ll have to give some thought to how I would go about it.
Posted: Apr 24, 2014
[ # 43 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Pretty interesting how you’ve tackled your project, James.
I don’t know if curiosity is part of intelligence or merely coincides with it because a greater mind has more space to fill, but it is the basis of learning and wisdom. It had me stumped for quite a while too; I still haven’t properly implemented it. As a makeshift, I’ve made it so that when a user mentions a topic, my program checks roughly how much knowledge it has about it (the size of that part of its database). When this is below certain thresholds, it is instructed to ask default questions about the topic.
There should be better means for curiosity, but this is one that’s not too hard to implement for the sake of acquiring knowledge.
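As a rough illustration of that threshold idea, here is a minimal sketch in Python. It assumes a simple dictionary-backed knowledge store; the topics, facts, threshold value, and default questions are invented for illustration and are not taken from the program described above:

from typing import Optional

# A toy knowledge base: topic -> list of known facts (contents invented for illustration).
KNOWLEDGE = {
    "dogs": ["dogs are mammals", "border collies herd sheep", "puppies like to play"],
    "sunsets": [],
}

CURIOSITY_THRESHOLD = 3  # ask questions until this many facts are known about a topic
DEFAULT_QUESTIONS = [
    "What can you tell me about {topic}?",
    "What is {topic} used for?",
    "Why is {topic} important to you?",
]

def curiosity_response(topic: str) -> Optional[str]:
    # If knowledge about the topic is below the threshold, ask a default question.
    facts = KNOWLEDGE.get(topic, [])
    if len(facts) < CURIOSITY_THRESHOLD:
        # Cycle through the default questions based on how much is already known.
        return DEFAULT_QUESTIONS[len(facts) % len(DEFAULT_QUESTIONS)].format(topic=topic)
    return None  # the bot already knows enough; no need to be curious

print(curiosity_response("sunsets"))  # -> "What can you tell me about sunsets?"
print(curiosity_response("dogs"))     # -> None (threshold reached)

In a real system the “size of that part of the database” could be a row count or node count rather than a list length, but the control flow is the same.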
On that note, if anyone remembers Stephen Fry’s explanation on QI of how humans classify objects (tools, shelter, ... something something), or can point me to research on how children classify object traits, it would help me fine-tune my AI’s line of questioning.
Posted: Apr 24, 2014
[ # 44 ]
Experienced member
Total posts: 69
Joined: Feb 6, 2014
Thresholds…that’s an interesting start. Thank you, Don.
Posted: Apr 24, 2014
[ # 45 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
On further thought, I think the general procedure for curiosity is (see the sketch after this list):
1. Notice something out of the ordinary (this may include location, colour, shape).
2. Investigate.
3. Stop investigating once sufficient information or a procedure has been established to deal with #1 (curiosity threshold).
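A minimal sketch of that three-step loop in Python; the anomaly test, the investigation step, and the threshold value are placeholders invented for illustration:

from typing import Callable, Dict

def curiosity_loop(observation: Dict[str, str],
                   is_ordinary: Callable[[Dict[str, str]], bool],
                   investigate: Callable[[Dict[str, str]], Dict[str, str]],
                   threshold: int = 3,
                   max_steps: int = 10) -> Dict[str, str]:
    # Toy version of the three steps: notice, investigate, stop at a threshold.
    findings: Dict[str, str] = {}
    # Step 1: only out-of-the-ordinary observations (location, colour, shape, ...) trigger curiosity.
    if is_ordinary(observation):
        return findings
    # Step 2: investigate...
    for _ in range(max_steps):
        findings.update(investigate(observation))
        # Step 3: ...and stop once enough is known to deal with it (the curiosity threshold).
        if len(findings) >= threshold:
            break
    return findings

# Hypothetical usage: an unfamiliar red switch triggers a (placeholder) closer look.
seen = {"object": "switch", "colour": "red"}
report = curiosity_loop(
    seen,
    is_ordinary=lambda obs: obs.get("colour") != "red",
    investigate=lambda obs: {"closer_look": "it is an emergency-stop switch"},
    threshold=1,
)
print(report)  # {'closer_look': 'it is an emergency-stop switch'}

The threshold in step 3 plays the same role as the knowledge-size threshold mentioned earlier in the thread: it is what stops curiosity from running forever.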