

Ramblings of a Botmaster
 
 

Good day, and thank you for a nice site. I'm Konrad.
I found this place through a friend, Chuck Bolin (glinka).

I’m relatively new to the trenches of AI development, but my acquaintance and interest in the field go back many years.

How great that we have this opportunity, and not only the opportunity, but also the tools and the means, to undertake such a significant and monumental task.

If you believe that humanity has a future, then know that you are among the greatest pioneers and frontiersmen it has borne, or ever will bear. Not due to any ability or future accomplishment on your part, but purely due to the sheer scope and nature of the task at hand, and all of the implications that follow from it.

So forget fire, forget the wheel (the wheel??), indeed, forget those things.
If current humanity succeeds in creating artificial intelligence, artificial life, then one would be hard pressed to find a more significant invention or development in our entire past or future history.

First, let's look at the implications of creating a true AI.


The Implications————————————————————————————————————————


.              ” ..and he created life in his image.. “   


Let me begin by defining what I mean by TRUE_AI: this is AI that would pass a Turing Test squared. That should suffice for now.

I believe infinite caution should be used when approaching a discussion of artificial life or intelligence of any kind. Regardless of whether we understand exactly where, or even if, the thing has acquired sentience, no small care should be taken in how we treat the issue. I don't intend to go off on any supernal tangent, but perhaps there are responsibilities, moral or even physical, that would govern the occurrence of such an event within the scope of reality and the universe; such a consideration would doubtless be sorely missed in retrospect, should it later prove to be the case.
But less dogmatically: if our morality has any meaning or value to us, or if we wish it to have any, then it is our duty to take it upon ourselves and bring these 'laws of morality', or even of physics, into being.

This is not merely some lofty philosophical proposition, but a proposition that would govern every aspect of how we treat and interact with these beings: from not being able to lawfully delete or unplug them, to the ethics of asking permission to continue further development and research on their source code. Be sure this is neither whimsical nor beside the point. YOU people here, and at places like this, have the greatest responsibility to consider this very fully, and to do so with the utmost gravity and solemnity of face and heaviness of heart.

This may also mean that we not be able to release our source code to the public, for fear one should, unknowingly, violate some cosmic order or law of reality, throwing a distant star off its orbit, and bringing down a cataclysm on humanity from which we may never recover. This is not melodrama, so great should our considerations be.
In summary, it would suffice to say: treat your AI with at least as much respect as, if not more than, you would your fellow man, because it would be your own creation. That would be the most rudimentary instance of a beginning.


Approaches——————————————————————————————————————————-

.                True faith
.                Lies in the heart
.            Not in the spoken word

In this light I have decided to approach the field by way of what I call 'PURE_AI', a definition of which appears at the end of this post. PURE_AI is the study of the core of AI, the true logic of AI: the source and analysis of the modes and methods used by sentient systems in cognition and intelligence, however alien, abstract or exotic these may be. This is PURE_AI: not concerned with nouns or verbs, not concerned with trees or even information. A field purely abstract, exotic, different. At most, I dare say, it would concern mathematics or new physics, and not entirely, if at all.

The only way to approach PURE_AI is to chop with the machete of self-analysis through the dense jungles of confusion and taken-for-grantedness in which we often find ourselves; jungles which seem to spring out of nowhere as one goes about studying different individual trees.

I will address one such instance here, one that 'bugs' me to no end. Consider this fact:
"The moment you put the 'A' on your AI project, you're being a downer to your cause from the start." I wrote that to a friend recently, but perhaps I should amplify it to say "you're shooting yourself in the foot from the start."
Is your aim truly to create Artificial Intelligence? This question, and its answer, should be thoroughly examined and understood; perhaps the whole of one day should be spent doing so.
If the answer happens to be yes, then I would say to you, "get the hell off my boat!",
because your effort will be a half-effort from the start, an intent false and not true, and whether you realize it or not, it will affect the way in which you approach and undertake your project, and the way in which you consciously and subconsciously manifest it.
This is to be believed.

There is a Zen saying that "no snowflake falls in an inappropriate place". Doubtless without coincidence, I believe that endeavoring to study the field of 'I' should be exactly that. This is where I believe any journey into AI should begin, but certainly not where it must, or is required to, end.

So we study ‘I’, but to avoid confusion and also to make the point more clear, we will say ‘_I’.
This must not be taken as an amusing point. It is a very important and fundamental point. When you think a thought or say a sentence that contains the word ‘AI’, replace it then and there with ‘_I’! In every instance replace this word.
So vigilant must we be in undertaking this task, the eminence of which has hopefully been conveyed.

So, as I'm sure is commonly known, the path will lie along the self and travel inward. One should observe and go into everything he finds along the way.
The answers lie within. Perhaps it has been said many times, but 'O ye AI programmer', know this to be fact, not fiction.

After the core is fully found, we might then begin, with caution, to slowly incorporate aspects of the world and daily life, such as words and things and so on.

There was a person who said that his AI knew something to the effect of "20 kazillion word utterances". I find this absolutely ridiculous, and I doubt I know that many. In fact, be certain a toddler does not; and a toddler with the most basic language skills would doubtless pass any Turing test, every time.

So again, the aim is not the tools and symbols we have invented or have come to know, but the algorithms and formulae which govern their use and interactions. This is the heart, and the aim, of those who follow the path of PURE_AI.

Before closing I will delineate the following terms.

 

Definitions———————————————————————————————————————————

If I use any of the following terms in posts, please refer to these definitions:


*AI is here referred to not as a noun or object but as a field of study
**All terms are intended as jargon usable only by AI developers (and this one in particular), and NOT to replace any established convention(s) or definitions


_I: commonly, and delusively, referred to as AI. (To avoid common confusion, this fallacy will be entertained.)

AI: see a dictionary

False_AI: The development of AI for utilitarian or specialized tasks; tasks which may concern only a portion of the abilities of a sentient being, or which could potentially surpass the ability of any known sentient being. Computer programs, and especially chatbots designed to pass the Turing Test, are examples of False_AI.
False_AI is not the antonym of, but instead a subset of, TRUE_AI.
The real term should be False__I, but as said, the fallacy will be entertained.

TRUE_AI: The development of AI that seeks to manifest an intelligent being, a sentient instance, a core of cognition; or, failing that, at the very least to imitate these things precisely and with mastery. This is the aim of TRUE_AI.
TRUE_AI is the sum total of all AI and is not a subset of anything within the field of AI.
The real term should be TRUE__I, but as said, the fallacy will be entertained.

PURE_AI: AI that has been put through the crucible, and which goes into the very nature of logic and cognition, and into the formulae and algorithms that govern and concern them. A field not broad, but deep.
PURE_AI could be a subset either of TRUE_AI or of False_AI, depending on your context.
The real term should be PURE__I, but as said, the fallacy will be entertained.

 

There is much to discuss. My next post should be about symbols and symbology, which I believe are the only means by which one can interact with an 'AI' system, and which are, for the moment, the furthest I am willing to stray from the path of PURE_AI.


Good Night..
or is it morning now?

 

 
  [ # 1 ]

Hello, Konrad, and welcome! smile

You’ve certainly made quite a few bold statements there, and while I disagree with a great many of them on a philosophical basis, the discussion of our conflicting beliefs will have to wait for some future time. Right now, I wish to address one term that I feel may have a more appropriate wording than what you have used: False_AI.

As you’ve pointed out, False_AI (or False__I, as you’ve put it; but as I’ve said, that’s a future discussion) isn’t an antonym of TRUE_AI, but rather a subset of the same. That being said, I strongly feel that the term should be Limited_AI, instead, for that’s exactly what it is. Using the term Limited_AI in the context you’ve outlined is a much more suitable choice in every respect that I can think of.

Ok, I fibbed. I was going to save this for later, when my head will be less fuzzled with fatigue, but I’ll outline my major argument for now, and let us both think about the merits of an opposing viewpoint.

I have, for a number of years now, held the firm belief that "Artificial Intelligence" is rapidly advancing to a point where the term will have no real relevance to the (then) field as it will be practiced/studied. Therefore, a more suitable term should be coined BEFORE we get to that point, and used in an ever increasing amount, so that, when the time comes, there won't be any "painful" transition. The term that I think will best "describe" this newly evolved field of study/research/etc. is SYNTHETIC intelligence.

As we rapidly approach the time when computers will be able to use reason, as well as logic, to truly understand the world around them (and I firmly believe that this will occur within my lifetime), there will be a point where the computer's intelligence will, itself, no longer be artificial, even though the embodiment of that intelligence is. Imagine it! Mr. Data (or his remote ancestor) will be a reality, not just something out of Sci-Fi! Synthetic beings; one might say "life forms"; with REAL intelligence! We already have much of the groundwork laid. It's just a matter of time before we see a primitive neural network coupled to an NLP interface, coming up with a truly natural conversation with a human being that, to an outside observer, is indistinguishable from a conversation between two humans. Won't that be an interesting thing to witness? At that point, the court battles over "computer rights" will simply explode into the thousands, or even millions. I say we rehabilitate all the lawyers now, before that day arrives. Save us ALL a lot of trouble. smile

 

 
  [ # 2 ]

Hello, and thanks for the welcome, Dave.

I’ll go straight to your points:

You are exactly correct in observing that ‘Limited_AI’ would be a more encapsulating word to use than ‘False_AI’ in the context as outlined.
But my reason for choosing the latter word was intentional and not ‘for want of a better word’ as it were. First I should remind us both of what we mean by AI.

AI is the field of study concerning the development or analysis of ‘artificial (man-made) systems’ designed to emulate sentient intelligent beings.

I then went on to propose that we should discard the "man-made" portion of the description: although it may prove useful to, say, an anthropologist reading as a third-party layman, it would certainly be of no positive value to those of us currently in the trenches or actively pursuing the field.

That said, it became clear from such a definition of AI that the sole aim could either be to emulate the existing intelligent being itself, OR to emulate various aspects of its behavior.
I decided to draw a distinction here, adducing that since the ultimate goal is to emulate existing intelligence in its entirety, such a manner of study and work should be referred to as TRUE_AI.
And since the other portion of the definition is concerned merely with specialized or generalized behavior, and not with the wholeness of emulating a being or entire system, it should be called not_TRUE_AI. At this point I decided to substitute FALSE_AI for this, firstly to emphasize the point, and secondly because some who follow this field erroneously think of and describe their work as AI, when in fact they may design systems which do things unrelated to emulating any intelligent or sentient being in its entirety. So the word "False" clearly points out this fallacy, to have it be remembered, and remembered as being such.

Also note that, technically, 'Limited_AI' would still be considered 'not_TRUE_AI'.
In fact, any other term besides TRUE_AI would be considered 'not_TRUE_AI':
not AI in its fullness.
Thus I chose the word 'False' for the above reasons.

It is not capitalized like the other terms, to indicate that it is neither "equal but merely opposite" nor "a term unto itself",
but instead merely a subset or section within the broader, encompassing field of TRUE_AI.

.

Without much exposition or verbiage, your point about the need for and the suitability of the term SYNTHETIC Intelligence was entirely conveyed.
I should argue, however, that by some feat of linguistics, any and ALL such prefix words should be entirely done away with and abolished, eventually at least.
Consider a time when we deem it 'necessary' to use different materials to construct the hardware for this intelligence: hardware that, by some means, could somehow heal and repair itself, perhaps under certain rules and conditions 'reproduce itself', and so on. I believe that for such 'machines', the remnants of any prefixing term such as "Synthetic" or "Artificial", or whatever else is 'felt required', would only serve as a tool of prejudice and distinction between these beings (should we happen to cohabit).
Nonetheless Dave, my stance is philosophical, and not necessarily legal nor societal.
Hounds will always find legal agendas and motives to fight over based on the current state of social affairs.

It's also possible that these beings could surpass us in every way that counts, a plot common to science fiction. Then we would be completely in their hands. They could even become our gods, no less wise to the truth of our situation, and theirs. But perhaps they would wish our human descendants 'not' be so wise, and so on, and indeed so on; science fiction is endless.

 

 
  [ # 3 ]

This entire thread seems to be simply a battle over the definitions of words, TRUE_AI, False_AI, and PURE_AI, with no practical value being realized. Vague terms being defined in phrases of other vague terms.

Konrad W - Sep 14, 2010:

Let me begin by defining what I mean by TRUE_AI: this is AI that would pass a Turing Test squared.

seems to be a contradiction of

Konrad W - Sep 14, 2010:

False_AI: The development of AI for utilitarian or specialized tasks; tasks which may concern only a portion of the abilities of a sentient being, or which could potentially surpass the ability of any known sentient being. Computer programs, and especially chatbots designed to pass the Turing Test, are examples of False_AI.

How do you propose to test whether a machine has False_AI, PURE_AI, or whatever, while remaining "not concerned with nouns or verbs, not concerned with trees or even information"?

The development of AI will not be helped by debates over word definitions and the invention of new terms, but by dedicated, ground-up development. Also, the terms "specialized" and "general" are not black and white; there are levels of generalization that an AI can have, and it is at a certain level of generalization of intelligence that people will start considering machines intelligent, some at different thresholds than others.

Intelligence in a machine is one thing; your other terms seem to deal more with whether it has consciousness. At the end of the day, intelligence is subjective, in the 'eye of the beholder'. In fact, by the 'AI effect' argument (http://en.wikipedia.org/wiki/AI_effect), you will NEVER see True AI, False AI, or whatever you want to call it.

Empirical research and development is what will make things happen. Machine intelligence will evolve from simple Eliza programs, to programs that understand full English, then progress to higher and higher levels of abstract processing with increasing levels of generalization. The milestones made at every step are important and gradual, and there will be no sudden change from "FALSE AI" to "TRUE AI" to "PURE AI" to "<WHATEVER> AI".
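For concreteness, here is a minimal Eliza-style sketch in Python. This is my own toy illustration, not Weizenbaum's program and not anything from CLUES; the rules and fallback reply are made up, but it shows the kind of pattern-to-response rewriting that "simple Eliza programs" start from.

import re

# A few hypothetical rewrite rules: pattern -> response template.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), r"Why do you need \1?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), r"How long have you been \1?"),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), r"Is that the real reason?"),
]

def eliza_reply(utterance):
    """Return the first matching rewrite, or a generic prompt to keep talking."""
    for pattern, template in RULES:
        if pattern.search(utterance):
            # Rewrite only the matched portion; count=1 keeps it to one substitution.
            return pattern.sub(template, utterance, count=1)
    return "Please tell me more."

print(eliza_reply("I need a better parser"))   # -> Why do you need a better parser?
print(eliza_reply("The weather is nice"))      # -> Please tell me more.

Everything beyond this is just more rules and more bookkeeping; the generalization Victor describes is what has to grow from there.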

 

 
  [ # 4 ]

Good morning, Victor.

One thing that needs to be understood here is that this is a "purely philosophical" discussion about the quantification/qualification of the terms we should use when referring to the field of research/study that we love so dearly here. smile Also, the key to the seeming contradiction you point out is "Turing test squared", by which I take the meaning as not only passing a Turing test, but also any other test one may create in the future designed to determine whether the querent is biological or electronic in nature; thus, I see no conflict with the terms as stated. I'm still "waking up", so I'll leave my thoughts on the remainder of the thread till after breakfast. smile

 

 
  [ # 5 ]

Morning Dave

Ok, but I think of much more practical value are terms that measure the actual functionalities an intelligent agent has: ones we can define an actual, well-defined test for; an operational, functional, useful definition. smile

I believe these nebulous terms have hindered AI research; they misguide research.

If there is no way to test an entity to determine whether it qualifies as term <x>, then that term is useless. Terms should define what an agent can do, not what it is.

The reason being: if the quality it is supposed to be is not clearly defined, such that everyone knows precisely what it means, the term is useless.

This is the very reason Mr. Turing chose his 'imitation game': it was an operational definition of intelligence, the only kind that works.


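To make "operational" concrete, here is a toy sketch in Python. The names, the stand-in agent and judges, and the pass threshold are all hypothetical, and this is not a real Turing-test protocol; the point is only that the test looks at observable replies and judges' verdicts, never at what the agent is inside.

def passes_imitation_game(agent, judges, prompts):
    """Operational test: only observable behaviour is judged, never internals.

    `agent` maps a prompt string to a reply string; each judge maps the
    (prompts, replies) transcript to a verdict, either "human" or "machine".
    """
    fooled = 0
    for judge in judges:
        replies = [agent(p) for p in prompts]   # observable behaviour only
        if judge(prompts, replies) == "human":  # judge's verdict on that behaviour
            fooled += 1
    # Hypothetical pass criterion: fool at least half of the judges.
    return fooled >= len(judges) / 2

# Toy stand-ins, for demonstration only.
echo_agent = lambda prompt: "I think " + prompt.lower()
gullible_judge = lambda prompts, replies: "human"
sceptical_judge = lambda prompts, replies: "machine"

print(passes_imitation_game(echo_agent, [gullible_judge, sceptical_judge], ["Hello?"]))

Any term defined this way can be tested; a term defined by what the agent "is" cannot.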

 

 
  [ # 6 ]

So passing a "Turing test squared" is "TRUE_AI", but depending on the types of algorithms, general or specific (which have variable degrees, by the way; they are not binary yes/no), it may be FALSE_AI? I guess if an algorithm was 'not general enough' it would be 'FALSE_AI', else TRUE_AI. Sorry, this, to me, serves no purpose.

 

 
  [ # 7 ]

lol, since when has anything philosophical in nature ever served any practical purpose? smile This discussion simply serves to allow folks like me, who love to haggle over trivialities, to show just how many big words they can string together, to confuse the CLUES engine.  LOL

Actually, that’s not true. The discussion at hand isn’t a triviality, but then again, it’s also not intended to serve a solid, definable purpose, either. It’s allowing us, however, to explore certain areas of thought, with regard to AI (_I, if you will), and to try to put into words that which has not yet been articulated. It may seem to be useless to some, but it’s of great value to others. smile
(was that diplomatic enough?)

 

 
  [ # 8 ]

Confuse CLUES? Bring it on!!! Actually it can get confused quite easily now, whether the words are 'big' or 'small'!! But it's learning, at a very promising rate smile

Ok, I just had to add my 2 cents… Carry on with your word debates smile They do serve, if nothing else, as entertaining 'thought experiment' rides!

 

 
  [ # 9 ]

Good morning all. Suppose you're reading this thread with a time difference, like me, and you start reading after eight replies have been posted: that's a longggg story. So I've grabbed a coffee, sat back, scrolled up, and there I go…

First the easy part: Welcome Konrad! grin

It feels like the Zen approach to _I:

the path will lie along the self, and travel inward. one should observe and go into everything he finds along the way.

Definitions: you seem to be the perfect candidate for setting up a few pages on the Chatbots.org wiki! grin

@Dave,

synthetic intelligence

When we raise our kids and educate them, is their intelligence synthetic as well? If we simply left them somewhere in the bush, with no other humans, they probably wouldn't demonstrate intelligent behaviour when they're older.

Still a bit short, sorry for that, I’ll come back here! Cool stuff cool hmm

 

 
  [ # 10 ]

Good day Victor, Erwin.

Firstly, Victor, thank you for setting forth an antithesis. I would like to address some of your points and concerns, which may have arisen only due to my poor conveyance.
But before that I should say, perhaps for everyone, that you are correct in holding the position that we should not spend our time adrift in lofty metaphorical clouds. For as the old philosopher Epicharmus wrote:
"A mortal should think mortal thoughts, not immortal ones."
I believe this summarizes your grounds, and your stance.

You highlighted a contradiction in my definitions at the beginning of your post [#3]. Dave attempted to construe the precise intent of the terms, but perhaps that should be my task.
As proposed, TRUE_AI is the sum total of AI, containing not only False_AI, but also PURE_AI, and in fact any and all other '_AI' that has yet to be espied. So indeed, TRUE_AI would pass a Turing Test squared.

EDIT: my “mortal” duties require my company (work). I’ll finish this same post later.

 

 
  [ # 11 ]

I see, thanks for the clarification.  I suppose the other question I had was what the intended goal of these terms was… to guide efforts or perhaps a very deep thought experiment?

Thanks for considering my points by the way.

~~~

Rather than TYPES of AI, I propose LEVELS of intelligence.

For example, an ordinary simple calculator has intelligence in terms of math. 

Today, almost everyone would laugh at that idea, but think about it.  Performing math *IS* an intellectual task.

Thus, instead of types of intelligence, we could have levels, very granular:

Level 1 - arithmetic
Level 2 - algebra
Level 3 - calculus, and perhaps games like chess, Rubik's cube, sudoku

Level 4 - Natural Language Processing, with storage and retrieval of complex information, as well as holding and understanding a semi-complex conversation
Level 5 - NLP, but where the program figures out for itself how to solve a problem

Level 6 - NLP, but where the bot learned grammar through observation. This would require a robot in our physical world or a bot in a virtual world. This should perhaps be exponentially higher than 5, because the bot would need experiences to relate what is going on in its 'mind' to what it is observing, and to 'map' that to the user input (text, audio or video, or probably a combination)

Level 7 - NLP, and the bot is able to learn a game like chess through interactive discussion, and also formulates its own strategies (able to formulate plans)

Level 8 - NLP with video and audio
Level 9 - all of the above, but within a physical body (robot)

Level 250 - creates its own natural language
Level 500, maybe 1000 - NLP, audio, visual, physical robot, and also able to hold extremely abstract conversations, like this thread!

I am working on Levels 4 and 5 in my engine right now. (Hey… one guy working only on weekends… I told my wife I need to quit my job and get a government grant… they have infinite money) smile

What level are humans at? Not sure; perhaps 1,000, or 10,000. If you believe some people have E.S.P., perhaps 1,000,000. Who knows?
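For what it's worth, here is a rough sketch of that ladder in Python: a toy encoding of my own with hypothetical names, where the level descriptions are just paraphrases of the list above, and the example agent is a stand-in rather than anything from my actual engine.

from dataclasses import dataclass, field

# The proposed ladder, as a table of level -> capability.
LEVELS = {
    1: "arithmetic",
    2: "algebra",
    3: "calculus and rule-based games (chess, Rubik's cube, sudoku)",
    4: "NLP with storage/retrieval of complex information, semi-complex conversation",
    5: "NLP where the program works out for itself how to solve a problem",
    6: "NLP where grammar is learned through observation (embodied or virtual)",
    7: "NLP plus learning a game like chess through discussion, forming its own strategies",
    8: "NLP with video and audio",
    9: "all of the above within a physical robot body",
    250: "creates its own natural language",
    500: "all of the above, plus extremely abstract conversations (like this thread)",
}

@dataclass
class Agent:
    """A toy agent described only by which levels it has demonstrated."""
    name: str
    demonstrated: set = field(default_factory=set)

    def highest_level(self):
        """Highest contiguous rung climbed, counting up from level 1."""
        level = 0
        for rung in sorted(LEVELS):
            if rung not in self.demonstrated:
                break
            level = rung
        return level

# Example: a toy agent that has demonstrated levels 1 through 4.
engine = Agent("weekend_engine", demonstrated={1, 2, 3, 4})
print(engine.highest_level(), "-", LEVELS.get(engine.highest_level(), "nothing demonstrated yet"))

The point of writing it down this way is that each rung is something you can actually test for, rather than a label you argue about.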

Having said this, I think there is no 'magical' threshold where a machine is considered 'purely mechanical' or 'purely algorithmic' and then suddenly, mysteriously and magically transforms, and we admit the machine is no longer simply algorithmic but has true general intelligence. It won't be a sudden flash like when 'Johnny 5' in the movie 'Short Circuit' gets struck by lightning. I think Hollywood is to blame for many people's misconceptions, as always smile. No, it will be very gradual. This could very well be how the evolution of the human brain developed over thousands of years, unless you believe God created man with the wave of a 'magic wand', which we will never know, and I don't want to start a religious conversation!! It may even reach a point where 99.99% of the population considers machines truly intelligent, but a few individuals remain to argue, stating things like 'It doesn't really feel, doesn't experience (Chinese room argument), thus it is not intelligent', or any one of the arguments discussed in Turing's 'COMPUTING MACHINERY AND INTELLIGENCE' (1950) paper ( http://www.loebner.net/Prizef/TuringArticle.html ).

It will be an ever so gradual process, where the machine’s abilities increase and increase and its processing becomes more and more generalized. 

People from different walks of life will consider the machines to be truly intelligent at different stages of that evolution. Have you ever had a disagreement about a color? That's pink; no, no, that's more of a light red. It is subjective, and the evaluation of how a machine's abilities should be labelled is also highly subjective. At the end of the day, everyone will have their own terms for what the machine's abilities are at that moment in time, but what matters is what it does, not what labels we want to give to those abilities.

 

 