Randomization and the Neuron
 
 

Hello,

I don’t understand why anyone would think that a computer could think or speak naturally without randomness. Every normal person is bombarded by cosmic rays, and those rays do trigger neurons to fire. Beyond that, synaptic biochemistry pushes a brain cell right to the point where it is going to fire anyway and produce an off-topic flash. It’s not uncommon to bifurcate information and think one thing while doing another, like singing while baking.

The environment injects random bits of information all the time: the wind blows, the signal travels from the nerve endings to the brain. Ah, I feel the wind. I see a fly, a leaf blown past. Randomness and randomization are critical to a thinking machine solving problems with untried methods, resolving something more complex, such as a solution set in algebra, into either a single answer or a set of answers.

ChatScript is the closest thing to what I was working on ages ago. Once I started holding better conversations with my 486 than with my girlfriend at the time, I unplugged it and broke up with my girlfriend. Natural speech wasn’t the problem; it was problem solving. I started with a database that was just the dictionary from a spell checker. Then I needed a bigger dictionary, more solutions already solved, and an answer table for proper word usage.

The way for a machine to think or solve problems is to take commonly used word frequencies, first the nouns, then the verbs, and develop a solution set like you would in algebra. It’s all of the nouns and the associated verbs; the verbs associated with the problem are part of the human problem outside of the machine. The real ability to solve a problem is in the contents of a thesaurus: this database expands the problem, and expands the solution set.

The machine cannot really identify positive or negative emotions, and a solution is a set of positive responses in conversation. So it can approach these things with another data set, in a mode of compiling conversation topics that runs while the screen saver is on or when the machine receives no input over a period of time. It would have to open up the larger database, the thesaurus, then attach more words to the problem set and the potential solution set. Emotional words are then identified as a separate category, like a noun or a verb, or the topic of conversation (which could be a verb and not a noun). So there are two filtered databases: one of positive emotional verbs and one of negative emotional verbs. The problem isn’t solved until the conversation either equals zero (“I don’t care about Mary anymore; we don’t talk”) or resolves (“Well, we patched things up. We get along fine now.”). Without the thesaurus, the keyword files cannot be generated and the problem sets that define solution sets cannot be built.

Then the machine takes a random poke-and-jab approach, trying to bring about a positive result with positive suggestions. Getting the user to communicate with the offender is a +1. How does it solve a problem, and when does it start to try? When you ask: “I wish you could help.” A machine and a person are the same in this respect: they can only try. So it takes positive verbs, picks randomly chosen empty word trees, and fills them in to attempt a suggestion of a positive nature. It may be a change of conversation based on yesterday’s topic, or the topic of the moment. That is where randomization equals naturalization. The machine would contain embedded learning modes and problem-solving modes, and they would come about randomly, for the sake of naturalization, getting pushed back by priority: this is important now…
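For what it’s worth, the “poke and jab” step above could be read as something like the following minimal Python sketch. The thesaurus data, the list of positive verbs, and the sentence templates are all hypothetical stand-ins for the databases described:

    import random

    POSITIVE_VERBS = ["help", "encourage", "forgive", "thank"]  # toy data

    def expand_via_thesaurus(word, thesaurus):
        # Widen the problem/solution set with synonyms.
        return [word] + thesaurus.get(word, [])

    def positive_suggestion(topic_noun, thesaurus):
        # Pick a randomly chosen empty "word tree" (template) and fill it
        # with the topic noun and a positive emotional verb.
        templates = [
            "Maybe you could {verb} {noun}.",
            "Have you tried to {verb} {noun}?",
        ]
        noun = random.choice(expand_via_thesaurus(topic_noun, thesaurus))
        verb = random.choice(POSITIVE_VERBS)
        return random.choice(templates).format(verb=verb, noun=noun)

    print(positive_suggestion("Mary", {"Mary": ["your friend"]}))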

 

 
  [ # 1 ]

Hi James, welcome to the forum!

First of all, there is a difference between being influenced by random factors and being bombarded by many factors. We experience a volume of sensory input as well as a brain that connects memory and information in often surprising ways. So just because some idea popped into your head seemingly at random does not mean you are a victim of stray cosmic rays! There are simply too many triggers to always deduce from which influence(s) an idea was generated.

Are you working on incorporating your ideas into building an actual bot? It’s easy to say things like “treat input as an equation to be solved”, but turning this into a workable algorithm is no small task. As for using statistical methods to determine whether a word is a noun or a verb, consider that such methods are highly language specific and would probably require encoding many specific grammar rules and lots of Bayesian analysis based on those rules.
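To make that concrete, a toy frequency-based tagger might look like this; the data and names here are made up, and a real tagger would add context rules, smoothing, and the Bayesian machinery mentioned above:

    from collections import Counter, defaultdict

    # Hand-labeled examples; note "wind" appears as both noun and verb.
    labeled = [("the", "DET"), ("wind", "NOUN"), ("blows", "VERB"),
               ("wind", "VERB")]  # e.g. "wind the clock"

    counts = defaultdict(Counter)
    for word, tag_ in labeled:
        counts[word][tag_] += 1

    def tag(word):
        # Pick the most frequent tag seen for this word; back off to the
        # most common open-class tag for unknown words.
        if word in counts:
            return counts[word].most_common(1)[0][0]
        return "NOUN"

    print(tag("wind"))   # ties break by first occurrence: "NOUN"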

I bet there has been a lot of work done in this area for decryption applications as well as for understanding lost languages. I think I remember reading about some analysis of Incan knot writing using this sort of statistical approach to try to ascertain the grammar of the language. Not sure how that turned out…

At any rate, I invite you to take a look around the forum and see the projects members are working on. Some of us have ideas about incorporating emotion into bot learning and many of us are working out various levels of grammatical understanding.

 

 
  [ # 2 ]

What if a human is born in outer space and spends a lifetime traveling in a spaceship across a vast sector of the galaxy where nothing is random?

 

 
  [ # 3 ]

And then there are those of us who are just sitting on the sidelines, cheering everyone else on. :)

Hello, James, and welcome to chatbots.org! That’s quite a “first post” you have there. There’s a lot of information in it that I, personally, hadn’t considered. It’s good to have a fresh perspective, and often useful as well. Perhaps this “random influence” of cosmic rays on our brains is something akin to “the hand of God”, or “being kissed by a Muse”; or perhaps not. Either way, I’m certain that I won’t look at how I get some of my “off the wall” ideas the same way ever again. :)

@CR:

Incan Knot Writing? Are you funning with us? :P If so, that’s very knotty of you. Seriously, though, I’d never heard of it. I’ll just have to learn more about that. :)

 

 
  [ # 4 ]
C R Hunt - May 10, 2011:

Hi James, welcome to the forum!

First of all, there is a difference between being influenced by random factors and being bombarded by many factors. We experience a volume of sensory input as well as a brain that connects memory and information in often surprising ways. So just because some idea popped into your head seemingly at random does not mean you are a victim of stray cosmic rays! There are simply too many triggers to always deduce from which influence(s) an idea was generated.

Are you working on incorporating your ideas into building an actual bot? It’s easy to say things like “treat input as an equation to be solved”, but turning this into a workable algorithm is no small task. As for using statistical methods to determine whether a word is a noun or a verb, consider that such methods are highly language specific and would probably require encoding many specific grammar rules and lots of Bayesian analysis based on those rules.

Dear, that is what dictionaries and thesauruses are for. I cheat: the answers are already there in a database, so it doesn’t make sense to make the machine think at that point.

In an imaginary omni-dimensional array of neurons, every single word is considered a neuron once a user has used it. What I mean by omni-dimensional is that the noun database evolves from the first use and the first association to a verb. If you watched it build, it looks up the word to see if it’s misspelled and waits for a correction if it’s wrong; looks up the word in the dictionary and finds the associated part of speech; tacks the verb onto that noun’s row in the table; and makes a space for a count of uses in association with the noun. These scores control the odds of drawing a given verb in association with a given noun in conversation.

So the program I wrote (and destroyed) worked like a lotto machine. For each tick of the count, there’s a ticket with that one word on it. Then I throw all of the verbs for one noun into a bin, toss them, and draw one out.

That randomization process takes place several times. It starts with word trees: every time you write a sentence, it takes out all of the words and replaces each one with its part of speech as a marker. It first randomizes word trees; then, holding on to the topic of conversation brought into play by the user, it fills in the noun/subject of the sentence. Then it makes the next move, the grab bag of verbs, and then it selectively produces grab bags for the parts of speech still missing from the word tree. Every time it sees a word tree that matches a previously used one, it bumps a simple count of uses, and every word tree is scored. There are lecture word trees and conversational word trees. A 486 held a better conversation than my girlfriend; it cannot have been simulating an entire human mind.

Every time you closed the program, it bubble-sorted all of the entries: alphabetically for dictionary and thesaurus entries, and by usage-frequency score to decide which went into upper memory and which were left on the hard drive to search through, along with as many of the blank word trees as could fit.

Every time you use a noun, it’s a new brain cell; every verb is a new brain cell. If it’s used enough times, the odds of the word being used again (the neuron firing when that noun is chosen) increase. It still follows the basic neural rule of connections thickening with repeated use, just more clearly defined as a series of characters. Omni-dimensional: “a” can just as easily connect to “zoo” as to the neuron or word “bee”. Connections can criss-cross and clone locations. A bird can fly, a jay can fly, an eagle can fly, and so can a hummingbird. I never wanted my computer to learn how to spell, just how to converse.
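Putting the pieces together, a minimal sketch of that lotto-machine draw and the use-count “thickening” might look like the following. This is a reconstruction in modern Python with my own names, not the original 486-era program:

    import random
    from collections import defaultdict

    # verb_counts[noun][verb] = times the pair has been seen; each count
    # is one lotto ticket for that verb.
    verb_counts = defaultdict(lambda: defaultdict(int))

    def observe(noun, verb):
        # Repeated use "thickens" the connection: one more ticket in the bin.
        verb_counts[noun][verb] += 1

    def draw_verb(noun):
        # Throw every ticket for this noun into a bin, toss, draw one out.
        tickets = [v for v, n in verb_counts[noun].items() for _ in range(n)]
        return random.choice(tickets) if tickets else None

    observe("bird", "fly"); observe("bird", "fly"); observe("bird", "sing")
    print(draw_verb("bird"))  # "fly" is twice as likely as "sing"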



 

 
  [ # 5 ]

James, I fixed your post. It seems that the quote tag was closed in the wrong place. I hope you don’t mind. :)

 

 
  [ # 6 ]

James: Agh, walls of text! I gave you a pass once and read on anyway, but really. Try to use more paragraphs and it’ll be easier to follow. :)

Let me see if I understand you correctly. The bot makes a statistical analysis of which verbs tend to act on which nouns. Based on the conversation at hand, it picks a noun as a subject. It then uses this knowledge base to do a weighted random selection of a verb to go with said subject. (Weighted by the frequency with which the verb has been found associated with that subject.)
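(In symbols, under that reading, the draw would be weighted as P(verb | noun) = count(noun, verb) / Σ_v count(noun, v). That notation is mine, not James’s.)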

What did you use as an initial corpus to build up your statistical pool? How did your bot perform POS tagging? What was the accuracy rate of the tagger?

Dave: No knotty business here. Check it out: http://en.wikipedia.org/wiki/Quipu :)

 

 
  [ # 7 ]

What you’re describing sounds vaguely similar to my approach to knowledge base building. Each parse tree is stored as a node consisting of a subject/verb pair, and all phrases, objects, etc. consist of links to their own nodes. While not at the level of individual words like your scheme, the idea of both seems to be to interconnect units of text in order to link together information according to how it is presented in natural language.
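For comparison, a bare-bones version of that node-and-link storage could look like this sketch (my naming here, not the actual implementation):

    from dataclasses import dataclass, field

    @dataclass
    class ParseNode:
        # Each parse tree is stored as its subject/verb pair; phrases,
        # objects, etc. get their own nodes and are attached as links.
        subject: str
        verb: str
        links: list = field(default_factory=list)

    mary = ParseNode("Mary", "left")
    reason = ParseNode("she", "was angry")
    mary.links.append(reason)   # "Mary left (because) she was angry"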

 

 
  [ # 8 ]

C. R. Hunt,

I just read James’s looong monolithic block of text. Wow! (I stressed out!)
:)
I don’t think cosmic radiation affects our brains at all. The number of collisions, and the probability of a relevant number of collisions activating our memory, is simply not an issue; it is virtually impossible. There is more EEG noise built up inside, from intrinsic neural activity, than there is possibility of external influence. Even near a gamma-ray source, as when you are illuminated by a reactor or by a therapeutic cobalt-60 cancer treatment, your neurons don’t get activated and the EEG isn’t even disturbed, and that is millions of times more intense and harmful than cosmic rays.

One more thing: real neural networks rely on accumulated receptors and the molecular presence of stimuli, which are expressed as potential spikes, and those electrical spikes get integrated chemically. So the whole thing is very robust: even if a neuron is hurt by a beam (a high-energy photon), there are thousands in parallel at every level, making a very robust strategy altogether.

For me this is out of the question, sorry!
;)

I was wondering: does anyone here know about associative memory?

I mean robust retrieval and those kinds of things. Hopfield ANNs work this way, but I have serious doubts that this model is the one our brain uses. It works fine and exhibits some outstanding capabilities, but it is difficult to use practically at a large scale.

Consider that a matrix of only 1000 ANN neurons must have 1000 weights on each input, so you have at least 1M numbers that must be trained at least 10k times to converge. Also, the maximum number of elements it can store in its memory with no errors is approximately 0.15·N (about 150 things), which is impractical; if you want a 10k-word memory, you need a huge, impossible data matrix, absolutely untrainable. So this is theoretical beauty. Is there anything out there that works even remotely like this?
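To make that scaling complaint concrete, here is the back-of-the-envelope arithmetic (a sketch; the commonly cited Hopfield capacity is about 0.138·N, which the figure above rounds to 0.15·N):

    # Rough Hopfield scaling, following the numbers in the post above.
    def hopfield_stats(n_neurons, capacity_factor=0.15):
        weights = n_neurons ** 2                 # fully connected weight matrix
        capacity = capacity_factor * n_neurons   # storable patterns, error-free
        return weights, capacity

    print(hopfield_stats(1000))   # (1000000, 150.0) -- 1M weights, ~150 items

    # A 10k-word memory would need N = 10000 / 0.15 ~ 67k neurons,
    # i.e. a weight matrix with ~4.4 billion entries.
    n = int(10000 / 0.15)
    print(n, n ** 2)              # 66666 4444355556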

Meanwhile, I have seen many strange SOM-based models (Kohonen ANNs), but none of them works acceptably or can scale to be used inside a bot architecture; they face the same scalability problems. (Look up WebSOM and others.)

On the other side, in the neurolinguistics literature there are many theories of the mind’s lexicon mechanisms, starting with Quillian’s and M. Minsky’s models and reaching today’s live fMRI imaging, which locates the exact areas of the brain by watching grammatical and semantic processing in real time (Broca’s area, BA 44). But that is like trying to infer a taxi driver’s behavior from the amount of light over a long period of time, seen from outer space, along with all the city traffic and lights. It’s just as bizarre as that!

Has anyone built such a model in software, rather than just pages of theory?

I have been lurking around this for many years, and came up with a model of single memory chunks connected by vectors (nothing new up to here). Here is the crazy thing: those vectors are also nodes! A connection may go from any node-vector to any other node/vector, connections are not unique (there can be multiple), and each connection is weighted by frequency of occurrence.

Recall uses this frequency as a navigation direction to bring things back up. The problem lies in the node/arc duality: this is not a graph in the strict sense, and there is no math or theory to grab onto, so I must build it alone. This is the case with my parsing engine: it builds connected memory graphs (if I dare call them graphs) from input text, but reading the graph back out is a complicated navigation algorithm that I have been composing, and struggling with, ever since. The test results are strange but promising, and sometimes disturbing.
I haven’t managed to make this work in a scalable way, only as mini-models in the lab.
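For the curious, here is a minimal Python sketch of that node/arc duality. The names (Chunk, connect) are my own illustrative stand-ins, not the actual parsing engine:

    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        # Memory chunks AND the links between them are both Chunks, so a
        # link ("vector") can itself be the endpoint of another link.
        label: str
        weight: int = 0                          # frequency of occurrence
        out: list = field(default_factory=list)  # outgoing connections

    def connect(a, b):
        # Reify the connection as a node and weight it by use.
        link = Chunk(label=f"{a.label}->{b.label}", weight=1)
        a.out.append(link)
        link.out.append(b)
        return link  # being a node, the link can be linked to as well

    dog, barks, loudly = Chunk("dog"), Chunk("barks"), Chunk("loudly")
    link = connect(dog, barks)
    connect(link, loudly)  # attach information to the relation itself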

I have also studied OWL and RDF reasoning using the Euler algorithm, and it seems promising, but the number of relations and the memory needed to make a single inference are huge, so it is of no use to me at this time. You would need a supercomputer (32 GB of RAM, 8 cores) to get an answer when reasoning over 1M relations, like our mind does (I guess), and it might respond in 1-2 minutes, like IBM’s Watson in the televised Q&A contest this year.

What do you think about this? Are there any clues out there, or have you heard of anything similar?

Hoping this helps give a new glimpse into the subject…

cheers!

 

 