“Teaching” Grammar to my program
 
 
  [ # 16 ]

Maybe you could have it respond to the ‘enter’ key, so that you don’t always have to press ‘go’?

 

 
  [ # 17 ]

Hello Jeff,

The reason I suggested the interface enhancement was to try to increase usage for debugging purposes. The grammar processing, however, may be the part most related to artificial intelligence. You asked, “I’ve been testing everything on Firefox and IE9, but not on any of the others. Where have your challenges been surfacing?”

Well, it is very preliminary and may be a glitch of my own that I have not tracked down yet… but it seemed, at one point while exploring regex in JavaScript, that something worked in Firefox and not in IE. This relates only to a suggestion I made, though, and is not directly related to Harri, as far as I know.

Here are some suggestions as your chatbot gets more complex… Save multiple backup copies. Easier said than done. Who wants to stop and back up right in the middle of an inspiration? But when it gets twisted and suddenly croaks, you will be very happy to have the option to roll back to the last working copy. I just read this entire thread. Now, let me go over and check the status of Harri. (Don’t worry if Harri is not online; if you are working on it, I will try back later.)

 

 
  [ # 18 ]
Jan Bogaerts - Oct 28, 2012:

Maybe you could have it respond to the ‘enter’ key, so that you don’t always have to press ‘go’?

Hi Jan. Thanks for the input. You must be talking about the “View JSON” page, where you click “GO!” and a couple of alert boxes pop up? Is that correct?

Actually, that entire area is somewhat of a mystery to me at the moment. I know what I would like to do, but the path there is unclear. Let me explain, and maybe you’d have some suggestions.

Take the RDB readout area, for example. Through it I can examine and modify all data and relations in the structure. The purpose of “View JSON” is much the same. I want to be able to re-mix the structure of an individual JSON datafile and relate it to data in other JSON files. It would be great to be able to define new relations between different data sets as well, all from that one interface. A remixed JSON file should be able to be renamed and saved to escape the danger of being overwritten, but I would also need to be able to delete either an RDB-generated file or a remixed file at will.
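
To make that concrete, here is a rough sketch of the kind of remix I have in mind. None of this is Harri’s actual code, and the key and file names are invented:

    function remixJson(fileA, fileB) {
        // Pair up records from two loaded JSON files on a shared key,
        // producing a new set that can be saved under a new name so the
        // originals survive. "sharedKey" is a stand-in field name.
        var remixed = [];
        for (var i = 0; i < fileA.length; i++) {
            for (var j = 0; j < fileB.length; j++) {
                if (fileA[i].sharedKey === fileB[j].sharedKey) {
                    remixed.push({ left: fileA[i], right: fileB[j] });
                }
            }
        }
        return remixed;  // e.g. save as "remix_01.json" rather than overwriting
    }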

This description defines a pattern that I want to replicate for the XML and CSV data families too. As you can see, the “GO!” button is a very temporary testing placeholder until I can get back to building out its real purpose (which I currently have no clue how to do!).

More than the above, it would be cool to have a JSON file relate to an XML file, or an SQL table to a CSV. I know it’s possible to do, but first, how to do it, and second, how to SEE it all so as to make intelligent relationships: there is the rub.
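
I can at least picture the mechanics of the JSON-to-XML case. Something like this sketch, where the XML is parsed into a walkable document and matched against a JSON array on a guessed-at shared field (all the names here are invented):

    function relateJsonToXml(jsonRecords, xmlText) {
        // Parse the XML text into a document we can walk.
        var doc = new DOMParser().parseFromString(xmlText, "application/xml");
        var nodes = doc.getElementsByTagName("record");  // "record" is a placeholder tag
        var pairs = [];
        for (var i = 0; i < nodes.length; i++) {
            var xmlId = nodes[i].getAttribute("id");     // "id" is a placeholder attribute
            for (var j = 0; j < jsonRecords.length; j++) {
                if (String(jsonRecords[j].id) === xmlId) {
                    pairs.push({ json: jsonRecords[j], xml: nodes[i] });
                }
            }
        }
        return pairs;  // each pair is one cross-format relation, ready to display
    }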

So, as all three “View” flat-file areas come together, there will be a final few layers of detail to work out, and I am not the most qualified person to approach them.

 

 
  [ # 19 ]

Hey 8pla:
I don’t know of any reason other than origin-policy issues that differ between IE and Firefox. Using page resources that are outside the local area of your directory tree will run you into that problem. IE allows JS access to pages that W3C-compliant browsers disallow on the basis of three measures, but you already know about that, and I have not run into any other cross-browser problems apart from that so far.

The advice regarding backups is very cogent. So far, Harri represents quite a few hours of painstaking head-scratching, and it would be foolish to let that get either corrupted or lost. I have three copies ongoing at any time. One is online, another is a working copy, and the third is a tweener, which I should re-save more frequently, yes.

Regarding Harri’s abilities, I’m afraid it will be a week or two yet before you can talk to him. His brain project has gotten a little bit out of hand. I am trying now to get three flat-file brain aspects to integrate with his RDB brain. That is proving to be a distraction from any current development effort on his conversant capability, but I feel it necessary, because I want him to reflect a different level of sophistication, in the hope that better performance will be realized through that approach.

Frankly, though, I feel a little pressure to hurry and settle for less foundationally so as to progress more quickly viscerally. So far, however, I have successfully resisted the urge.

 

 
  [ # 20 ]

I meant the ‘go’ button at the top, for conversing with the bot.

 

 
  [ # 21 ]

Oh. My mistake. I’ve been working on the data converter, so the conversant functionality is still a week or so from my attentive focus.

That’s a good idea. I will do that, and make the submission respond to the Enter key as well as the button. Thanks for the suggestion.
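
It should only take a few lines. Something like this, where “prompt” and “go” are just guesses at the ids of the input box and the button:

    document.getElementById("prompt").onkeydown = function (e) {
        e = e || window.event;                      // old-IE event fallback
        if (e.keyCode === 13) {                     // 13 = Enter
            document.getElementById("go").click();  // same path as pressing ‘go’
        }
    };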

 

 
  [ # 22 ]

I’ve been thinking. The curse of thinking is that everything which seemed so simple actually wasn’t, and we were blissful until we sat down and finally realized it.

Anyway, today I’m working out the dynamic nature of the AJAX calls inside the View JSON page so that they will occur in a loop and automatically grab every JSON file in the directory, making them all available and accessible in one place for manipulation. Once that is worked out, it will supply an object pattern to use inside the XML viewer as well.
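
In sketch form, the loop I’m after looks something like this. The manifest of file names is an assumption; I haven’t settled on how the page learns what is in the directory:

    var brainFiles = {};  // every loaded JSON file, keyed by file name

    function loadJsonFiles(fileNames, done) {
        var remaining = fileNames.length;
        for (var i = 0; i < fileNames.length; i++) {
            (function (name) {                          // closure so each call keeps its name
                var xhr = new XMLHttpRequest();
                xhr.open("GET", "json/" + name, true);  // the "json/" path is invented
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        brainFiles[name] = JSON.parse(xhr.responseText);
                        if (--remaining === 0) { done(brainFiles); }
                    }
                };
                xhr.send();
            })(fileNames[i]);
        }
    }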

Once everything is available, I will need to display it somehow, and create some editability, both within each file in isolation and across files, so that inter-file relations can be resolved and dissolved. All of what I am working out, to be controlled initially through UI-based micromanagement, is just a warm-up for ongoing attempts to make it all dynamic. Harri should someday be able to do all this subconsciously. That is what we do.

What I mean is that when we access our memories, we actually create temporary subsets: reduced copies of archived memory data that exist as currently relevant anomalies, only for immediate usage. They are promptly destroyed by the next insistent demand for new relevance, which requires new temporary info sets to replace the old, retaining only streamers, reflections, or impressions: dramatically simplified summations of what, a moment before, represented a rich information pool rife with relational potential.

How can we emulate this dynamic creation of data sets for temporary, immediate usage, dissolving them appropriately when required while retaining diminutive vestiges, all while streaming into further oncoming increments of relevancy demand?

I submit that we don’t know how to do that. That said, however, we do know many of the sub-routines that would necessarily have to run, and as we work faithfully to bring them online, one painstaking piece at a time, the process will teach us how what already exists needs to change to accommodate what does not yet exist but needs to.
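
To show what I mean by a sub-routine, here is a toy version of just one of them: a working set that is rebuilt for each new demand, leaving behind only a small vestige of the set it replaced. Everything here is invented for illustration:

    var workingSet = null;  // the currently relevant, temporary subset
    var vestiges = [];      // simplified summations of past working sets

    function newRelevanceDemand(records) {
        if (workingSet) {
            vestiges.push({                      // retain a diminutive vestige...
                size: workingSet.length,
                topic: workingSet[0] && workingSet[0].topic,  // "topic" is a stand-in
                stamp: new Date().getTime()
            });
        }
        workingSet = records;                    // ...as the new set replaces the old
    }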

 

 
  [ # 23 ]

UPDATE:
I am well into the “JOIN TABLES” coding, having all the permutations isolated and the error messages appropriately formatted in alert boxes. I have only to formulate the SQL statements and copy the convert functions over from the single-JSON conversion block.
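
The statements I still have to formulate are plain multi-field joins, roughly this shape. The table names id_wds and id_defs are real; the field names here are stand-ins:

    // Rough shape of a join-and-convert statement; the ON fields are guesses.
    var joinSql =
        "SELECT w.word, d.definition " +
        "FROM id_wds AS w " +
        "JOIN id_defs AS d ON d.word_ref = w.word_id";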

I haven’t done the XML JOIN TABLES conversion yet, but the functionality will all be the same. Getting that going will involve a lot of copy-paste and renaming of variables. The XML should dynamically name the nodes, but it doesn’t like special characters, and I have an underscore in the “id_defs” and “id_wds” tables, so I hard-coded those node names differently for when those tables convert. I will have to go back and rethink the naming conventions, and then communicate that in the documentation (if I ever write any, and I think I should start). The code for dynamic naming is all worked out, but it is currently commented out, so I just have to remove the comments and the temporary hard-coded naming section, and rename the troublemaker fields without any special characters.
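
Once the comments come off the dynamic naming code, a small sanitizer along these lines might even let me keep the field names as they are. A sketch only; I haven’t decided what the replacement character should be:

    function toNodeName(fieldName) {
        // Swap out whatever characters the XML conversion chokes on...
        var clean = fieldName.replace(/[^A-Za-z0-9]/g, "-");
        // ...and make sure the generated name doesn't start with a digit.
        if (/^[0-9]/.test(clean)) { clean = "f-" + clean; }
        return clean;  // e.g. "id_wds" would become "id-wds"
    }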

I updated the online version of Harri to reflect the current state of the JOIN functionality. I haven’t done inner, outer, union, or anything fancy. Just simple multi-field joins are enough to crack up my tentative sanity!

Take a look. Remember, I am trying to keep the core data integral through ideal normalization in the SQL format, be able to convert to flat-file format in normalized form as separate JSON files through “convert without join”, OR to de-normalize through the combined “join and convert” functionality, so as to get down and dirty in the JavaScript logic as Harri fights for some kind of messy, real-world consciousness outside of his normalized, idealized information structure.
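
For anyone following along, here is what the two paths produce, with invented sample data:

    // “Convert without join” keeps the normalized shape, one JSON file per table:
    var id_wds_json  = [ { id_wds: 1, word: "cat" } ];
    var id_defs_json = [ { id_defs: 7, id_wds: 1, definition: "a small feline" } ];
    // “Join and convert” de-normalizes into one combined flat file:
    var joined_json  = [ { word: "cat", definition: "a small feline" } ];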

The code for getting those JSON files as accessible JS objects, with a dynamic AJAX loop that runs again and again until all the appropriate files in the directory tree have been acquired, is about 90% complete, as you can see whenever the JSON view window loads.

I have to hold steady here for a few more days, and then I think I will be able to start the fun part. I want to see deeply into Harri’s brain as I work on logic and information structure while AT THE SAME TIME having access to his conversant responsiveness, so that I can give him a prompt, see his response, fool with his data, dink with his logic, and submit the same prompt to compare the effect of the changes.

 

 
  [ # 24 ]

Ok. The XML tables and the JSON tables will now join with others of their kind. I guess they can have little families of their own now.

With that accomplished, I went ahead and actually used the system last night. I managed the SQL database, and did what you see there now, all in about 15 minutes. The relations in the merger table are complex and confusing, but with the driller window it was a snap. I just had a note paper beside me to remind me of what I had planned, and that was all it took. I like it a lot! I don’t like the names, though, and will go back and modify them through the UI, just because I so easily can.

I was trying to keep the names of tables and fields short to save room, but the window scrolls in both dimensions, so name length is not a problem. It can be very human-friendly.

I need to see into the JSON and XML now, and see it all at the same time I am looking into the SQL. The viewers are set up, and I’ve taken a little time to refactor the AJAX calling to make it dynamic. Again, I’m working on the JSON first, and then I’ll just modify the functionality for the XML and move it over to that viewer once the JSON is all worked out.

It has to make everything available, and then drill into it in a way that shows all the node relations at once, similar to how the SQL driller works. Or maybe I have to choose two, and there will be two comparative windows set to display the content of both files, with node names, values, and all. I don’t know about editability, whether I should do that in the same window or not. It seems like the space here is getting crunched, and I don’t want to compromise my sweet little viewport and its ability to see it all in a screenshot.

Anyway, I have to get everything available first, before I decide what to do with it.

I’ll be out of town for the weekend, so I might not be able to do a lot. I’ll post again sometime through the weekend, though.
Bye…

 

 
  [ # 25 ]

Harri’s JSON viewer has been updated online.

His JSON brain is now fully integrated with his SQL brain. If you copy and paste the name of one of the JSON files into the search boxes in the JSON View window, it displays the contents of that file with node key names in <th> cells at the top.
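
For the curious, the display amounts to something like this sketch; the real code differs, but the <th> part is the same idea:

    function renderJsonTable(records, container) {
        var keys = [];
        for (var k in records[0]) { keys.push(k); }  // node key names from the first record
        var html = "<table><tr>";
        for (var i = 0; i < keys.length; i++) {
            html += "<th>" + keys[i] + "</th>";      // key names across the top
        }
        html += "</tr>";
        for (var r = 0; r < records.length; r++) {
            html += "<tr>";
            for (var c = 0; c < keys.length; c++) {
                html += "<td>" + records[r][keys[c]] + "</td>";
            }
            html += "</tr>";
        }
        container.innerHTML = html + "</table>";
    }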

The ability to combine SQL tables for JSON conversion, and then the ability to explore them with a tandem view into the SQL, begins to open the way for me to start experimenting on data structure for Harri’s learning. I will work on the XML viewer later, when I, for some reason, have need of XML data over JSON.

Here again I am tempted to move straight into grammar parsing and begin building responsiveness on a diminished level instead of working on his ability to actually learn. To hold true to my purpose, I feel that state-comparative ability might be the next important element to approach.

I have been thinking about a categorical nomenclature using alphanumeric prefixes to categorize memory tracks, building into the table names themselves an additional layer of organizational handles, as sketched below.
State Comparative Memory Track 1 might be labeled:  aa_looper, ab_looper, ac_looper… etc.
SC Memory Track 2 might be:  ba_looper, bb_looper, bc_looper, bd_looper, etc…
SCM Track 3 might be:  ca_looper, cb_looper, cc_looper, etc…
SCMT 4 might be:  da_looper, db_looper, etc…
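
A throwaway generator for the pattern, just to show the rhythm. The _looper suffix is from above; everything else is arbitrary:

    function trackTableName(track, slot) {
        // track 0, slot 0 -> "aa_looper"; track 1, slot 3 -> "bd_looper"
        var letters = "abcdefghijklmnopqrstuvwxyz";
        return letters.charAt(track) + letters.charAt(slot) + "_looper";
    }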

I want Harri to remember what just happened an increment ago and be able to compare it with what happened two increments ago, and then again with what happened three increments ago, etc., all in order to seek trends that give a decision basis for his interaction within the formation of the next, oncoming increments.

With a rhythmic naming convention, inter-track comparison might be augmented by cross-track comparison, giving deeper richness to Harri’s self-contemplations. I think this will serve well in making all kinds of decisions, grammar response not the least.
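
In code, the increment-over-increment comparison might reduce to something this simple. The states are whatever snapshot objects the tracks end up holding; “value” is a stand-in measure:

    function seekTrend(track) {                  // track: array of states, newest last
        var newest = track[track.length - 1];
        var diffs = [];
        for (var back = 2; back <= track.length; back++) {
            var older = track[track.length - back];
            diffs.push({
                incrementsAgo: back - 1,
                delta: newest.value - older.value
            });
        }
        return diffs;  // a consistent rise or fall across diffs suggests a trend
    }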

Another thing I might want to do before moving too far into hard-coding grammar patterns is to go back and get some of the memory node creation, relation, and combination a little farther along the way to true automation. As I’ve said all along here, the DATA viewer/editor, although not necessarily meant to be temporary, is meant to become less manually micromanaged as Harri takes on more of his own conceptualizing and ordering ability. I think these prefixed tables will have a lot to do with his ability to order his information himself, but I can’t see that very clearly without going at least some distance in that direction.

I may need to get more sophisticated table-joining logic worked out to make this self-ordering ability begin to happen. Long-term and short-term JSON structures will need to be created out of SQL tables through an auto-join process. Undoubtedly the long-term JSON memory structures will initialize and update regularly, tied tightly to the data in the SQL tables, and they will change as the SQL changes, evolving as Harri evolves.

But another layer of data, short-term, state-relevant matrices, will somehow need to construct itself as current states demand, and subsequently destruct again as Harri no longer immediately needs it. I foresee only loose guidance from the SQL tables being involved in this, with the remixing of JSON data being the backbone, due to speed and flexibility issues. Once again, I can’t see how these two memory processes will come together without pushing them at least 10 or 15 percent of the way along.

 
