Thank you, Andy. Can I ask you, what is your workflow for reviewing log file responses?
I imagine there is probably a better way to do this, so I am wondering what you both do. I am afraid to ask Bruce; I am sure his method is light years ahead of what I am doing.
This is mine…
I currently pull the logs, combine them, and make them pipe-delimited in Notepad++ (mostly turning the ":" into "|"). Then I import them into Excel as pipe-delimited data and sort them, usually by ID and date.
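For comparison, that conversion step could be scripted. Here is a rough Python sketch; the assumption that the ID is the first field and the date the second is mine, not the real log layout:

```python
import glob

lines = []
for path in sorted(glob.glob("logs/*.log")):   # hypothetical log location
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.rstrip("\n")
            if line:
                # Crudely swap ":" for "|", like the Notepad++ replace;
                # colons inside timestamps etc. may need more care.
                lines.append(line.replace(":", "|"))

# Assumed layout: ID in the first field, date in the second.
lines.sort(key=lambda s: (s.split("|") + ["", ""])[:2])

with open("combined.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(lines) + "\n")
```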
Then I review them in Excel, one by one, looking for bad responses and marking them up.
When I have all of the bad responses, I review the CS code by looking up the rule number (all of my rules have names and numbers now).
I look for common problems.
Then I review the logic to see what needs to change.
Often, I add to or modify the rule so that it accounts for a slightly different input or response.
Sometimes I adjust the topic search words so the input routes to the right topic, or I reorganize the topics.
Sometimes I add new topics or split existing ones, or add gambits.
Sometimes I adjust the control script to give priority to certain other topics.
If a topic is big, with lots of data, I put the data in Postgres and create an API to do lookups. I did this with Wikipedia and some other large datasets.
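As a rough illustration of that setup (table name, columns, and connection string are all hypothetical, not my actual schema), the lookup API can be as small as:

```python
"""Minimal sketch of a lookup API over Postgres; everything named here
(wiki_facts, the DSN, the port) is made up for illustration."""
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
conn = psycopg2.connect("dbname=botdata user=bot")   # hypothetical DSN

@app.route("/lookup")
def lookup():
    subject = request.args.get("subject", "")
    with conn.cursor() as cur:
        # Parameterized query keeps user input out of the SQL itself.
        cur.execute(
            "SELECT fact FROM wiki_facts WHERE subject = %s LIMIT 5",
            (subject,),
        )
        rows = [r[0] for r in cur.fetchall()]
    return jsonify(facts=rows)

if __name__ == "__main__":
    app.run(port=8080)
```

The bot then calls the endpoint at match time instead of carrying all that data around in topic files.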
Just curious about your workflow.
Feedback logs. I have a feedback mechanism; it does not work that well. I want the end user to point out bad responses and have those go to a special log file. I have something working, but I must admit it has not been that useful. I guess I have to look at it again to see what I need to add to make it more useful.
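For the record, the kind of hook I have in mind is roughly this; the names and the log format here are made up:

```python
"""Sketch of a feedback hook: when the user flags a bad response, append
the full exchange to a separate feedback log. All names are hypothetical."""
import json
import time

FEEDBACK_LOG = "feedback.log"   # hypothetical path

def record_feedback(user_id, user_message, bot_response, comment=""):
    entry = {
        "ts": time.strftime("%Y-%m-%d %H:%M:%S"),
        "user": user_id,
        "input": user_message,
        "response": bot_response,
        "comment": comment,       # optional note from the end user
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```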
Right now, the log files are a lot more useful, because you get the entire user message trace. With that, you can feed it back in after your changes to see whether you are making the right ones.
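A regression replay along those lines might look like this rough sketch. The HTTP endpoint and the trace format are assumptions on my part (ChatScript's server normally speaks a raw socket protocol, so the client call would need adapting):

```python
"""Sketch: send each logged user message back to the bot and flag
responses that changed since the log was captured."""
import json
import urllib.request

def ask_bot(message):
    # Hypothetical HTTP wrapper around the bot; swap in the real client.
    req = urllib.request.Request(
        "http://localhost:8080/chat",
        data=json.dumps({"input": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

with open("trace.jsonl", encoding="utf-8") as f:   # one exchange per line
    for line in f:
        old = json.loads(line)
        new = ask_bot(old["input"])
        if new != old["response"]:
            print(f"CHANGED: {old['input']!r}")
            print(f"  was: {old['response']!r}")
            print(f"  now: {new!r}")
```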
Just wondering if there is a better way to go about improvements.