> http://mindloop.blogspot.com/
Hi Harry, am I right in assuming that this is your mindloop blog, above?
> http://www.quora.com/Marcus-L-Endicott/answers
You can more or less find out where I’m at by reading through my Quora answers, above.
Lately, I’ve been thinking outside the box, the chatterbox that is, about what it would take to make a desktop AI that could both visually interpret the screen and manipulate the browser the way people do. I don’t know of any examples of anything quite like this at the moment. I was reminded of this by your visual OCR experimentation.
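The basic shape of such a desktop AI would be a perceive–decide–act loop: capture the screen, read it, then drive the browser. Here is a minimal, purely hypothetical sketch of that loop; the OCR and browser-driving layers are stubbed out, since in practice they would be backed by a screenshot/OCR library and a browser-automation tool, and the function names and example strings are all invented for illustration.

```python
# Hypothetical perceive-decide-act loop for a desktop AI.
# All three layers are stubs standing in for real screen capture,
# OCR, and input synthesis.

def perceive_screen() -> str:
    """Stub: would capture the screen and OCR it into text."""
    return "Search results for 'chatbots' - 10 links"

def decide(screen_text: str) -> str:
    """Stub policy: map what the agent 'sees' to a browser action."""
    if "Search results" in screen_text:
        return "click_first_link"
    return "type_query"

def act(action: str) -> None:
    """Stub: would synthesize mouse/keyboard events in the browser."""
    print(f"performing action: {action}")

# One cycle of the loop: see, choose, do.
screen = perceive_screen()
action = decide(screen)
act(action)
```

The interesting open problem is of course the `decide` step, which here is a one-line rule but would really need to interpret arbitrary screen layouts.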
Your graphical simulation efforts remind me of the current Internet of Things (IoT) hype, something like a GUI for the “Internet of Sensors”, with every mobile device a potential sensor.
> http://nxxcxx.github.io/Neural-Network/
I’ve recently seen the above simulation of a “Neural Network” going around, and hope to pair even a faux simulation of this kind with a conversational AI, like a voice GUI, or the beginning of a bona fide cortical reflection for natural language… ;^)