Experienced member
Total posts: 93
Joined: Jan 2, 2017
For the third time in 60 years, AI has become a headline technology. I am skeptical, so I have been reading articles to see what people think are the reasons to believe AI has “arrived” [if you know or have heard other reasons, please post them]:
- Voice recognition is finally working.
- The platforms are ready for it: large hard disks; fast CPUs and GPUs; “Big Data” support - the cloud and servers; fast calculations from dedicated neural network chips; ready availability of tools for building conversational interfaces and chat bots.
- There have been recent fundamental improvements in neural networks technology.
- Products and customers are moving away from the graphic GUI model, driven especially by smartphone use.
- Companies have finally discovered the value of predicting customer behavior and of personalized customer experiences. Old statistical technologies were too hard to use but, now, it is “easy”.
Nuance (formerly Dragon Systems) voice recognition works well. Now. But it took them 40 years to develop the domain expertise. They did not just throw a bunch of examples into an AI black box. At the same time, the only reported advance in neural network technology is convolutional neural nets. These are actually about 20 years old, and they add spatial structure to the network in order to handle spatial input data.
And so, in essence, this AI boom looks a lot like the last one. I believe what has changed is (a) voice recognition makes AI critical for audio devices like smartphones; and (b) there is tremendous pressure from a TV-watching public to have it be real. The thought that you can proceed without understanding how AI works, without spending significant time learning what the most critical dimensions of your data are, without learning concepts and thought patterns that are domain specific and, ultimately, without putting in any customized effort, is wrong. I read a lot of articles that describe AI as a magic black box. It isn’t, and it looks to be as difficult now as it was the last time.
That does not mean it is out of reach for small organizations. There is the thought that AI must involve Big Data, must involve huge training sets for Machine Learning and must, in the end, be accessible only to the largest and wealthiest technology companies. This is an unpleasant thought. I take heart from how these big tech companies seem to be in a Mexican standoff, where each company claims to have purchased the best new Machine Learning startup, each company claims to have the greatest Neural Network experts on staff, and yet these same companies quietly dump their failures into open source. Also, their business models do not lend themselves to domain specifics, so one hopes there will still be enough room for the small developers who take the time to become domain experts.
Posted: Jan 3, 2017
[ # 1 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
All the advances that we’re seeing at the moment, and which have in turn led to renewed excitement, are due to the availability of large sets of training data. This was enabled by advances in storage, networking and processing power. For example, resources like ImageNet were specifically conceived and designed to facilitate training neural networks and have resulted in an explosion of practical applications.
https://www.youtube.com/watch?v=40riCqvRoMs
http://image-net.org/
However, stochastic solutions based on neural networks and statistical parsers are still only stopgap measures. They allow computers to guess “right” answers with uncanny accuracy, but they don’t facilitate understanding. I think of them as “brute force” solutions, similar to the way Deep Blue beat Garry Kasparov almost twenty years ago. It did not use sophisticated algorithms; it relied instead on being able to search through vastly more possible moves than the human player could. This is similar to the way that AlphaGo beat the best human Go player last year, for all that it employed moves that were described as creative at the time. It already “knew” every good game that had ever been recorded.
I was struck by something that was said in a recent discussion between Marvin Minsky and Ray Kurzweil. Chess software has improved radically since Deep Blue. Nowadays a computer with only a fraction of the power of Deep Blue can routinely beat any human player, because the software has been imbued with many more rules and a deeper understanding of the game. I believe this means the momentum gained from recent advances in brute force learning could be maintained by developing an understanding of how these learning processes actually work.
https://youtu.be/RZ3ahBm3dCk?t=15m35s
Posted: Jan 4, 2017
[ # 2 ]
Experienced member
Total posts: 93
Joined: Jan 2, 2017
I care about handling things like “but” as opposed to “and”; and I have to wonder, since traditional logic conflates them, how a large database and statistics could help correctly process a sentence with “but” and a few negations, except on average and with low accuracy.
P.S. Your profile says you are interested in different approaches to NLP. I have a very lightweight approach to NLP that might be of interest at:
https://github.com/peterwaksman/Narwhal
I am hoping to drum up some interest.
Posted: Jan 4, 2017
[ # 3 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Hi Peter, I wouldn’t say I’m interested in “different” approaches to NLP so much as I am interested in the most traditional deterministic approaches to NLP.
Statistical NLP uses the likelihood that groups of one or more words belong to a certain category. If you take all the words in a sentence and assign each one its most common category, you’ll be able to parse the sentence correctly about 80 percent of the time. If you take all the pairs of words in a sentence, you get closer to 90 percent accuracy, and if you take all the triples you’ll get into the low 90s. The amount of computation and data that’s required climbs exponentially as you increase the tuple size, and although Google has enough computing power to use tuples of up to five words, the results are still far from perfect, let alone the 97 percent accuracy that would be needed to emulate human proficiency.
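To give a rough feel for that trade-off, here is a quick sketch in Python using NLTK and its copy of the Brown corpus (my own toy example; on a corpus this small the absolute numbers come out lower than the figures above, but the shape of the curve, and the growth in the data required, is the same):

```python
# Sketch: part-of-speech tagging accuracy versus tuple size, using NLTK.
# Assumes the Brown corpus has been downloaded: nltk.download('brown')
import nltk
from nltk.corpus import brown

tagged = brown.tagged_sents(categories='news')
split = int(len(tagged) * 0.9)
train, test = tagged[:split], tagged[split:]

# Single words: assign each word its most common tag.
unigram = nltk.UnigramTagger(train)
# Pairs of words, falling back to the unigram tagger for unseen contexts.
bigram = nltk.BigramTagger(train, backoff=unigram)
# Triples of words, falling back to the bigram tagger.
trigram = nltk.TrigramTagger(train, backoff=bigram)

for name, tagger in [("unigram", unigram), ("bigram", bigram), ("trigram", trigram)]:
    # Newer NLTK versions call this method accuracy() instead of evaluate().
    print(name, round(tagger.evaluate(test), 3))
```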
Therefore I’ve concentrated all my efforts on achieving semantic parsing using rule-based or deterministic algorithms. Although statistical methods, neural networks, and big data have been garnering all the publicity in recent years, as far as natural language processing and understanding are concerned, the real progress is still being made the old-fashioned way. New algorithms such as Tomita parsers combined with resources such as VerbNet and the English Resource Grammar are opening up all sorts of fascinating new possibilities.
I have had a look at your Narwhal project but I don’t think it will suit my purposes. It looks like you are using string kernels, and although that approach has been demonstrated to be very effective for restricted-domain applications, I will be satisfied with nothing less than broad coverage, and I’ve been developing advanced software to that end. I’d definitely like to compare notes and talk further though.
Posted: Jan 4, 2017
[ # 4 ]
Experienced member
Total posts: 93
Joined: Jan 2, 2017
Hi Andrew,
Thanks for having a look. Do you want chatbots to understand ANY topic? That could happen in a distributed system where each recognition task gets a preliminary topic classification and is then handed off to a topic-specific reader.
It would be scalable: you add topic specifics one reader at a time. So someone builds a ‘distributor’ to handle incoming text, and each ‘spark’ goes to a different topic-specific reader. In a community of interacting chatbots, where each can announce its capabilities to the rest of the community, you might have overlapping chatbot capabilities such that the single bot with the highest “goodness of fit” score becomes the accepted handler for this text, in this community.
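Here is a rough sketch of what I have in mind, in Python (all of the class names, topics, and the crude keyword scoring are placeholders for illustration; this is not Narwhal code):

```python
# Hypothetical distributor with topic-specific readers. Each reader reports a
# "goodness of fit" score for the incoming text, and the best-scoring reader
# becomes the accepted handler.

class TopicReader:
    def __init__(self, topic, keywords):
        self.topic = topic
        self.keywords = set(keywords)

    def goodness_of_fit(self, text):
        # Crude score: fraction of this reader's keywords present in the text.
        words = set(text.lower().split())
        return len(words & self.keywords) / len(self.keywords)

    def handle(self, text):
        return "[{}] handling: {}".format(self.topic, text)

class Distributor:
    def __init__(self, readers, threshold=0.1):
        self.readers = readers
        self.threshold = threshold

    def dispatch(self, text):
        # Score every reader and hand the text to the best fit, if good enough.
        scored = [(reader.goodness_of_fit(text), reader) for reader in self.readers]
        score, best = max(scored, key=lambda pair: pair[0])
        if score < self.threshold:
            return "[no reader accepted this text]"
        return best.handle(text)

readers = [
    TopicReader("weather", ["rain", "sunny", "forecast", "temperature"]),
    TopicReader("shipping", ["ship", "delivery", "tracking", "address"]),
]
print(Distributor(readers).dispatch("When will my delivery arrive at the new address?"))
```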
I would be happy to talk more about this.
Posted: Jan 5, 2017
[ # 5 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Repackaging a problem doesn’t make the complexity go away, and I don’t think creating lots of specialised listeners is going to make this problem any easier to manage either. There would still be a lot of guessing going on in the magic box that performs the arbitration.
I’m dividing the problem up a different way, into layers handling syntax, semantics and pragmatics. I think that will be the best way to achieve the broad coverage and high accuracy that I’m seeking.
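As a deliberately tiny sketch of what I mean by layers (the function names and the toy logic are placeholders, not my actual libraries), in Python:

```python
# Hypothetical three-layer pipeline: syntax -> semantics -> pragmatics.
# Each layer consumes the previous layer's output.

def parse_syntax(text):
    # Toy "parse": just strip trailing punctuation and tokenise.
    return text.rstrip("?.!").lower().split()

def interpret_semantics(tokens):
    # Toy meaning representation: first token as predicate, rest as arguments.
    return {"predicate": tokens[0], "arguments": tokens[1:]}

def resolve_pragmatics(meaning, context):
    # Resolve context-dependent items such as pronouns against a context map.
    args = [context.get(a, a) for a in meaning["arguments"]]
    return {"predicate": meaning["predicate"], "arguments": args}

context = {"it": "the report"}
print(resolve_pragmatics(interpret_semantics(parse_syntax("Summarise it now")), context))
```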
Posted: Jan 5, 2017
[ # 6 ]
Senior member
Total posts: 179
Joined: Jul 10, 2009
Andrew, are you developing conversational AI, or are your efforts more in the realm of logic, or are you focusing more on real-world modeling and interactions about the model?
I guess I’m asking, “What is the problem?”
Posted: Jan 5, 2017
[ # 7 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
The problem that we are discussing is “semantic parsing”, where you translate from natural language to a meaning representation in the form of nested function calls and parameters. Peter has been developing a package that uses “string kernels” to achieve this aim. It is very practical and efficient in narrow domains such as the analysis of hotel reviews. It belongs to the class of solutions known as “shallow parsing”.
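To illustrate what I mean by nested function calls and parameters, here is an invented toy example in Python (the sentence, every function name, and the resulting structure are hypothetical, not taken from Peter’s software or mine):

```python
# Hypothetical meaning representation for the sentence
#   "Show me flights from Boston to Denver on Friday"
# expressed as nested function calls with parameters.

def city(name):
    return ("city", name)

def day_of_week(name):
    return ("day", name)

def flights(origin, destination, date):
    return ("flights", {"origin": origin, "destination": destination, "date": date})

def show(query):
    return ("show", query)

meaning = show(flights(origin=city("Boston"),
                       destination=city("Denver"),
                       date=day_of_week("Friday")))
print(meaning)
```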
I’ve been taking the long view. I spent years researching parsing and grammar theory, and I’ve developed a suite of libraries capable of handling unrestricted context-free grammars large enough to cover something like the English language, with its millions of grammar rules. Now I am in the process of encoding those grammar rules, which I am compiling from the various rich sources that are available online, such as VerbNet and the English Resource Grammar. The software that I am developing belongs to the class of solutions known as “deep parsing”.
Ultimately my aim is to implement software for knowledge acquisition and theorem proving with conversational interfaces.
Posted: Jan 5, 2017
[ # 8 ]
Senior member
Total posts: 308
Joined: Mar 31, 2012
Then what would you say that Google Now, Alexa, Viv, Cortana and Siri are and how will your creation compare to them?
Certainly no offense meant Andrew!
Posted: Jan 5, 2017
[ # 9 ]
Experienced member
Total posts: 93
Joined: Jan 2, 2017
Hi:
Thanks for the kind words, Andrew. I also wanted to ask about applications. Mathematical proof is a good one; for example, the recent “proof” that ran to too many pages for a human ever to read in a lifetime. Let me describe one application of what Andrew calls “shallow parsing” and I call “narrow world language processing”:
My company does custom design (~800 units a day) and much of it has been completely automated, from order to shipping. However, the online customers have a text field for “notes” which needed a human reader. They asked me to write a ‘splitter’ that would decide if the text was actionable (and would get sent to a human reader), or was something that could be ignored (like “Merry Christmas!”) and automated. I read thousands of examples and found 14 topics, each with its own keywords that were, largely, non-overlapping. The ‘splitting’ task turned out to be pretty concrete and easy. (Parenthetically, one colleague told me I should use Bayesian analysis; he tried it and it was a flop. He never came to grips with there being 14 topics, not 2. By contrast, I learned a lot about our customers.) I suspect this is a pretty standard use of keywords.
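To make the ‘splitter’ idea concrete, here is a toy sketch in Python (the topics and keywords are invented for illustration; the real system had 14 topics and was written in C++):

```python
# Toy keyword-based splitter: decide whether a customer note is actionable
# (route to a human) or ignorable (safe to automate).

ACTIONABLE_TOPICS = {
    "design_change": ["engrave", "font", "monogram", "initials"],
    "shipping_issue": ["expedite", "rush", "deadline", "address"],
}
IGNORABLE_TOPICS = {
    "greeting": ["merry", "christmas", "thanks", "happy"],
}

def classify(note):
    words = note.lower().split()
    # Return the first topic whose keywords appear in the note.
    for topic, keywords in {**ACTIONABLE_TOPICS, **IGNORABLE_TOPICS}.items():
        if any(kw in words for kw in keywords):
            return topic
    return "unknown"

def is_actionable(note):
    topic = classify(note)
    # Unknown notes go to a human as well, to be safe.
    return topic in ACTIONABLE_TOPICS or topic == "unknown"

print(is_actionable("Merry Christmas!"))                  # False -> automate
print(is_actionable("Please engrave the initials J.R."))  # True  -> human reader
```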
The danger zone was “any note with a design specification”. I was not satisfied with blocking all of those orders from automation, and I learned there were three or four design sub-topics, each with a very distinct vocabulary and ALSO a very distinct underlying ‘story’. After enough examples you find “wrap around”, where the same stories are being used over and over. So I parsed the individual stories, knowing where to look for numeric identifiers, and I was able to automate about 1/3 of the designs. If I had more time I could have addressed the other 3 sub-topics. We are now automating ~47% of customer orders, with ~1/3 containing customer notes. That represents a significant increase in yield from automation and a cost saving for the company. It was all in C++.
At home, as a Python project, I rewrote the software to decouple the ‘keyword’/‘key story’ discovery from the mechanics of text processing. The idea is that domain experts can use these techniques without needing to become NLP experts. The code is working nicely but, for example, I haven’t implemented a ‘numeric’ component to handle the kinds of details needed for my company’s custom design sub-topics. At home I used a corpus of hotel reviews. I think it would be interesting to work on product reviews for marketing, a customer service application for companies needing it, or (maybe) something that can detect hate speech, for all of us.
The Python project is called “Narwhal” (after Narrow World, Narratives). Please take a look at:
https://github.com/peterwaksman/Narwhal
Posted: Jan 5, 2017
[ # 10 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
Art Gladstone - Jan 5, 2017: Then what would you say that Google Now, Alexa, Viv, Cortana and Siri are and how will your creation compare to them?
Certainly no offense meant Andrew!
As one grumpy old man to another, no offence taken.
I think the platforms that you mentioned are all implemented using technology similar to what Peter is developing, but they already have many, many narrow domains at their disposal, like the distributed system described above. I would also add Wolfram Alpha to that list, especially as it was one of the first to be made publicly available.
I’m not trying to emulate any of them; I’m just trying to leverage all the amazing research that has been published on parsing and grammar in recent years to create a semantic parser with the broadest possible coverage. It would have many applications, but I am most interested in using it for knowledge acquisition and theorem proving (not just mathematical theorems).
If you would like to try using a semantic parser without waiting for mine to be published, check out MontyLingua.
http://alumni.media.mit.edu/~hugo/montylingua/
Posted: Jan 5, 2017
[ # 11 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
@Peter that’s an inspiring story.
That’s the kind of success that would make me want to get out of bed and develop software every day.
Have you considered developing a web interface and offering Narwhal as a “Software as a Service”?
Posted: Jan 6, 2017
[ # 12 ]
Senior member
Total posts: 194
Joined: Sep 23, 2011
I’ve also been watching the latest boom of A.I., and I’m skeptical of it sticking around this time.
The last boom in the early 2000s is what got me into bots in the first place, and back then it seemed anyone on AOL IM or MSN Messenger knew at least one chatbot. Companies like ActiveBuddy were betting everything on A.I.: filing patents for the concept of chatbots, striking up contracts with companies like Comcast for A.I.-driven tech support bots, and putting all their engineering money into building whole complex platforms and SDKs around it all.
And then it all kind of fizzled out, and what ActiveBuddy turned into ended up getting bought by Microsoft and shelved for a decade or so, until Microsoft decided to get back into the A.I. game.
IMHO, what companies will discover at the end of this A.I. boom is the same thing they discovered at the end of the last one: at the end of the day, users just don’t want to talk to their devices (in text, or especially out loud; think of all the introverts out there who would feel weird talking to their A.I. in public).
The old bots could do all sorts of things (weather, math, movie showtimes…), but when it’s so much easier to just click a couple of buttons and get your information than to type out a message like “what is the weather like in my city?”, users will prefer a UI with buttons. Facebook Messenger added support for calling up an Uber using text, but is that more convenient than just clicking a “pick me up” button and selecting an address? If anything, NLP is more error-prone than traditional interfaces.
Posted: Jan 6, 2017
[ # 13 ]
Experienced member
Total posts: 93
Joined: Jan 2, 2017
Hi again,
Not as much fun as it sounds.
Narwhal is a development tool, like a minuscule competitor to NLTK. It would be great to deploy some service that uses it.
I am not sure about the comparison with Google and Apple, since they claim to use statistics and Narwhal is supposed to be a kind of geometry.
Posted: Dec 11, 2018
[ # 14 ]
Member
Total posts: 3
Joined: Dec 5, 2018
Many people are scared that the growth of artificial intelligence will take away their opportunities, especially IT professionals. If machines start to learn by themselves or make their own decisions, no one knows what the future will hold. Let’s see what the future of AI and robotics brings.
https://www.impigertech.com/blog/future-of-artificial-intelligence/
Posted: Dec 11, 2018
[ # 15 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
Erick,
In theory, many people are scared to chat about artificial intelligence here. Maybe that theory is wrong. Let’s put it to the test. It is now Tuesday. After reading this, leave a reply, as an experiment to see how long it takes to get replies here.