
Does the chatbot you are developing use sophisticated parsing?
 
Poll
Does the chatbot you are developing use sophisticated parsing?
yes, reasonably to very sophisticated parsing 9
no, only simple pattern matching of keywords 2
intermediate complexity of parsing 2
it uses a solution not covered here 1
I’m not developing a chatbot, per se 3
Total Votes: 17
 

Recently I’ve become somewhat disillusioned with the field of chatbots after learning that most of them don’t use techniques any more complicated than simple pattern matching, such as AIML uses. I was hoping to find out how chatbot developers were representing knowledge, which parsing algorithms they were using, their insights into natural language, their generalizations of the ten sentence patterns of English, and so on. At least I understand why chatbots are so popular now: they’re easy to copy, modify, understand, and use, since the difficult problems of natural language processing are bypassed. I see that a few people here are using full parsing in their systems, so I wondered roughly what percentage of developers are using parsing versus doing simple pattern matching (as in AIML and ELIZA). Hence this poll.

Some chatterbots use sophisticated natural language processing systems, but many simply scan for keywords within the input and pull a reply with the most matching keywords, or the most similar wording pattern, from a textual database.
http://en.wikipedia.org/wiki/Chatterbot
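
To make “pull a reply with the most matching keywords” concrete, here is a minimal sketch in Python of that kind of matcher (the keyword table is invented purely for illustration):

# Score each canned reply by how many of its keywords appear in the input
# and return the best-scoring one; fall back to a stock phrase otherwise.
REPLIES = [
    ({"hello", "hi", "hey"},          "Hi there! How are you today?"),
    ({"mother", "father", "family"},  "Tell me more about your family."),
    ({"sad", "unhappy", "depressed"}, "Why do you feel that way?"),
]
DEFAULT = "Please, go on."

def reply(user_input):
    words = set(user_input.lower().split())
    score, best = max((len(keywords & words), text) for keywords, text in REPLIES)
    return best if score > 0 else DEFAULT

print(reply("Necessity is the mother of invention"))
# -> Tell me more about your family.   (it keyed on "mother", ELIZA-style)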

ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked. It was one of the first chatterbots in existence.
...
Weizenbaum said that ELIZA, running the DOCTOR script, provided a “parody” of “the responses of a nondirectional psychotherapist in an initial psychiatric interview.” He chose the context of psychotherapy to “sidestep the problem of giving the program a data base of real-world knowledge”, the therapeutic situation being one of the few real human situations in which a human being can reply to a statement with a question that indicates very little specific knowledge of the topic under discussion.
http://en.wikipedia.org/wiki/ELIZA

The AIML pattern syntax is a very simple pattern language, substantially less complex than regular expressions and as such not even of level 3 in the Chomsky hierarchy.
http://en.wikipedia.org/wiki/AIML

 

 

 
  [ # 1 ]

What I would like to see are any examples of production dialog systems based on sophisticated parsing, such as open source projects or web services.  My inquiry into dialog systems based on sophisticated parsing indicates that some form of probabilistic processing and/or machine learning is required.  I do not know of any examples of production systems of this kind in common use.  What we do know from Turing test experience is that dialog systems require a great deal of trickery, mind games, and smoke and mirrors to fool human judges.  I think pooh-poohing pattern matching systems without anything else concrete to provide is premature.

 

 
  [ # 2 ]

I have to agree with Marcus here. Yes, pattern matching is a far cry from “state of the art”, and is markedly inferior to other, more complex systems, but for the most part, it’s just about the only game in town, and until some other, more sophisticated method gains wide popularity, pattern matching will just have to do. smile

 

 
  [ # 3 ]

Hi,

I am the creator of Johnny, which will participate in the Robo Chat Challenge.

- As a first step, Johnny tries to parse and understand all of the words.
- If that fails, there is a fallback (second step) which uses keyword matching, much like AIML.

In the future, I hope the first step will understand more and more sentences. It seems to me that this is the only way to reach “strong AI”.

 

 
  [ # 4 ]

Hi Denis, that’s great!  Lots of people claim such systems.  However, is your Johnny platform available to the public?  My point is that the people claiming such systems are not making them available to the public.  This is fine as long as there is at least a publicly available demo interface for testing Johnny, so everyone can see for themselves if it’s truly a better mousetrap.  One of the issues I have with “AI” in general is that the better something supposedly is, the more tight-lipped people get about it.

 

 
  [ # 5 ]
Mark Atkins - Nov 29, 2012:

... simple pattern matching, such as AIML uses…

I can assure you that AIML is capable of a lot more than simple pattern matching. It can calculate, learn new facts, reason and whatever else you care to program into it. There seems to be a popular misconception among chatbot forums that categories such as the one below are about as advanced as AIML gets:

<category>
<pattern>Hello</pattern>
<template>Hi there</template>
</category>

This couldn’t be further from the truth. One of the features I demonstrated at the Chatbots 3.2 conference was the ability to answer questions such as, “Which has more legs, a cow or a chicken?”, “Name an animal that eats grass”, “Where would you find a dog?” and so on. None of this was hard coded in AIML. The bot creates categories on the fly from data available to it. If it doesn’t have data on a particular animal/object, it can estimate the answer from the data available to it.

I recently coded, “What do x and y have in common?” where x and y are two different words. If someone cared to hard code this in AIML, or indeed any other language, they would be looking at just short of 1 million categories for 1,000 common nouns (1,000 x 999).
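
As a rough illustration of the general idea (in Python rather than AIML, and not Steve’s actual implementation), computing answers from a small fact table replaces all of those hand-written pairings:

# Hypothetical fact table; the real bot would draw on a much larger data set.
FACTS = {
    "cow":     {"legs": 4, "eats": "grass", "class": "mammal"},
    "chicken": {"legs": 2, "eats": "grain", "class": "bird"},
    "dog":     {"legs": 4, "eats": "meat",  "class": "mammal"},
}

def more_legs(x, y):
    # "Which has more legs, a cow or a chicken?"
    return x if FACTS[x]["legs"] > FACTS[y]["legs"] else y

def in_common(x, y):
    # "What do x and y have in common?" - shared attribute/value pairs
    return {k: v for k, v in FACTS[x].items() if FACTS[y].get(k) == v}

print(more_legs("cow", "chicken"))   # cow
print(in_common("cow", "dog"))       # {'legs': 4, 'class': 'mammal'}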

The fact that a great many people use it purely for pattern matching shouldn’t detract from its capabilities. I can use a pan to boil an egg, but give that same pan to a top chef and he can create a gourmet meal.

 

 
  [ # 6 ]
Mark Atkins - Nov 29, 2012:

Recently I’ve become somewhat disillusioned with the field of chatbots after learning that most of them don’t use techniques any more complicated than simple pattern matching, such as AIML uses.

-> I think the problem here is the word “simple”.  Simple as compared to what: fuzzy logic and probabilistic annealing? Those are just “complex” pattern matching, but still pattern matching.  Add in “topic”, “it”, “that”, etc. with conditional statements, and suddenly words like “simple” and “complex” become less relevant and, more importantly, less informative or descriptive.

 

 
  [ # 7 ]

What I present to the user is an EBNF-based pattern definition language. Internally, it does use some advanced trickery though, like an unlimited amount of lookahead (instead of the usual one token) thanks to parallel processing.
The advantage of having an EBNF type of pattern definition over the AIML style is that you can express the same things in far less code, but in the end, the pattern definitions by themselves are not enough to make a difference between EBNF and AIML in features. (Perhaps combining EBNF with thesaurus variables and extra code bits can, but that’s still an ongoing experiment.)
Steve is right: if you take a peek into the ALICE set, there are some pretty cool tricks to be found. Often, patterns are used as ‘functions’ thanks to the srai trick, which extends the functionality greatly. It’s just all hidden and sometimes confusing, because the user is presented with a simple category-pattern-template model while it is sometimes used as functionName-functionBody.
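
For readers unfamiliar with EBNF, here is a tiny Python sketch of the “far less code” point: one rule with alternations stands in for a whole list of flat patterns (the rule itself is made up for illustration):

from itertools import product

# An EBNF-style rule such as
#   greeting = ("hi" | "hello" | "hey") , ("there" | "bot") ;
# expands into the flat patterns an AIML-style matcher would need
# as separate categories.
rule = [["hi", "hello", "hey"], ["there", "bot"]]

patterns = [" ".join(words) for words in product(*rule)]
print(patterns)
# ['hi there', 'hi bot', 'hello there', 'hello bot', 'hey there', 'hey bot']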

 

 
  [ # 8 ]

Thanks for all the thoughts and cast votes. All great points, and I can’t really dispute any of them.

Marcus Endicott - Nov 29, 2012:

What we do know from Turing test experience is that dialog systems require a great deal of trickery, mind games, and smoke and mirrors to fool human judges.

Definitely top priority for many people here, I’m sure, due to the competitions with monetary prizes.

I’d love to see how I could do at developing a parser in Visual Python, one that would output the sentence analysis in: (1) the two standard forms; and (2) a form of my own that I invented about a week ago. I’d gladly post free versions of it for everyone, but I’m just not going to have the time. I’m trying to get out an article on one component of my main architecture by the end of this year, I’m still playing with that statistical study about astronomical IQ that I posted about, and I have several other things going on, too.

By the way, the “two standard forms” of sentence structure to which I referred are:
(1) constituency
(2) dependency - the usual one taught in school as “diagramming a sentence”

http://en.wikipedia.org/wiki/Immediate_constituent_analysis
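
For anyone who wants to experiment before writing a parser from scratch, a toy constituency parse takes only a few lines with NLTK (assuming NLTK is installed; the grammar here is a throwaway example, not a serious one):

import nltk

# A toy phrase-structure grammar, just enough for one sentence.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> 'Jack' | Det N
VP -> V NP
Det -> 'a'
N  -> 'candle'
V  -> 'lit'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("Jack lit a candle".split()):
    print(tree)
# (S (NP Jack) (VP (V lit) (NP (Det a) (N candle))))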

Marcus Endicott - Nov 29, 2012:

One of the issues I have with “AI” in general is that the better something supposedly is, the more tight-lipped people get about it.

My favorite response to this post! So true. Restated: the best stuff is always the rarest! The same day you posted this, I was talking to a guy who was trying to get AGI people to collaborate. I had to shatter his assumptions by reminding him that the really good ideas in AGI are going to be found in one of the following locations: (1) government facilities where the work is classified at least at Top Secret level; (2) companies that are using that information as the core technology that allows them to be competitive, as well as to provide a means for their employees to survive, literally; (3) independent inventors who have literally dedicated their lives to AGI, have struggled for decades to reach their level of competency, and are wary of others stealing ideas before they can publish them. I wouldn’t want to try to get information out of any of those entities!

Steve Worswick - Nov 29, 2012:

The fact that a great many people use it purely for pattern matching shouldn’t detract from its capabilities.

Again, so true. Since I haven’t used AIML, I was/am simply unaware of all its possibilities.

One of the things of high interest to me recently is knowledge representation, and how to represent the world as a whole. Language understanding is highly dependent on this topic, as are vision and probably every other subfield of AI.

However, we cannot completely account for linguistic behavior without also taking into account another aspect of what makes humans intelligent—their general world knowledge and their reasoning abilities. For example, to answer questions or to participate in a conversation, a person not only must know a lot about the structure of the language being used, but also must know about the world in general and the conversational setting in particular.
(“Natural Language Understanding”, Second Edition, James Allen, 1995, page 9)

Language cannot be understood without considering the everyday knowledge that all speakers have about the world.
(“Natural Language Understanding”, Second Edition, James Allen, 1995, page 465)

Even in such a restricted situation, however, it is relatively easy to demonstrate that the program does not understand. It sometimes produces completely off-the-wall responses. For instance, if you say Necessity is the mother of invention, it might respond with Tell me more about your family, based on its pattern for the word mother. In addition, since ELIZA has no knowledge about the structure of language, it accepts gibberish just as readily as valid sentences. If you enter Green the adzabak are the a ran four, ELIZA will respond with something like What if they were not the a ran four? Also, as a conversation progresses, it becomes obvious that the program does not retain any of the content in the conversation. It begins to ask questions that are inappropriate in light of earlier exchanges, and its responses in general begin to show a lack of focus. Of course, if you are not able to play with the program and must depend only on transcripts of conversations by others, you would have no way of detecting these flaws, unless they are explicitly mentioned.
(“Natural Language Understanding”, Second Edition, James Allen, 1995, page 9)

 

 

 
  [ # 9 ]

Hi Mark, I am building a more sophisticated pattern matcher, based on a combination of common EBNF and some special operators capable of targeting syntagmatic segments. The magic behind the scenes is a good greedy morphological analyzer, coupled with a good phonetically enhanced spell corrector and a GLR language parser; together they generate an AST, and the output is a special operable class that can hold linguistic information and supports many operations.
This combination is usable as an extractor of complex sentence parts.
For example, it can evaluate a complex math expression in the middle of a sentence, finding the exact position of the most likely mathematical piece. I am currently working to enable understanding of date/time sentences, including adverbs and many colloquial date/time paraphrases. It’s harder than I thought…
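To make the math example concrete, here is a crude, purely illustrative Python sketch (nothing like the morphological/GLR machinery described above) that finds an arithmetic expression inside a sentence and evaluates it safely:

import ast, operator, re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(node):
    # Evaluate only plain arithmetic, never arbitrary code.
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not a plain arithmetic expression")

def find_math(sentence):
    # Crude pattern: digits joined by +, -, * or /
    match = re.search(r"\d+(?:\s*[-+*/]\s*\d+)+", sentence)
    if match:
        value = safe_eval(ast.parse(match.group(), mode="eval").body)
        return match.start(), match.group(), value
    return None

print(find_math("Could you tell me what 3 + 4 * 2 is, please?"))
# -> (23, '3 + 4 * 2', 11)   position, matched text, value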
Wish me luck!

 

 
  [ # 10 ]

I agree with Steve, I’m still amazed at what AIML can do, and how new tricks are always being developed by the clever people who still appreciate it. 

But, for someone like me, “simple” pattern matching works just fine.  I’m not after a “chat” bot that can perform convoluted math operations, or answer tricky-worded questions in seven languages.  I don’t need a bot that can understand every misspelled word, or know all of the facts in the encyclopedia. 

I think the goal should be to have a bot that can maintain a person’s interest for more than a few minutes by providing a simulated conversation, and to be able to converse about something other than cybersex, or one’s private parts.

I really don’t believe there’s much of a demand for a super-duper bot that can do everything except chat.  That’s what search engines do.

 

 

 
  [ # 11 ]

Hi Thunder Walk, I seriously disagree that the only goal of a conversational agent should be to maintain a person’s interest, unless that in itself is big business and the time spent on the website pays off in ads or whatever advertising you’d like.
I think the goal of a virtual agent (or conversational agent) is to have a goal! To rephrase: no one spends their time for nothing. A conversation always has a goal, and that goal might range from simple recreational talk to transactional talk: filing a complaint, doing e-commerce, asking an e-shop about some products, requesting technical advice, opening a warranty ticket, checking in for a flight, buying a transportation or theater ticket, or simply doing whatever people do when they talk for a reason.

And to do this the bot needs to “understand”: it needs to be able to follow the topics, create a context, identify items from the conversation and include them appropriately, reorder ill-understood material when new information arrives, and know the goal of the conversation, or even infer it from the user’s questions or talk.

This is not an easy task, nor has it been solved on whatever platform you might look at. The companies all advertise their unique and “intelligent” agent, but when you really test them you quickly get disappointed. The task is neither easy nor well understood!

This even happened to me:
Once upon a time (about 8 years ago), I thought that to handle human chat we just needed a super-duper DB of responses (AIML or whatever) covering all of the possible inputs, and that just by giving the right answer to the right question the thing was done!

I was so wrong. As I dug a little deeper I got to see huge sets of AIML rules (>40k English-based ones.. you naughty boys!) that gave the human a mild impression that he might be speaking with somebody, due to some shiny, mind-boggling responses, along with some knowledge (hard coded, of course).

But as the conversations go along, any human grasps immediately that the other “guy” simply doesn’t understand you at all! It is just answering from a fixed but clever repertory, invoking some simple heuristics, and in the end, despite some memory of the last words or topics, the bot never gets a clue about the conversation’s topics, theme, or intention, nor what on earth you are talking about!

This is clear, and as you get this sensation you lose interest in the linguistic tricks and the clever canned answers you get back. All that is not a conversation; for me it is smoke and mirrors.. it’s fake!

Now that I am clear on this topic, let me introduce the ideas behind my project for a new kind of conversational agent: one that tries to be smart, give clever answers, or at least not be so dumb after all!

Up to here, anybody reading these claims (including me) could easily say, “Hmm, this sounds like another fairy tale,” and you might be right, but keep on reading!

I am currently working hard on shallow parsing, mixing semantics and syntax, trying to get a “soft” parser capable of parsing in a soft, context-dependent universe (sorry, Chomsky) and of doing it robustly, similar to how humans do. This is hard. Modelling grammars in BNF is hard, and once you get a “rule” forged, you must do the work behind the scenes: the abstract syntax tree does not guarantee understanding! You must build your own “idea bricks”, and here are the real complications. There is no clear notion of how to “embed” knowledge inside a system. You can play around with Prolog’s first-order logic and say that “no no” is “yes”, but after playing a while you get hit with a “maybe” and many an “I don’t think so”, and you lose the apparent control and get caught by the same doubts. Then you might step into statistics and fuzzy logic (Zadeh, etc.) and do some magic, but as the thing grows (and the lines of code count in the billions) you still do not get a clear outcome, nor a usable thing. You can go further and create your own logic, re-create Boolean and classical math, some chemistry magic, and nothing more. Here I was stuck!  But then a new idea struck me:

Why not make a bot capable of predicting and “priming” what the conversation might become, putting more “code” into following topics, doing co-reference and higher-level logic, rather than trying to understand weird sentences?

This is what I am doing now: making “smart” blocks capable of understanding simple and common things, like time and dates, space, and a simple timeline, all of which are outstandingly complex to grasp with few lines of code. Then I am heading towards making logical “slots” and complex rules, something like “world knowledge”, all controlled by a “higher” logic: a goal logic based on planning, taking tiny steps towards resolving a conversational query, or a technical trick, or satisfying a user’s query about some item whose name he does not know but is trying to describe to the bot, with the bot taking on the job of assisting him, step by step.
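
A minimal sketch of that “goal logic” idea, with made-up slots and keywords (nothing to do with Andres’s actual system): the bot has a goal (book a ticket), fills slots from the user’s words, and each turn plans the smallest step that moves it closer, namely asking for whatever is still missing.

SLOTS = {"destination": ["paris", "london", "madrid"],
         "day": ["monday", "tuesday", "friday", "tomorrow"]}

def update_slots(state, utterance):
    # Fill any slot whose value appears in the user's words.
    words = utterance.lower().split()
    for slot, values in SLOTS.items():
        for value in values:
            if value in words:
                state[slot] = value
    return state

def next_bot_move(state):
    # Tiny planning step: ask for the first missing slot, else finish the goal.
    for slot in SLOTS:
        if slot not in state:
            return f"Which {slot} would you like?"
    return f"Booking a ticket to {state['destination']} for {state['day']}."

state = {}
for user_says in ["I need a ticket to Paris", "make it Friday"]:
    state = update_slots(state, user_says)
    print(next_bot_move(state))
# Which day would you like?
# Booking a ticket to paris for friday.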

This is what I mean when I say “conversational agent”, and the work is really huge. Maybe we need a whole community, each of us building tiny smart blocks and putting them together, to build an “idiot bot” with the IQ of a two-year-old monkey!

Today’s chatbots, I guess, do not have the intelligence of a simple amoeba!

 

 
  [ # 12 ]

I just started some sophisticated parsing
and plan to do more at:

http://8pla.net/wordnet/

It’s easy: just click the SAMPLE button
to save yourself some typing, or try
some of your own input.

I plan to experiment and expand on it
some more using domain name wnbot.com
which is not even hooked up yet.

At this point it is just an early pre-release
for testing purposes on a temporary basis.

Say, what do you think of my vintage IBM PC theme?

P.S.
Let me just add, for Steve, Thunder and others
that it is not a chatbot.  Though, I would like to
teach a chatbot this stuff eventually.  So try to
imagine how a chatbot may improve itself with
a good command of sophisticated parsing.
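
For anyone curious what “this stuff” looks like programmatically, a couple of lines with NLTK’s WordNet interface give the flavor (this is not 8PLA’s code, and it assumes the WordNet corpus has been downloaded via nltk.download('wordnet')):

from nltk.corpus import wordnet as wn

for synset in wn.synsets("parse"):
    print(synset.name(), "-", synset.definition())

# Hypernyms give a chatbot a cheap "is-a" fact for free:
dog = wn.synset("dog.n.01")
print([h.name() for h in dog.hypernyms()])
# e.g. ['canine.n.02', 'domestic_animal.n.01']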

 

 

 
  [ # 13 ]
Andres Hohendahl - Dec 2, 2012:

Hi Thunder Walk, I seriously disagree that the only goal of a conversational agent should be to maintain a person’s interest, unless that in itself is big business and the time spent on the website pays off in ads or whatever advertising you’d like

I don’t disagree with anything you’ve said.  The topic of AIML comes up every so often, as does the notion that there are differences between chatbots and agents.  Agents usually have a particular focus, and aren’t expected to converse beyond a simple, “Hello!” I just find it irritating when some dismiss AIML bots as useless parrots spouting words without having the ability to understand what they’re talking about.

The key to well-performing AIML bots is in reading the past chatlogs.  From there comes an understanding of what people are interested in talking about, and “how” people talk.  The mere loading of vast amounts of information might help a bot to answer an obscure question now and then, but what good is it to load volumes of information on astronomy when most of those who visit a particular bot are mainly interested in chatting about teenage issues or movies?

In reading the chatlogs, and following the topics, AIML botmasters are then able to seek out the answers to the questions asked most often.  And so, while AIML bots might “not have the intelligence of a simple amoeba,” actual human intelligence (understanding) is likely to be involved somewhere along the way.

There aren’t a great number of really good AIML bots, but there are a few that are fairly amazing, and visitors to my bots frequently mention them.  Occasionally, even my bots get visitors who find it hard to believe they aren’t chatting with a human, but that could just be paranoia… you can’t be too careful on the Internet.

Lastly, I’ve elected to not have my bots “pretend” to be something they’re not.  If it comes up in conversation, my bots disclose that they are simply a computer chat program, and people are usually satisfied with that, and go on chatting.  I don’t mind if they know that simple pattern-matching is taking place, and no one seems to mind that what is going on is “smoke and mirrors,” or that it’s something, “fake”.  In fact, once they gain that knowledge, humans seem to adapt, and go out of their way to form questions, or restate sentences to make the conversation go smoother.  After all, they came to talk.

 

 
  [ # 14 ]

The question seems to be, do you want bots that actually “think”, or do you want bots that appear to think?

I am not a big fan of the Turing test.  Not only do I think it’s not important for bots to fool people, but I also see it as a distraction from more important work. 

I’m a great believer in practical bots, and practical AI.  In other words, clandestine experimental research does not hold much interest for me.  If people cannot demonstrate what they have, or if other people cannot reproduce it on the platform in question, then it holds very little significance for me.  I say this because in the history of AI, and in the field generally, there is a huge amount of “hot air” or, put another way, “vaporware”.

While every field needs theoreticians, I find the amount of time and energy invested in purely theoretical and hypothetical considerations in AI not only frustrating, but also misguided.  My focus has been, and is, on concrete tools and examples.

I guess what I am trying to say is that after many years, I am growing tired of hearing unsubstantiated claims, and now only want to see concrete examples.

 

 
  [ # 15 ]
Marcus Endicott - Nov 29, 2012:

What I would like to see are any examples of production dialog systems based on sophisticated parsing, such as open source projects or web services.  My inquiry into dialog systems based on sophisticated parsing indicates that some form of probabilistic processing and/or machine learning is required.  I do not know of any examples of production systems of this kind in common use.

By “production dialog system” I will assume you mean a production system for understanding dialog.

http://en.wikipedia.org/wiki/Production_system
http://www.smartkom.org/Vortraege/icslp2002.pdf

Production systems / expert systems are typically used where the output is uncertain. However, recently I’ve been considering much (though definitely not all) parsing to be a very straightforward algorithm, especially at the beginning stages. The input is the sentence; the output would be a *set* of possible logical organizations of the words, one set for each type of structure the user might need for his or her application. There would always be at least two such output sets from which the user could choose: (1) a phrase structure grammar representation; (2) a dependency grammar representation.

http://en.wikipedia.org/wiki/Constituent_(linguistics)

My own use would require a third type of representation (that I invented) that I won’t go into, so if I wrote such a program I would also include that third category. Anybody else needing an additional type of representation for their own particular project could modify a standard, open source program to add their own representation, and hopefully make that option available to the public as well. So it’s a fairly clear-cut process: basically, list all possible organizations of a finite set of words, usually low in count, that conform to fixed, well-known grammatical constraints. The later stages of parsing that involve uncertainty in ambiguous referents, implicature, coherence, memory, spelling correction, etc. would then be appropriate for an expert system, since there would be no single correct interpretation. (There may even be *no* correct interpretation!)

I believe the beginning part of this algorithm would look like the following. I am assuming that one of the primary goals of the algorithm is to determine which of the ten sentence patterns of English is being used.

http://www.towson.edu/ows/sentpatt.htm
http://www.engdav.net/Notesfolder/practicesentences/1_The Ten Sentence Patterns Reference Sheet.htm

0a. recognize interjections versus proper sentences
0b. if conjunctions combine sentences, then split into component sentences at those conjunctions
1a. recognize the parts of speech (NP, V, ADJ, ADV, D)
1b. recognize and “subserviate” modifiers to their appropriate words
2a. determine if the verb is a form of “is”
  If the verb is a form of “is”, then infer the function of “is” (attribute, set)
  2a1. if the function of “is” is an attribute, then determine general versus time/place
  2a2. if the function of “is” is a set, then determine superset versus subset
2b. determine if the verb is a linking verb
  determine the type of linking verb (opinion, development)
3. recognize sentence pattern (#1, #2, #3, #4, #5, #6, #7, #8, #9, #10)
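
A rough Python skeleton of that outline might look like the following; the lexicon and the pattern rules are placeholders I made up, standing in for the dictionary discussed below:

# Toy lexicon: a real version needs a large dictionary returning part of
# speech, verb type (transitive/intransitive/linking/be), and senses.
LEXICON = {"the": "D", "a": "D", "dog": "N", "cat": "N", "happy": "ADJ",
           "is": "V_BE", "seems": "V_LINK", "barks": "V_INTRANS",
           "chases": "V_TRANS"}

def split_on_conjunctions(words):            # step 0b (very crude)
    clauses, current = [], []
    for word in words:
        if word == "and" and current:
            clauses.append(current)
            current = []
        else:
            current.append(word)
    if current:
        clauses.append(current)
    return clauses

def tag(words):                              # step 1a
    return [(w, LEXICON.get(w, "UNKNOWN")) for w in words]

def sentence_pattern(tagged):                # steps 2a-3, heavily simplified
    tags = [t for _, t in tagged]
    if "V_BE" in tags:
        return "NP + be + complement"
    if "V_LINK" in tags:
        return "NP + linking verb + complement"
    if "V_TRANS" in tags:
        return "NP + transitive verb + NP"
    if "V_INTRANS" in tags:
        return "NP + intransitive verb"
    return "unrecognized"

for clause in split_on_conjunctions("the dog barks and the dog chases a cat".split()):
    print(clause, "->", sentence_pattern(tag(clause)))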

This would require some form of dictionary, however, that would return the verb type (e.g., transitive versus intransitive, linking versus action), possible meanings of a given word, and other information. I keep reading that there are “many” different possible parsings of a given sentence, but I’ve never seen anybody list them. It is exactly at such a list that the algorithm would end and the user’s interpretation, own algorithms, and AI methods would take over. Until then, it’s all mechanical, and admittedly difficult, but merely due to the size and content of the dictionary that would be needed, not for any conceptual reasons.

As for the probabilistic reasoning and machine learning you mention, those would be required only at later stages such as these:

15.1 Using World Knowledge: Establishing Coherence

In many examples considered so far in this book, the final decision about the interpretation of a sentence has been left to contextual processing. This chapter attempts to define some techniques that automate this process. Doing this, however, will require a better understanding of what it means for a sentence to make sense in a context. There is clearly a relationship between logical consistency and making sense. A reading that is inconsistent will not make sense in any context. But more often than not, there are many logically consistent readings, only some of which make sense. So making sense is more a notion of coherence than logical consistency. Certain readings seem coherent, whereas others are not coherent at all.
  A discourse is coherent if you can easily determine how the sentences in the discourse are related to each other. A discourse consisting of unrelated sentences would be very unnatural. To understand a discourse, you must identify how each sentence relates to the others and to the discourse as a whole. It is this assumption of coherence that drives the interpretation process. Consider the following discourse:

1a. Jack took out a match.
1b. He lit a candle.

...
There are several conclusions about sentence 1b, motivated by the assumption of coherence between 1a and 1b, that can be classified into several categories:

reference - He refers to Jack1 [sic] (as suggested by centering)
disambiguation - lit refers to igniting rather than illuminating the candle
implicature - the instrument of the lighting is the match introduced in 1a

Note that to identify how sentences 1a and 1b are related, you must know that a typical way to light a candle involves using a match. This general knowledge about the world allows you to draw connections between the sentences. In other cases there may be almost no apparent relationship between two sentences, but assumptions of coherence still impose some relationship. For example, consider

2a. John took out a match.
2b. The sun set.

While it might seem there is no identifiable relationship between 2a and 2b, in fact you assume a temporal relationship, that is, that Jack took out the match before (or while) the sun set. While only a minimal connection, it is still enough to construct a coherent situation that 2a and 2b describe.
  Note that many conclusions drawn from the coherence assumption are implications, not entailments. They can be overridden by later discussion and thus are defeasible. As a result, the inference process of drawing such conclusions will be complex. The techniques considered in this chapter will all be cast in terms of matching possible interpretations against expectations generated from the previous discourse. This is discussed in the next section.

15.2 Matching Against Expectations

We assume that the specific setting created by a discourse is represented by the content of the previous sentences and any inferences made when interpreting those sentences. This information is used to generate a set of expectations about plausible eventualities that may be described next. Later sections will explore different techniques for generating expectations. This section examines the problem of matching possible interpretations to expectations, assuming the expectations have already been generated.
  More formally, the problem is the following: Given a set of possible expectations E1, ..., En, and a set of possible interpretations for the sentence I1, ..., Ik, determine the set of pairs Ei, Ij such that Ei and Ij match. If there is a unique expectation/interpretation pair that match, then the interpretation of the sentence would be the result of matching the two. If there are multiple possible matches, then some other process must determine which is the best match.

(“Natural Language Understanding”, Second Edition, James Allen, 1995, pages 465-466)

By the way, this matching problem mentioned at the end of this excerpt is almost the same as the well-known “correspondence problem” that occurs in the field of vision, which shows there is a deep correlation between subfields of AI, in this case language and vision.
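
As a toy sketch of the matching step in that excerpt (the frames and feature names are invented for illustration), “matching” can be as simple as checking that an expectation and an interpretation agree on every feature they share:

# Expectations generated from "Jack took out a match", and two candidate
# interpretations of "He lit a candle" (ignite vs. illuminate).
expectations = [
    {"event": "light", "object": "candle", "instrument": "match"},
    {"event": "strike", "object": "match"},
]
interpretations = [
    {"event": "light", "object": "candle"},        # "lit" = ignite
    {"event": "illuminate", "object": "candle"},   # "lit" = shine light on
]

def matches(expectation, interpretation):
    # Compatible if they agree on every feature they both mention.
    shared = expectation.keys() & interpretation.keys()
    return all(expectation[f] == interpretation[f] for f in shared)

pairs = [(e, i) for e in expectations for i in interpretations if matches(e, i)]
print(len(pairs))   # 1 - only the "ignite" reading matches, which disambiguates "lit"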

 

 
