I'm working on a new bot. This one is physical and has facial recognition.
At any given time, some number of people (X) will be identified by facial recognition.
The closest face will be the primary speaker.
I can get this information to CS, which would basically be a list in priority order (rough sketch after the list below):
Primary face: X1
2nd face: X2
…
Nth face: Xn
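Here is roughly the shape of the data I can produce on every camera update. All the names here (FaceObservation, face_id, distance_m, rank_faces) are placeholders I made up for the sketch, not anything CS-specific:

```python
from dataclasses import dataclass

@dataclass
class FaceObservation:
    face_id: str       # stable ID from the facial-recognition system
    distance_m: float  # how far this face is from the camera

def rank_faces(observations: list[FaceObservation]) -> list[FaceObservation]:
    """Closest face first; index 0 is what I treat as the primary speaker."""
    return sorted(observations, key=lambda f: f.distance_m)

faces = rank_faces([
    FaceObservation("X1", 0.6),
    FaceObservation("X2", 1.8),
])
primary = faces[0]   # X1, the closest face, is the primary speaker
```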
The challenge I have is this: in real life, context is everything.
And CS already has a great way of keeping track of conversations with multiple individuals, each one individually.
For me, this is realized by giving each device a specific identity and using that as the CS user, so anything coming from a specific device is treated as coming from the same person.
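To make that concrete, here is a minimal sketch of how I route things today versus what I think I would need. send_to_cs() is just a stand-in for however a message actually reaches CS, not a real API, and the device ID is made up:

```python
DEVICE_ID = "device-01"   # made-up device identity

def send_to_cs(user_id: str, text: str) -> str:
    """Stand-in for however a message actually gets to CS; returns the reply."""
    raise NotImplementedError

# Today: one device == one CS user == one conversation history.
def handle_utterance_today(text: str) -> str:
    return send_to_cs(DEVICE_ID, text)

# What I think I need: the CS user follows the primary face instead, so the
# conversation history belongs to the person rather than the device.
def handle_utterance_by_face(primary_face_id: str, text: str) -> str:
    return send_to_cs(primary_face_id, text)
```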
But in this case, the user may change, and we may have multiple users listening at once, one of them being the primary user (the one who is currently talking).
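This is a sketch of the state I think I would have to juggle on every update, which is where it starts to feel complicated. Again, all names are made up:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConversationState:
    # One CS user (and therefore one history) per recognized face.
    cs_user_by_face: dict = field(default_factory=dict)   # face_id -> CS user id
    primary_face: Optional[str] = None                     # whoever is talking right now
    listeners: list = field(default_factory=list)          # everyone else in view

    def update(self, ranked_face_ids: list) -> None:
        """Re-point the primary speaker whenever the camera ranking changes."""
        for fid in ranked_face_ids:
            self.cs_user_by_face.setdefault(fid, f"cs-user-{fid}")
        self.primary_face = ranked_face_ids[0] if ranked_face_ids else None
        self.listeners = ranked_face_ids[1:]
```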
I am stuck because this suddenly becomes very complex, and I am not sure how to organize it within the CS code.
Has anyone else run into this? I would appreciate some insight into a good approach for managing a response that could involve one or many end users, where that set of users changes over time.