Hello, guys.
Colombina’s author is here.
Does anybody else feel that the quality of the competition and of the chatbot evaluation process was a bit weird this year?
I'm not saying it wasn't opaque before; it has been opaque from the very beginning. Or is this just my problem, and I simply don't understand the evaluation rules correctly?
I'm not sure about the leaders (I don't want to question their standing right now),
but I see some entries (take Uberbot/Alice/... for example) whose answers were mostly poor or wrong compared to the ones generated by my entry,
yet they still received a better score.
I am okay with that, because my entry was translated entirely automatically from its native language (Russian), with
obvious quality losses, and was written from scratch over a year of work during the holidays.
The entry is about 20 MB in size, though, so I think 60% is good enough.
Could somebody please explain what is wrong with my understanding of the rules? Is the score low because of incorrect English sentence structures?
Or is the purpose of this competition to encourage programs that can only produce exact answers like 'Definitely' or 'Yes'
and don't even try to pretend they have human personalities?
Thanks and sorry for my English.
It was a really interesting competition, and it was a great honor for me to take part.