Just an interesting article (Clarke, ‘Information Normalization Theory’) on “calculating” humor that may bring to light some fundamentals that could be used in AI.
FTA:
“A new theory suggests an equation for identifying the cause and level of our responses to any humorous stimuli:
h = m x s.
where,
degree of misinformation, m = 0 to 1
susceptible to taking it seriously, s = -1 to 1

Humour rewards us for seeing through misinformation that has come close to taking us in. The pleasure we get (h) is calculated by multiplying the degree of misinformation perceived (m) by the extent to which the individual is susceptible to taking it seriously (s).”
So I would guess most bots have a very high (s), but could theoretically score (m) by comparison against “truth” (based on, or extrapolated from, standard knowledge). (s) might also be estimated from the veracity of that “truth” data.
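As a rough sketch of what that scoring might look like in a bot, here is the h = m × s formula from the article as code. The function name, range checks, and example values are my own illustration, not anything from Clarke's paper:

```python
import math

# Hypothetical implementation of Clarke's humor equation h = m * s,
# where m is the perceived degree of misinformation (0 to 1) and
# s is the susceptibility to taking it seriously (-1 to 1).

def humor_response(m: float, s: float) -> float:
    """Return the pleasure score h for a humorous stimulus."""
    if not 0.0 <= m <= 1.0:
        raise ValueError("m must be in [0, 1]")
    if not -1.0 <= s <= 1.0:
        raise ValueError("s must be in [-1, 1]")
    return m * s

# A bot that detects strong misinformation (m near 1) while being quite
# susceptible to it (s near 1) would rate the input as highly humorous:
print(humor_response(0.9, 0.8))
```

Note that a negative s (the listener was never going to take it seriously) yields a negative h, which fits the intuition that a transparently false statement isn't funny.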
This type of approach would dovetail nicely with the “emotional” bot, since humor is an important (and hard as hell to define) emotion. In addition, humor “scores” would be useful to dynamically learning bots, where spurious input (jokes, for instance, or purposeful misinformation) needs to be appropriately classified and reacted to.
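One hypothetical way a learning bot could use the h score to triage input before absorbing it. The threshold, labels, and discard rule are all illustrative assumptions on my part:

```python
# Illustrative triage for a learning bot: use h = m * s to decide whether an
# incoming statement is a joke, junk, or safe training material.
# The 0.25 threshold and the three labels are arbitrary choices for the sketch.

def classify_statement(m: float, s: float, threshold: float = 0.25) -> str:
    """Tag input so the bot doesn't learn jokes or lies as facts."""
    h = m * s
    if m > 0.5 and h < threshold:
        # Heavy misinformation the bot was never close to believing:
        # probably noise or a deliberate lie, not humor.
        return "discard"
    if h >= threshold:
        # Misinformation that nearly took the bot in: treat as a joke.
        return "joke"
    # Low perceived misinformation: plausibly safe to learn from.
    return "learn"

print(classify_statement(0.9, 0.8))  # joke
print(classify_statement(0.9, 0.1))  # discard
print(classify_statement(0.1, 0.9))  # learn
```

The interesting design question is the same one raised above: both m and s depend on how good the bot's “truth” model is, so these scores are only as reliable as its knowledge base.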