On further thought, I think the general procedure for curiosity is:
1. Notice something out of the ordinary (this may include location, colour, shape)
2. Investigate.
3. Stop investigating once sufficient information has been gathered, or a procedure established, to deal with #1 (curiosity threshold).
Link to source post
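Taken at face value, those three steps already read like a small control loop. Here is a minimal sketch in Python, assuming an arbitrary numeric anomaly score, a placeholder investigate step that yields random information gain, and made-up threshold values; none of these specifics come from the post.

```python
import random

def anomaly_score(observation, expected):
    """Step 1: how far the observation deviates from what was expected.
    (In a real system location, colour, and shape would each contribute.)"""
    return abs(observation - expected)

def curiosity_loop(observation, expected, noticing_threshold=1.0,
                   curiosity_threshold=0.9, max_steps=20):
    """Steps 1-3: notice, investigate, stop at the curiosity threshold."""
    if anomaly_score(observation, expected) < noticing_threshold:
        return None  # nothing out of the ordinary; curiosity never triggers

    information = 0.0
    for step in range(max_steps):
        # Step 2: investigate. Each probe yields some information about the
        # anomaly (a random stand-in for real inquiry).
        information += random.uniform(0.0, 0.3)

        # Step 3: stop once enough is known to deal with the anomaly.
        if information >= curiosity_threshold:
            return {"steps": step + 1, "information": information}

    return {"steps": max_steps, "information": information}

if __name__ == "__main__":
    print(curiosity_loop(observation=5.0, expected=2.0))
```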
Don,
Perhaps curiosity can be modeled as a behavior of a [insert quantifier] self-aware entity; the behavior is triggered by a [new or remembered] awareness of some lack of knowledge, or an inconsistency of belief or model.
(The behavior may be constrained or prioritized by a reward/cost evaluation of the new knowledge, in an entity with high situation awareness.)
The behavior is to establish a goal to learn the unknown or resolve the inconsistency. (The entity is usually unaware of the total effort the goal will trigger.)
The goal instantiates one of the entity's most appropriate learning behaviors (apropos to the entity, but sometimes the most inappropriate for the situation or for cooperating entities).
By the principle that the entity must almost know something in order to learn it, the curiosity behavior may be recursively triggered as the entity becomes ever more aware of the breadth or depth of its lack of knowledge in a subject (one that was evaluated to have a high reward for knowing). You are thus right to include a curiosity thresholding function to limit learning-goal generation.
(Additionally, the curiosity threshold is not apparent to others, and may be much higher than an initially cooperative knowledge source understands or agrees to serve.)
Given a curiosity mechanism, directed learning can be caused by asking questions that the entity becomes aware it cannot answer, due to a lack of knowledge or an inconsistency in its beliefs or models.
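A rough Python sketch of how that account could be wired together; the KnowledgeGap record, the reward/cost gate, the fixed subtopic expansion, and the max_depth cutoff standing in for the curiosity threshold are all illustrative assumptions, not anything specified above.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeGap:
    topic: str
    reward: float   # estimated value of knowing
    cost: float     # estimated effort to learn
    depth: int = 0  # how many recursive gaps deep this one is

def curiosity(entity_knowledge, gap, max_depth=3):
    """Generate learning goals from an initial knowledge gap: trigger on the
    gap, gate by reward/cost, recurse as learning exposes further gaps, and
    cut off at a threshold (max_depth) on learning-goal generation."""
    goals = []
    frontier = [gap]
    while frontier:
        g = frontier.pop()
        if g.reward <= g.cost:
            continue  # new-knowledge reward/cost gate: not worth pursuing
        if g.depth >= max_depth:
            continue  # curiosity threshold: stop generating learning goals
        goals.append(f"learn about {g.topic}")
        # "The entity must almost know something to learn it": learning one
        # topic reveals neighbouring gaps (these subtopics are placeholders).
        frontier.extend(
            KnowledgeGap(f"{g.topic}/sub{i}", g.reward * 0.6, g.cost, g.depth + 1)
            for i in range(2)
            if f"{g.topic}/sub{i}" not in entity_knowledge
        )
    return goals

def ask(entity_knowledge, topic, reward, cost):
    """Directed learning: a question the entity cannot answer surfaces a gap."""
    if topic in entity_knowledge:
        return []  # no gap, so no curiosity triggered
    return curiosity(entity_knowledge, KnowledgeGap(topic, reward, cost))

if __name__ == "__main__":
    print(ask(entity_knowledge={"arithmetic"}, topic="optics", reward=1.0, cost=0.2))
```

In a fuller system the fixed subtopic expansion would be replaced by whatever learning behavior the goal actually instantiates; ask() is the directed-learning hook, turning an unanswerable question into a new knowledge gap.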
Your model of curiosity sounds a lot easier to program.
Alan