The second video that I linked to goes into some detail about how the voices are put together. There is even a brief scene with the woman who provided the base "model" for Hatsune Miku's voice, though of course she herself could never sing like that, if she can sing at all.
The technology for "tuning" arbitrary sounds has been around for decades. Back when I was still at university I can remember hearing tunes played by pitch-shifting a recording of a dog barking. However, the Vocaloid software being used by this new generation of avatars achieves a whole new level of sophistication.
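Just to illustrate the basic trick: here is a rough Python sketch of how a single recorded sample (a bark, say) can be resampled to different pitches and strung together into a tune. The file name and the melody below are made up, and the naive resampling here is far cruder than anything a real tool would do.

# Minimal sketch of the "tuned bark" idea: take one short recorded sample
# and resample it to different pitches so it plays a melody.
# Assumes a mono 16-bit WAV file called "bark.wav" (hypothetical filename).
import wave
import numpy as np

def load_mono_wav(path):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return rate, data.astype(np.float32) / 32768.0

def pitch_shift(sample, semitones):
    # Naive resampling: playing the sample faster raises the pitch
    # (and shortens it), which is roughly how the old tricks sounded.
    ratio = 2.0 ** (semitones / 12.0)
    idx = np.arange(0, len(sample), ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

rate, bark = load_mono_wav("bark.wav")
melody = [0, 2, 4, 5, 7, 7, 7]          # semitone offsets for a simple tune
tune = np.concatenate([pitch_shift(bark, n) for n in melody])

with wave.open("bark_tune.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(rate)
    out.writeframes((tune * 32767).astype(np.int16).tobytes())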
For one thing, the Vocaloid engine can introduce "emotional" variations so that a song won't necessarily sound the same in every performance, just as it wouldn't with a human artist. For another, it emulates the effects of the singer's body movement and posture during the performance, including pauses for breathing. The latter effect was explained in more detail in one of the videos about HRP-4C.
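I don't know how Vocaloid implements this internally, but the idea of a performance that differs slightly each time can be sketched very simply: add a little random variation to each note's timing and loudness, and drop in a short pause for breath at the end of a phrase. Everything below (note names, durations, the phrase itself) is invented purely for illustration.

# Rough sketch of the "humanizing" idea described above: jitter the timing
# and loudness of each note slightly and insert a short breath pause, so
# two renderings of the same phrase never come out identical.
import random

phrase = [("C4", 0.5), ("E4", 0.5), ("G4", 0.5), ("C5", 1.0)]  # (note, seconds)

def humanize(phrase, breath=0.3):
    events = []
    for note, dur in phrase:
        dur *= random.uniform(0.95, 1.05)        # slight timing variation
        vel = random.uniform(0.8, 1.0)           # slight dynamic variation
        events.append((note, round(dur, 3), round(vel, 2)))
    events.append(("rest", breath, 0.0))         # breath pause after the phrase
    return events

# Each call produces a slightly different performance of the same phrase.
print(humanize(phrase))
print(humanize(phrase))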
Did you happen to see any of the videos of HRP-4C dancing, by the way? The original video I posted here showed the robot just standing still and singing, but since then videos of her dancing have been posted all over the net:
http://www.youtube.com/watch?v=xcZJqiUrbnI
http://www.youtube.com/watch?v=y0T5dxge2Kw
Just for some real irony, check out the videos of HRP-4C dressed as Hatsune Miku during some of her appearances.