According to the paper: http://affectivecomputing.org/paper/2006/EmotionalSpeech_IEEETransASLP06.pdf
The first method they adopt is the linear modification model (LMM), which directly modifies the F0 contour (F0 top, F0 bottom, F0 mean), syllabic durations, and intensities using factors taken from the acoustic distribution analysis results. (They use 12 patterns: four emotions, each with three degrees, e.g. "strong".) They show that the LMM does not produce very good output.
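A minimal sketch of what such a linear modification step could look like, assuming per-emotion scale factors for F0 mean, F0 range, duration, and intensity (the factor names and the numeric values below are illustrative assumptions, not the paper's actual analysis results):

```python
import numpy as np

# Hypothetical per-(emotion, degree) modification factors. In the paper these
# would come from the acoustic distribution analysis; the values here are
# made up purely for illustration.
LMM_FACTORS = {
    ("happy", "strong"): {"f0_mean_ratio": 1.2, "f0_range_ratio": 1.3,
                          "duration_ratio": 0.9, "intensity_ratio": 1.1},
}

def apply_lmm(f0_contour, durations, intensities, emotion, degree):
    """Linearly modify neutral prosody toward a target emotion.

    f0_contour:  per-frame F0 in Hz (0 marks unvoiced frames)
    durations:   per-syllable durations in seconds
    intensities: per-frame intensity values
    """
    p = LMM_FACTORS[(emotion, degree)]
    f0 = np.asarray(f0_contour, dtype=float)
    voiced = f0 > 0

    # Shift the F0 mean, then expand/compress the contour around it,
    # which raises the F0 top and lowers the F0 bottom when ratio > 1.
    neutral_mean = f0[voiced].mean()
    target_mean = neutral_mean * p["f0_mean_ratio"]
    f0_out = f0.copy()
    f0_out[voiced] = target_mean + (f0[voiced] - neutral_mean) * p["f0_range_ratio"]

    # Uniformly rescale syllable durations and frame intensities.
    dur_out = np.asarray(durations, dtype=float) * p["duration_ratio"]
    int_out = np.asarray(intensities, dtype=float) * p["intensity_ratio"]
    return f0_out, dur_out, int_out
```

Because every parameter is moved by a single global scale/shift per emotion, the model cannot capture local, syllable-level prosody differences, which is consistent with the poor output the authors report.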
Here are the documents recording the experiment setup: