I plan to compare students' self-assessment scores (on a scale such as A-E) with the scores allocated to these students by their examiners (the 2nd examiners, say). I would like to use a weighted Kappa statistic. However, several questions have emerged about how best to proceed, and I would be most grateful for assistance with each of them.
First, could I please have some advice on how to decide between linear and quadratic weighting as the best choice in this case?
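For concreteness, here is a minimal pure-Python sketch of how the two weighting schemes enter the calculation (the A-E grade coding as 0-4 and the example data are hypothetical, and this assumes the standard two-fixed-raters setting). Quadratic weights penalise large disagreements relatively more heavily, so adjacent-grade disagreements are forgiven more:

```python
# Sketch: weighted Kappa with linear (power=1) or quadratic (power=2)
# disagreement weights, for two fixed raters scoring the same subjects.
def weighted_kappa(rater_a, rater_b, k, power=1):
    n = len(rater_a)
    # Observed joint proportions over the k x k grade table.
    obs = [[0.0] * k for _ in range(k)]
    for i, j in zip(rater_a, rater_b):
        obs[i][j] += 1.0 / n
    # Marginal proportions for each rater.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (abs(i - j) / (k - 1)) ** power  # disagreement weight
            num += w * obs[i][j]                 # observed weighted disagreement
            den += w * pa[i] * pb[j]             # chance-expected disagreement
    return 1.0 - num / den

# Hypothetical grades coded 0, 1, 2 (e.g. A, B, C).
students  = [0, 0, 1, 1, 2, 2]   # self-assessments
examiners = [0, 0, 1, 2, 2, 2]   # 2nd-examiner grades
print(weighted_kappa(students, examiners, k=3, power=1))  # linear
print(weighted_kappa(students, examiners, k=3, power=2))  # quadratic
```

If scikit-learn is available, `cohen_kappa_score(y1, y2, weights="linear")` and `weights="quadratic"` compute the same two statistics directly.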
Further, whilst the raters fall conveniently into two perfectly distinguishable groups ('student' and '2nd examiner'), these raters change from student to student, although occasionally one 2nd examiner may rate more than one student. As I understand it, the weighted Kappa statistic was originally designed under the assumption not only that there are two distinct classes of raters but also that these raters do not change from subject to subject. I am therefore concerned that a standard weighted Kappa statistic may not be the correct one for my design.
A related question is which formula I should use for the standard error of the weighted Kappa.
To summarize, I have raised three main questions: the first relates to the type of weighting to assume, the second to the appropriateness of a standard weighted Kappa statistic for my problem, and the third to the formula to use for the standard error of the recommended Kappa statistic.
I look forward to receiving some much-needed help.
Thank you so much