(08-25-2012, 10:48 AM)Shad Wrote: If you want to try to put together a more meaningful algorithm than just a straight average, I can give you a data dump of scenario, rating, and userIDs to play with in Excel.
Heh ... the assign-the-suggester-as-Chair-of-the-Task-Force ploy! Nope! I was offering the possibility more for the sake of discussion than with any expectation of practical application.
Quote:Fluidity of ratings is meaningless to PG-HQ. ... Anything in the queue gets a full rating recalc.
Understood. What I was suggesting, however, is that many people tend, over both short and long time scales, to judge or grade more harshly or more leniently, and any such trend would affect ratings calculated using the method I mentioned above. I notice this myself when grading examinations and try to build in breaks and checks to avoid it (e.g. grade a stack of exams for one problem in one direction, then reverse the stack for the next problem). So I was only speculating, from an academic perspective, that it might be interesting (if not practicable, given the time and effort involved in coding the calculation into the structure of PGHQ) to observe whether there is a grading slope, individually or collectively, and, of course, to ask - as you did - whether it is due to product improvement, the reverse, or possibly other factors.
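For anyone taking Shad up on the data dump, here's a minimal sketch of what "grading slope" could mean in code: fit a least-squares line to each rater's scores in submission order and look at the sign of the slope. The record format (user_id, order_index, rating) is an assumption for illustration, not PG-HQ's actual schema, and submission order stands in for timestamps.

```python
from collections import defaultdict

def rating_slopes(records):
    """records: iterable of (user_id, order_index, rating) tuples.
    Returns {user_id: slope}, where slope is the least-squares
    rating change per submission (negative = hardening over time)."""
    by_user = defaultdict(list)
    for user, order, rating in records:
        by_user[user].append((order, rating))
    slopes = {}
    for user, pts in by_user.items():
        if len(pts) < 2:
            continue  # need at least two points to fit a line
        n = len(pts)
        mean_x = sum(x for x, _ in pts) / n
        mean_y = sum(y for _, y in pts) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in pts)
        den = sum((x - mean_x) ** 2 for x, _ in pts)
        slopes[user] = num / den
    return slopes

# Hypothetical rater whose scores drop half a point per rating:
data = [("alice", i, 8 - 0.5 * i) for i in range(6)]
print(rating_slopes(data))  # slope of -0.5 for "alice"
```

A collective slope could be computed the same way over all ratings pooled by date; the interesting (and harder) question, as noted above, is separating rater drift from genuine product improvement.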
PS If anyone is interested in coding up something like this in response to Shad's offer, though, I'd be happy to discuss.