One of the things I noticed during the workshop I attended last week is the fundamental difference in how psychometricians and educators approach computer-based assessment. It all boils down to the use and value of statistics.
I think that within education we often don't evaluate our teaching and assessment practice enough, particularly against objective standards. For assessment practice, the best-known methods of scientific evaluation are Classical Test Theory (CTT) and, most importantly, Item Response Theory (IRT). I don't think many educators really bother with these, and some will stare blankly should I ever bring up the terms in conversation. Instead we rely on our sixth pedagogic sense, which rather mysteriously enables us to divine which assessment methods and questions work and which do not.
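For the curious, here is a rough sketch of what a basic Classical Test Theory item analysis might look like in Python. The response matrix and every number in it are made up purely for illustration; the statistics computed (item difficulty, corrected item-total correlation, Cronbach's alpha) are standard textbook CTT quantities, not anything from the workshop.

```python
# A minimal sketch of a CTT item analysis on a small matrix of scored
# responses (1 = correct, 0 = incorrect). The data is invented.
import numpy as np

# rows = students, columns = items
scores = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
])

n_items = scores.shape[1]
total = scores.sum(axis=1)

# Item difficulty: proportion of students answering each item correctly.
difficulty = scores.mean(axis=0)

# Corrected item-total correlation: how well each item lines up with the
# rest of the test (the item's "discrimination").
discrimination = np.array([
    np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
    for i in range(n_items)
])

# Cronbach's alpha: internal consistency of the test as a whole.
item_var = scores.var(axis=0, ddof=1).sum()
total_var = total.var(ddof=1)
alpha = n_items / (n_items - 1) * (1 - item_var / total_var)

print("difficulty:     ", np.round(difficulty, 2))
print("discrimination: ", np.round(discrimination, 2))
print("Cronbach's alpha:", round(alpha, 2))
```

Even a quick pass like this would tell an educator which questions almost everyone gets right, which ones pull against the rest of the test, and roughly how consistent the whole thing is.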
The psychometric approach is radically different. Almost religiously at times, items are tested and analyzed using IRT; the most meticulous details (question order, item order, delivery medium, etc.) are reviewed for their influence on responses. These statistical analyses focus on only one thing, though: the alignment of each item with the overall test. What the church of IRT sometimes seems to forget is to question whether the test itself actually measures what it is assumed to measure. To a degree, IRT is a circular argument if it is not used carefully and in conjunction with other evidence.
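To make that concrete, here is a rough sketch of the kind of model IRT revolves around: the two-parameter logistic (2PL) item response function. The parameter values below are invented for illustration. Notice that everything the model describes is how an item behaves relative to the latent trait the test is assumed to measure; the model itself never asks whether that assumption is right.

```python
# A minimal sketch of the 2PL item response function used in IRT.
# Parameter values are invented, not from any real item calibration.
import numpy as np

def p_correct(theta, a, b):
    """Probability of a correct response for ability theta,
    given item discrimination a and item difficulty b (2PL model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

abilities = np.linspace(-3, 3, 7)

# A sharply discriminating item of average difficulty...
sharp_item = p_correct(abilities, a=2.0, b=0.0)
# ...versus a flat item that barely separates weak from strong students.
flat_item = p_correct(abilities, a=0.3, b=0.0)

for theta, p1, p2 in zip(abilities, sharp_item, flat_item):
    print(f"theta={theta:+.1f}  sharp item: {p1:.2f}  flat item: {p2:.2f}")
```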
It seems to me we could both do with a bit of each other's zeal. Educators should really try to build some structured, objective evaluation into their assessment practices, and psychometricians should perhaps question the appropriateness of their instruments more fundamentally.