Monday, 26 November 2007

The ideal assessment engine

I've been looking into criteria for assessment technologies a lot lately. One reason is that we are migrating our current system to a new platform (the old one, Authorware, is no longer supported). The other reason is that I have been invited by the Joint Research Centre to take part in a workshop on quality criteria for computer-based assessment. I will be posting on the outcomes of that workshop next week. For now though, here are some of my thoughts on the topic.

The main strength of our current system is flexibility. This has several aspects that are all important in their own right:
  • Flexibility in design: The layout of a question can be modified as desired, using media and other elements to create an authentic and relevant presentation.
  • Flexible interactions: There is no point in systems that offer five pre-parameterized question types, where all you can do is define a title, a question text, and alternatives, and select the right answer. Interactions that test and support higher-order skills are, or should be, more complex than that.
  • Detailed and partial scoring: A discriminating question does not just tell you whether you were completely right or completely wrong. It can tell you the degree to which you were right, and which elements of your answer had value. It might also penalize you for serious, fundamental mistakes.
  • Detailed feedback: Many of the mistakes learners make are predictable. If we allow assessment systems to capture these mistakes and give targeted feedback, learners can practice their skills while lecturers focus their time on the more in-depth problems that require their personal engagement.
  • Extensive question generation and randomization options: Generating questions from rules and algorithms gives a single question almost infinite re-usability. At the assessment level, the same holds for generating assessments from large banks of questions tagged with subject matter and difficulty.
So far, no real news for TRIADS users (although no proprietary system I know of really supports this well).
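As a rough illustration of the generation, partial-scoring, and penalty ideas above, here is a minimal sketch in Python. All names and the scoring weights are hypothetical, not taken from any actual engine:

```python
import random

def generate_question(seed=None):
    """One rule-based template yields a practically unlimited pool of
    question variants -- the re-usability argument made above."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    return {
        "text": (f"A rectangle is {a} cm wide and {b} cm tall. "
                 "Give its area and perimeter."),
        "answer": {"area": a * b, "perimeter": 2 * (a + b)},
    }

def score(question, response):
    """Partial scoring: credit each correct element separately, and
    penalize a fundamental conceptual error (swapping the two measures)."""
    key = question["answer"]
    marks = 0.0
    if response.get("area") == key["area"]:
        marks += 0.5
    if response.get("perimeter") == key["perimeter"]:
        marks += 0.5
    # Swapped answers suggest a conceptual error, not a slip: penalize.
    if (response.get("area") == key["perimeter"]
            and response.get("perimeter") == key["area"]):
        marks -= 0.25
    return max(marks, 0.0)
```

The same seed always reproduces the same variant, which is what makes generated questions usable in a bank alongside hand-written ones.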

Questions without assessments
As Dylan Wiliam put it so eloquently at the ALT-C conference (you can find his podcast on the matter online), the main value of learning technology lies in its capacity "to allow teachers to make real-time instructional decisions, thus increasing student engagement in learning, and the responsiveness of instruction to student needs." I could not agree more. However, this means that questions should not exist only within assessments, but should instead be embedded within the materials and activities themselves. Questions become widgets that can of course still function within an assessment, but also work on their own without losing the ability to record and respond to interaction. This, as far as I'm aware, is uncharted territory for assessment systems, and territory that we hope to explore in the next iteration of our assessment engine.
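A minimal sketch of what such a stand-alone question widget might look like: one object that can sit inside a formal assessment or be embedded directly in learning material, recording interactions and giving targeted feedback either way. The class and its names are purely illustrative assumptions, not an existing API:

```python
class QuestionWidget:
    """A question that works on its own or inside an assessment.

    It always records interactions and gives targeted feedback; a
    surrounding assessment is just an optional collector of scores.
    """
    # Predictable mistakes mapped to targeted feedback (the 'detailed
    # feedback' point above); the entries here are illustrative.
    KNOWN_ERRORS = {7: "You added the numbers instead of multiplying them."}

    def __init__(self, text, answer, assessment=None):
        self.text = text
        self.answer = answer
        self.assessment = assessment  # None when embedded in materials
        self.log = []                 # interaction record, kept either way

    def submit(self, response):
        correct = response == self.answer
        if correct:
            feedback = "Correct!"
        else:
            feedback = self.KNOWN_ERRORS.get(response, "Not quite; try again.")
        self.log.append((response, correct))
        if self.assessment is not None:  # score formally only in an assessment
            self.assessment.record(self, 1.0 if correct else 0.0)
        return feedback
```

For example, `QuestionWidget("What is 3 x 4?", answer=12)` embedded in a page of course material would answer `submit(7)` with the targeted "added instead of multiplied" feedback, while the same object handed an assessment object would additionally report its score there.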

1 comment:

Samuel Liu - IT Management said...

Very good summary!

I am running an e-Assessment engine which is quite comprehensive. It includes all the necessary tools: question authoring, exam taking, invigilation, and exam-paper marking. It addresses all the major problems of paper-based exams. More than a million exam scripts have been completed in the system. Its major feature is the ability to complete an exam even when the server and network go down.