Wednesday 20 February 2008

Standards in assessment

The attempts to define standards for computer-based assessment have so far been largely unsuccessful. I think one of the problems is the lack of clarity in the functional domain. Do we really understand the ontology of an assessment, or of a question? I don't really think we do, and perhaps we never will. It is easy enough to find a way to define a multiple choice question in XML, but to do the same for 'any' question is a bit much to ask. You inevitably end up constraining what you can do.
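
To make that concrete, here is a minimal sketch in Python (purely illustrative field and function names, nothing taken from any specification) of why the one is easy and the other is not: a multiple choice question reduces to plain data that any XML schema can capture, whereas an 'any' question drags its marking logic along with it, and arbitrary logic does not fit into a fixed vocabulary.

```python
# A multiple choice question is just data: easy to serialise into any XML schema.
mcq = {
    "prompt": "Which of these is a specification for exchanging assessment items?",
    "choices": ["IMS QTI", "HTTP", "SQL"],
    "correct": 0,
    "score": 1,
}

def mark_mcq(question, response_index):
    """Marking follows a fixed rule that a schema can describe declaratively."""
    return question["score"] if response_index == question["correct"] else 0

# An 'any' question needs arbitrary marking logic. This invented short-answer
# question is marked by custom code, and behaviour-as-code is exactly what a
# fixed XML vocabulary struggles to express without constraining it.
def mark_short_answer(response_text):
    keywords = {"interoperability", "exchange"}
    found = {word.strip(".,").lower() for word in response_text.split()}
    return min(len(keywords & found), 2)  # partial credit, up to 2 marks

print(mark_mcq(mcq, 0))                                                    # -> 1
print(mark_short_answer("Exchange of items supports interoperability."))  # -> 2
```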

This was one of the main problems with IMS QTI 1.2. The specification was incredibly limited, and so any system supporting the standard was, by definition, just as limited. Worse, most systems did not even implement the standard fully or correctly, and so QTI 1.2 never really got anywhere.

Version 2 was supposed to solve this. The specification (currently still a draft, version 2.1) is indeed a lot better, and allows for many more question types, feedback options, and scoring strategies. The problem is that to make all this possible in a standard XML definition, the specification has become rather complicated. I'm not sure it is a viable proposition to expect any vendor to support the standard in full. To make matters worse, all the big vendors, but also the Open University's OpenLearn, seem to be pushing the Common Cartridge, which includes an amended version of IMS QTI 1.2. While it would be nice to be able to exchange and run questions that are embedded in learning materials from Blackboard or Moodle, it does strike me as very unlikely that any vendor will now have a serious incentive to support anything beyond the Common Cartridge.

And so we might have to live with the fact that we are not going to have any standard for the exchange of question and/or assessment information. I'm not sure that's a bad thing though. We would probably be better off designing a decent system first, instead of trying to standardise functionality that hasn't even been implemented anywhere yet. What use is interoperability if there isn't anything to exchange?

Monday 18 February 2008

Open Source Assessment tools

I attended a JISC-CETIS workshop today discussing the latest set of open source assessment tools that JISC has commissioned. The triad of projects is to deliver authoring, item banking, and delivery tools based on the IMS QTI 2.1 standard. The individual projects are:
  • AQuRate (The authoring tool, developed by Kingston University)
  • Minibix (The item banking tool, developed by Cambridge University)
  • ASDEL (The delivery engine, developed by Southampton University)
While I applaud the 3 project teams for the work that they have done, I must also say that I was concerned.

There have been a lot of projects funded by the sector that were supposed to kick-start the development and uptake of standards-based e-assessment: projects like TOIA, APIS, and R2Q2. None of these projects ever became much more than a proof of concept, and the current set seems to be heading the same way. None of these projects has the institutional backing of a stakeholder that understands the long-term business need for such a solution. Instead they are research bids by researchers and developers whose only mandate is to fulfil the requirements of the project plan, and whose only resources are those granted by, in this case, JISC. And so, after the kick-start, the project dies as the funding dries up.

Are we then forever in the hands of the commercial vendors? I certainly hope not, as so far they have been completely unable to impress me with their products. Most commercial tools offer little of the pedagogical affordances and support that they should, and are often technically rather weak as well. I deeply believe that the only serious hope we have of ever getting a valuable and usable set of assessment tools is to develop them collaboratively ourselves. Unfortunately the success that Moodle has become in the world of VLEs seems unlikely to be repeated in the area of e-assessment anytime soon.

Ideas anyone?

Sunday 17 February 2008

Personal Learning and other challenges

The National Academy of Engineering has been trying to identify the grand engineering challenges for this century. It obviously features several challenges in environmental science, artificial intelligence, and virtual reality. I was very pleased, and slightly surprised, to also see 'Advance personalized learning' as one of the grand challenges.

While the explanation seems to start off with a bit of a disappointing focus on learning styles, it then picks up with applications that I find much more interesting, such as tailored support for learners based on ubiquitous data collection of their progress. I am not quite sure this is an engineering challenge though. 99% of the technology needed to meet this challenge already exists. It is primarily our inability, and sometimes unwillingness, to implement it properly that makes it a challenge.

A good start could probably be made in the education of those who are going to be delivering this personalised learning. From what I recall of my various bits of formal teacher training, the emphasis was on a rather old-fashioned model of learning. I was taught how to teach, but seldom did we learn how people learn.

A second area that needs challenging, in my opinion, is regulation and management. In most institutions I have worked for, innovation was strangled by conservative financial management (where risk is a dirty word, and profits are always expected in advance to cover investments... a very peculiar idea). In many areas professional bodies also seem to work more to the detriment than to the benefit of innovation. The message there often seems to be 'do as we have always done, and you'll be alright'.

Personal Learning definitely is one of our great challenges. But the challenge is not to invent it, or to make it technologically possible. The challenge is 'simply' to implement it, and make it work.

Friday 15 February 2008

Edutagger

I think it was Stephen Downes who referred me to Edutagger in one of his posts (where the man finds the time to post the extraordinary amount of stuff that he does is really beyond me, by the way).

This, I think, is a really great idea, fitting in perfectly with developments around OER and the new role(s) of the university that I referred to in earlier posts on this topic. As mentioned in the post on assessing informal learning, I strongly believe that the true value of the university is in the guidance and support it provides around learning, and in the accreditation of that learning. Edutagger to me is the perfect example of how, in this case in K-12 education in the US, Web 2.0 technology is utilised to realise one of the components of this guidance: "Where do I find reliable and useful resources to learn about topic X?". I think every module or programme should probably have a collection of tagged and rated bookmarks like these in addition to (and eventually instead of?) their reading lists.
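
As a purely illustrative sketch (invented URLs, tags, and ratings, not Edutagger's actual data or API), this is roughly what such a collection might look like, and how it could answer that question by filtering on tag and rating:

```python
# Purely illustrative: a module's collection of tagged, rated bookmarks,
# sitting alongside (or eventually replacing) a traditional reading list.
bookmarks = [
    {"url": "http://example.org/qti-overview", "tags": {"assessment", "standards"}, "rating": 4},
    {"url": "http://example.org/peer-marking", "tags": {"assessment", "groupwork"}, "rating": 5},
    {"url": "http://example.org/vle-history", "tags": {"vle"}, "rating": 2},
]

def recommend(bookmarks, tag, min_rating=3):
    """Answer 'where do I find reliable resources on topic X?' by tag and rating."""
    return [b["url"] for b in bookmarks if tag in b["tags"] and b["rating"] >= min_rating]

print(recommend(bookmarks, "assessment"))
# -> ['http://example.org/qti-overview', 'http://example.org/peer-marking']
```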

Monday 11 February 2008

Peer Assessment project: WebPA

One topic that I'm very interested in, both from a pedagogical perspective and from a workload management one, is peer review and peer marking. I was therefore delighted when I was asked to be involved with the WebPA project at Loughborough University. The WebPA project is building a tool to support peer marking of group assignments. The system has been used with great success at Loughborough for many years, and the project aims to make the tool available as an open source solution that can be implemented at other universities.
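
For those who have not come across peer marking of group work before, here is a minimal sketch of one common moderation approach (my own illustrative numbers and function names, not necessarily WebPA's exact algorithm): each member rates every member's contribution, those ratings are turned into a weighting factor relative to the group average, and the tutor's group mark is scaled by that factor to give individual marks.

```python
# Illustrative peer moderation of a group mark: each member's individual mark is
# the group mark scaled by how their peer ratings compare with the group average.
# (A sketch of the general idea, not necessarily WebPA's exact formula.)

def individual_marks(group_mark, ratings):
    """ratings[rater][ratee] = score awarded by rater to ratee (e.g. 0-5)."""
    members = list(ratings)
    totals = {m: sum(r[m] for r in ratings.values()) for m in members}
    average = sum(totals.values()) / len(members)  # an 'average' contributor
    return {m: round(group_mark * totals[m] / average, 1) for m in members}

ratings = {
    "alice": {"alice": 4, "bob": 5, "carol": 3},
    "bob":   {"alice": 4, "bob": 4, "carol": 2},
    "carol": {"alice": 5, "bob": 4, "carol": 3},
}
print(individual_marks(60, ratings))
# -> {'alice': 68.8, 'bob': 68.8, 'carol': 42.4}: above-average contributors gain
# marks, below-average ones lose marks, while the group total stays the same.
```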

We have just held our first workshop at the University of Derby, preparing for a pilot roll-out later this semester. There is also a workshop running at Loughborough on the 5th of March; if you are interested in peer assessment, I would thoroughly recommend it.

Quote

I don't normally make a habit of posting quotes, although I do like them. This one, however, seemed too good to pass up.

"Live as if you were to die tomorrow. Learn as if you were to live forever.
" - Mahatma Gandhi.

Amen.

Friday 8 February 2008

OpenLearn: back to basics?

Both Donald Clark and Seb Schmoller have posted rather critical reviews of the content published by the Open University on their OpenLearn learning space.

One of the challenges for the OU, I think, is the scale and methodology on which they (have to) work. Issues like scalability, reliability, and accessibility will have been very high on the list of priorities, and whether we like it or not, all of these usually make it a lot harder to be creative and innovative. Nevertheless, in spending over 5 million on repurposing this set of mostly rather old and dull resources, it does seem that the OU has let this 'overhead' get way out of hand.

It is also a matter of expectations, perhaps. I know that when we attend conferences and presentations there is a lot of interesting and exciting stuff floating around, but if you poke a bit deeper into most of these presentations, you will find that the majority actually links to very small pockets of practice, pilots, or plans. Very little truly innovative practice actually develops into a mainstream, embedded practice. The uncomfortable truth of projects like OpenLearn is that they suddenly expose a lot more than the tiny tip of the iceberg that usually makes its way to dissemination.

And so perhaps this is really a good thing. It is an honest look into the state of higher education, and it gives some very clear, and perhaps uncomfortable, truths about the state and quality of the majority of our learning materials and activities. We should perhaps log out of Second Life, close our Facebook for a minute, and start cleaning up some of the more mundane mess in the backyard.