
Monday, 8 December 2008

Are left handed people stupid?

No, of course not, would be my first response. However, researchers from Bristol seem to disagree, as can be read in an article on the BBC website, "Left-handers' lower test scores". In the article the researchers conclude that the lower scores obtained by left-handers and mixed-handers mean they are more prone to cognitive developmental problems. They even advise that a test of 'handedness' be administered to guide early intervention strategies.

Now I haven't had a chance to examine this research, but on the face of it this seems a bit odd. As someone with a background in computer based assessment, I am acutely aware of validity issues. When computers are used to assess, the question 'is this medium disadvantaging students?' is asked very regularly (perhaps even somewhat too often). It strikes me that with our pen and paper based assessments, this question is not asked often enough.

Might it be that our traditional assessment system, which places a very high emphasis on writing skills, is disadvantaging students who are not naturally equipped to deal well with our particular written tradition?

But even if my doubts are unfounded, is pre-emptive testing really the answer to this issue? Are we going to translate this statistical trend into something that stigmatises individuals who may not have any related difficulties at all? I think that really is taking things a bit too far.


Wednesday, 12 November 2008

Do essays promote surface learning?

I was reading an article this morning which referred to the book 'Academic Discourse'. The book investigates the importance of language in learning. I think everyone will recognise that language, and in particular the jargon and linked body of concepts in a discipline, is a key part of learning. To engage effectively with a subject, it is important to be familiar with its important constructs and the way they are expressed and referred to. And so it is only logical that an important part of our teaching, and assessment, focuses on those key constructs.

In Higher Education, essays are often the medium of choice for evaluating learning. The wisdom handed down through the ages dictates that essays are suitable for assessing higher order skills and understanding. But is that really the case? Of course the freedom to construct your own answer, or perhaps even to choose which questions to answer, gives students maximum freedom in expressing their understanding. But that freedom is also very easy to abuse.

We must realise that students aren't always looking to express what they have learned. They might be looking to meet the expectations that will lead to the desired result, usually a grade. And in pursuing that quest, students often find that writing a good essay is a problem that can be solved with some linguistic skill, and doesn't necessarily require the attainment of any new understanding. In this light, perhaps we should question the value of very open and unfocused assignments. In a very different way than, for instance, multiple choice exams, they too can promote surface learning strategies when not designed with due care.

Furthermore, this calls into question the value of computer marked essays. Most of these systems are designed largely around linguistic criteria, and so only exaggerate this problem. This is especially true if we consider the consequences of students understanding how their essays will be marked by such a system.

Sunday, 5 October 2008

Open accreditation

About two weeks ago a very interesting discussion on open accreditation started, I think on D'Arcy Norman's blog. Some of the responses, such as David Wiley's, are very edupunk. Do we even need degrees? I'm not sure that's a viable position. I think George Siemens hit the nail on the head when he said that "providing a statement of competence is only of value when the provider of the statement is also trusted". Traditionally it has been institutions like our universities that have instilled that trust. It was against this background that I argued that accreditation is a key part of the value proposition for HE. But to be honest, I'm not so sure about that anymore.

In a draft of a call for action I read recently, Microsoft, Cisco and Intel are calling for serious reforms to our assessment system, as they feel it no longer assesses the skills that they value (creativity, collaboration and communication, to name a few). That is a very serious indictment, but I think not an unjust one. Many of these skills are, or should be, implicitly part of what we think of as "a degree". But if they are not assessed, how do we ensure they are taught, and more importantly, learned? This becomes even more important as we increasingly atomise the curriculum. If we want to let students pick and mix, we should at least be able to ensure that the sum of their choices still adds up to what we consider to be the whole of their degree.

I think a transparent and reliable way to assess these 21st century skills would go a long way towards solving some of our problems in lifelong learning. It would make the accreditation of prior learning easier, as in my opinion it is this 'hidden curriculum' that often concerns people when they consider accrediting prior learning. And with prior learning, we instantly have a vehicle to enable a flexible curriculum that spans multiple universities, or the incorporation of non-institutional learning into a qualification. But more crucially, if we can measure these things transparently, perhaps trust becomes less important. If degrees are no longer black boxes with a reputation, but an open book that we can all evaluate ourselves... Portfolio, anyone?

Friday, 3 October 2008

Evidence based teaching

One of the topics that came up several times over the past days in Reykjavik is that of the differences in culture around assessment. Different countries perceive and deal with assessment in different ways, and this can have a significant impact on the effect of the assessments, and on the success of the educational system as a whole.

One particularly interesting approach was outlined by Jakob Wandall, whose work on the Danish national tests I blogged about last year in High stakes national assessments and ranking. I tried to capture Jakob's slide in a picture, but unfortunately that failed rather miserably, so I have tried to recreate his message in the graphic below:

[Graphic: countries plotted by the focus of their assessments (horizontal axis) against the primary use of the results (vertical axis)]
The graph outlines how both the focus of the assessment (on the horizontal axis) and the purpose for which the results are primarily used (on the vertical axis) vary from country to country. I thought the visualisation was very interesting. Comparing this to, for instance, the outcomes of PISA 2006, it is interesting to note that neither the approach of the Scandinavian schools (which focus primarily on learner focused formative assessment) nor the Anglo-Saxon approach (which is much heavier on the measurement of performance indicators, tied in to funding) really yields the best results.

The stars of PISA are of course the Finns, and their unique approach is apparent from this graph. Instead of sitting somewhere between the top left and the bottom right of the graph, they sit towards the top right. The Finnish system highly values national measurements, evaluating the success of the system through objective measurement. However, these measurements are not tied to any control, either through formal channels or through more informal ones such as public rankings. Instead, the measurements made in the Finnish system serve to inform teaching and learning. An evidence based approach to teaching, shall we say.
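For what it's worth, here is a rough Python (matplotlib) sketch of how that quadrant view could be redrawn. The coordinates are purely illustrative, reconstructed from my description above rather than from Jakob's actual slide:

    import matplotlib.pyplot as plt

    # Illustrative coordinates only, not Jakob Wandall's actual data.
    # x-axis: focus of assessment (individual learner -> whole system)
    # y-axis: use of results (control -> informing teaching and learning)
    positions = {
        'Scandinavia (excl. Finland)': (0.2, 0.8),
        'Anglo-Saxon countries': (0.8, 0.2),
        'Finland': (0.8, 0.8),
    }

    fig, ax = plt.subplots()
    for country, (x, y) in positions.items():
        ax.scatter(x, y)
        ax.annotate(country, (x, y), textcoords='offset points', xytext=(6, 6))
    ax.axhline(0.5, linestyle='--', color='grey')  # quadrant dividers
    ax.axvline(0.5, linestyle='--', color='grey')
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_xlabel('Focus of assessment (learner to system)')
    ax.set_ylabel('Use of results (control to informing learning)')
    plt.show()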

When I translate this to our own practice, I can't help but relate it to demands to increase the amount of formative assessment in our teaching. And while I am sympathetic to these demands, such assessments sit in the top left of the graph above, informing and supporting individual learning processes. So perhaps, instead of focusing primarily on formative individual assessment, we should (also) focus on assessment and evaluation that informs teaching: building an infrastructure through which lecturers can stay in touch with the progress, successes and difficulties of all their students, and continuously modify their teaching based on this understanding.

Sunday, 28 September 2008

Hello Reykjavik

I've just arrived in Reykjavik for a conference on PISA 2006 and the transition to e-assessment. It's my first time in Iceland, and I must say it's all a bit surreal. I'm currently reading Green Mars by Kim Stanley Robinson, which contains many descriptions of a newly terraformed Mars: cold, lots of rocks and lots of lichen. Trust me, walking around in Iceland comes scarily close to how I had been imagining the novel up till now.

I'm hoping to find some time over the next few days to post my thoughts on the conference. There is a very impressive lineup of international speakers, and I am looking forward to exchanging ideas and opinions with them. Pictures will have to wait, I'm afraid, as I forgot to pack the cable that connects my camera to my laptop... grrr...

Wednesday, 13 August 2008

Conscious competence and certainty based marking

Certainty based marking (sometimes erroneously referred to as competence based marking) is an advanced scoring strategy that requires learners to indicate how certain they are of a response when submitting it. Higher certainty carries a higher possible reward, but also a much higher penalty when the response is incorrect. As such, certainty based marking can mitigate guessing on constrained response items, but it is also very useful as a stimulus for reflection. More information can be found in articles like "Certainty-Based Marking (CBM) for Reflective Learning and Proper Knowledge Assessment".
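As a minimal illustration, here is one well-known scheme in Python: the three-level scheme from Gardner-Medwin's CBM work, where correct answers score 1, 2 or 3 marks depending on the certainty expressed, and wrong answers score 0, -2 or -6. Other mark schemes are of course possible; treat this as a sketch rather than a specification.

    def cbm_score(correct, certainty):
        """Certainty-based mark for a single response.

        Implements the three-level scheme from Gardner-Medwin's CBM work:
        correct answers earn 1/2/3 marks for certainty levels 1/2/3, while
        wrong answers cost 0/-2/-6, so overstating your certainty is
        penalised more heavily than it rewards.
        """
        if certainty not in (1, 2, 3):
            raise ValueError('certainty must be 1, 2 or 3')
        reward = {1: 1, 2: 2, 3: 3}
        penalty = {1: 0, 2: -2, 3: -6}
        return reward[certainty] if correct else penalty[certainty]

Under this scheme, guessing at the highest certainty level has a strongly negative expected value, which is exactly the property that discourages blind guessing on constrained response items.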

There are other interesting options to explore, however, and I was reminded of one when I read Conscious Competence - a reflection on professional learning, which discusses the conscious competence model. In my opinion, the two fit together very nicely, as sketched below. Candidates providing the wrong answer, but indicating a high degree of certainty, can be considered 'unknown incompetent', as they seem unaware of their misconceptions. Candidates providing the wrong answer with a very low degree of certainty have progressed to 'known incompetence', as they have at least correctly identified their lack of understanding. When providing the correct answer with a low degree of certainty, learners can be assigned to the unknown competence stage, until finally they progress to known competence when they provide a high certainty correct response.
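Concretely, assuming the three-level certainty scale from the previous post, the mapping could look like this; treating certainty 3 as 'high' is an arbitrary placeholder:

    def competence_stage(correct, certainty, high=3):
        """Map a CBM response onto the conscious competence model.

        The mapping follows the description above; 'high' is an arbitrary
        cut-off for what counts as a confident response on the 1-3 scale.
        """
        confident = certainty >= high
        if correct:
            return 'known competence' if confident else 'unknown competence'
        return 'unknown incompetence' if confident else 'known incompetence'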

Although I am still looking for an opportunity to actually try this in practice, I think it has a lot of potential in supporting an integrated formative and summative assessment strategy.

Friday, 1 August 2008

The big assessment question

Assessment has been in the news an awful lot lately, albeit not very positively. There is of course the whole SATs palaver, but I will resist the temptation to comment on that. My position is outlined in previous posts on this blog, and I can only say that it is good to see that a lot of the momentum around this finally seems to be heading in the right direction. It's a shame we often need some sort of disaster before we are open to change. A more surprising current issue is that of the Dyslexic student's exams battle, which deals with a medical student's problems with multiple choice tests, something further clarified by the BBC in a follow-up article: Why can't people with dyslexia do multiple choice?

The comment by the student's solicitor that "Every professional body or employer who relies for a professional qualification, or as a promotional gateway, on multiple choice questions is heading for a fall." is of course a bit of a joke. Quite frankly, I am rather appalled by what seems like a misguided attempt to 'make a splash' at the expense of something as crucial as our exams system. While there are many gripes that you could reasonably hold against multiple choice questions, I don't think the link to dyslexia is really that valid. Considerations around presentation, or even the use of screen readers, can reasonably address most potential issues that might result from a disability. In addition, I think we should not shy away from critical reflection on the degree of special provision that we put in place to accommodate students, as these provisions could significantly alter the nature of an assessment and thereby compromise the validity and equitability of the award. There will always be differences between learners in how well they perform in various types of assessment. This is one of the reasons to make sure a variety of assessment methods is used.

The more interesting question, though, is around authenticity. The student in question is quoted in the article as saying: "In normal day life, you don't get given multiple choice questions to sit. Your patients aren't going to ask you 'here's an option and four answers. Which one is right?'". And to an extent I think she has a point. While there will always be situations in which we have to rely on proxies to infer attainment, I do agree that we currently rely far too much on proxies that are sometimes quite remote from the competencies we try to measure. In this sense our education system is stuck in its traditions, instead of applying the objective and critical reflection that we say we value so much in higher education.

A similar point, and some suggestions for moving forward, are made in the blog post 21st Century Assessment, where a 'formula' is proposed for a modern, fit-for-purpose assessment system. The elements of collaboration and peer assessment especially are extremely important and very much underutilised in our current practice. I suspect this partly links in with how uncomfortable we still are with the loss of our position as the holder and transmitter of all knowledge. That role warranted a one-to-many broadcast model of education. Education today, however, is moving much more towards a many-to-many model, in which the role of the teacher is much more one of guidance, coaching and accreditation of a learning process that involves peers, external resources and actors, and experiences from previous professional roles. I'm not quite sure we are really ready to fulfil that role yet, though.

Tuesday, 10 June 2008

Towards a research agenda on computer-based assessment

At the EU workshop I attended in Ispra, Italy last year (see the blog posts Psychometrics versus pedagogy and High stakes national assessments and ranking) we agreed to write some articles on quality aspects of computer based assessment to go towards a report for the European Commission. I'm glad to say that the report has now been published, and it can be accessed online via the following link: Towards a research agenda on computer-based assessment.

There are many interesting articles and views within the report, and I will certainly be revisiting the interesting perspectives that my colleagues presented at the workshop. Do have a look, I am positive there will be something of interest for virtually anyone.

Sunday, 1 June 2008

Review: Classmarker

As we're in the middle of a review of the tools we use in support of assessment, I thought I'd share my analysis of the various tools that we come across. As today is a Sunday, we'll start off with a simple one:

Classmarker

Classmarker is an online quizmaker that offers free quizzes (supported by advertising) with upgrades (including removal of the advertising) for an additional fee.
Type: online service
Cost: Free with paid upgrades
Features: Multiple choice quiz, free text quiz or punctuation quiz.
Interoperability: None
System requirements: Any browser

The first thing I noticed when registering is that the UK doesn't exist, although the four home nations do. A more serious point to note, as with many online services, is that all content (which includes all personal information, questions and results) becomes the property of Classmarker.

The features of this service are extremely limited. While Classmarker supports three question types, it only allows you to use one of them per test. Options such as randomisation, feedback and branding are all features you will have to pay for. There seems to be no way to import or export your questions.

The site seems to be built mainly around Google AdSense. The advertising, and a Google search box, are present on every possible page, including the ones your learners visit. Upgrading to get rid of the advertisements costs $24.95 (or $49.95 for a business account, whatever that means). But even then your users will still have to register with the service before being able to take a test. Allowing unregistered learners to take a test will cost you $0.10 or more per learner. Not really value for money, given the incredibly limited features on offer.

Conclusion: I really can't see anything of value here. If you need something that is hosted for you, most survey services offer more functionality. If you have your own space to host your assessments, even the simpler tools available will offer more than Classmarker.


Apologies for having to start off with such a negative review. I just stumbled across this tool today, and I thought I might as well write it up now. Do let me know if you have any comments, or perhaps suggestions for other tools I could review.

Thursday, 29 May 2008

New podcast on assessment

I've been toying with the idea of doing something useful online, instead of just venting my unsolicited rants here. I've come up with the idea of starting a podcast around assessment practice, as I think there aren't nearly enough easily available resources on the topic. The podcast, and the first test episode, can be found here. Please feel free to have a look and give me some feedback or tips; I could really do with some good advice and practical pointers.

Saturday, 17 May 2008

SAT troubles

There's been a lot of upheaval this week about SAT tests. Following a report published by the Children, Schools and Families Committee of the House of Commons, MPs warned that national Sats tests distort education, which then led to the schools minister defending the Sats, followed by technical difficulties with the tests. Personally, I am not convinced the tests themselves are really the problem.

One of the keynotes at the Blackboard Europe 2008 conference was given by Andreas Schleicher, the director of the PISA programme at the OECD. He presented a very compelling set of ideas around successful (secondary) education. Some of the conditions he identified (all of them based on the data gathered by the programme over the past years) are:

  • No stratification. Education systems that have separate streams, schools and/or qualifications for learners based on their performance tend to do poorly. An example of this is the Dutch system, where secondary education is stratified over VMBO, HAVO and VWO based on a learner's performance in primary school. The British system actually comes out quite well here (if we ignore the stratification that takes place because of the divide between private and state schools, that is).
  • Standards. It is important to work to common standards. Central examinations are one way of enforcing common standards, and so the SAT tests do satisfy this condition.
  • Autonomy. It is crucial for schools and teachers to have a high degree of autonomy as long as their performance raises no concerns. Here we obviously fail completely, as the British system dictates to a very high degree how schools teach and assess.
  • High expectations, challenge and support. For both teachers and learners, education should provide challenge and the expectation of high performance, but also plenty of support (staff development, for instance). I think this is another area in which we fail to deliver.

Our main problem lies in the area of autonomy. We no longer trust our teachers and schools to do what they do best based on their professional judgement. Instead there is this weird notion that education is better served by central, generic judgements made by policymakers. The problem with SATs isn't that they provide a common high stakes benchmark for learners. The problem is that this information is abused for public league tables and the like, which inevitably leads to pressures on learners that have nothing to do with their personal learning. It's the same pressure that leads to universities coercing students into filling out the National Student Survey more favourably.

In Finland, schools have no idea how their performance compares to their neighbours'. Funnily enough, in Finland it doesn't really make a difference: only 4% of the variance in scores on the PISA tests can be attributed to differences in quality between schools. Finnish schools have around nine applicants for every position offered, and this is not because of higher salaries or anything like that. It is because the system in Finland provides a challenging environment in which people are valued, can grow and develop, and actually make a difference.

Thursday, 15 May 2008

Blackboard world Europe 2008 (2): Assignment submission

Right then, some more from the past Blackboard conference, as promised...

I attended two very interesting talks about a building block developed for Sheffield Hallam University called 'The Assignment Handler'. It is basically an extension of the gradebook functionality that already exists within Blackboard.

Sheffield Hallam have decided on a policy that all grades should be fed back to students in a central place, together with feedback. The central place they chose was the Blackboard gradebook. To achieve that, they implemented the following features:
  • Transparent and consistent handling of online exams, online submitted exams and exams submitted through the assignment handling office. All of these can be set in Blackboard, submission is logged in Blackboard, and results and feedback are published through Blackboard. This creates a central place where student progress can be comprehensively managed (by staff and students).
  • Some bulk-upload and download functionality. Assignments are renamed using module codes and student numbers, and feedback and marks can be uploaded in a single archive, which is useful with larger cohorts (see the sketch after this list).
  • The option to withhold a mark until the student has reflected on, and responded to, the feedback provided. The University is now researching to what extent this actually motivates students to engage genuinely with their feedback.
  • Generation of confirmation e-mails as receipts of submission
  • Support for group assignments
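I don't know how Sheffield Hallam actually implemented this, but the renaming idea from the second bullet could be sketched roughly as follows. The student_number_for callback is hypothetical, as the real lookup is platform-specific:

    import os
    import shutil

    def bulk_rename(src_dir, dest_dir, module_code, student_number_for):
        """Copy submissions, renamed to <module>_<student number><ext>.

        A hypothetical sketch of the bulk handling described above;
        'student_number_for' maps an original filename to a student
        number, a lookup that depends entirely on the platform in use.
        """
        os.makedirs(dest_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            ext = os.path.splitext(name)[1]
            new_name = f'{module_code}_{student_number_for(name)}{ext}'
            shutil.copy2(os.path.join(src_dir, name),
                         os.path.join(dest_dir, new_name))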
As we have just started to look into a structural solution for online submission ourselves, this presentation was brilliantly timed. There was a lot of mumbling in the audience about the non-responsiveness of Blackboard on this issue, as many institutions have requested functionality like this before. And in all fairness, most of it is pretty generic and sensible and should probably have been part of the core product for years. Instead it is now a building block that Blackboard will most likely charge us a nice extra fee for.

Wednesday, 14 May 2008

Blackboard Europe conference 2008

As we use Blackboard at the University of Derby, I attended the European Blackboard conference in Manchester this week. The conference was off to a bit of a poor start: no wireless available for conference-goers, just the crappy connection for which the hotel charged £15 a day. I decided that was a bit ridiculous, hence the late submission of this post. The keynote and my first workshops on Tuesday were really poor, and I started to lose heart. Luckily, some little gems did manage to arise from the rubble of disappointment.

Blackboard NG (next generation)
I was very pleased to see assessment high on the agenda for the next generation(s) of Blackboard. Tools supporting peer and self assessment, a new and expanded Grade Centre (replacing the somewhat limited Gradebook) and the integration and expansion of the existing WebCT and Blackboard quiz tools will certainly add a bit of meat to the meagre bones of the platform's support for assessment. What actually surprised me (and I would still like to see this before I truly believe it) is the announcement that Blackboard will be working towards interoperability with other CMSes such as Moodle and Sakai. We saw a demonstration of a learner portal page that transparently listed courses and notifications from courses on various platforms, which was very promising. This would allow an institution to grant faculty much wider freedom in their choice of platform without losing the integration that only a single platform can currently offer. Watch this space.

More tomorrow, it's time to spend some time with my family now...

Monday, 28 April 2008

Problem based Learning in Second Life

I attended a presentation by Daden, who are doing a lot of very impressive and interesting things in Second Life (and other virtual worlds). I thoroughly recommend having a look at their space in Second Life, where they have some great mash-ups with Google Earth. I would post some links here, but the ones I could find on their site aren't working, which is a bit rubbish.

Aside from the things appealing to my inner geek, there were also some very interesting applications in learning. One project I found particularly interesting was the JISC funded 'Problem based learning in Second Life'. We were shown a simulation of a road traffic accident used for assessment. The detail was quite incredible (including the ability to listen to the patient's breathing, which adjusted over time based on the actions of the attending paramedic). The medical sciences are, as usual, front runners in the use of new technologies, but I could see many applications in other domains.

The question that does still bug me is whether we should be doing this in open worlds, like Second Life, or if we should be using more private spaces. Perhaps a happy medium will be found in the Second Life Grid, which seems to be looking to offer the best of both worlds... so to speak.

Friday, 25 April 2008

Assessment standards: a manifesto for change

A group of 34 prominent academics has taken a laudable stance against our current assessment culture (see also this THE article). You can find the manifesto and its supporters at the bottom of this post. Point 3 especially is, I think, very poignant within the context of e-Assessment, where our obsession with the measurable (I'm thinking of Item Response Theory here) has gotten way out of hand at the expense of validity.

The Weston Manor Group


Assessment standards: a manifesto for change


  1. The debate on standards needs to focus on how high standards of learning can be achieved through assessment. This requires a greater emphasis on assessment for learning rather than assessment of learning.

  2. When it comes to the assessment of learning, we need to move beyond systems focused on marks and grades towards the valid assessment of the achievement of intended programme outcomes.

  3. Limits to the extent that standards can be articulated explicitly must be recognised since ever more detailed specificity and striving for reliability, all too frequently, diminish the learning experience and threaten its validity. There are important benefits of higher education which are not amenable either to the precise specification of standards or to objective assessment.

  4. Assessment standards are socially constructed so there must be a greater emphasis on assessment and feedback processes that actively engage both staff and students in dialogue about standards. It is when learners share an understanding of academic and professional standards in an atmosphere of mutual trust that learning works best.

  5. Active engagement with assessment standards needs to be an integral and seamless part of course design and the learning process in order to allow students to develop their own, internalised, conceptions of standards, and monitor and supervise their own learning.

  6. Assessment is largely dependent upon professional judgement, and confidence in such judgement requires the establishment of appropriate forums for the development and sharing of standards within and between disciplinary and professional communities.



Supporters:


Professor Trudy Banta

Dr Simon Barrie

Professor Sally Brown

Ms Cordelia Bryan

Dr Colin Bryson

Ms Jude Carroll

Professor Sue Clegg

Professor Linda Drew

Professor Graham Gibbs

Professor Anton Havnes

Dr Mary Lea

Dr Janet Macdonald

Professor Ranald Macdonald

Dr Debra Macfarlane

Dr Susan Martin

Professor Marcia Mentkowski

Dr Stephen Merry

Professor David Nicol

Professor Andy Northedge

Professor Lin Norton

Ms Berry O’Donovan

Dr Thomas Olsson

Dr Susan Orr

Dr Paul Orsmond

Professor Margaret Price

Professor Phil Race

Mr Clive Robertson

Dr Mark Russell

Dr Chris Rust

Professor Gilly Salmon

Professor Kay Sambell

Professor Brenda Smith

Professor Stephen Swithenby

Professor Mantz Yorke

Sunday, 20 April 2008

Crowdsourcing assessment preparation

An article in the Wired Campus made me aware of a new service for test preparation called Socrato. It seems to be a sort of massive online study group where people can submit, view and practise all sorts of tests (although at the moment mainly MCAS). The downside could be that this is a beta for which the final business model has not yet been chosen, so I'd be careful about the stuff you submit.

Friday, 18 April 2008

Resources to support the assessment of learning

The latest entry in JISC Inform 21 links to "Resources to support the assessment of learning". I must say that the collection is far from comprehensive, and very JISC/CETIS focused. Still, it's worth a look.

Wednesday, 16 April 2008

Efficiency or effectiveness

The BBC reports that our government will be reviewing the efficiency of our exam system. I'm developing a rather serious aversion to efficiency, as it usually translates rather neatly into degradation.

It would be nice if the government would review the effectiveness of our exam system. Effectiveness is about reaching intended outcomes, not just about saving pennies. As the general secretary of the Association of School and College Leaders, John Dunford, said in the article: "It is vitally important that the government not only conducts a cost-benefit analysis of the current exam system but evaluates its effect on teaching and learning." Perhaps (god forbid) we could also review the effects of all the links to targets, KPIs and league tables on the quality of learning, as they certainly compromise the validity of the whole system. I will again point to the efforts of colleagues in Denmark, who seem to have understood this a whole lot better.

Tuesday, 25 March 2008

Heisenberg in education

In physics, the Heisenberg uncertainty principle is a well known limitation on measurement. The principle expresses the fundamental conflict between establishing a particle's momentum and its position: the more precisely we pin down one, the more uncertain our understanding of the other. This is not a shortcoming of our instruments or anything like that; it is a fundamental property of the universe. Perhaps it is time we realised that in education, our ability to measure things like student attainment is even more limited. It is not a limitation we can overcome by measuring more. In fact, that just makes the situation worse, as our measurements then start to influence what we are trying to observe, and usually not for the better. This is called the observer effect, and it is a crucial element to take into consideration when delivering high stakes assessments.

With the increasing pressure on education to measure and report, calls to take the observer effect into account (although it is usually not referred to as such) are becoming louder. The National Union of Teachers conference has spoken out against the practice, and I have raised the issue on this blog before in the post titled High stakes national assessments and ranking. A very thoughtful analysis of the problem is given by Wesley Fryer in his post Raising expectations. Wesley argues for a return to teachers designing and delivering high stakes tests, instead of these being set by governments and awarding bodies. While a lot can be said in favour of this idea, I do think it is important to realise that it is only possible if we combine it with a very serious upgrade of the staff development given to our teaching staff on the subject of assessment. Nevertheless, Wesley's post is definitely worth a read. Especially this little gem:
"... bestowed upon the plebeian masses by the academic elites filling the hallowed halls of commercial companies now profiting handsomely from our myopic focus on summative, simplistic, high-stakes assessments". That must be the best and most colourful descriptions of our asessment culture that I have ever read.

Monday, 24 March 2008

Marking free text

One of the frequent criticisms of e-Assessment concerns the perceived limits on the item types that can be supported by technology. While there are long debates to be had about assessing higher order skills with constrained response item types, I don't think those debates are going to take away the prime concern: free text items.

I must say that I have serious doubts about marking free text by computer. I don't know enough about the principles involved to say this with any sort of authority, but I am aware of the kind of heuristics used in automated essay marking, for instance. These heuristics are often grammatical and rhetorical in nature, and have fairly little to do with the subject matter (although it must be said that many human markers have been shown to use similar heuristics). Nevertheless, interesting progress is being made in this area, and I am sure that eventually language processing will be commonplace.
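To make that concern concrete, here is a caricature in Python of the kind of purely surface-level features such heuristics lean on. This illustrates the problem rather than any real product's algorithm:

    import re

    def surface_features(essay):
        """Crude surface statistics of the kind essay heuristics use.

        Illustrative only; note that nothing here depends on what the
        essay is actually about, which is exactly the worry raised above.
        """
        words = re.findall(r"[A-Za-z']+", essay)
        sentences = [s for s in re.split(r'[.!?]+', essay) if s.strip()]
        return {
            'word_count': len(words),
            'avg_sentence_length': len(words) / max(len(sentences), 1),
            'type_token_ratio': len({w.lower() for w in words}) / max(len(words), 1),
        }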

One of the interesting projects that I recently became aware of is the OpenComment project, led by Denise Whitelock at the Open University. The project is looking to use latent semantic analysis to analyse learners' responses to open ended questions in history and philosophy. Another interesting fact is that the project is developing this as a question type in Moodle, so it should be relatively easy for everyone to reap the benefits of this technology within their own learning environments.
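I haven't seen OpenComment's internals, so the following is only a generic sketch of the latent semantic analysis idea, using scikit-learn: compress TF-IDF vectors into a low-dimensional 'semantic' space and score a learner's response by its cosine similarity to a model answer in that space.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    def lsa_similarity(corpus, model_answer, response, k=100):
        """Similarity of a response to a model answer in LSA space.

        'corpus' is a list of domain texts used to learn the semantic
        space. A generic LSA sketch, not OpenComment's actual method.
        """
        vectoriser = TfidfVectorizer(stop_words='english')
        tfidf = vectoriser.fit_transform(corpus + [model_answer, response])
        n_components = min(k, tfidf.shape[0] - 1, tfidf.shape[1] - 1)
        reduced = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
        return cosine_similarity(reduced[-2:-1], reduced[-1:])[0, 0]

Because the comparison is against a model answer in a space learned from domain texts, content-free linguistic polish should not, by itself, score well, which is precisely what the surface heuristics above cannot guarantee.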

Automated marking is by no means the only value of using technology in assessment. The OpenMentor project, again from the Open University, is a great example. OpenMentor compares the mark assigned to a piece of work with the amounts of positive and negative feedback given, and checks these for consistency. In this way it can help in the coaching of new teachers. Given the importance of feedback, I think it's wonderful to have explicit standards and training in giving it.
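Again as a toy sketch of the idea rather than OpenMentor's actual model: naively expect the share of positive comments to track the mark awarded, and flag feedback that deviates too far. The expectation and tolerance below are invented placeholders.

    def feedback_consistency(mark_pct, n_positive, n_negative, tolerance=0.25):
        """Flag feedback whose tone looks out of line with the mark.

        A toy version of the OpenMentor idea: the linear expectation and
        the tolerance are invented placeholders, not the tool's model.
        """
        total = n_positive + n_negative
        if total == 0:
            return 'no feedback to check'
        praise_share = n_positive / total
        expected = mark_pct / 100.0  # naive: praise share tracks the mark
        if abs(praise_share - expected) > tolerance:
            return 'inconsistent: feedback tone does not match the mark'
        return 'consistent'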

The ABC (Assess By Computer) software had so far escaped my radar; I wasn't aware of it until queried by the Times Higher Education for an article they were doing. The software has a support role similar to OpenMentor's, but this time the support is provided around the marking process: the software can highlight keywords, compare answers to model answers, and more. All of this for the sole purpose of making things easier on the human marker, but also improving consistency between human markers. The latter especially is very welcome, I think, as marking open ended questions and assignments can sometimes be somewhat of a dark art.
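I haven't used ABC, so purely as a guess at the general idea: a minimal keyword highlighter to support, not replace, the human marker might look like this.

    import re

    def highlight_keywords(response, keywords, marker='**'):
        """Wrap model-answer keywords found in a response in markers.

        Illustrative only; a guess at the kind of support described
        above, not ABC's actual behaviour.
        """
        for keyword in keywords:
            pattern = re.compile(rf'\b({re.escape(keyword)})\b', re.IGNORECASE)
            response = pattern.sub(marker + r'\1' + marker, response)
        return response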

I only just discovered that bits of the e-mail I sent to the reporter actually appear in the article. Had I known that, I would probably have paid a bit more attention to my grammar :S