Certainty-based marking (sometimes erroneously referred to as competence-based marking) is an advanced scoring strategy that requires learners to indicate how certain they are of their response when submitting it. Higher certainty carries a potentially higher reward, but also a much heavier penalty when the response is incorrect. As such, certainty-based marking can mitigate guessing on constrained response items, but it is also very useful as a stimulus for reflection. More information can be found in articles like "Certainty-Based Marking (CBM) for Reflective Learning and Proper Knowledge Assessment".
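As a rough sketch of how such a scheme could be scored (purely illustrative: the mark values of 1/2/3 for correct answers at low/medium/high certainty and 0/-2/-6 for incorrect ones are an assumption for the sake of the example, and real schemes may weight rewards and penalties differently):

```python
# Illustrative certainty-based marking (CBM) scheme.
# The mark values below are assumptions for the sake of the example.
CBM_MARKS = {
    # certainty level: (mark if correct, mark if incorrect)
    "low": (1, 0),
    "medium": (2, -2),
    "high": (3, -6),
}

def cbm_score(correct: bool, certainty: str) -> int:
    """Return the mark for a single response given its stated certainty."""
    reward, penalty = CBM_MARKS[certainty]
    return reward if correct else penalty

# A confident wrong answer is penalised far more heavily than a hesitant one,
# which is what discourages guessing at high certainty.
print(cbm_score(correct=True, certainty="high"))   # 3
print(cbm_score(correct=False, certainty="high"))  # -6
print(cbm_score(correct=False, certainty="low"))   # 0
```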
There are other interesting options to explore, however, and I was reminded of one when I read
Conscious Competence - a reflection on professional learning, which talks about the conscious competence model. In my opinion, the two fit together very nicely, as depicted in the diagram below. Candidates providing the wrong answer but indicating a high degree of certainty about it can be considered 'unknown incompetent', as they seem unaware of their misconceptions. Candidates providing the wrong answer with a very low degree of certainty have progressed to 'known incompetence', as they have at least correctly identified their lack of understanding. When providing the correct answer with a low degree of certainty, learners can be assigned to the unknown competence stage, until finally they progress to known competence if they provide a high-certainty correct response.
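That mapping can also be summarised in a small sketch, using the stage labels from the paragraph above (with certainty simplified to a binary low/high split, which is an assumption; CBM schemes typically offer more levels):

```python
def competence_stage(correct: bool, high_certainty: bool) -> str:
    """Map a (correctness, certainty) pair onto a conscious competence stage."""
    if not correct and high_certainty:
        return "unknown incompetence"  # unaware of the misconception
    if not correct:
        return "known incompetence"    # the gap in understanding is recognised
    if not high_certainty:
        return "unknown competence"    # right answer, but not yet trusted
    return "known competence"          # right answer, held with confidence

print(competence_stage(correct=False, high_certainty=True))  # unknown incompetence
print(competence_stage(correct=True, high_certainty=True))   # known competence
```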
Although I am still looking for an opportunity to actually try this in practice, I think it has a lot of potential in supporting an integrated formative and summative assessment strategy.
Wednesday, 13 August 2008
Friday, 8 August 2008
The value proposition of HE
I have previously expressed some ideas about the value of higher education and how, at least for the less research-intensive institutions, it is moving away from content and knowledge, and towards guidance and accreditation. However, a few separate experiences this week have led me to start thinking slightly differently about the future and value of higher education.
It all started with my enrolment on the connectivism course that is being prepared by Stephen Downes and George Siemens. I think it was Stephen who made the case for assessment to be individual. After all, learners come to a course or activity with individual goals and ambitions, so it doesn't really make sense that they would all be assessed in the same way. While this doesn't invalidate the importance of assessment and accreditation, it does perhaps call into question the validity of having predefined outcomes and criteria for these.
Over coffee this morning I had a discussion with a colleague, who was explaining to me the importance of the community of practice, and how we needed to find a way to make learners part of a community of practice before and after their actual enrolment on a module or course. He made a very strong case for what should be a major benefit of doing a course with the University: joining a community of peers and experts. Very consistent with Stephen's ideas, I thought.
Then this afternoon, while I was wrestling with the backlog in my GReader, I stumbled on a piece on the value of social networks by Engeström (via Grainne's blog) which again confirmed this notion. Basically, Engeström explains that a relation, and thus a network, only has value as a result of the object that the relation is built on. In Flickr these are pictures, in Delicious they are bookmarks. Similarly, in education these could be courses or subjects, just like my colleague was proposing with the communities of practice.
And so maybe the value of HE is not primarily around accreditation. Perhaps the most important value we can offer is the organisation and support of learning networks around subjects of interest. In that case, we have a lot of work to do...
Friday, 1 August 2008
The big assessment question
Assessment has been in the news an awful lot lately, albeit not very positively. There is of course the whole SATs palaver, but I will resist the temptation to comment on that. My position on this is outlined in previous posts on this blog, and I can only say that it is good to see that a lot of the momentum around this finally seems to be heading in the right direction. It's a shame we often need some sort of disaster to finally be open to change. A more surprising current issue is that of the Dyslexic student's exams battle, which deals with a medical student's problems with multiple choice tests, something further clarified by the BBC in a follow-up article: Why can't people with dyslexia do multiple choice?
The comment by the student's solicitor that "Every professional body or employer who relies for a professional qualification, or as a promotional gateway, on multiple choice questions is heading for a fall." is of course a bit of a joke. Quite frankly, I am rather appalled by what seems like a rather misguided attempt to 'make a splash' at the expense of something as crucial as our exams system. While there are many gripes you could reasonably hold against multiple choice questions, I don't think the link to dyslexia is really that valid. Considerations around presentation, or even the use of screen readers, can reasonably address most potential issues that might result from a disability. In addition, I think we should not shy away from critical reflection on the degree of special provision we put in place to accommodate students, as such provisions could significantly alter the nature of an assessment and thereby compromise the validity and equitability of the award. There will always be differences between learners in how well they perform in various types of assessment, which is one of the reasons to make sure a variety of assessment methods is used.
The more interesting question, though, is around authenticity. The student in question is quoted in the article as saying: "In normal day life, you don't get given multiple choice questions to sit. Your patients aren't going to ask you 'here's an option and four answers. Which one is right?'" And to an extent I think she has a point there. While there will always be situations in which we have to rely on proxies to infer attainment, I do agree that we currently rely far too much on proxies that are sometimes quite remote from the competencies we try to measure. In this sense the education system is stuck in its traditions, instead of applying the objective and critical reflection that we say we value so much in higher education.
A similar point, and some suggestions for moving forward, are made in the blog post 21st Century Assessment, where a 'formula' is proposed for a modern, fit-for-purpose assessment system. The elements of collaboration and peer assessment in particular are extremely important and very much underutilised in our current practice. Partly, I suspect, this links in with how uncomfortable we still are with the loss of our position as the holder and transferrer of all knowledge. That role warranted a one-to-many broadcast model of education. Education today, however, is moving much more towards a many-to-many model, whereby the role of the teacher is much more one of guidance, coaching and accreditation of a learning process that involves peers, external resources and actors, and experiences from previous professional roles. I'm not quite sure we are really ready to fulfil that role yet, though.
Labels: Assessment, Authentic, Disability, exams, MCQ, Peer assessment