No, of course not, would be my first response. However, researchers from Bristol seem to disagree, as can be read in an article on the BBC website, "Left-handers' lower test scores". In the article the researchers conclude that the lower scores obtained by left-handers and mixed-handers mean they are more prone to cognitive developmental problems. They even advise that a test of 'handedness' be administered to guide early intervention strategies.
Now I haven't had a chance to examine this research, but on the face of it this seems a bit odd. As someone with a background in computer-based assessment, I am acutely aware of validity issues. When computers are used to assess, the question 'is this medium disadvantaging students?' is asked very regularly (perhaps even a little too often). It strikes me that with our pen-and-paper assessments, this question is not asked often enough.
Might it be that our traditional assessment system, which places a very high emphasis on writing skills, is disadvantaging students who are not naturally equipped to deal well with our particular written tradition?
But even if my doubts are unfounded, is pre-emptive testing really the answer to this issue? Are we going to translate this statistical trend into something that stigmatises individuals who may not have any related difficulties at all? I think that really is taking things a bit too far.
Monday, 8 December 2008
Tuesday, 10 June 2008
Towards a research agenda on computer-based assessment
At the EU workshop I attended in Ispra, Italy last year (see the blog posts Psychometrics versus pedagogy and High stakes national assessments and ranking) we agreed to write some articles on quality aspects of computer-based assessment to go towards a report for the European Commission. I'm glad to say that the report has now been published, and it can be accessed online via the following link: Towards a research agenda on computer-based assessment
I think there are many interesting articles and views within the report, and I will certainly be reviewing the perspectives that my colleagues presented at the workshop. Do have a look; I am positive there will be something of interest for virtually anyone.
Labels:
Adaptive testing,
Assessment,
CAA,
CBA,
Conference,
e-Assessment,
Education,
Evaluation,
Feedback,
Psychometrics,
Research,
Resources
Monday, 26 November 2007
The ideal assessment engine
I've been looking into criteria for assessment technologies a lot lately. One reason is that we are looking into migrating our current system to a new platform (as the old one, Authorware, is no longer supported). The other is that I have been invited by the Joint Research Centre to take part in a workshop on quality criteria for computer-based assessment. I will be posting on the outcomes of that workshop next week. For now, though, here are some of my thoughts on the topic.
Flexibility
The main strength of our current system is flexibility. This has several aspects, all important in their own right:
- Flexibility in design: The layout of the question can be modified as desired, using media and such to create an authentic and relevant presentation
- Flexible interactions: There is no point in systems that offer five parameterized question types where all you can do is define a title, question text, alternatives and the right answer. Interactions testing and supporting higher-order skills are, or should be, more complex than that.
- Detailed and partial scoring: A discriminating question does not just tell you whether you were completely right or completely wrong. It can tell you the degree to which you were right, and which elements of your answer had value. It might also penalize you for serious and fundamental mistakes.
- Detailed feedback: A lot of the mistakes learners make are predictable. If we allow assessment systems to capture these mistakes and give targeted feedback, learners can practice their skills while lecturers focus their time on more in-depth problems that require their personal engagement.
- Extensive question generation and randomization options: For re-usability, generating questions using rules and algorithms gives a single question almost infinite re-usability. On the assessment level, the same is true for generating assessments from large banks of questions tagged with subject matter and difficulty.
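To make the last three points a little more concrete, here is a minimal sketch in Python of how they fit together (all names are hypothetical, for illustration only; this is not our engine's actual design): a parameterized question template that yields near-infinite variants, awards partial credit, and returns targeted feedback for a predictable mistake.

```python
import random

def generate_question(seed=None):
    """Generate one instance of a parameterized arithmetic question.

    The same template yields a different variant for each seed,
    illustrating rule-based question generation and randomization.
    """
    rng = random.Random(seed)
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    return {
        "prompt": f"What is {a} x {b} + {b}?",
        "answer": a * b + b,
        # A predictable learner mistake: forgetting to add the final term.
        "common_error": a * b,
    }

def score(question, response):
    """Return partial credit and targeted feedback, not just right/wrong."""
    if response == question["answer"]:
        return 1.0, "Correct."
    if response == question["common_error"]:
        # Partial credit: the multiplication step was right.
        return 0.5, "You multiplied correctly, but forgot to add the final term."
    return 0.0, "Incorrect; check the order of operations."

# Each seed produces a distinct but equivalent question instance:
q = generate_question(seed=42)
credit, feedback = score(q, q["common_error"])
```

The same pattern scales up: an assessment generator would draw such templates from a bank, filtered by subject-matter and difficulty tags.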
Questions without assessments
As Dylan Wiliam put it so eloquently at the ALT-C conference (you can find his podcast on the matter at http://www.dylanwiliam.net/), the main value of learning technology lies in its ability "to allow teachers to make real-time instructional decisions, thus increasing student engagement in learning, and the responsiveness of instruction to student needs." I could not agree more. However, this means that questions should not just exist within the assessment, but instead be embedded within the materials and activities. Questions become widgets that can of course still function within an assessment, but also work on their own without losing the ability to record and respond to interaction. This, as far as I'm aware, is uncharted territory for assessment systems. Territory that we hope to explore in the next iteration of our assessment engine.
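The widget idea could be sketched along these lines (a design sketch with hypothetical class names, not our engine's actual API): the question object records its own interactions, so it behaves identically whether embedded directly in learning materials or aggregated by a formal assessment.

```python
class QuestionWidget:
    """A question that works standalone or inside an assessment,
    recording every interaction either way."""

    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer
        self.log = []  # interaction history lives with the widget itself

    def respond(self, response):
        correct = response == self.answer
        self.log.append({"response": response, "correct": correct})
        return correct

class Assessment:
    """An optional container: it aggregates the widgets' own logs
    into a score, rather than owning the questions."""

    def __init__(self, widgets):
        self.widgets = widgets

    def score(self):
        latest = [w.log[-1] for w in self.widgets if w.log]
        return sum(a["correct"] for a in latest) / len(self.widgets)

# Embedded in learning materials, the widget still records and responds:
w = QuestionWidget("2 + 2 = ?", 4)
w.respond(4)

# Later, the very same widget can be aggregated by an assessment:
exam = Assessment([w])
```

The design choice is that recording lives in the widget, not the assessment, which is what lets the question "work on its own" as described above.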
Labels:
Assessment,
CAA,
CBA,
e-Assessment,
e-learning,
Education,
OpenSource,
Research,
Software,
Technology,
tools
Wednesday, 21 November 2007
e-APEL article in Response
The new online journal Response has published a 'work in progress' report I wrote on the e-APEL project that I'm involved in. I'm afraid it is rather dated, as the journal took more than eight months to actually publish this version. Still, for those interested in the accreditation of prior learning, or in IT projects in education in general, it might be a worthwhile read.
Labels:
APL,
e-Assessment,
Research,
Resources,
Technology,
Work Based Learning
Tuesday, 11 September 2007
Is web 2.0 dumbing us down?
As some of you might know, I'm an avid listener of podcasts (mainly to make my hour-long daily commute seem a little less wasteful). Two of the casts I listened to recently grabbed my attention.
The first is a presentation titled Republic 2.0 by Cass Sunstein. In this presentation Mr Sunstein explains the risks web 2.0 poses to democracy. While increased access to the expression and consumption of information and opinion seems like a wonderful thing, there are downsides to how we engage with blogs, wikis and social networks. Due to the vast amount of information out there, but also because of the nature of these new social artifacts, we tend to expose ourselves only to information and opinions from those we are close to (ideologically or otherwise). Research has shown that in homogeneous groups like these, polarization takes place: views and opinions become more singular and extreme.
This is a concern in itself, and something to keep in mind when considering aspects of our education system, such as schools based on subgroups of our society along dimensions such as religion, class or even geography. The concern took on a new dimension for me, however, after listening to one of the seminars of the Long Now Foundation. In his talk, Ignore Confident Forecasters, Philip Tetlock shares some insights from his research on people's ability to make appropriate predictions about complex future events (in this case in world politics). He found two types of thinking, leading to two distinct patterns of predictions. One group was classified as 'hedgehogs': people who had a single specialism or conviction, and tried to explain everything in the world from this single perspective. The second group, the 'foxes', were broader in their thinking and in the constructs they applied to solving problems. The foxes significantly outperformed the hedgehogs.
So this raises the question: if we allow ourselves to be exposed only to those views and people we have sympathy with, something the web increasingly allows us to do, are we depriving ourselves of the tools for a balanced and effective mental development?
Thursday, 5 April 2007
It's funny how things coincide sometimes. Today a student came to our office wondering if we could help her develop an assessment in support of her dissertation. Several students have made this request this year, where none did before. Personally I think it's a wonderful thing, and testament to how students now view technology as an integral and important part of their lives and careers. It also shows how much less inclined they are to pigeonhole technology.

For years the Centre for Interactive Assessment Development has been supporting lecturers by developing rich assessments. The University of Derby has in general adopted a far more innovative approach than most, progressing e-Assessment far beyond the domain of the multiple choice quiz. Still, the applications sought for innovative assessment practice have been rather limited. Primarily, assessments were measurements of learning: mostly summative, or formative only in the sense of providing practice and a benchmark for a later summative exercise. Assessments that actually teach, or diagnose, are a relatively new addition to our portfolio. Assessments for other purposes, such as research or evaluation, have never even been considered part of the centre's value and expertise. This is something I am desperate to change. I'm glad at least the students seem to agree with me on that one.