Tuesday 16 December 2008

Moved

I've moved my blog to my own domain: http://www.renemeijer.com
Those of you who have subscribed to my blog via feedburner should be redirected automatically, but if you are seeing this post it means you are not. To update your subscription, please use
http://feeds.feedburner.com/RenesAssessment

You can also subscribe by going to my website, but at the moment that will still give you the wordpress feed, not the feedburner feed (which means that in the unlikely event that I move again, you will be in the same position you are now, and you'll have to move manually).

Thanks to all of you who have bothered to read my ramblings, I do hope you follow me to my new corner on the web.

Monday 8 December 2008

Are left handed people stupid?

No, of course not, would be my first response. However, researchers from Bristol seem to disagree, as can be read in an article on the BBC website, "Left-handers' lower test scores". In the article the researchers seem to conclude that the lower scores obtained by left-handers and mixed-handers mean they are more prone to cognitive developmental problems. They even advise that a test of 'handedness' be administered to guide early intervention strategies.

Now I haven't had a chance to examine this research, but on the face of it this seems a bit odd. As someone with a background in computer based assessment, I am very acutely aware of validity issues. When computers are used to assess, the question 'is this medium disadvantaging students' is asked very regularly (perhaps even somewhat too often). It strikes me that with our pen and paper based assessments, this question is not asked often enough.

Might it be that our traditional assessment system, with its very high emphasis on writing skills, is disadvantaging students who are not naturally equipped to deal well with our particular written tradition?

But even if my doubts are unfounded, is pre-emptive testing really the answer to this issue? Are we going to translate this statistical trend into something that is going to stigmatise individuals without them necessarily having any related difficulties? I think that is really taking things a bit too far.


Friday 5 December 2008

Why I prefer open source

I've recently been given a more active role in the ownership of our VLE, Blackboard. And while at heart I am an open source fanatic, I do also believe that in the end the tools aren't necessarily that important, it is how you use them. With that in mind I was planning to take a positive approach to my new-found challenge.

My initial exposure was quite positive. I attended the Blackboard Europe conference 2008 in Manchester in spring, and was positively surprised to hear Blackboard talk about openness, open standards and connectivity to, or even integration with, Moodle and Sakai. I was also very impressed by some of the community work being done, in particular the work around the Assignment submission building block at Sheffield Hallam University. Unfortunately this exuberance was not going to last.

My first frustrations started when trying to get more information on the assignment handler. I was very keen for us to have a look at it, and would have been more than happy to make a case for buying it. However, Blackboard was strangely evasive. The building block wasn't exactly ready, and they didn't really know what they were going to do with it. In our most recent discussion this changed to 'We don't really want to sell it to you, you can hire us to redevelop it'...

What? So you have a great bit of functionality, but instead of selling it, or helping us integrate it, you want us to actually fork out the full development cost again?

I'm not quite sure how this fits in with Blackboard's new-found spirit of openness, but if this is the way in which they see their relationship with the community then I think I'll consider myself thoroughly disillusioned. Instead of supporting and empowering their community to build more value around their product, they seem to choose to stifle innovation and collaboration. Similarly, in our own efforts to start upskilling our team to create new functionality through building blocks, I have not found a great deal of support either. Blackboard seem to not offer much in terms of training or support here, but instead offer to build a building block for us and let us watch and learn while they do it, and then leave us to it.

It's a shame that some vendors behave in this way, as it creates such an antagonistic atmosphere. You would think we both have similar goals and interests here, yet we are treating each other like potential enemies and rivals. For example, I still don't know officially what Blackboard are going to release in version 9, as they feel they need to avoid anything that might be mistaken for a guarantee or legal commitment to deliver. But where does that leave us with our roadmap planning?

And I guess that's why I prefer open source software. Not because everything needs to be free, but because I want a mature, constructive, collaborative relationship with the partners that we work with. And unfortunately many commercial vendors seem to have great difficulty doing that.

Wednesday 12 November 2008

Do essays promote surface learning?

I was reading an article this morning which referred to the book 'Academic Discourse'. The book investigates the importance of language in learning. I think everyone will recognise that language, and in particular the jargon and linked body of concepts in a discipline, are a key part of learning. To engage effectively with a subject, it is important that one is familiar with important constructs and the way they are expressed and referred to. And so it is only logical that an important part of our teaching, and assessment, focuses on those key constructs.

In Higher Education, essays are often the medium of choice to evaluate learning. The wisdom handed down through the ages dictates that essays are suitable to assess higher order skills and understanding. But is that really the case? Of course the freedom to construct your own answer, or perhaps even choose which questions to answer, gives the student maximum freedom in expressing his or her understanding. But that freedom is also very easy to abuse.

Because we must realise that students aren't always looking to express what they learned. They might be looking to meet the expectations that will lead to the desired result, usually a grade. And when pursuing this quest, students often find that writing a good essay is a problem that can be solved with some linguistic skill, and doesn't necessarily require the attainment of any new understanding. And so in this light, perhaps we should investigate the value of very open and unfocused assignments. Because, while in a very different way than, for instance, multiple choice exams, they too can promote surface learning strategies when not designed with due care.

Furthermore, this also calls into question the value of computer-marked essays. Most of these systems are designed largely around linguistic criteria, and so only exaggerate this problem. This is especially true if we consider the consequences of students understanding how their essays will be marked by such a system.

Sunday 5 October 2008

Open accreditation

About 2 weeks ago a very interesting discussion on open accreditation started, I think on D'Arcy Norman's blog. Some of the responses, such as for instance David Wiley's, are very edupunk. Do we even need degrees? I'm not sure that's a viable position, to be honest. I think George Siemens hit the nail on the head when he said that "providing a statement of competence is only of value when the provider of the statement is also trusted". Traditionally it has been institutions like our universities that have instilled that trust. It was against this background that I have argued that accreditation is a key part of the value proposition for HE. But to be honest, I'm not so sure about that anymore.

In a draft of a call for action I read recently, Microsoft, Cisco and Intel are calling for serious reforms to our assessment system, as they feel it no longer assesses the skills that they value (creativity, collaboration and communication, to name a few). That is a very serious indictment, but I think not an unjust one. Many of these skills are, or should be, implicitly part of what we think of as "a degree". But if they are not assessed, how do we ensure they are taught, and more importantly, learned? This becomes even more important when we are increasingly atomising the curriculum. If we want to let students pick and mix, we should at least be able to ensure that the sum of their choices still adds up to what we consider to be the whole of their degree.

I think a transparent and reliable way to assess these 21st century skills would go a long way to solving some of our problems in lifelong learning. It would make the accreditation of prior learning easier, as in my opinion it is this 'hidden curriculum' that often concerns people when considering accrediting prior learning. And with prior learning, we instantly have a vehicle to enable a flexible curriculum that spans multiple universities, or the incorporation of non-institutional learning into a qualification. But more crucially, if we can measure these things transparently, perhaps trust becomes less important. If degrees are no longer black boxes with a reputation, but an open book that we can all evaluate ourselves... Portfolio anyone...?

Friday 3 October 2008

Evidence based teaching

One of the topics that came up several times over the past days in Reykjavik, is that of the differences in culture around assessment. Different countries have different ways in which they perceive and deal with assessment, and this can have a significant impact on the effect of the assessments, and the success of the educational system as a whole.

One particularly interesting approach was outlined by Jakob Wandall, whose work on the Danish national tests I blogged about last year in High stake national assessments and ranking. I tried to capture Jakob's slide in a picture, but unfortunately that failed rather miserably, so I have tried to recreate his message in the graphic below:




The graph outlines how both the focus of the assessment (on the horizontal axis) and the purpose for which the results are primarily used (on the vertical axis) vary from country to country. I thought the visualisation was very interesting. Comparing this to, for instance, the outcomes of PISA 2006, it is interesting to note that neither the approach of the Scandinavian schools (who focus primarily on learner-focused formative assessment) nor the Anglo-Saxon approach (which is much heavier on the measurement of performance indicators, tied in to funding) really yields the best results.

The stars of PISA are of course the Finns, and their unique approach is apparent from this graph. Instead of sitting somewhere between the top left and the bottom right of the graph, they sit toward the top right. The Finnish system highly values national measurements, evaluating the success of the system objectively. However, these measurements are not tied to any control, either through formal channels or more informal ones such as public rankings. Instead, the measurements made in the Finnish system serve to inform teaching and learning. An evidence-based approach to teaching, shall we say.

When I translate this to our own practice, I can't help but relate this to demands to increase the amount of formative assessment in our teaching. And while I am sympathetic to these demands, these assessments are similar to those in the top left of the above graph, informing and supporting individual learning processes. And so perhaps instead of focusing primarily on formative individual assessment, we should (also) focus on assessment and evaluation that informs teaching: building an infrastructure through which lecturers can stay in touch with the progress, successes and difficulties of all their students, and continuously modify their teaching based on this understanding.

Sunday 28 September 2008

Hello Reykjavik

I just arrived in Reykjavik for a conference on PISA 2006 and the transition to e-assessment. It's my first time in Iceland, and I must say it was a bit surreal. I'm just reading Green Mars by Kim Stanley Robinson, which contains a lot of descriptions of a newly terraformed Mars: cold, lots of rocks and lots of lichen. Trust me, walking around in Iceland came scarily close to how I had been imagining the novel in my head up till then.

I'm hoping to find some time over the next days to post my thoughts on the conference. There is a very impressive lineup of international speakers scheduled, and I am looking forward to exchanging ideas and opinions with them. Pictures will have to wait I'm afraid, as I forgot to pack the cable that connects my camera to my laptop... grrr...

Sunday 21 September 2008

Connecting connectivism

The article Learning Networks and Connective Knowledge has been really valuable for me in understanding the ideas behind connectivism a lot better. It is a bit of a read, but in my opinion well worth the time and attention.

What resonated particularly well for me is the idea of building an emergentist theory of learning. I have always preferred a holistic approach to understanding. One of the major weaknesses in our 'Western' view of the world is the idea that we can understand everything by reducing it to its component parts. I suppose it is something that developed with my long-term practice of Chinese martial arts and philosophy. More recently I have found tremendous value in 'systems thinking' as described by Peter Senge in his book 'The Fifth Discipline'. In this book Senge criticises the reductionist approach to running businesses, such as our obsession with KPIs and the like. I think I'm starting to realise that connectivism really is based on similar principles, applied to learning.

Reflecting further on connectivism, and in particular on the idea of 'levels of knowing', several other things are falling into place as well. I have been a fan of the SOLO taxonomy ever since being introduced to it by Graham Gibbs about 3 years ago. For me it makes so much more sense than the archaic taxonomy of Bloom. It classifies levels of understanding by the number of connections that a learner makes, and the broadness of those connections (for instance into other domains of knowledge). It seems to me to be an excellent reflection of how learning would develop according to the connectivist model.
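As an aside, this idea of grading understanding by connections can be caricatured in a few lines of code. This is purely my own toy simplification of the five SOLO levels (as named by Biggs and Collis), not anything from the SOLO literature itself:

```python
# Toy sketch (my own simplification): classify a response by the number of
# relevant connections it makes, whether those connections are integrated
# into a coherent whole, and whether it generalises into other domains.

def solo_level(connections: int, integrated: bool = False,
               cross_domain: bool = False) -> str:
    """Return an (illustrative) SOLO level for a response.

    connections  -- number of relevant aspects/ideas the learner connects
    integrated   -- whether those aspects are related into a coherent whole
    cross_domain -- whether the response generalises beyond the immediate domain
    """
    if connections == 0:
        return "prestructural"       # misses the point entirely
    if connections == 1:
        return "unistructural"       # one relevant aspect
    if not integrated:
        return "multistructural"     # several aspects, but unrelated
    if cross_domain:
        return "extended abstract"   # integrated and generalised beyond the domain
    return "relational"              # aspects integrated into a whole
```

For example, a response connecting three ideas without relating them would come out as "multistructural", while the same three ideas woven into a coherent argument would be "relational".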

So after a somewhat sceptical start, I must say that I'm beginning to warm to some of the ideas behind connectivism. I do still think some of the theory and arguments behind it need more refinement, and perhaps that is something I should try and articulate over the next few weeks to help this discussion along. For the moment though many of the ideas are still somewhat in the 'primordial soup' stage, and so I will give myself a few weeks before venturing down that path further.

Tuesday 16 September 2008

Yay we won!

Woohoo, our SLAM for ALT-C 2008 on the Digital Divide is up, and the organisers have been kind enough to award us their special pick. I feel so warm and fuzzy inside now :)

Do have a look at the other SLAMS, and award winners on the Digital Divide Slam homepage!

Monday 15 September 2008

Free at last?

Research done by PISA has shown convincingly that school systems with a high degree of autonomy perform better (see my post on SAT troubles for a bit more info). It seems that the Liberal Democrats have now formally adopted this position, and have outlined plans to scrap the national curriculum. A brave move. It will be interesting to see how this discussion unfolds, and whether it will survive the inevitable backlash from the control brigade.

Sunday 14 September 2008

What is Connectivism?

It's only the first week and I'm already behind schedule, how embarrassing. Either way, here are my reflections on the first week of connectivism:

Levels of analysis
Although not a part of this week's reading, I did find a lot of value in a video recommended by Clark Quinn (not Donald Clark, as I erroneously said earlier):



It seems to me that a lot of the differences in the various theories and views on learning really boil down to the level of analysis or perspective that you take on the problem. Connectivism in that sense is the result of the analysis of learning within a new level or structure that has been created through new technology.

Analogies
Aside from the level of analysis, analogies can form another perspective on a problem. Often we start employing an analogy because it aids in the representation of an aspect of an idea. However, analogies are always flawed, and so when we start employing our analogy too liberally we inevitably run into problems. Unfortunately our brain seems to like, and need, simplicity, and so we often find ourselves stuck in our own analogy.

The brain as a computer is a very obvious analogy. Knowledge as an object that can be internalised is perhaps also the result of a subconscious analogy. In the days when books were not too abundant and the number of views expressed in them relatively limited, perhaps it was logical to see the book as a synonym for knowledge, and so reading the book, internalising it, as equivalent to learning. But the observation had very little to do with what learning really is. It is more an expression of how learning commonly took place.

And so for the blogging, networking and podcasting fanatics amongst us, networked learning has become our preferred mode of learning. And while it serves a lot of us very well, I am not sure that actually makes it a theory of learning, or whether it is merely an instantiation of one. And to be very precise, perhaps it is more a means of sense making than of learning. Learning, to me, is still something I cannot easily separate from the individual.

Friday 12 September 2008

ALT-C 2008

This week I have had the pleasure of attending ALT-C in Leeds. We had an awesome opening by Hans Rosling, but unfortunately I cannot find the recording for that. For those of you who have not heard of Hans, I thoroughly recommend looking at his TEDtalk, and the Gapminder website.

While several papers and presentations were the usual rehashes and repeats of previous years, there were also some very interesting nuggets. One was from the University of Vienna, who have been looking at the development of an IMS LD design tool for lecturers within the EU-funded project Prolix. While I couldn't easily find much documentation on the tool they developed, you can download the source for GLM (based on Eclipse) from SourceForge. I'll certainly be having a look at it over the next few weeks.

Another highlight was the talk by George Siemens. If you are interested you can still look at the recordings for the session in Elluminate (you'll have to download their Java applet for it though). It was a shame I only found out George was staying in the same hotel as I was when I was checking out, and too hung over to try and engage in some conversation.

My third pick would be the SLAM session on the Digital Divide, where we all created small clips on the digital divide in little groups, which was a lot of fun. The recordings should all be up on the wiki, although the last time I checked ours still hadn't made it there :( However, I did find this picture by Christina Costa of the group I worked with.

Friday 5 September 2008

Introduction to Connectivism course

I have enrolled on the Connectivism and Connective Knowledge course, together with many, many of my colleagues (about 1600 in total I think!). This is my introductory post, which is part of the suggested pre-course activities.

My background
My name is René Meijer, and I am currently managing the Educational Development Unit at the University of Derby in the UK. I moved to the UK about 4 years ago from Holland, where I developed IT and e-learning projects and policies for secondary education.

Why I am interested in this course
Firstly I see this as an important part of my own professional development. I am looking forward to meeting new people and learning about new ideas. I am particularly interested in better understanding more about what 'models of learning' and what 'value propositions' are relevant in Higher Education today, and of course tomorrow. Secondly, I am also working on the design of professional development for our own lecturers, and I am very interested in looking at this 'model' of learning to see how appropriate it would be to apply there.

When would I consider this course a success?
I think success for me is very much linked to this model of learning. What will participation be like? How valuable is the network, and the networked information that results from it? In what way are there financially viable ways of using this model in other provision? Success, I guess, will be linked to a positive answer to each of those questions.

Other random info about me
I suppose there's more than enough random info on this blog, feel free to have a look around.

Wednesday 13 August 2008

Conscious competence and certainty based marking

Certainty-based marking (sometimes erroneously referred to as competence-based marking) is an advanced scoring strategy that requires learners to indicate how certain they are of their response when submitting it. A higher certainty carries a possible higher reward, but also a much higher penalty when the response is incorrect. As such, certainty-based marking can mitigate guessing on constrained response items, but it is also very useful as a stimulus for reflection. More information can be found in articles like "Certainty-Based Marking (CBM) for Reflective Learning and Proper Knowledge Assessment".

There are other interesting options to explore however, and I was reminded of one when I read Conscious Competence - a reflection on professional learning, which talks about the conscious competence model. In my opinion, these two fit together very nicely, as depicted in the diagram below. Candidates providing the wrong answer but indicating a high degree of certainty can be considered 'unknown incompetent', as they seem unaware of their misconceptions. Candidates providing the wrong answer with a very low degree of certainty have progressed to 'known incompetence', as they have at least correctly identified their lack of understanding. When providing the correct answer with a low degree of certainty, learners can be assigned to the unknown competence stage, until finally they progress to known competence when they provide a high-certainty correct response.
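The combined scheme can be sketched in a few lines of code. The score table below is the commonly cited CBM scheme of certainty levels 1-3 scoring 1/2/3 marks when correct and 0/-2/-6 when wrong (as used in Gardner-Medwin's work); the stage names follow the mapping described above. Treat it as an illustrative sketch, not a tested implementation:

```python
# Sketch: combining certainty-based marking (CBM) with the conscious
# competence model. Score table: the widely cited Gardner-Medwin scheme.
# The quadrant mapping is my own reading of the model, and illustrative only.

CBM_SCORES = {  # certainty level -> (mark if correct, mark if wrong)
    1: (1, 0),
    2: (2, -2),
    3: (3, -6),
}

def cbm_mark(correct: bool, certainty: int) -> int:
    """Return the CBM mark for a response at the given certainty (1-3)."""
    right, wrong = CBM_SCORES[certainty]
    return right if correct else wrong

def competence_stage(correct: bool, certainty: int) -> str:
    """Map a response to a stage of the conscious competence model."""
    confident = certainty >= 3
    if correct:
        return "known competence" if confident else "unknown competence"
    return "unknown incompetence" if confident else "known incompetence"
```

So a confidently wrong response both costs the candidate dearly (-6) and flags an unnoticed misconception, which is exactly the combination of summative penalty and formative signal that makes the pairing attractive.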

Although I am still looking for an opportunity to actually try this in practice, I think it has a lot of potential in supporting an integrated formative and summative assessment strategy.

Friday 8 August 2008

The value proposition of HE

I have previously expressed some ideas about the value of higher education and how, at least for the less research-intensive institutes, it is moving away from content and knowledge, and towards guidance and accreditation. However, a few separate experiences this week have led me to start thinking slightly differently about the future and value of higher education.

It all started with my enrolment on the connectivism course that is being prepared by Stephen Downes and George Siemens. I think it was Stephen who made the case for assessment to be individual. After all, learners come to a course or activity with individual goals and ambitions, and so it doesn't really make sense that they would all be assessed in the same way. While this doesn't invalidate the importance of assessment and accreditation, it does perhaps question the validity of having predefined outcomes and criteria.

Over coffee this morning I had a discussion with a colleague, who was explaining to me the importance of the community of practice, and how we needed to find a way to make learners part of a community of practice before and after their actual enrolment on a module or course. He made a very strong case for what should be a major benefit of doing a course with the university: joining a community of peers and experts. Very consistent with Stephen's ideas, I thought.

Then this afternoon, while I was wrestling the backlog in my GReader, I stumbled on a piece on the value of social networks by Engeström (via Grainne's blog) which again confirmed this notion. Basically Engeström explains that a relation, and thus a network, only has value as a result of the object that this relation is built on. In Flickr these are pictures, in Delicious they are bookmarks. Similarly in education, these could be courses or subjects, just like my colleague was proposing with the communities of practice.

And so maybe the value of HE is not primarily around accreditation. Perhaps the most important value we can offer is the organisation and support of learning networks around subjects of interest. In that case, we have a lot of work to do...

Friday 1 August 2008

The big assessment question

Assessment has been in the news an awful lot lately, albeit not very positively. There is of course the whole SATs palaver, but I will resist the temptation to comment on that. My position on this is outlined in previous posts on this blog, and I can only say that it is good to see that a lot of the momentum around this seems to be finally heading in the right direction. It's a shame we often need some sort of disaster to finally be open to change. A more surprising current issue is that of the Dyslexic student's exams battle, which deals with a medical student's problems with multiple choice tests, something further clarified by the BBC in a follow-up article: Why can't people with dyslexia do multiple choice?

The comment by the student's solicitor that "Every professional body or employer who relies for a professional qualification, or as a promotional gateway, on multiple choice questions is heading for a fall." is of course a bit of a joke. Quite frankly I am rather appalled by what seems like a rather misguided attempt to 'make a splash' at the expense of something as crucial as our exams system. While there are many gripes that you could reasonably hold against multiple choice questions, I don't think the link to dyslexia is really that valid. Considerations around presentation, or even using screen readers, can reasonably address most potential issues that might result from a disability. In addition, I think we should not shy away from critical reflection on the degree of special provisions that we put in place to accommodate students, as these provisions could significantly alter the nature of an assessment and thereby compromise the validity and equitability of the award. There will always be differences between learners in how well they perform in various types of assessment. This is one of the reasons to make sure there is a variety of assessment methods being used.

The more interesting question, though, is around authenticity. The student in question is quoted in the article, saying that "In normal day life, you don't get given multiple choice questions to sit. Your patients aren't going to ask you 'here's an option and four answers. Which one is right?'". And to an extent I think she has a point there. While there will always be situations in which we will have to rely on 'proxies' to infer attainment, I do agree that currently we rely far too much on proxies that are sometimes quite remote from the competencies that we try to measure. In this sense the education system is stuck in its traditions, instead of applying the objective and critical reflection that we say we value so much in higher education.

A similar point, and some suggestions for moving forward, are made in the blog post 21st Century Assessment, where a 'formula' is proposed for a modern fit-for-purpose assessment system. Especially the elements of collaboration and peer assessment are extremely important and very much underutilised in our current practice. Partly I suspect that this links in with how uncomfortable we still are with the loss of our position as the holder and transmitter of all knowledge. That role warranted a one-to-many broadcast model of education. Education today, however, is moving much more towards a many-to-many model, whereby the role of the teacher is much more one of guidance, coaching and accreditation of a learning process that involves peers, external resources and actors, and experiences from previous professional roles. I'm not quite sure we are really ready to fulfil that role yet though.

Monday 28 July 2008

Michael Wesch and the Future of Education

I completely forgot where I found this, as it's been sitting in my saved Firefox 3 tabs for a few days now, so apologies for the lack of attribution. This excellent talk by Michael Wesch (the guy that brought you the YouTube video "The Machine is Us/ing Us") gives a great view on what learning and teaching really should be like.

If you don't have time to watch the whole thing, at least have a look at the first 10 minutes, which will already give you some great ideas on the paradigms in which education seems to be stuck, and how to perhaps get beyond those.

Tuesday 10 June 2008

Towards a research agenda on computer-based assessment

At the EU workshop I attended in Ispra, Italy last year (see blogposts Psychometrics versus pedagogy and High stakes national assessments and ranking) we agreed to write some articles on quality aspects of computer based assessments to go towards a report for the European Commission. I'm glad to say that the report has now been published, and can be accessed online via the following link: Towards a research agenda on computer-based assessment

I think there are many interesting articles and views within the report, and I will certainly be reviewing the interesting perspectives that my colleagues presented at the workshop. Do have a look, I am positive there will be something of interest there for virtually anyone.

Sunday 1 June 2008

Review: Classmarker

As we're in the middle of a review of the tools we use in support of assessment, I thought I'd share my analysis of the various tools that we come across. As today is a Sunday, we'll start off with a simple one:

Classmarker

Classmarker is an online quizmaker that offers free quizzes (supported by advertisement) with upgrades (including removing the advertisement) for an additional fee.
Type: online service
Cost: Free with paid upgrades
Features: Multiple choice quiz, free text quiz or punctuation quiz.
Interoperability: None
System requirements: Any browser

The first thing I noticed when registering is that the UK doesn't exist, although the 4 home nations do. A more serious point to note, as with many online services, is that all content (and that includes all personal information, questions and results) will be the property of Classmarker.

The features of this service are extremely limited. While Classmarker supports 3 question types, it only allows you to use one of those per test. Options such as randomisation, feedback and branding are all features you will have to pay for. There seems to be no way to import or export your questions.

The site seems to be built mainly around Google AdSense. The advertisement and a Google search box are present on every possible page, and that includes the ones your learners visit. Upgrading to get rid of the advertisements costs $24.95 (or $49.95 for a business account, whatever that means). But then your users will still have to register with the service before being able to take the test. Allowing unregistered learners to take a test will cost you $0.10 or more per learner. Not really value for money given the incredibly limited features on offer.

Conclusion: I really can't see anything of value here. If you need something that is hosted for you, most survey services offer more functionality. If you have your own space to host your assessments, even the simpler tools available will offer more than Classmarker.


Apologies for having to start off with such a negative review. I just stumbled across this tool today and thought I might as well write it up now. Do let me know if you have any comments, or perhaps suggestions for other tools I could review.

Thursday 29 May 2008

New podcast on assessment

I've been toying with the idea of doing something useful online, instead of just venting my unsolicited rants here. I've come up with the idea of starting a podcast around assessment practice, as I think there aren't nearly enough easily available resources on the topic. The podcast, and the first test episode, can be found here. Please feel free to have a look and give me some feedback; I could really do with some good advice and practical tips.

Saturday 17 May 2008

SAT troubles

There's been a lot of upheaval this week about SAT tests. After a report published by the Children, Schools and Families Committee of the House of Commons, MPs warned that national Sats tests distort education, which led to the schools minister defending the Sats, followed by technical difficulties with the tests. Personally, I am not convinced the tests themselves are really the problem.

One of the keynotes at the Blackboard Europe 2008 conference was given by Andreas Schleicher, the director of the PISA programme for the OECD. He presented a very compelling set of ideas around successful (secondary) education. Some of the conditions he identified (all based on the data gathered by the programme over the past years) are:

  • No stratification. Education systems that have separate streams, schools and/or qualifications for learners based on their performance tend to do poorly. An example of this is the Dutch system, where secondary education is stratified into VMBO, HAVO and VWO based on a learner's performance in primary school. The British system actually comes out quite well here (if we ignore the stratification that takes place because of the divide between private and state schools, that is).
  • Standards. It is important to work to common standards. Central examinations are one way of enforcing common standards, and so the SAT tests do satisfy this condition.
  • Autonomy. It is crucial for schools and teachers to have a high degree of autonomy as long as their performance raises no concerns. Here we obviously fail completely as the British system dictates how schools teach and assess to a very high degree.
  • High expectations, challenge and support. For teachers and learners alike, education should provide challenge, the expectation of high performance, and plenty of support (staff development, for instance). I think this is another area in which we fail to deliver.

Our main problem lies in the area of autonomy. We no longer trust our teachers and schools to do what they do best based on their professional judgement. Instead there is this weird notion that education is better served by central, generic judgements made by policymakers. The problem with SATs isn't that they provide a common high-stakes benchmark for learners. The problem is that this information is abused for public league tables and the like, which inevitably leads to pressures on learners that have nothing to do with their personal learning. It's the same kind of pressure that leads to universities coercing students into filling out the National Student Survey more favourably.

In Finland, schools have no idea how their performance compares to their neighbours'. Funnily enough, in Finland it doesn't really make a difference: only 4% of the variance in scores on the PISA tests can be attributed to differences in quality between schools. Finnish schools have around 9 applicants for every position offered, and this is not because of higher salaries or anything like that. It is because the system in Finland provides a challenging environment in which people are valued, can grow and develop, and actually make a difference.

Thursday 15 May 2008

Blackboard world Europe 2008 (2): Assignment submission

Right then, some more from the past Blackboard conference, as promised...

I attended two very interesting talks about a building block developed for Sheffield Hallam University called 'The Assignment Handler'. It is basically an extension of the gradebook functionality that already exists within Blackboard.

Sheffield Hallam have decided on a policy that all grades should be fed back to students in a central place, together with feedback. The central place they chose was the Blackboard gradebook. To do that they implemented the following features:
  • Transparent and consistent handling of online exams, online submitted exams and exams submitted through the assignment handling office. All of these can be set up in Bb, submission is logged in Bb, and results and feedback are published through Bb. This creates a central place where student progress can be comprehensively managed (by staff and students).
  • Some bulk-upload and download functionality. Assignments are renamed using module codes and student numbers. Feedback and marks can be uploaded in a single archive, which is useful with larger cohorts.
  • The option to withhold a mark until the student has reflected on, and responded to, the feedback provided. The University is now researching to what extent this actually motivates students to engage genuinely with their feedback.
  • Generation of confirmation e-mails as receipts of submission
  • Support for group assignments
As we have just started to look into a structural solution around online submission ourselves, this presentation was brilliantly timed. There was a lot of mumbling in the audience about Blackboard's unresponsiveness on this issue, as many institutions have requested functionality like this before. And in all fairness, most of it is pretty generic and sensible and should probably have been part of the core product for years. Instead it is now a building block that Blackboard will most likely charge us a nice extra fee for.

Wednesday 14 May 2008

Blackboard Europe conference 2008

As we use Blackboard at the University of Derby, I attended the European Blackboard conference in Manchester this week. The conference was off to a bit of a poor start: no wireless available for conference-goers, just the crappy connection for which the hotel charged £15 a day. I decided that was a bit ridiculous, hence the late submission of this post. The keynote and my first workshops on Tuesday were really poor, and I started to lose heart. Luckily some little gems did manage to arise from the rubble of disappointment.

Blackboard NG (next generation)
I was very pleased to see assessment high on the agenda for the next generation(s) of Blackboard. Tools supporting peer and self assessment, a new and expanded Grade Centre (replacing the somewhat limited Gradebook) and the integration and expansion of the existing WebCT and Blackboard quiz tools will certainly add a bit of meat to the meagre bones of the platform's support for assessment. What actually surprised me (and I would still like to see this before I truly believe it) is the announcement that Blackboard will be working towards interoperability with other CMSes such as Moodle and Sakai. We saw a demonstration of a learner portal page that transparently listed courses and notifications from courses in various platforms, which was very promising. This would allow an institution to grant much wider freedom to faculty in their choice of platform without losing the integration that only a single platform can currently offer. Watch this space.

More tomorrow, it's time to spend some time with my family now...

Monday 28 April 2008

Problem based Learning in Second Life

I attended a presentation by Daden, who are doing a lot of very impressive and interesting things in Second Life (and other virtual worlds). I thoroughly recommend having a look at their space in Second Life, where they have some great mash-ups with Google Earth. I would post some links here, but the ones I could find on their site aren't working, which is a bit rubbish.

Aside from the things appealing to my inner geek, there were also some very interesting applications in learning. One project I found particularly interesting was the JISC-funded 'Problem based learning in Second Life'. We were shown a simulation of a road traffic accident used for assessment. The detail was quite incredible (including the ability to listen to the patient's breathing, which adjusted over time based on the actions of the attending paramedic). The medical sciences as usual are front runners in the use of new technologies, but I could see many applications in other domains.

The question that does still bug me is whether we should be doing this in open worlds, like Second Life, or if we should be using more private spaces. Perhaps a happy medium will be found in the Second Life Grid, which seems to be looking to offer the best of both worlds... so to speak.

Friday 25 April 2008

Assessment standards: a manifesto for change

A group of 34 prominent academics has taken a laudable stance against our current assessment culture (see also this THE article). You can find the manifesto and its supporters at the bottom of this post. Point 3 especially is, I think, very poignant within the context of e-Assessment, where our obsession with the measurable (I'm thinking Item Response Theory here) has gotten way out of hand at the expense of validity.

The Weston Manor Group


Assessment standards: a manifesto for change


  1. The debate on standards needs to focus on how high standards of learning can be achieved through assessment. This requires a greater emphasis on assessment for learning rather than assessment of learning.


  2. When it comes to the assessment of learning, we need to move beyond systems focused on marks and grades towards the valid assessment of the achievement of intended programme outcomes.


  3. Limits to the extent that standards can be articulated explicitly must be recognised since ever more detailed specificity and striving for reliability, all too frequently, diminish the learning experience and threaten its validity. There are important benefits of higher education which are not amenable either to the precise specification of standards or to objective assessment.


  4. Assessment standards are socially constructed so there must be a greater emphasis on assessment and feedback processes that actively engage both staff and students in dialogue about standards. It is when learners share an understanding of academic and professional standards in an atmosphere of mutual trust that learning works best.


  5. Active engagement with assessment standards needs to be an integral and seamless part of course design and the learning process in order to allow students to develop their own, internalised, conceptions of standards, and monitor and supervise their own learning.


  6. Assessment is largely dependent upon professional judgement, and confidence in such judgement requires the establishment of appropriate forums for the development and sharing of standards within and between disciplinary and professional communities.



Supporters:


Professor Trudy Banta

Dr Simon Barrie

Professor Sally Brown

Ms Cordelia Bryan

Dr Colin Bryson

Ms Jude Carroll

Professor Sue Clegg

Professor Linda Drew

Professor Graham Gibbs

Professor Anton Havnes

Dr Mary Lea

Dr Janet Macdonald

Professor Ranald Macdonald

Dr Debra Macfarlane

Dr Susan Martin

Professor Marcia Mentkowski

Dr Stephen Merry

Professor David Nicol

Professor Andy Northedge

Professor Lin Norton

Ms Berry O’Donovan

Dr Thomas Olsson

Dr Susan Orr

Dr Paul Orsmond

Professor Margaret Price

Professor Phil Race

Mr Clive Robertson

Dr Mark Russell

Dr Chris Rust

Professor Gilly Salmon

Professor Kay Sambell

Professor Brenda Smith

Professor Stephen Swithenby

Professor Mantz Yorke

Sunday 20 April 2008

Crowdsourcing assessment preparation

An article in the Wired Campus made me aware of a new service for test preparation called Socrato. It seems to be a sort of massive online study group where people can submit, view and practice all sorts of tests (although at the moment mainly MCAS). The downside could be that this is a beta for which the final business model has not yet been chosen, so I'd be careful with the stuff you submit.

Friday 18 April 2008

Resources to support the assessment of learning

The latest entry in JISC Inform 21 links to "Resources to support the assessment of learning". I must say that the collection is far from comprehensive, and very JISC/CETIS focussed. Still, it's worth a look.

Wednesday 16 April 2008

Efficiency or effectiveness

The BBC reports that our government will be reviewing the efficiency of our exam system. I'm developing a rather serious aversion to efficiency, as it usually translates rather neatly into degradation.

It would be nice if the government would review the effectiveness of our exam system. Effectiveness is about reaching intended outcomes, not just about saving pennies. As the general secretary of the Association of School and College Leaders, John Dunford, said in the article: "It is vitally important that the government not only conducts a cost-benefit analysis of the current exam system but evaluates its effect on teaching and learning." Perhaps (god forbid) we could also review the effects of all the links to targets, KPIs and league tables on the quality of learning, as they certainly compromise the validity of the whole system. I will again point to the efforts of colleagues in Denmark, who seem to have understood this a whole lot better.

Tuesday 15 April 2008

Publishing exam questions in advance

I just finished reading an article in the Times Higher Education in which it is suggested that exam papers should be published in advance to students to cut down on stress. This idea apparently stems from a paper published by the University Mental Health Advisors Network.

Now I hope this is an oversimplification of what the paper actually suggests. When taken literally, the suggestion seems rather awkward. Surely just publishing questions in advance would lead to all sorts of problems. Papers are often designed to test only a subset of the curriculum, which is only a valid approach in combination with a moratorium on the questions during learning (otherwise learning would most likely be limited to those questions).

What we need is to move towards more authentic and negotiated assessment, and away from the eternal exam and essay constructions. That is hardly a new notion however, and not really anything to do with disability in particular.

Monday 14 April 2008

Split personalities

This fascinating short video gives some insight into how our brain (or should I say brains?) works. The subject has had his two hemispheres severed in an attempt to decrease epileptic seizures. The video shows how Joe can now 'talk' to the disconnected right half of his brain by letting it draw pictures for him with his left hand. Amazing stuff; thanks to Donald Clark for that.

I am truly fascinated by research like this, in particular the more philosophical questions it raises about identity. Another amazing video on this topic is from Jill Taylor, in one of the most gripping TED talks I have seen. Jill describes how she one morning discovered she was having a massive stroke... quite an opportunity for a brain scientist.

Friday 11 April 2008

Tutored by pirates

I just watched this incredibly inspiring and funny TED talk by David Eggers. It's a wonderful example of what direct and personal feedback can do for learning. More importantly though, it is about how passion and fun can help children learn.

It does make me wonder though... would we be able to set up something like this in the UK, given all the bureaucracy with CRB checks and the like?

Tuesday 25 March 2008

Heisenberg in education

In physics the Heisenberg uncertainty principle is a well known limitation on measurement. The principle describes the fundamental conflict between establishing a particle's momentum and its position: the more precisely we pin down one, the more uncertain the other becomes. This is not a shortcoming of our instruments or anything like that; it is a fundamental property of the universe. Perhaps it is time we realised that in education our ability to measure things like student attainment is even more limited. It is not a limitation that we can overcome by measuring more. In fact that just makes the situation worse, as our measurements then start to influence what we are trying to observe, and usually not for the better. This is called the observer effect, and it is a crucial element to take into consideration when delivering high stakes assessments.
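For reference, the principle the analogy leans on is usually written as:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where Δx and Δp are the uncertainties in position and momentum, and ħ is the reduced Planck constant: the more precisely one quantity is pinned down, the larger the uncertainty in the other must be.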

With the increasing pressures on education to measure and report, calls to take the observer effect into account (although it is usually not referred to as such) are becoming louder. The National Union of Teachers conference has spoken out against the practice, and I have raised the issue on this blog before in the post titled High stake national assessments and ranking. A very thoughtful analysis of the problem is given by Wesley Fryer in his post Raising expectations. Wesley argues for a return to teachers designing and delivering high stakes tests, instead of these being set by governments and awarding bodies. While a lot can be said in favour of this idea, I do think it is important to realise that it is only possible if combined with a very serious upgrade of the staff development given to our teaching staff on the subject of assessment. Nevertheless, Wesley's post is definitely worth a read. Especially this little gem:
"... bestowed upon the plebeian masses by the academic elites filling the hallowed halls of commercial companies now profiting handsomely from our myopic focus on summative, simplistic, high-stakes assessments". That must be the best and most colourful description of our assessment culture that I have ever read.

Monday 24 March 2008

Marking free text

One of the frequent criticisms of e-Assessment is the perceived limit on the item types that technology can support. While there are long debates to be had about assessing higher order skills with constrained response item types, I don't think those debates are going to take away the prime concern: free text items.

I must say that I have serious doubts about marking free text by computer. I don't know enough about the principles involved to say this with any authority, but I am aware of the kind of heuristics used in automated essay marking, for instance. These heuristics are often grammatical and rhetorical in nature, and have fairly little to do with the subject matter (although it must be said that many human markers have been shown to use similar heuristics). Nevertheless, interesting progress is being made in this area, and eventually I am sure that language processing will be commonplace.
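To make the worry concrete, here is a toy sketch of my own of the kind of "surface feature" scoring I mean. It is purely illustrative, not any real product's algorithm: it rewards length, vocabulary variety and rhetorical connectives, and knows nothing whatsoever about the subject matter.

```python
import re

# Illustrative only: a toy surface-feature essay scorer. The weights,
# word list and cap values are all made up for the sake of the example.
CONNECTIVES = {"however", "therefore", "moreover", "furthermore", "consequently"}

def surface_score(essay):
    words = re.findall(r"[a-z']+", essay.lower())
    if not words:
        return 0.0
    length = min(len(words) / 300, 1.0)      # longer essays score higher, capped
    variety = len(set(words)) / len(words)   # type/token ratio
    rhetoric = min(sum(w in CONNECTIVES for w in words) / 5, 1.0)
    return (length + variety + rhetoric) / 3

short = "Cats are nice."
padded = ("However, the evidence is varied. Therefore one must, moreover, "
          "consider many further perspectives carefully and consequently conclude.")
print(surface_score(short) < surface_score(padded))  # True: the waffle wins
```

Note that the second "essay" says nothing at all, yet outscores the first purely on rhetorical packaging, which is exactly the problem with subject-blind heuristics.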

One of the interesting projects that I recently became aware of is the OpenComment project, which is led by Denise Whitelock at the Open University. The project is looking to use latent semantic analysis to analyse learners' responses to open ended questions in history and philosophy. Another interesting fact is that the project is developing this as a question type in Moodle, so it should be relatively easy for everyone to reap the benefits of this technology within their own learning environments.
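For readers unfamiliar with latent semantic analysis, here is a toy illustration of the underlying idea, entirely my own and nothing to do with OpenComment's actual implementation: build a term-document matrix, take a truncated SVD, and compare texts in the reduced "semantic" space.

```python
import numpy as np

# Toy LSA: three tiny "documents" -- a model answer, a paraphrased student
# answer, and an unrelated text -- compared in a 2-dimensional latent space.
docs = {
    "model":   "the treaty ended the war between the nations",
    "student": "the war between the nations ended with a treaty",
    "off":     "cats enjoy sleeping in warm sunny places",
}
vocab = sorted({w for d in docs.values() for w in d.split()})
A = np.array([[d.split().count(w) for d in docs.values()] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
reduced = Vt[:k].T * s[:k]          # one row per document, k latent dimensions

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model, student, off = reduced
print(cos(model, student) > cos(model, off))  # True: the paraphrase is closer
```

Real LSA works on far larger corpora, of course, but the mechanism is the same: answers that use vocabulary in similar patterns end up near each other in the latent space, even when the wording differs.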

Automated marking is by no means the only value of using technology in assessment. The OpenMentor project, again from the Open University, is a great example. OpenMentor compares the mark assigned to a piece of work with the amounts of positive and negative feedback given, and checks this for consistency. In this way it can help in the coaching of new teachers. Given the importance of feedback, I think it's wonderful to have explicit standards and training in giving it.
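A toy sketch of the idea, to show how simple the core check could be. This is my own illustration, not OpenMentor's actual algorithm or word lists (those placeholders below are made up): compare the tone of tutor comments against the mark awarded, and flag cases where they clash.

```python
import re

# Hypothetical word lists and threshold, for illustration only.
POSITIVE = {"good", "excellent", "clear", "well", "strong"}
NEGATIVE = {"unclear", "weak", "missing", "confused", "wrong"}

def _words(comment):
    return set(re.findall(r"[a-z]+", comment.lower()))

def feedback_consistency(mark, comments):
    pos = sum(bool(_words(c) & POSITIVE) for c in comments)
    neg = sum(bool(_words(c) & NEGATIVE) for c in comments)
    positivity = pos / max(pos + neg, 1)
    expected = mark / 100  # crude mapping: a 70% mark ~ 70% positive comments
    if abs(positivity - expected) > 0.3:
        return "inconsistent: tone of feedback does not match the mark"
    return "consistent"

print(feedback_consistency(85, ["Good structure", "Excellent use of sources"]))
print(feedback_consistency(85, ["Argument is weak", "Key evidence missing"]))
```

The second case, a high mark paired with wholly negative comments, is exactly the kind of mismatch that is worth surfacing to a new tutor.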

The ABC (Assess By Computer) software had so far escaped my radar. I wasn't aware of it until queried by the Times Higher Education for the article they were doing. The software has a support role similar to OpenMentor, but this time the support is provided around the marking process. The software can highlight keywords, compare answers to model answers and more. All of this serves not only to make things easier for the human marker, but also to improve consistency between human markers. Especially the latter is very welcome, I think, as marking open ended questions and assignments can sometimes be a bit of a dark art.
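The keyword-highlighting side of this kind of marker support is easy to picture. Here is a minimal sketch of my own (not the actual ABC implementation): mark up which keywords from a model answer appear in a student's response, so a human marker can scan it quickly.

```python
import re

# Hypothetical marker-support helper: wrap recognised keywords in **...**
# and report what fraction of the model-answer keywords were covered.
def highlight_keywords(answer, keywords):
    found = set()
    def mark(m):
        w = m.group(0)
        if w.lower() in keywords:
            found.add(w.lower())
            return f"**{w}**"
        return w
    highlighted = re.sub(r"[A-Za-z]+", mark, answer)
    coverage = len(found) / len(keywords)
    return highlighted, coverage

text, cov = highlight_keywords(
    "Validity matters more than reliability here.",
    {"validity", "reliability", "fairness"},
)
print(text)  # the two recognised keywords come back wrapped in **...**
print(cov)   # 2 of the 3 model-answer keywords were found
```

The human still makes the judgement; the tool just makes the evidence for that judgement easier to see, and more uniform from marker to marker.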

I only just discovered that bits of the e-mail I sent to the reporter actually appear in the article. Had I known that, I probably would have paid a bit more attention to my grammar :S

e-Assessment centre for the South-West?

Last Thursday I attended a conference/workshop exploring the business case for setting up an e-Assessment centre in the South West of England. The conference was organised by Dr. Kevin Hapeshi from the University of Gloucestershire, with talks from Denise Whitelock (always inspiring) and myself. I must say my presentation skills are somewhat out of practice, and I could probably have done with reviewing the art of speaking.

The day took an unexpected turn during the afternoon sessions, where we were going to discuss the details of a regional e-assessment centre. Both work groups came to the surprising conclusion that perhaps this was actually not such a good idea after all. There are plenty of challenges that we need to face in this domain, but none of them really benefit from a regional approach.

The biggest challenge is the development of mature tools, standards and practice. I've blogged about this in the past (see Standards in assessment, Open Source Assessment tools, The ideal assessment engine). This is not a challenge we can face as individual universities or regions, however; it is something that requires (inter)national collaboration. Many of the other challenges are institutional. They revolve around generating awareness and changing culture and practice. This is not something you can do from the outside. We find it hard enough to change practice on the other campuses of our own University. Changing practice requires proximity to practitioners, and to the learning and teaching strategies and strategic stakeholders. I don't think that proximity is something you can achieve with a regional (and thus external) centre.

There are of course hybrid models, whereby universities could collaborate in virtual networks, nominating and funding members of their own staff to work in and with the centre. But this might just become a rather artificial model, tailored mostly towards fitting the proposed solution rather than the problem.

Wednesday 20 February 2008

Standards in assessment

The attempts to define standards for computer based assessment have so far been largely unsuccessful. I think one of the problems is the lack of clarity in the functional domain. Do we really understand the ontology of an assessment, or of a question? I don't think we do, and perhaps we never will. It is easy enough to find a way to define a multiple choice question in XML, but to do the same for 'any' question... I think that's a bit much to ask. You inevitably end up constraining what you can do.
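To make the point concrete, here is what "a multiple choice question in XML" could look like. This is a deliberately simple, invented shape for illustration, not actual IMS QTI markup, which is far more elaborate; that gap is exactly the problem.

```python
import xml.etree.ElementTree as ET

# A made-up minimal XML shape for one multiple choice item.
item_xml = """
<item id="q1" type="multipleChoice">
  <prompt>Which of these is a constrained response item type?</prompt>
  <choice id="a" correct="true">Multiple choice</choice>
  <choice id="b">Essay</choice>
  <choice id="c">Oral presentation</choice>
</item>
"""

item = ET.fromstring(item_xml)
correct = [c.text for c in item.findall("choice") if c.get("correct") == "true"]
print(correct)  # ['Multiple choice']
```

Describing this one item type is trivial; describing every conceivable item type (simulations, free text, portfolios...) in one schema is where the standardisation effort runs aground.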

This was one of the main problems with IMS QTI 1.2. The specification was incredibly limited, and thus any system supporting the standard was by definition just as limited. Worse, most systems did not even implement the standard fully, or correctly, and so QTI 1.2 never really got anywhere.

Version 2 was supposed to solve this. The specification (currently still a draft, version 2.1) is indeed a lot better, and allows for many more question types, feedback options and scoring strategies. The problem is that to make all this possible in a standard XML definition, the specification has become rather complicated. I'm not sure it is a viable proposition to expect any vendor to support the standard in full. To make matters worse, all the big vendors, but also the Open University's OpenLearn, seem to be pushing the Common Cartridge, which includes an amended version of IMS QTI 1.2. While it would be nice to be able to exchange and run questions that are embedded in learning materials from Blackboard or Moodle, it does strike me as very unlikely that any vendor will now have a serious incentive to support anything beyond the Common Cartridge.

And so we might have to live with the fact that we are not going to have any standard for the exchange of question and/or assessment information. I'm not sure that's a bad thing though. We would probably be better off designing a decent system first, instead of trying to standardise functionality that hasn't even been implemented anywhere yet. What use is interoperability if there isn't anything to exchange?

Monday 18 February 2008

Open Source Assessment tools

I attended a JISC-CETIS workshop today discussing the latest set of open source assessment tools that JISC has commissioned. The triad of projects is to deliver authoring, item banking and delivery tools based on the IMS QTI 2.1 standard. The individual projects are:
  • AQuRate (The authoring tool, developed by Kingston University)
  • Minibix (The item banking tool, developed by Cambridge University)
  • ASDEL (The delivery engine, developed by Southampton University)
While I applaud the 3 project teams for the work that they have done, I must also say that I was concerned.

There have been a lot of projects funded by the sector that were supposed to kick start the development and uptake of standards-based e-assessment: projects like TOIA, APIS and R2Q2. None of these projects ever became much more than a proof of concept, and the current set seems to be heading the same way. None of these projects has the institutional backing of a stakeholder that understands the long term business need for such a solution. Instead they are research bids by researchers and developers whose only mandate is to fulfil the requirements of the project plan, and whose only resources are those granted by, in this case, JISC. And so after the kick start the project dies, as the funding dries up.

Are we then forever in the hands of the commercial vendors? I certainly hope not, as so far they have been completely unable to impress me with their products. Most commercial tools offer little of the pedagogical affordances and support that they should, and are often even technically rather weak. I deeply believe that our only serious hope of ever getting a valuable and usable set of assessment tools is to develop them collaboratively ourselves. Unfortunately the success that Moodle has become in the world of VLEs seems unlikely to be repeated in the area of e-Assessment anytime soon.

Ideas anyone?

Sunday 17 February 2008

Personal Learning and other challenges

The National Academy of Engineering has been trying to identify the grand engineering challenges for this century. It obviously features several challenges in environmental sciences, artificial intelligence and virtual reality. I was very pleased, and slightly surprised, to also see 'Advance personalized learning' as one of the grand challenges.

While the explanation seems to start off with a bit of a disappointing focus on learning styles, it then picks up with applications that I find much more interesting, such as tailored support for learners based on ubiquitous data collection about their progress. I am not quite sure this is an engineering challenge though. 99% of the technology needed to meet this challenge already exists. It is primarily our inability, and sometimes unwillingness, to implement it properly that makes it a challenge.

A good start could probably be made in the education of those who are going to be delivering this personalised learning. From what I recall from my various bits of formal teacher training, the emphasis was on a rather old fashioned model of learning. I was taught how to teach, but seldom did we learn how people learn.

A second area that needs challenging, in my opinion, is regulation and management. In most institutes I have worked for, innovation was strangled by conservative financial management (where risk is a dirty word, and profits are always expected in advance to cover investments... a very peculiar idea). In many areas professional bodies also seem to work more to the detriment than the benefit of innovation. The message there often seems to be 'do as we have always done, and you'll be alright'.

Personal learning definitely is one of our great challenges. But the challenge is not to invent it, or make it technologically possible. The challenge is 'simply' to implement it, and make it work.

Friday 15 February 2008

Edutagger

I think it was Stephen Downes who referred me to Edutagger in one of his posts (where the man finds the time to post the extraordinary amount of stuff that he does is really beyond me, by the way).

This, I think, is a really great idea, fitting in perfectly with developments around OER and the new role(s) of the university that I referred to in earlier posts on this topic. As mentioned in the post on assessing informal learning, I strongly believe that the true value of the university is in the guidance and support it provides around learning, and the accreditation of that learning. Edutagger to me is the perfect example of how, in this case in K12 education in the US, Web 2.0 technology is being used to realise one of the components of this guidance: "Where do I find reliable and useful resources to learn about topic X?". I think every module or programme should probably have a collection of tagged and rated bookmarks like these in addition to (and eventually instead of?) their reading lists.

Monday 11 February 2008

Peer Assessment project: WebPA

One topic that I'm very interested in, both from a pedagogical perspective and a workload management one, is peer review and peer marking. I was therefore delighted to be asked to be involved with the WebPA project at Loughborough University. The WebPA project is building a tool to support peer marking of group assignments. The system has been used with great success at Loughborough for many years, and the project aims to make the tool available as an open source solution that can be implemented at other universities.

We have just held our first workshop at the University of Derby, preparing for a pilot roll out later this semester. For those interested, there is also a workshop running in Loughborough on the 5th of March. If you are interested in peer assessment, I would thoroughly recommend it.

Quote

I don't normally make a habit of posting quotes, although I do like them. This one, however, seemed too good not to share.

"Live as if you were to die tomorrow. Learn as if you were to live forever." - Mahatma Gandhi

Amen.

Friday 8 February 2008

OpenLearn: back to basics?

Both Donald Clark and Seb Schmoller have posted rather critical reviews of the content published by the Open University on their OpenLearn learning space.

One of the challenges for the OU, I think, is the scale and methodology on which they (have to) work. Issues like scalability, reliability and accessibility will have been very high on the list of priorities, and whether we like it or not, all of these usually make it a lot harder to be creative and innovative. Nevertheless, in spending over 5 million on repurposing this set of mostly rather old and dull resources, it does seem that the OU has let this 'overhead' get way out of hand.

It is also a matter of expectations, perhaps. I know when we attend conferences and presentations there is a lot of interesting and exciting stuff floating around, but if you poke a bit deeper into most of these presentations, you will find that the majority actually relates to very small pockets of practice, pilots, or plans. Very little truly innovative practice actually develops into mainstream embedded practice. The uncomfortable truth of projects like OpenLearn is that they suddenly expose a lot more than the tiny tip of the iceberg that usually makes its way to dissemination.

And so perhaps this is really a good thing. It is an honest look into the state of higher education, and it gives some very clear, and perhaps uncomfortable, truths about the state and quality of the majority of our learning materials and activities. We should perhaps log out of Second Life and close Facebook for a minute, and start cleaning up some of the more mundane mess in the backyard.

Monday 28 January 2008

How are we learning

Seb Schmoller's post made me aware of this excellent video.

It is one of the most comprehensive and catchy summaries of learning today that I have seen in a long time.