Tuesday, 18 December 2007

Granularity of learning

I was watching this very interesting presentation by Martin Weller, called Bridging the gap between web 2.0 and higher education.




Something that caught my particular attention was Martin's remarks about how technology challenges presuppositions about granularity, and what the consequences for the granularity of learning might be. I find the idea of more granular learning compelling, in particular in combination with personal learning (although Martin also rightfully points out that this is not just about what you want to learn, but also how you want to learn it!). On the other hand, I also cannot help being concerned about what we lose from the holistic approach if we insist on atomizing everything. The whole is, after all, often more than the sum of its parts.

Perhaps the solution lies in a differentiation between atomized accreditations on the one side, the majority of which will probably be APEL, and separate aggregating accreditations on the other, which require you to integrate and join up what you've learned, and reflect on it at the appropriate level. These qualifications would focus more on (metacognitive) skills and trans-disciplinary thinking.

As I've said before, our future is not in content, it is in guidance and accreditation!

Monday, 17 December 2007

Cape Town OER Declaration

I finally found some time to read the Cape Town OER Declaration, and a selection from the deluge of comments that have piled up in my RSS reader over the past weeks. Given the critical tone of most of these, I was expecting something fundamentally flawed.

The declaration is an initiative of the Shuttleworth Foundation (yes, that's the same Shuttleworth as the one behind Ubuntu). The purpose of the declaration is to accelerate the international effort to promote open resources, technology and teaching practices in education. Unfortunately, many advocates of open learning have not really welcomed the declaration with open arms.

A noteworthy example of this can be found in the blog Half an Hour: Criticizing the Cape Town Declaration by Stephen Downes. While I normally find Stephen's posts very eloquent, I cannot support many of the arguments he makes. It leaves me with the impression that his main point (and that of many others) is that they are a bit miffed that they weren't consulted. To me the whole 'let's decide everything in a big all-encompassing committee' culture is exactly the reason that hardly anything ever gets done, or done properly, in education. Open source communities understand that democracies don't work. A benevolent dictator, or a meritocracy (or both), is what you need. I'm sure Mark Shuttleworth understood exactly that when he limited participation in drafting this initial declaration.

I for one support the initiative. I'm going to sign up for it now, and I would invite you to consider the same.

... Which reminds me, I still need to formally license the stuff on here under a Creative Commons license...







Tuesday, 11 December 2007

Proceedings from Work-based Learning Futures

I blogged in April about my excellent experience attending and presenting at the Work-based Learning Futures conference in Buxton. I announced then that the proceedings would most likely be published as a special UVAC publication, and now they have been. You can read up on the contribution from the e-APEL project team that I am involved in by following this link. I thoroughly recommend having a look at the entire publication, as I think we learning technologists would benefit from realizing that innovation does not always mean technology.

Wednesday, 5 December 2007

The echo of teaching

I thought I'd have a go at answering The Learning Circuits Blog: December Big Question - What did you learn about learning? One of the projects I have worked on this year is the development of a tool supporting the Accreditation of Prior Learning (APL). It has been truly enlightening for me in many ways.

APL is going to be a core activity of a lot of universities, I reckon. Content no longer seems to be the core business of the sector, as has been shown by initiatives such as Open Learn. Coming to grips with this is a bit like trying to understand open source business models, I think: it requires a fundamental rethink of what is valuable. For most universities I think that value is going to lie increasingly in guidance and coaching on the one side, and assessment and accreditation on the other.

There seems to be a problem with accreditation of learning that has not taken place within the controlled environment of a course, though. Very few universities are serious about APL, and I can't help but wonder why. Part of it, I am sure, is to do with fees and such, but not all. After some reflection I think we must also admit that APL exposes some critical weaknesses of our assessment processes. In theory our assessments are supposed to discriminate between those learners who have attained certain outcomes, and those who haven't. If that was all there was to it, then surely learners claiming APL could simply do the regular assessment, but without attending the course.

The reason this isn't common practice, I think, is that most assessments don't really assess the right outcomes. Most assessments, I think, are designed to trigger an echo of teaching, and not of learning. And of course our teaching is so good that if the learner echoes a confirmation of our teaching, then surely the intended learning has taken place. But what if learning has not been a result of our teaching? Suddenly we cannot short-circuit the inherent difficulty of assessing competence by resorting to looking for the echo of teaching.

I think it would be interesting to dig into the assessment practices used by recruitment agencies. In a way they are asked to make assessments that employers aren't confident we have made. Furthermore, whatever they assess is always without the luxury safety net of knowing what has probably been learned, and by which means.

Monday, 3 December 2007

Psychometrics versus pedagogy

One of the things I noticed during the workshop I attended last week is the fundamental difference in approach to computer-based assessment between psychometricians and educators. It all boils down to the use and value of statistics.

I think within education we often don't evaluate our teaching and assessment practice enough, in particular by means of objective standards. For assessment practice the best-known methods of scientific evaluation are Classical Test Theory and, most importantly, Item Response Theory. I don't think many educators really bother with these, and some will stare very blankly should I ever bring up these terms in conversation. Instead we rely on our sixth pedagogic sense, which rather mysteriously enables us to divine which assessment methods and questions work, and which do not.

The psychometric approach is radically different. Almost religiously sometimes, items are tested and analyzed using IRT. The most meticulous detail (question order, item order, delivery medium etc.) is reviewed for its influence on responses. These statistical analyses only focus on one thing, though, and that is the alignment of the item with the overall test. What the church of IRT sometimes seems to forget, however, is to question whether or not the test itself actually measures what it is assumed to measure. To a degree IRT is a bit of a circular argument, if not used carefully and in conjunction with other arguments.
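For readers who haven't bumped into IRT before, it may help to see what the simplest model actually looks like. The one-parameter logistic (Rasch) model predicts the probability of a correct response purely from the gap between a learner's ability and an item's difficulty, both on the same scale. This is just an illustrative sketch, not any particular psychometric package:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch (1PL) model: the probability that a learner with the
    given ability answers an item of the given difficulty correctly.
    Both parameters live on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability exactly matches difficulty, the model predicts a
# 50% chance of a correct response.
even_match = rasch_probability(ability=0.0, difficulty=0.0)

# A learner well above the item's difficulty is very likely to
# answer correctly; one well below is very unlikely to.
strong = rasch_probability(ability=2.0, difficulty=0.0)
weak = rasch_probability(ability=-2.0, difficulty=0.0)
```

Note that everything here is internal to the test: the model relates responses to item and person parameters, but says nothing about whether those parameters track the competence you actually care about, which is exactly the circularity worry above.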

It seems to me we could both do with a bit of each other's zeal. Educators should really try to build in some structured, objective evaluation of their assessment practices, and psychometricians should perhaps question the appropriateness of their instruments more fundamentally.