Tuesday 18 December 2007

Granularity of learning

I was watching this very interesting presentation by Martin Weller, called Bridging the gap between web 2.0 and higher education.

Something that particularly caught my attention was Martin's remark about how technology challenges presuppositions about granularity, and what the consequences for the granularity of learning might be. I find the idea of more granular learning compelling, in particular in combination with personal learning (although Martin also rightly points out that this is not just about what you want to learn, but also how you want to learn it!). On the other hand, I cannot help being concerned about what we lose from the holistic approach if we insist on atomizing everything. The whole is, after all, often more than the sum of its parts.

Perhaps the solution lies in a differentiation between atomized accreditations on the one side, the majority of which will probably be APEL, and separate aggregating accreditations on the other, which require you to integrate and join up what you've learned, and to reflect on it at the appropriate level. These qualifications would focus more on (metacognitive) skills and trans-disciplinary thinking.

As I've said before, our future is not in content; it is in guidance and accreditation!

Monday 17 December 2007

Cape Town OER Declaration

I finally found some time to read the Cape Town OER Declaration, and a selection from the deluge of comments that has piled up in my RSS reader over the past few weeks. Given the critical tone of most of these, I was expecting something fundamentally flawed.

The declaration is an initiative of the Shuttleworth Foundation (yes, that's the same Shuttleworth as the one in Ubuntu). The purpose of the declaration is to accelerate the international effort to promote open resources, technology and teaching practices in education. Unfortunately many advocates of open learning have not really welcomed the declaration with open arms.

A noteworthy example of this can be found in the blog Half an Hour: Criticizing the Cape Town Declaration by Stephen Downes. While I normally find Stephen's posts very eloquent, I cannot support many of the arguments he makes. It leaves me with the impression that his main point (and that of many others) is that they are a bit miffed that they weren't consulted. To me, the whole 'let's decide everything in a big all-encompassing committee' culture is exactly the reason that hardly anything ever gets done, or done properly, in education. Open source communities understand that democracies don't work. A benevolent dictator, or a meritocracy (or both), is what you need. I'm sure Mark Shuttleworth understood exactly that when he limited participation in drafting this initial declaration.

I for one support the initiative. I'm going to sign up for it now, and I would invite you to consider the same.

... Which reminds me, I still need to formally license the stuff on here with a Creative Commons license...


Tuesday 11 December 2007

Proceedings from Work-based Learning Futures

I blogged in April about my excellent experience attending and presenting at the Work-based Learning Futures conference in Buxton. I announced then that the proceedings would most likely be published as a special UVAC publication, and now they have been. You can read up on the contribution from the e-APEL project team that I am involved in by following this link. I thoroughly recommend having a look at the entire publication, as I think we learning technologists would benefit from realizing that innovation does not always mean technology.

Wednesday 5 December 2007

The echo of teaching

I thought I'd have a go at answering The Learning Circuits Blog: December Big Question - What did you learn about learning? One of the projects I have worked on this year is the development of a tool supporting the Accreditation of Prior Learning (APL). It has been truly enlightening for me in many ways.

APL is going to be a core activity for a lot of universities, I reckon. Content no longer seems to be the core business of the sector, as has been shown by initiatives such as OpenLearn. Coming to grips with this is, I think, a bit like trying to understand open source business models: it requires a fundamental rethink of what is valuable. For most universities, I think that value is going to lie increasingly in guidance and coaching on the one side, and assessment and accreditation on the other.

There seems to be a problem with accreditation of learning that has not taken place within the controlled environment of a course, though. Very few universities are serious about APL, and I can't help but wonder why. Part of it, I am sure, is to do with fees and such, but not all. After some reflection, I think we must also admit that APL exposes some critical weaknesses in our assessment processes. In theory, our assessments are supposed to discriminate between those learners who have attained certain outcomes and those who haven't. If that was all there was to it, then surely learners claiming APL could simply take the regular assessment, but without attending the course.

The reason this isn't common practice, I think, is that most assessments don't really assess the right outcomes. Most assessments, I think, are designed to trigger an echo of teaching, and not of learning. And of course our teaching is so good that if the learner echoes a confirmation of our teaching, then surely that means the intended learning has taken place. But what if learning has not been a result of our teaching? Suddenly we cannot short-circuit the inherent difficulty of assessing competence by resorting to looking for the echo of teaching.

I think it would be interesting to dig into the assessment practices used by recruitment agencies. In a way, they are asked to make the assessments that employers aren't confident we have made. Furthermore, they always assess without the luxurious safety net of knowing what has probably been learned, and by which means.

Monday 3 December 2007

Psychometrics versus pedagogy

One of the things I noticed during the workshop I attended last week is the fundamental difference in approach to computer-based assessment between psychometricians and educators. It all boils down to the use and value of statistics.

I think within education we often don't evaluate our teaching and assessment practice enough, in particular by means of objective standards. For assessment practice, the best-known methods of scientific evaluation are Classical Test Theory and, most importantly, Item Response Theory. I don't think many educators really bother with these, and some will stare very blankly should I ever bring up these terms in conversation. Instead we rely on our sixth pedagogic sense, which rather mysteriously enables us to divine which assessment methods and questions work, and which do not.

The psychometric approach is radically different. Almost religiously sometimes, items are tested and analyzed using IRT. The most meticulous detail (question order, item order, delivery medium, etc.) is reviewed for its influence on responses. These statistical analyses only focus on one thing, though, and that is the alignment of the item with the overall test. What the church of IRT sometimes seems to forget, however, is to question whether or not the test itself actually measures what it is assumed to measure. To a degree, IRT is a bit of a circular argument if not used carefully and in conjunction with other arguments.
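For the blank starers among us: at the heart of IRT sits something quite compact, such as the two-parameter logistic (2PL) model, which relates a learner's ability to their chance of answering an item correctly. Here is a minimal sketch in Python; the parameter values are purely illustrative, not drawn from any real item bank.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) item response function: the
    probability that a learner of ability theta answers an item
    with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A discriminating item separates weak from strong learners sharply;
# a poorly discriminating one barely does (all values illustrative).
for theta in (-2.0, 0.0, 2.0):
    sharp = p_correct(theta, a=2.0, b=0.0)
    blunt = p_correct(theta, a=0.3, b=0.0)
    print(f"ability {theta:+.1f}: sharp item {sharp:.2f}, blunt item {blunt:.2f}")
```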

It seems to me we could both do with a bit of each other's zeal. Educators should really try to build in some structured, objective evaluation of their assessment practices, and psychometricians should perhaps question the appropriateness of their means more fundamentally.

Thursday 29 November 2007

High-stakes national assessments and ranking

It's been a long day full of many, many presentations. Fortunately the last presentation was actually one of the more interesting ones, and I did not have to fight too hard to stave off the embarrassment of falling asleep. It was a presentation by Jakob Wandall from Skolestyrelsen about the new national computer-based assessments that have been introduced in secondary education in Denmark.

While the technical side of this was interesting (they were using computer adaptive testing, for instance), the most interesting bit of the talk had nothing to do with technology at all. It had to do with how the test was used, presented and regulated.
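As an aside, the core selection step of computer adaptive testing is surprisingly small. Here is a minimal sketch, assuming the common two-parameter logistic (2PL) IRT model; the item bank and all its values are invented for illustration. The adaptive step simply presents whichever remaining item is most informative at the current estimate of the learner's ability.

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    # Fisher information of a 2PL item: largest where the item's
    # difficulty b lies close to the learner's ability theta
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, unanswered):
    # The adaptive step: present whichever remaining item is most
    # informative at the current ability estimate
    return max(unanswered, key=lambda item: information(theta, item["a"], item["b"]))

bank = [
    {"id": "easy",   "a": 1.0, "b": -1.5},
    {"id": "medium", "a": 1.3, "b":  0.0},
    {"id": "hard",   "a": 1.1, "b":  1.5},
]
print(next_item(0.2, bank)["id"])  # a middling learner gets the medium item
```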

In England, high-stakes tests are a very big deal. The main reason is that they are inevitably translated into rankings and funding consequences, leading to teachers and schools becoming completely obsessed with assessments, drilling students until they are green in the face in the idle expectation that it might raise the school a place or two in the oh-so-important regional league tables. It is this abomination that I think the Danes have elegantly addressed (apparently with the English system as the example of what they wanted to avoid at all costs, and understandably so!).

The publication of the results of these national benchmarks is strictly regulated. The national average is published and used for policy purposes, but no regional or individual result is made public. Teachers can review all results of all their students, down to the responses to individual questions, but are forbidden to communicate these results other than to the student and their parents (and this communication takes the form not of a grade, but of a textual report with feedback). Students have to be given their result by a qualified teacher, who discusses the results and provides relevant feedback on the performance.

So it is impossible for a school, a local authority or the press to rate and rank scores just on the numerical outcomes of a single test. The system provides stakeholders at every level with the relevant information, without the detrimental effects of publication that we see in the US and UK. I think we've got a lot to learn from the Scandinavian approach to education.

Monday 26 November 2007

The ideal assessment engine

I've been looking into criteria for assessment technologies a lot lately. One reason is that we are looking into migrating our current system to a new platform (as the old one, Authorware, is no longer supported). The other reason is that I have been invited by the Joint Research Centre to take part in a workshop on quality criteria for computer-based assessments. I will be posting on the outcomes of that workshop next week. For now, though, here are some of my thoughts on the topic.

Flexibility
The main strength of our current system is its flexibility. This has several aspects that are all important in their own right:
  • Flexibility in design: The layout of the question can be modified as desired, using media and such to create an authentic and relevant presentation
  • Flexible interactions: There is no point in systems that have parameterized five question types for you, where all you can do is define a title, question text and alternatives, and select the right answer. Interactions testing and supporting higher-order skills are, or should be, more complex than that.
  • Detailed and partial scoring: A discriminating question does not just tell you whether you were completely right or completely wrong. It can tell you the degree to which you were right, and which elements of your answer had any value. It might also penalize you for serious and fundamental mistakes.
  • Detailed feedback: A lot of the mistakes learners make are predictable. If we allow assessment systems to capture these mistakes and give targeted feedback, learners can practice their skills while lecturers can focus their time on more in-depth problems that require their personal engagement.
  • Extensive question generation and randomization options: For the re-usability of assessments, generating questions using rules and algorithms gives a single question almost infinite re-usability. At the assessment level, the same is true for assessment generation based on large banks of questions tagged with subject matter and difficulty.
So far, no real news for TRIADS users (although no proprietary system I know of really supports this well). The two sketches below illustrate the scoring and generation points.
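To make the scoring point more concrete, here is a minimal sketch of partial credit with penalties for fundamental mistakes. The function and the penalty values are mine, invented for illustration; this is not our engine's actual API.

```python
def score_response(selected, correct, penalties):
    """Partial-credit scoring for a multiple-response question:
    award a fraction of the marks per correct option selected,
    subtract a penalty per misconception chosen, floor at zero."""
    credit = len(selected & correct) / len(correct)
    penalty = sum(penalties.get(option, 0.0) for option in selected - correct)
    return max(0.0, credit - penalty)

# B and D are right; A is a mild slip, C a fundamental misconception.
penalties = {"A": 0.25, "C": 0.5}
print(score_response({"B", "D"}, {"B", "D"}, penalties))  # 1.0  fully right
print(score_response({"B", "A"}, {"B", "D"}, penalties))  # 0.25 partially right
print(score_response({"B", "C"}, {"B", "D"}, penalties))  # 0.0  penalized
```

And in the same hypothetical spirit, a sketch of question generation: a single parameterized template yields an effectively unlimited pool of reproducible items, already tagged for bank-based assessment generation.

```python
import random

def generate_question(seed):
    """One parameterized template; each seed yields a distinct,
    reproducible instance of the same underlying question."""
    rng = random.Random(seed)
    volts = rng.choice([6, 9, 12])
    ohms = rng.randint(2, 9)
    return {
        "stem": f"A {volts} V supply drives a resistance of {ohms} ohm. "
                "What current flows (in A)?",
        "answer": round(volts / ohms, 2),
        "tags": {"subject": "electricity", "difficulty": "easy"},
    }

# Draw a five-question paper from the 'bank' this template implies.
paper = [generate_question(seed) for seed in range(5)]
for q in paper:
    print(q["stem"], "->", q["answer"])
```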

Questions without assessments
As Dylan Wiliam so eloquently put it at the ALT-C conference (you can find his podcast on the matter at http://www.dylanwiliam.net/), the main value of learning technology lies in its capacity "to allow teachers to make real-time instructional decisions, thus increasing student engagement in learning, and the responsiveness of instruction to student needs." I could not agree more. However, this means that questions should not just exist within the assessment, but instead be embedded within the materials and activities. Questions become widgets that can of course still function within an assessment, but also work on their own, without losing the ability to record and respond to interaction. This, as far as I'm aware, is uncharted territory for assessment systems. Territory that we hope to explore in the next iteration of our assessment engine.
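To give a flavour of what such a question widget might look like, here is a minimal sketch. Everything in it (the class, the callback, the log format) is invented for illustration; it describes neither our engine nor anyone else's.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class QuestionWidget:
    """A question that can sit inside an assessment or stand alone
    in learning material, recording every interaction either way."""
    stem: str
    answer: str
    feedback: str
    log: List[dict] = field(default_factory=list)
    on_response: Callable[[dict], None] = lambda event: None  # hook for real-time decisions

    def respond(self, given: str) -> str:
        event = {"stem": self.stem, "given": given, "correct": given == self.answer}
        self.log.append(event)   # the interaction is recorded regardless of context
        self.on_response(event)  # e.g. feed a teacher's live dashboard
        return "Correct!" if event["correct"] else self.feedback

widget = QuestionWidget("2 + 2 = ?", "4", "Remember: addition, not multiplication.")
print(widget.respond("5"))  # the widget works outside any assessment
print(widget.log)
```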

Wednesday 21 November 2007

e-APEL article in Response

The new online journal Response has published a 'work in progress' report I wrote on the e-APEL project that I'm involved in. I'm afraid it is rather dated, as the journal took more than 8 months to actually publish this version. Still, for those interested in the accreditation of prior learning, or IT projects in education in general, it might be a worthwhile read.

Friday 12 October 2007

FREMA

At a JISC meeting this Thursday I was reminded of the FREMA project. I had been aware of their attempts to map the domain of e-assessment for a number of years, but I was not aware of some recent developments. Most interesting to me was their use of a semantic wiki. In all honesty I had never heard of the concept, but I found the idea fascinating. In particular for the purposes of knowledge management and dissemination, I think the possibilities here are truly significant.

For the FREMA project in particular, one of the things they were able to do as a result of using this technology is a gap analysis, not just of their understanding of the domain, but also of the available solutions within it. Unfortunately there are still quite a few gaps to be filled in the area of e-assessment, but at least through resources like these we can maximize our efficiency in finding existing solutions, and focus our efforts on those gaps where the needs are most pressing.

Tuesday 9 October 2007

Can we just go back to learning please?

I'm getting a bit tired of the whole e-learning, eLearning, blended learning, learning 2.0 debate. The same goes for e-assessment, computer-based assessment and computer-aided assessment. All these debates seem to imply that there is a right mode of teaching and a right mode of learning. And if there is one thing that does not exist for learning, it's a magic recipe to make it happen. In the same way that I'm not in favor of the (ab)use of learning styles, I am also highly allergic to the vocabulary wars around the use of technology in learning.

As Clive Shepherd points out in his post on the subject: "the essence of good design for learning is to first develop a strategy that will produce an effective outcome and only then consider the media through which this strategy can be delivered efficiently." The success of learning is not a result of the medium used. It is the result of the strategy, and of the match between the strategy and the medium. Multiple choice tests aren't bad, nor are they good. They are a tool that can be used and abused, expertly and ineptly.

Tuesday 11 September 2007

Is web 2.0 dumbing us down?

As some of you might know, I'm an avid listener to podcasts (mainly to make my daily hour-long commute seem a little less wasteful). Two of the casts I listened to recently grabbed my attention in particular.

The first is a presentation titled Republic 2.0 by Cass Sunstein. In this presentation Mr. Sunstein explains the risks web 2.0 poses to democracy. While the increased access to the expression and consumption of information and opinion seems like a wonderful thing, there are downsides to how we engage with blogs, wikis and social networks. Due to the vast amount of information out there, but also because of the nature of these new social artifacts, we tend to expose ourselves only to information and opinions from those we are close to (ideologically or otherwise). Research has shown that in homogeneous groups like these, polarization takes place: views and opinions become more singular and extreme.

This is a concern in itself, and something to keep in mind when considering aspects of our education system, such as schools based on subgroups of our society along dimensions such as religion, class or even geography. The concern got a new dimension for me, however, after listening to one of the seminars of the Long Now Foundation. In his talk, Ignore Confident Forecasters, Philip Tetlock shares some insights from his research on people's ability to make appropriate predictions about complex future events (in this case in world politics). He found two types of thinking, leading to two distinct patterns of predictions. One group was classified as 'hedgehogs': people who had a single specialism or conviction, and tried to explain everything in the world from this single perspective. The second group, the 'foxes', were broader in their thinking and in the constructs they applied to solving problems. The foxes significantly outperformed the hedgehogs.

So this raises the question: if we allow ourselves to be exposed only to those views and people we sympathize with, something the web increasingly allows us to do, are we depriving ourselves of the tools for a balanced and effective mental development?

Friday 3 August 2007

Simple tools

It's not about the technology. We often say it, but I think we rarely really mean it. Let's be fair now: technology is kinda fun. I know I get carried away far too easily with new shiny things. But sometimes I forget that not everyone is paid to play with new shiny things. Some of us actually have to teach students on a regular basis. So for those colleagues, here are some lovely, simple tools that I think can give you a lot of bang for just a little investment.

Course Genie
Course Genie is a lovely tool. It allows you to create nice-looking materials and quizzes without any skills other than MS Word. It's a great tool if you want to develop something slightly more interactive than just uploading a module specification or PowerPoint to your VLE, and it integrates nicely with Moodle, WebCT and Blackboard. The quiz options are relatively simple, but actually have some powerful capabilities, such as rich feedback based on the answers given. The downside is that Course Genie does not save answers or scores, or track progress.

QUIA
God knows who thought up the name for this one. In all honesty, QUIA would not be a tool that I'd use myself. It does a lot of things (games, quizzes, surveys), but it does none of them very well. Nevertheless, if you want a cheap tool that will instantly allow you to create a lot of simple interactive resources, this might be the tool for you. The absolute plus of QUIA is that you instantly tap into the whole QUIA community, which allows you to share and rate all developed materials. QUIA also has a very decent result tracking option, which even allows you to export a detailed result analysis to Excel.

Electronic Voting Systems
A great way to make teaching, especially in larger groups, more interactive. It's a great way to find out what students think, understand or want during your lesson, so that you can adapt to their needs in the appropriate manner. We've been using Turning Point for a short while now, and so far I am quite impressed. It's extremely easy to set up a simple poll supported by PowerPoint. It is, however, also possible to do much more complex things, such as linking responses to demographics.

Monday 16 July 2007

Essays and plagiarism

I have often questioned the prejudice a lot of academics have in favor of essays, and against a lot of other means of assessing learners. Perhaps this is a matter of how they were taught and assessed themselves. On the other hand, I have often thought that it is a matter of a lack of training. After all, most lecturers do not get much training in how assessment should be done properly. In addition, most lecturers don't have much time to spend on assessment either. The result is an assignment that is easy to develop (although a lot harder to mark). Either way, this prejudice is one of the major barriers to the uptake of e-assessment. It is also a serious cause for concern about the validity of our degrees.

So it was with some curiosity and expectation that I started reading It’s not plagiarism, it’s an easy essay on the Learn Online blog, where an interview was posted with a provider of an online essay writing service. I thought it was rather appalling.

As mentioned, I'm no fan of essays. They are certainly overrated, overused and usually very poorly delivered. However I do not think they are useless. Someone's critical thinking is rather wasted if it isn't combined with the ability to express that thinking. If the learner has any sort of ambition to climb the corporate (or other) ladder, writing reports and proposals will be something they do regularly. So as long as essay assignments are given some sort of relevant subject and format, I think they are a very valid form of assessment.

The limited value of essays, however, does not justify the existence of services like this. I don't care how the service providers attempt to rationalize it, as is done in this article. It is just morally wrong to provide a service that is obviously designed to let people cheat. The audacity to claim that the objective here is to transform education baffles me. If you really want to change education, I could think of a million other and better ways of doing it than by making money out of helping people cheat. I have no respect for anyone in this line of business whatsoever.

Friday 13 July 2007

CAA Conference, Loughborough

I just returned from our annual visit to Loughborough for the CAA conference. It was good to catch up again with colleagues from the UK and abroad. It's always interesting to see themes emerging from amongst the vast number of interesting papers and sessions. It is, however, also worrying how little some things seem to change over time. I think it was Denise Whitelock from the OU who rightfully pleaded for the education sector to grab control of e-assessment, because there still aren't any good assessment systems. The commercial systems are in general pedagogically poor, and the HE sector seems to have a very difficult time producing anything beyond prototypes, or very discipline-specific and narrow innovations. Nevertheless, there are lots of interesting things happening.

CAA and language

There has been a lot of activity in the use of advanced technology to support languages. We have briefly looked at the use of speech recognition and text-to-speech technology to support ESOL this year, so I was very happy to see similar interests and developments in various other places. Xin Yu and John Lowe from the University of Bath are investigating the use of recorded audio and video in the assessment of spoken English in several universities in China, where apparently a basic mastery of English is a mandatory part of the curriculum for all degree programs. Cambridge Examinations presented their new online assessment environment, but I must say that pedagogically I found little of interest there; the solution's main focus is coping with the almost industrial scale on which they assess students. What did sound interesting was the research done by the University of East Anglia, the SQA and the RNID on using avatar signing in assessment. The avatars used are quite advanced, in order to accommodate the rich set of expressions needed to properly convey sign language. It would be really interesting to see if we can use these avatars in our own developments in the ESOL domain.

Facilitation of marking and reflection

Another interesting development is the increased use of technology in support of reflective processes, such as evaluation and peer review. I think e-assessment has suffered greatly from its association with MCQ quizzes, so it is good to see a lot of people waking up to the realisation that the value of technology is not (necessarily) automation. In fact I would say that these types of facilitation are usually more innovative and transformative than the common automation of existing practice. This was also confirmed by Bobby Elliot from the SQA, who referred to this practice as assessment 1.5 (as opposed to assessment 2.0... and we all want everything to be 2.0 these days, of course).

Saturday 30 June 2007

Learning styles

I came across two articles today discussing learning styles. One was on the blog of Clark Quinn, the other on the blog of Harold Jarche. It was good to see some healthy criticism of our hang-up with learning styles.

Don't get me wrong, I do think there is some use in the idea of learning styles. When designing resources or activities, it is paramount that we look at the design from different angles and perspectives. Using learning styles can be a great way to do this. When used appropriately, this will help you create flexible and varied learning resources and activities that have the potential to support rich learning for a wide variety of learners.

The problem arises when we give in to our innate need to categorize people. Learning styles seem like such a wonderful tool for slapping a 'this is how you teach me' manual on people. I just don't think we can or should simplify personal learning in this way. Aside from the question of whether or not the categories used are the right ones, and the diagnostic tools accurate, there is a more fundamental problem: people don't learn best using a single style. Powerful learning occurs when people are stimulated in a varied and rich way, for instance by addressing multiple senses.

When linking new concepts to existing ones, the question isn't what the best single link is that we can make. The question is how we can make as many useful links as possible. That is what results in powerful, deep, long-term learning.

Thursday 7 June 2007

Student complaints

I had a chat with a colleague last week, as I was looking for some feedback on the e-assessment she had run this year. She told me about this student complaint, and how it was handled. In all honesty I still can't quite believe we can be this stupid.

Some of our lecturers are experimenting with feedback during summative exams. This means learners immediately know whether they answered a question right or wrong, and sometimes why. It also allows them to instantly view their result at the end of the exam. So far, results from the experiment have been quite good (except for learners who really haven't mastered any of the subject matter, who obviously get rather depressed by the whole affair).

A student who had previously taken part in one of these experimental exams was now taking a regular exam: no feedback and no immediate score. The lecturer, as usual, had received the transcripts from us and, after possible moderation, had published these to the learners. This learner, however, was apparently convinced that the lecturer had fiddled with the results. Why else would they not have been published the moment the exam finished? Apparently the student made quite a scene, upon which the programme leader decided to accede to the student's demands and make the exam available again for his perusal for another week. He was then also granted a resit. I really don't understand the problem here.

Let's look at a normal exam. When you hand in the paper, do you get an immediate result? No, of course you don't; a lecturer takes it away, marks it, maybe a colleague moderates it, and the mark is published. It's the lecturer's job to 'fiddle' with the results; that's what we are paying them for, isn't it? They look at the answers provided and make a judgment on the extent to which they satisfy the assessment criteria. Introducing (partial) marking by a computer can only make this process more objective, not less!

It's no wonder lecturers and teachers sometimes complain about the lack of respect learners give them, because this isn't just about caving in to a student and giving them their way. It also sends the message that this lecturer was wrong: that it is the lecturer who has to cater to the student's every whim, as she has now been instructed to do. I understand this lecturer will not be teaching here next year, and if this story is true, I can understand why.

Saturday 2 June 2007

Plagiarism

This article in the New York Times caught my eye this evening. Not the smartest thing to do when you're a superintendent, to copy your speech off the internet. Then again, I feel we do sometimes lose perspective in these issues.

Only very rarely in our lives do we manage to be original. And in fact, more often than not when we are, we are not as right or as effective as we could be. Perfection, after all, takes time, practice and experience. Our whole success as a species stems from our ability to copy each other.

So is it so bad to plagiarize? Sure, it is wrong to claim credit or ownership for something that is not your own. But when you are doing your job, isn't it perfectly normal to apply existing best practice to it? In fact, aren't we all expected to do this? When doing so, we are not claiming ownership or credit. We are merely utilizing the collective experience of our race to further our cause. In my opinion, that's the only thing this superintendent is really guilty of. OK, maybe she could have made a bit more of an effort to rephrase some of the ideas and concepts she collected from the internet. Plagiarism, however, is really not the issue here; that is just taking things a tad too far.

Sunday 27 May 2007

Targets, procedures and learning

The idea that responsibility and creativity are slowly dying of neglect has bothered me for a while now. I've never really been able to put my finger on what the problem was, but now I have had some help from two very distinguished thinkers:

The first person to lift some of the veil was Peter M. Senge. I recently read his book "The Fifth Discipline: The Art & Practice of The Learning Organization", which I thoroughly enjoyed. In it he discusses how people, but also organizations, learn. More importantly, he addresses why they often don't. A lot of that links back to the systems we use to enforce and measure: systems that, by their constant need for satisfaction, lead us to short-term, symptom-driven thinking and compliance, instead of long-term, holistic and creative problem solving. I am looking forward to reading Dr. Senge's treatise on education, "Schools That Learn: A Fifth Discipline Fieldbook for Educators, Parents, and Everyone Who Cares About Education", in which I hope to gain some insight into how to change our education system to create an environment in which students are once again challenged to be creative, instead of pummeled into being compliant.

Last Thursday I coincidentally had the opportunity to attend a lecture by one of our visiting professors, John Seddon. His crushing analysis of the effects of the target- and regulation-driven framework that is destroying much of our public services built seamlessly on the seeds sown by my earlier reading. I would recommend visiting the Vanguard website to have a look at some of the resources and events that are planned.

I see lots of parallels between these management paradigms and the concepts that keep us busy in education. The discussions about formal and informal learning, the pros and cons of instructional design, and the problem of over-assessment all seem to be rooted in similar (mis)conceptions about what makes us learn and thrive. There are parts of our education system, and even more so of the collection of professional bodies governing some of the qualifications and licenses, that seem tailored towards breeding armies of self-confirming professionals, instead of critical and independent thinkers. And while this seems comfortable at first, I do believe we are slowly digging ourselves some enormous holes.

-------
Update: I gather that the podcast of John Seddon's talk is now available.

Saturday 21 April 2007

Work-based Learning Futures

I had the great pleasure of attending the WBL Futures conference in Buxton last Thursday and Friday. I found the experience incredibly refreshing. The conference was relatively small, but with a very high level of quality amongst its visitors: 63% of attendees also presented a paper, as did I, so most guests had a lot to bring to the discussion. I also found the atmosphere incredibly constructive and innovative, and it made me wonder where true innovation in education is really taking place. Sure, technology is a wonderful enabler for many pedagogical developments, but I couldn't help thinking that the true innovators were to be found in this domain of work-based learning. While I, and many of my colleagues in e-learning, are still debating personal learning spaces and social networks, here these concepts have been applied for many years, with or without the help of technology. And much of the underlying pedagogy of applied, personally negotiated learning is something I think will spread across HE in the next few years. We will have to grow into our roles as coaches and assessors, and let go of the idea that there is a future in being an expert, a lecturer. For those keen to take some steps on this path, have a look at the abstracts. Most of these are discussion documents; a more concise publication is being prepared for October.

I will try and collect and upload some of the presentations and such (including ours) as soon as possible... watch this space!

Monday 16 April 2007

Screencasts on scoring

I've been meaning to build up a collection of learning resources supporting the professional development of teaching staff in relation to (e-)assessment. Aside from the wiki on our website, I have now uploaded two screencasts on scoring strategies. This is my first attempt at using this medium, so any feedback on its effectiveness, or lack thereof, would be greatly appreciated.

Thursday 5 April 2007

It's funny how things coincide sometimes. Today a student came to our office wondering if we could help her develop an assessment in support of her dissertation. For some reason several students have made this request this year, where none did before. Personally I think it's a wonderful thing, and a testament to how students now view technology as an integral and important part of their lives and careers. It also shows that they are much less prone to pigeonholing technology.

For years the Centre for Interactive Assessment Development has been supporting lecturers by developing rich assessments. The University of Derby has in general adopted a far more innovative approach than most, progressing e-assessments far beyond the domain of the multiple choice quiz. Still, the applications sought for innovative assessment practice have been rather limited. Primarily, assessments were measurements of learning: mostly summative, or formative only in the sense of providing practice and a benchmark for a later summative exercise. Assessments that actually teach, or diagnose, are a relatively new addition to our portfolio. Assessments for other purposes, such as research or evaluation, have never even been considered part of the centre's value and expertise. This is something I am desperate to change. I'm glad at least the students seem to agree with me on that one.

Tuesday 13 March 2007

Assessing Informal Learning

I have the pleasure of working on the e-APEL project, developing a way to help students assess their prior learning, in particular where that learning is experiential. It's a tremendously exciting field, combining developments in diagnostic assessment, e-portfolios and informal learning. The latter especially is a domain I wasn't intimately familiar with, but during the initial months of my involvement with this project it has certainly roused my interest.

Informal learning makes assessment much more crucial, but it also emphasizes questions and issues of validity and reliability. In formal learning, trust in the quality of the learning activities already provides us with a degree of confidence in the outcomes achieved by our students. We feel more in touch with the process, and are therefore in a position to moderate any shortcomings of the assessment with that intimate familiarity.

I know this sounds rather awful, because obviously we are always supposed to have brilliantly valid and solid designs for teaching and assessment. But creating valid assessments isn't easy; it's not easy at all. Still, it is a problem that needs addressing, for various reasons. I will list a few:
  • More and more universities are moving away from the business model where their knowledge, or content, is the added value they sell. Content has become a commodity, and learning content is no different. Anyone can look up the principles of general relativity in great detail without even going near a university. This is one reason why universities are making their knowledge publicly available, such as OpenCourseWare at MIT and, more recently, OpenLearn at the Open University in the UK. What they have realised is that the true value of the university lies in the guidance and support it provides around learning, and in the accreditation (and thus the assessment!).
  • Developments towards more simulation- and game-based learning raise questions about how to assess these less tangible and structured ways of learning in an appropriate and quantitative way. The same is true for assessing competences and skills instead of knowledge and understanding.
Informal, or at the very least unstructured and non-linear, learning will increase in importance, and is the crucial unsolved challenge we face in the implementation of both our lifelong learning agenda and the agenda of the knowledge economy (which, ironically, is turning out not to be about knowledge at all!). Unfortunately there is a deafening silence at most institutions on the subject, and the session on lifelong learning I attended at the JISC conference today didn't touch on it once.

JISC conference resolutions

I was attending the final keynote at the annual JISC conference in Birmingham, an engaging presentation by Tom Loosemore. He was discussing the BBC's 15 principles of good web design, number 6 to be precise: The web is a conversation… join in!

Now, the actual intent behind that principle was more about tone of voice and transparency, but reading the phrase literally, before he elaborated, finally triggered my resolve to start a professional blog (I already have a personal one, but I find I rarely use it, mainly because it lacks focus).

So, here it is!

I had some other names in mind before 'René's Assessment', but these were already taken (most of those by people who abandoned them after a single post in 2004... rather annoying). Then again, this isn't a bad name I suppose.