Friday, 5 December 2008
I've recently been given a more active role in the ownership of our VLE, Blackboard. And while at heart I am an open source fanatic, I do also believe that in the end the tools aren't necessarily that important; it's how you use them that counts. With that in mind I was planning to take a positive approach to my new-found challenge.
My initial exposure was quite positive. I attended the Blackboard Europe conference 2008 in Manchester in spring, and was pleasantly surprised to hear Blackboard talk about openness, open standards and connectivity to, or even integration with, Moodle and Sakai. I was also very impressed by some of the community work being done, in particular the work around the Assignment submission building block at Sheffield Hallam University. Unfortunately this exuberance was not going to last.
My first frustrations started when trying to get more information on the assignment handler. I was very keen for us to have a look at it, and would have been more than happy to make a case for buying it. However, Blackboard was strangely evasive. The building block wasn't exactly ready, and they didn't really know what they were going to do with it. In our most recent discussion this changed to 'We don't really want to sell it to you, you can hire us to redevelop it'...
What? So you have a great bit of functionality, but instead of selling it, or helping us integrate it, you want us to actually fork out the full development cost again?
I'm not quite sure how this fits in with Blackboard's new-found spirit of openness, but if this is the way in which they see their relationship with the community then I think I'll consider myself thoroughly disillusioned. Instead of supporting and empowering their community to build more value around their product, they seem to choose to stifle innovation and collaboration. Similarly, in our own efforts to start upskilling our team to create new functionality through building blocks, I have not found a great deal of support either. Blackboard seem not to offer much in terms of training or support here, but instead offer to build a building block for us and let us watch and learn while they do it, and then leave us to it.
It's a shame that some vendors behave in this way, as it creates such an antagonistic atmosphere. You would think we both have similar goals and interests here, yet we are treating each other like potential enemies and rivals. For example, I still don't know officially what Blackboard are going to release in version 9, as they feel they need to avoid anything that might be mistaken for a guarantee or legal commitment to deliver. But where does that leave us with our roadmap planning?
And I guess that's why I prefer Open Source software. Not because everything needs to be free, but because I want a mature, constructive, collaborative relationship with the partners that we work with. And unfortunately many commercial vendors seem to have great difficulty doing that.
Monday, 18 February 2008
Open Source Assessment tools
I attended a JISC CETIS workshop today discussing the latest set of Open Source assessment tools that JISC has commissioned. The triad of projects is to deliver authoring, item banking and delivery tools based on the IMS QTI 2.1 standard. The individual projects are:
- AQuRate (The authoring tool, developed by Kingston University)
- Minibix (The item banking tool, developed by Cambridge University)
- ASDEL (The delivery engine, developed by Southampton University)
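For anyone who hasn't looked at QTI before, the sketch below shows roughly what a minimal QTI 2.1 multiple-choice item looks like when built programmatically. This is my own illustration using Python's standard library, not output from any of the three tools above, and the element and attribute names follow my reading of the specification rather than anything authoritative.

```python
# A rough sketch of a minimal QTI 2.1 multiple-choice item, built with
# Python's standard library. Element and attribute names follow my reading
# of the IMS QTI 2.1 specification; check against the spec before relying on it.
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", QTI_NS)

def qti(tag, **attrs):
    """Create an element in the QTI namespace."""
    return ET.Element(f"{{{QTI_NS}}}{tag}", attrs)

# The item itself: one single-response choice interaction.
item = qti("assessmentItem", identifier="capitals-001",
           title="Capital cities", adaptive="false", timeDependent="false")

# Declare the response variable and its correct value.
response = qti("responseDeclaration", identifier="RESPONSE",
               cardinality="single", baseType="identifier")
correct = ET.SubElement(response, f"{{{QTI_NS}}}correctResponse")
value = ET.SubElement(correct, f"{{{QTI_NS}}}value")
value.text = "ChoiceA"
item.append(response)

# Declare the outcome variable that will hold the score.
item.append(qti("outcomeDeclaration", identifier="SCORE",
                cardinality="single", baseType="float"))

# The visible body: a prompt and three choices.
body = ET.SubElement(item, f"{{{QTI_NS}}}itemBody")
interaction = ET.SubElement(body, f"{{{QTI_NS}}}choiceInteraction",
                            responseIdentifier="RESPONSE",
                            shuffle="true", maxChoices="1")
prompt = ET.SubElement(interaction, f"{{{QTI_NS}}}prompt")
prompt.text = "Which city is the capital of the Netherlands?"
for ident, text in [("ChoiceA", "Amsterdam"),
                    ("ChoiceB", "Rotterdam"),
                    ("ChoiceC", "The Hague")]:
    choice = ET.SubElement(interaction, f"{{{QTI_NS}}}simpleChoice",
                           identifier=ident)
    choice.text = text

# Standard response processing: match the correct response, score 1 or 0.
item.append(qti("responseProcessing",
                template="http://www.imsglobal.org/question/qti_v2p1/"
                         "rptemplates/match_correct"))

print(ET.tostring(item, encoding="unicode"))
```

The whole point of the standard is that an item like this, authored in AQuRate, should in principle be storable in Minibix and playable in ASDEL without modification.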
There have been a lot of projects funded by the sector that were supposed to kick-start the development and uptake of standards-based e-assessment: projects like TOIA, APIS and R2Q2. None of these projects ever became much more than a proof of concept, and the current set seems to be heading the same way. None of these projects has the institutional backing of a stakeholder that understands the long-term business need for such a solution. Instead they are research bids by researchers and developers whose only mandate is to fulfill the requirements of the project plan, and whose only resources are those granted by, in this case, JISC. And so after the kick-start the project dies, as the funding dries up.
Are we then forever in the hands of the commercial vendors? I certainly hope not, as so far they have been completely unable to impress me with their products. Most commercial tools offer few of the pedagogical affordances and little of the support that they should, and are often even technically rather weak. I deeply believe that the only serious hope we have of ever getting a valuable and usable set of assessment tools is to collaboratively develop them ourselves. Unfortunately the success that Moodle has become in the world of VLEs seems unlikely to be repeated in the area of e-Assessment anytime soon.
Ideas anyone?
Labels: Assessment, e-Assessment, ecass_Feb08, Interoperability, OpenSource
Monday, 11 February 2008
Peer Assessment project: WebPA
One topic that I'm very interested in, both from a pedagogical perspective and a workload management one, is peer review and peer marking. I was therefore delighted when I was asked to be involved with the WebPA project at Loughborough University. The WebPA project is building a tool to support peer marking of group assignments. The system has been used with great success at Loughborough for many years, and the project aims to make the tool available as an open source solution that can be implemented at other universities.
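To give a feel for how peer marking of a group assignment can work, here is a deliberately simplified sketch of one common peer-moderation approach: each group member rates everyone in the group, each rater's scores are normalised so that generous and stingy markers carry equal weight, and the group mark is then scaled by each member's share of the normalised ratings. To be clear, this is my own illustration of the general idea, not a description of WebPA's actual algorithm.

```python
# A deliberately simplified sketch of peer-moderated group marking.
# This illustrates the general idea only; it is not WebPA's actual algorithm.

def peer_moderated_marks(group_mark, ratings):
    """Scale a single group mark into individual marks using peer ratings.

    group_mark: the mark awarded to the group as a whole (e.g. 65.0).
    ratings: dict mapping each rater to a dict of {member: score}, covering
             every member of the group (self-ratings included). Assumes each
             rater awards at least one non-zero score.
    """
    members = set()
    for scores in ratings.values():
        members.update(scores)

    # Normalise each rater's scores so every rater hands out a total of 1.0,
    # regardless of how generous or stingy they were overall.
    factor = {m: 0.0 for m in members}
    for scores in ratings.values():
        total = sum(scores.values())
        for member, score in scores.items():
            factor[member] += score / total

    # Rescale so that an 'average' member has a factor of exactly 1.0 and
    # therefore simply receives the group mark.
    scale = len(members) / len(ratings)
    return {m: round(group_mark * factor[m] * scale, 1) for m in members}

# Three students rate each other's contribution out of 5:
ratings = {
    "alice": {"alice": 4, "bob": 4, "carol": 2},
    "bob":   {"alice": 5, "bob": 3, "carol": 2},
    "carol": {"alice": 4, "bob": 4, "carol": 2},
}
print(peer_moderated_marks(65.0, ratings))
# Alice ends up above the group mark of 65, Carol well below it.
```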
We have just held our first workshop at the University of Derby, preparing for a pilot roll-out later this semester. There is also a workshop running at Loughborough on the 5th of March; if you are interested in peer assessment, I would thoroughly recommend it.
Labels: Assessment, e-Assessment, OpenSource, Peer assessment
Monday, 17 December 2007
Cape Town OER Declaration
I finally found some time to read the Cape Town OER Declaration, and a selection from the deluge of comments that have piled up in my RSS reader over the past weeks. Given the critical tone of most of these, I was expecting something fundamentally flawed.
The declaration is an initiative of the Shuttleworth Foundation (yes, that's the same Shuttleworth as the one in Ubuntu). The purpose of the declaration is to accelerate the international effort to promote open resources, technology and teaching practices in education. Unfortunately many advocates of open learning have not really welcomed the declaration with open arms.
A noteworthy example of this can be found in the blog post Half an Hour: Criticizing the Cape Town Declaration by Stephen Downes. While I normally find Stephen's posts very eloquent, I cannot support many of the arguments he makes. It leaves me with the impression that his main point (and that of many others) is that they are a bit miffed that they weren't consulted. To me, the whole 'let's decide everything in a big all-encompassing committee' culture is exactly the reason that hardly anything ever gets done, or done properly, in education. Open source communities understand that democracies don't work. A benevolent dictator, or a meritocracy (or both), is what you need. I'm sure Mark Shuttleworth understood exactly that when he limited participation in drafting this initial declaration.
I for one support the initiative. I'm going to sign up for it now, and I would invite you to consider the same.
... Which reminds me, I still need to formally license the stuff on here with a Creative Commons license...
Monday, 26 November 2007
The ideal assessment engine
I've been looking into criteria for assessment technologies a lot lately. One reason is that we are looking into migrating our current system to a new platform (as the old one, Authorware, is no longer supported). The other reason is that I have been invited by the Joint Research Centre to take part in a workshop on quality criteria for computer based assessments. I will be posting on the outcomes of that workshop next week. For now though, here are some of my thoughts on the topic.
Flexibility
The main strength of our current system is flexibility. This has several aspects that are all important in their own right:
- Flexibility in design: The layout of the question can be modified as desired, using media and such to create an authentic and relevant presentation.
- Flexible interactions: There is no point in systems that have parameterized five question types for you, where all you can do is define a title, question text and alternatives, and select the right answer. Interactions testing and supporting higher-order skills are, or should be, more complex than that.
- Detailed and partial scoring: A discriminating question does not just tell you whether you were completely right or completely wrong. It can tell you the degree to which you were right, and which elements of your answer had any value. It might also penalize you for serious and fundamental mistakes.
- Detailed feedback: A lot of the mistakes learners make are predictable. If we allow assessment systems to capture these mistakes and give targeted feedback, learners can practice their skills while lecturers focus their time on more in-depth problems that require their personal engagement.
- Extensive question generation and randomization options: For the re-usability of assessments, generating questions using rules and algorithms gives a single question almost infinite re-usability. At the assessment level, the same is true for assessment generation based on large banks of questions tagged with subject matter and difficulty. (A rough sketch of what these last three points could look like in code follows below.)
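To make those last three bullets a bit more concrete, here is a rough sketch of a generated question that awards partial credit and returns targeted feedback for predictable mistakes. It is purely illustrative and is not a description of our current engine or its successor.

```python
# A rough sketch of a generated question with partial credit and targeted
# feedback for predictable mistakes. Illustrative only.
import random
from dataclasses import dataclass

@dataclass
class Outcome:
    score: float     # between 0.0 (wrong) and 1.0 (fully correct)
    feedback: str

def generate_question(seed):
    """Generate one variant of a 'solve a*x + b = c' question from a seed,
    so a single item template yields an effectively unlimited pool."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)
    x = rng.randint(-10, 10)
    b = rng.randint(1, 20)
    c = a * x + b
    prompt = f"Solve for x: {a}x + {b} = {c}"

    def score(answer):
        if answer == x:
            return Outcome(1.0, "Correct.")
        # Predictable mistake: dividing by a before subtracting b.
        if answer == c / a:
            return Outcome(0.4, f"You divided by {a} before subtracting {b}; "
                                "move the constant term across first.")
        # Predictable mistake: adding b instead of subtracting it.
        if answer == (c + b) / a:
            return Outcome(0.4, f"Check the sign: {b} should be subtracted "
                                "from both sides, not added.")
        return Outcome(0.0, "Not quite. Try isolating the x term first.")

    return prompt, score

# Each seed gives a different variant of the same question template.
prompt, score = generate_question(seed=42)
print(prompt)
print(score(0))   # full, partial or zero credit, plus targeted feedback
```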
Questions without assessments
As Dylan Wiliam so eloquently put it at the ALT-C conference (you can find his podcast on the matter at http://www.dylanwiliam.net/), the main value of learning technology is "to allow teachers to make real-time instructional decisions, thus increasing student engagement in learning, and the responsiveness of instruction to student needs." I could not agree more. However, this means that questions should not just exist within the assessment, but instead be embedded within the materials and activities. Questions become widgets that can of course still function within an assessment, but also work on their own without losing the ability to record and respond to interaction. This, as far as I'm aware, is uncharted territory for assessment systems. Territory that we hope to explore in the next iteration of our assessment engine.
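As a sketch of what I mean by questions as widgets, consider something like the following. The class and method names are hypothetical; the point is simply that the question carries its own interaction log, and an assessment becomes just one of several possible containers for it.

```python
# A sketch of 'questions as widgets': the question records its own
# interactions whether or not it happens to live inside an assessment.
# All names here are hypothetical; this is an illustration, not a design.
from datetime import datetime, timezone

class QuestionWidget:
    def __init__(self, prompt, correct_answer):
        self.prompt = prompt
        self.correct_answer = correct_answer
        self.interactions = []   # the widget keeps its own interaction log

    def respond(self, learner, answer):
        """Record the interaction and return immediate feedback."""
        correct = (answer == self.correct_answer)
        self.interactions.append({
            "learner": learner,
            "answer": answer,
            "correct": correct,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return "Correct." if correct else "Have another look at the material."

class Assessment:
    """One possible container for widgets; a page of learning material or a
    lecture poll could wrap the very same widgets just as easily."""
    def __init__(self, widgets):
        self.widgets = widgets

    def score(self, learner):
        correct = sum(
            1 for w in self.widgets
            if any(i["learner"] == learner and i["correct"]
                   for i in w.interactions))
        return correct, len(self.widgets)

# Standalone use, embedded in a page of material:
q = QuestionWidget("Is assessment only ever summative?", "no")
print(q.respond("student-1", "no"))

# The same widget, later, counted as part of an assessment:
exam = Assessment([q])
print(exam.score("student-1"))   # (1, 1)
```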
Labels: Assessment, CAA, CBA, e-Assessment, e-learning, Education, OpenSource, Research, Software, Technology, tools