Friday, 5 December 2008
I've recently been given a more active role in the ownership of our VLE, Blackboard. And while at heart I am an open source fanatic, I do also believe that in the end the tools aren't necessarily that important; it is how you use them that counts. With that in mind I was planning to take a positive approach to my new-found challenge.
My initial exposure was quite positive. I attended the Blackboard Europe conference 2008 in Manchester in spring, and was pleasantly surprised to hear Blackboard talk about openness, open standards and connectivity to, or even integration with, Moodle and Sakai. I was also very impressed by some of the community work being done, in particular the work around the Assignment submission building block at Sheffield Hallam University. Unfortunately this exuberance was not going to last.
My first frustrations started when trying to get more information on the assignment building block. I was very keen for us to have a look at it, and would have been more than happy to make a case for buying it. However, Blackboard was strangely evasive. The building block wasn't exactly ready, and they didn't really know what they were going to do with it. In our most recent discussion this changed to 'We don't really want to sell it to you, but you can hire us to redevelop it'...
What? So you have a great bit of functionality, but instead of selling it, or helping us integrate it, you want us to fork out the full development cost all over again?
I'm not quite sure how this fits in with Blackboard's new-found spirit of openness, but if this is how they see their relationship with the community then I'll consider myself thoroughly disillusioned. Instead of supporting and empowering their community to build more value around their product, they seem to choose to stifle innovation and collaboration. Similarly, in our own efforts to upskill our team to create new functionality through building blocks, I have not found a great deal of support either. Blackboard seem to offer little in terms of training or support here; instead they offer to build a building block for us while we watch and learn, and then leave us to it.
It's a shame that some vendors behave in this way, as it creates such an antagonistic atmosphere. You would think we have similar goals and interests here, yet we treat each other like potential enemies and rivals. For example, I still don't officially know what Blackboard are going to release in version 9, as they feel they need to avoid anything that might be mistaken for a guarantee or legal commitment to deliver. But where does that leave us with our roadmap planning?
And I guess that's why I prefer open source software. Not because everything needs to be free, but because I want a mature, constructive, collaborative relationship with the partners we work with. And unfortunately many commercial vendors seem to have great difficulty doing that.
Friday, 3 October 2008
Evidence based teaching
One of the topics that came up several times over the past few days in Reykjavik is that of the differences in culture around assessment. Different countries perceive and deal with assessment in different ways, and this can have a significant impact on the effect of the assessments and on the success of the educational system as a whole.
One particularly interesting approach was outlined by Jakob Wandall, whose work on the Danish national tests I blogged about last year in High stakes national assessments and ranking. I tried to capture Jakob's slide in a photograph, but unfortunately that failed rather miserably, so I have tried to recreate his message in the graphic below:
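As a rough stand-in for that graphic, a few lines of matplotlib along these lines would produce the kind of two-axis plot I mean. The country positions here are purely illustrative placeholders of my own, not Jakob's actual data:

```python
# Illustrative recreation of the assessment-culture graph described below.
# Horizontal axis: what the assessment focuses on (the individual learner vs. the system).
# Vertical axis: what the results are primarily used for (control/accountability vs.
# informing teaching and learning).
# The coordinates are invented placeholders, not Jakob Wandall's actual data.
import matplotlib.pyplot as plt

countries = {
    "Denmark (Scandinavian model)": (0.2, 0.8),  # learner-focused, formative use
    "England (Anglo-Saxon model)": (0.8, 0.2),   # system-focused, tied to accountability
    "Finland": (0.8, 0.8),                       # system-level measurement, used to inform teaching
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (x, y) in countries.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 6))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("Focus of assessment: learner -> system")
ax.set_ylabel("Primary use of results: control -> informing learning")
ax.set_title("Assessment cultures by country (illustrative sketch)")
plt.show()
```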


The graph outlines how both the focus of the assessment (on the horizontal axis) and the purpose for which the results are primarily used (on the vertical axis) vary from country to country. I thought the visualisation was very interesting. Comparing this to, for instance, the outcomes of the 2006 PISA study, it is interesting to note that neither the approach of the Scandinavian schools (which focus primarily on formative assessment centred on the learner) nor the Anglo-Saxon approach (which leans much more heavily on measuring performance indicators tied to funding) really yields the best results.
The stars of PISA are of course the Finns, and their unique approach is apparent from this graph. Instead of sitting somewhere between the top left and the bottom right of the graph, they sit towards the top right. The Finnish system highly values national measurement, evaluating the success of the system through objective measures. However, these measurements are not tied to any form of control, either through formal channels or through more informal ones such as public rankings. Instead, the measurements made in the Finnish system serve to inform teaching and learning. An evidence-based approach to teaching, shall we say.
When I translate this to our own practice, I can't help but relate it to the demands to increase the amount of formative assessment in our teaching. And while I am sympathetic to these demands, such assessments sit in the top left of the above graph, informing and supporting individual learning processes. So perhaps, instead of focusing primarily on formative individual assessment, we should (also) focus on assessment and evaluation that informs teaching: building an infrastructure through which lecturers can stay in touch with the progress, successes and difficulties of all their students, and continuously modify their teaching based on that understanding.
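To make that slightly more concrete, here is a minimal, hypothetical sketch of the kind of aggregation such an infrastructure would need to do: take item-level results from formative quizzes and flag the topics where the cohort as a whole is struggling, so the lecturer can adjust the next session. The record format, topic names and threshold are my own assumptions, not any particular VLE's API.

```python
# Hypothetical sketch: turn item-level formative quiz results into a per-topic
# overview a lecturer could act on. All names and thresholds are assumptions.
from collections import defaultdict

# Each record: (student_id, topic, correct?) - e.g. exported from a VLE quiz tool.
results = [
    ("s1", "limits", True), ("s1", "derivatives", False),
    ("s2", "limits", True), ("s2", "derivatives", False),
    ("s3", "limits", False), ("s3", "derivatives", False),
]

def topic_overview(records, struggle_threshold=0.5):
    """Return, per topic, the proportion of correct answers and whether the
    cohort appears to be struggling (proportion below the threshold)."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for _, topic, correct in records:
        totals[topic][1] += 1
        if correct:
            totals[topic][0] += 1
    overview = {}
    for topic, (correct, attempted) in totals.items():
        rate = correct / attempted
        overview[topic] = {"correct_rate": rate, "needs_attention": rate < struggle_threshold}
    return overview

for topic, stats in topic_overview(results).items():
    flag = "revisit in next session" if stats["needs_attention"] else "on track"
    print(f"{topic}: {stats['correct_rate']:.0%} correct - {flag}")
```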
Labels: Assessment, Conference, Education, Evaluation, Formative, Quality Assurance
Monday, 15 September 2008
Free at last?
Research done by PISA has shown convincingly that school systems with a high degree of autonomy perform better (see my post on SAT troubles for a bit more info). It seems that the Liberal Democrats have now formally adopted this position, and have outlined plans to scrap the national curriculum. A brave move. It will be interesting to see how this discussion unfolds, and whether it survives the inevitable backlash from the control brigade.
Tuesday, 10 June 2008
Towards a research agenda on computer-based assessment
At the EU workshop I attended in Ispra, Italy last year (see the blog posts Psychometrics versus pedagogy and High stakes national assessments and ranking) we agreed to write some articles on quality aspects of computer-based assessment to go into a report for the European Commission. I'm glad to say that the report has now been published, and it can be accessed online via the following link: Towards a research agenda on computer-based assessment
I think there are many interesting articles and views within the report, and I will certainly be revisiting the perspectives that my colleagues presented at the workshop. Do have a look; I am positive there will be something of interest for virtually anyone.
Labels: Adaptive testing, Assessment, CAA, CBA, Conference, e-Assessment, Education, Evaluation, Feedback, Psychometrics, Research, Resources
Saturday, 17 May 2008
SAT troubles
There's been a lot of upheaval this week about SATs tests. Following a report published by the Children, Schools and Families Committee of the House of Commons, MPs warned that national SATs tests distort education, which then led to the schools minister defending the SATs, followed by technical difficulties with the tests. Personally I am not convinced the tests themselves are really the problem.
One of the keynotes at the Blackboard Europe 2008 conference was given by Andreas Schleicher, the director of the PISA programme for the OECD. He presented a very compelling set of ideas about successful (secondary) education systems. Some of the conditions he identified (all of them based on the data gathered by the programme over the past years) are:
- No stratification. Education systems that have separate streams, schools and/or qualifications for learners based on their performance tend to do poorly. An example of this is the Dutch system, where secondary education is stratified into VMBO, HAVO and VWO based on a learner's performance in primary school. The British system actually comes out quite well here (if we ignore the stratification that takes place because of the divide between private and state schools, that is).
- Standards. It is important to work to common standards. Central examinations are one way of enforcing common standards, and so the SAT tests do satisfy this condition.
- Autonomy. It is crucial for schools and teachers to have a high degree of autonomy as long as their performance raises no concerns. Here we obviously fail completely as the British system dictates how schools teach and assess to a very high degree.
- High expectations, challenge and support. For both teachers and learners, education should provide challenge and the expectation of high performance, but also plenty of support (staff development, for instance). I think this is another area in which we fail to deliver.
Our main problem lies in the area of autonomy. We no longer trust our teachers and schools to do what they do best based on their professional judgments. Instead there is this weird notion that education is better served by central, generic judgments made by policymakers. The problem with SATs isn't that they provide a common high-stakes benchmark for learners. The problem is that this information is abused for public league tables and the like, which inevitably leads to pressures on learners that have nothing to do with their personal learning. It's the same pressure that leads universities to coerce students into filling in the National Student Survey more favorably.
In Finland, schools have no idea how their performance compares to their neighbors'. Funnily enough, in Finland that doesn't really make a difference: only 4% of the variance in scores on the PISA tests can be attributed to differences in quality between schools. Finnish schools have around nine applicants for every position offered, and this is not because of higher salaries or anything like that. It is because the system in Finland provides a challenging environment in which people are valued, can grow and develop, and can actually make a difference.
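As an aside, that 4% is a between-school share of variance. Here is a quick, hypothetical illustration of how such a share can be computed from grouped test scores; the numbers are invented, not PISA data, and the calculation is the simple equal-group-size version.

```python
# Hypothetical illustration of a "share of variance between schools" figure,
# the kind of statistic behind the 4% claim above. Scores are invented, not PISA data.
# With equal-sized groups, total variance = average within-school variance
#                                          + variance of the school means.
from statistics import mean, pvariance

scores_by_school = {
    "school_A": [512, 498, 530, 505],
    "school_B": [471, 455, 492, 480],
    "school_C": [538, 520, 546, 531],
}

all_scores = [s for scores in scores_by_school.values() for s in scores]
school_means = [mean(scores) for scores in scores_by_school.values()]

total_variance = pvariance(all_scores)       # spread across all students
between_schools = pvariance(school_means)    # spread of the school averages
within_schools = mean(pvariance(s) for s in scores_by_school.values())

share_between = between_schools / total_variance
print(f"Between-school share of variance: {share_between:.0%}")
print(f"Check: {between_schools + within_schools:.1f} ~= {total_variance:.1f}")
```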
Labels: Assessment, complaints, e-Assessment, Education, Quality Assurance
Sunday, 17 February 2008
Personal Learning and other challenges
The National Academy of Engineering has been trying to identify the grand engineering challenges for this century. The list obviously features several challenges in environmental sciences, artificial intelligence and virtual reality. I was very pleased, and slightly surprised, to also see Advance personalized learning as one of the grand challenges.
While the explanation seems to start off with a bit of a disappointing focus on learning styles, it then picks up with applications that I find much more interesting, such as tailored support to learners based on ubiquitous data collection about their progress. I am not quite sure this is an engineering challenge though. 99% of the technology needed to meet this challenge already exists. It is primarily our inability, and sometimes unwillingness, to implement it properly that makes it a challenge.
A good start could probably be made with the education of those who are going to be delivering this personalised learning. From what I recall of my various bits of formal teacher training, the emphasis was on a rather old-fashioned model of learning. I was taught how to teach, but seldom did we learn how people learn.
A second area that needs challenging, in my opinion, is regulations and management. In most institutes I have worked for, innovation was strangled by conservative financial management (where risk is a dirty word, and profits are always expected in advance to cover investments... a very peculiar idea). In many areas professional bodies also seem to work more to the detriment than to the benefit of innovation. The message there often seems to be 'do as we have always done, and you'll be alright'.
Personal learning definitely is one of our great challenges. But the challenge is not to invent it, or to make it technologically possible. The challenge is 'simply' to implement it, and make it work.
Labels: e-learning, Education, Learning Styles, Personal Learning
Monday, 16 July 2007
Essays and plagiarism
I have often questioned the prejudice a lot of academics have in favor of essays, and against a lot of other means of assessing learners. Perhaps this is a matter of how they were taught and assessed themselves. On the other hand, I have often thought it is a matter of a lack of training. After all, most lecturers do not get much training in how assessment should be done properly. In addition, most lecturers don't have much time to spend on assessment either. The result is an assignment that is easy to develop (although a lot harder to mark). Either way, this prejudice is one of the major barriers to the uptake of e-assessment. It is also a serious cause for concern about the validity of our degrees.
So it was with some curiosity and expectation that I started reading It’s not plagiarism, it’s an easy essay on the Learn Online blog, where an interview was posted with a provider of an online essay writing service. I thought it was rather appalling.
As mentioned, I'm no fan of essays. They are certainly overrated, overused and usually very poorly delivered. However, I do not think they are useless. Someone's critical thinking is rather wasted if it isn't combined with the ability to express that thinking. If a learner has any sort of ambition to climb the corporate (or other) ladder, writing reports and proposals will be something they do regularly. So as long as essay assignments are given a relevant subject and format, I think they are a perfectly valid form of assessment.
The limited value of essays, however, does not justify the existence of services like this. I don't care how the service providers attempt to rationalize it, as is done in this article. It is just morally wrong to provide a service that is obviously designed to let people cheat. The audacity to claim that the objective here is to transform education baffles me. If you really want to change education, I could think of a million other and better ways of doing it than by making money out of helping people cheat. I have no respect for anyone in this line of business whatsoever.
Saturday, 30 June 2007
Learning styles
I came across two articles today discussing learning styles, one on the blog of Clark Quinn, the other on the blog of Harold Jarche. It was good to see some healthy criticism of our hang-up with learning styles.
Don't get me wrong, I do think there is some use in the idea of learning styles. When designing resources or activities, it is paramount that we look at the design from different angles and perspectives, and using learning styles can be a great way to do this. When used appropriately, this will help you create flexible and varied learning resources and activities that have the potential to support rich learning for a wide variety of learners.
The problem arises when we give in to our innate need to categorize people. Learning styles seem like such a wonderful tool for slapping a 'this is how you teach me' manual on people. I just don't think we can, or should, simplify personal learning in this way. Aside from the question of whether the categories used are the right ones, and whether the diagnostic tools are accurate, there is a more fundamental problem: people don't learn best using a single style. Powerful learning occurs when people are stimulated in a varied and rich way, for instance by addressing multiple senses.
When linking new concepts to existing ones, the question isn't what the single best link is that we can make. The question is how we can make as many useful links as possible. That is what results in powerful, long-term, deep learning.
Labels: Education, Learning Design, Learning Styles, Teaching
Tuesday, 13 March 2007
Assessing Informal Learning
I have the pleasure of working on the e-APEL project, developing a way to help students assess their prior learning, in particular where that learning is experiential. It's a tremendously exciting field, combining developments in diagnostic assessment, e-portfolios and informal learning. The latter especially is a domain I wasn't intimately familiar with, but during the initial months of my involvement in the project it has certainly roused my interest.
Informal learning makes assessment much more crucial, but it also emphasises questions and issues around validity and reliability. In formal learning, our trust in the quality of the learning activities already gives us a degree of confidence in the outcomes achieved by our students. We feel more in touch with the process, and are therefore in a position to moderate any shortcomings of the assessment with that intimate familiarity.
I know this sounds rather awful, because obviously we are always supposed to have brilliantly valid and solid designs for teaching and assessment. But creating valid assessments isn't easy; it's not easy at all. Still, it is a problem that needs addressing, for various reasons. I will list a few:
- More and more universities are moving away from the business model where their knowledge or content is the added value they sell. Content is no longer a scarce commodity, and learning content is no different: anyone can look up the principles of general relativity in great detail without even going near a university. This is one reason why universities are making their knowledge publicly available, such as OpenCourseWare at MIT and, more recently, OpenLearn at the Open University in the UK. What they have realised is that the true value of the university lies in the guidance and support it provides around learning, and in the accreditation (and thus the assessment!).
- Developments towards more simulation- and game-based learning raise questions about how to assess these less tangible and less structured ways of learning in an appropriate and quantitative way. The same is true for assessing competences and skills instead of knowledge and understanding.
Labels: APL, Assessment, Education, Games, Lifelong Learning, OER, Teaching