Wednesday, 30 December 2015

eAssessment

Read the introduction (up to page 14) of the JISC publication 'Effective Assessment in a Digital Age' and choose one case study that is most relevant to your educational area. Reflect on its implications for your area by writing a reflective blog post and commenting on two others.
Working on this section of the course has highlighted something for me.

The complete focus of the unit is on HE and eLearning rather than any other sector - none of the examples were outside the HE space (allowing me, I suppose, to contextualise them into my own). Up until now that hasn't been a particular problem for me (although it is irritating), but for a course on learning and digital technologies to spend so little time considering the school or early years sector is a lost opportunity. In this case the JISC report delivers a summary of several pieces of work on formative assessment for HE but neither mentions nor references any of the research by Wiliam and Black, which is particularly relevant to the second case study I selected.

Case Study: Enhancing the experience of feedback, University of Leicester (2010)


The case study concerns work on the distance learning MSc in Occupational Psychology. Audio recording was used to deliver feedback on students' work that was more personal and engaging, and a spin-off benefit was the development of skills that would allow the faculty to make greater use of podcasting in the future.

Whether or not the case study illustrates a particular benefit of e-assessment is debatable.

The very fact of running the programme implies that staff spent more time giving feedback to students, because it was a project focus; the positive benefits may therefore have been just as great had the tutors left voicemail messages or simply written feedback. That the audio is asynchronous, allowing students to access it when needed and return to it as needed, is a genuine positive.

If the work simply replaced one feedback process (written feedback) with another (audio feedback), then it is, I suppose, a useful augmentation of the provision. Adding the audio in much smaller chunks within the document itself would seem an obvious and more useful approach, as it roots each piece of feedback in the section of the work being discussed; but that is a very different assessment tool, and perhaps better suited to earlier in the student's drafting and preparation cycle.

In schools I have seen a similar, but more powerful, technique used with the tool Movenote.

Movenote allows the user to set up a stack of pages (documents, slides, images) in advance and then progress through them, adding audio commentary/annotation or video. The end result is a URL that can be sent to the viewer. For e-assessment this goes beyond both the method used in the case study and the alternative I suggested:

  • Generic audio feedback for the document as a whole is little more than a novel way of changing the emotional response and the workflow, not a new approach to feedback.
  • Smaller comments embedded in the document do not give a complete overview.

Within Movenote the tutor can deliver the audio feedback while progressing through the document on screen, matching the delivery of the verbal feedback to the relevant section being discussed.

Movenote is very much the kind of useful tool that a learner can bring into their PLE and use within a cMOOC.

Case Study: Facilitating peer and self-assessment, University of Hull and Loughborough University

URL: http://www.webarchive.org.uk/wayback/archive/20140615043727/http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassess_assessingselfpeers.pdf

This second case study immediately caught my interest because it seemed to offer two things I understand to be fundamental to good formative assessment practice, in an area where e-assessment has massive potential to raise standards of achievement and improve the learner experience.

Asking students to give their peers feedback both provides valuable responses and ideas to the student receiving it and helps the giver of feedback to better understand and articulate the assessment criteria themselves - it follows that when you can better explain the criteria for gaining a particular level, you are more likely to be able to achieve it.

The universities used a system called WebPA and asked undergraduates to read and rate each other's work numerically against a number of criteria.

This has the advantages of efficiency and of giving students a motive to sit down, read each other's work, and respond by rating it against criteria. The case study doesn't go into how well the students understood the criteria, how much motivation and time students gave to the process, or how reliably the peer assessment matched that of faculty staff.
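To make the mechanics concrete, the core of such a rating exercise reduces to very little. The sketch below is purely illustrative - the criteria, scale and aggregation are my own assumptions, not WebPA's actual calculation.

```python
# Illustrative sketch only - not WebPA's actual algorithm.
# Each reviewer rates a piece of work from 1 to 5 against each criterion;
# the system then reduces those ratings to a mean score per criterion.
from statistics import mean

# Hypothetical ratings: criterion -> scores from three peer reviewers
peer_ratings = {
    "argument":    [4, 3, 5],
    "evidence":    [3, 3, 4],
    "referencing": [5, 4, 4],
}

for criterion, scores in peer_ratings.items():
    print(f"{criterion}: mean {mean(scores):.1f} from {len(scores)} reviewers")
```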

I'm unsure why the complexity of putting a layer like WebPA between students and each other's work was required. Rather than having them assign numeric values, would it not have been better to ask them to add a text comment against a simple framework for each criterion? That way the student would get useful feedback rather than a number - and, to go back to the JISC research base, had the authors referenced Black and Wiliam's work they would have found an immense body of research to deter them from ever giving numeric values as feedback:

"Avoid grading.  Grades are consistently found to demotivate low attainers.  They also fail to challenge high attainers, often making them complacent.  So avoid giving a grade or mark except where absolutely necessary.  This is not easy to do on some courses.  However it is rarely necessary, and almost never desirable, to grade every piece of work."
Massively increasing the quantity of numeric grades and sub-grades awarded through peer e-assessment will benefit the givers of those grades, but not the receivers. Far better to ask the students to spend more time on less work and give helpful feedback in the form of comments related to the assessment scheme, along the lines sketched below.
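A comment-first alternative needs barely any more machinery. This is a hypothetical sketch of the kind of simple framework I mean - the criteria and prompts are invented for illustration, not taken from the case study.

```python
# Hypothetical comment-first peer feedback framework (invented criteria).
# Each criterion carries a prompt steering reviewers towards
# improvement-focused comments rather than numeric grades.
FRAMEWORK = {
    "argument":    "Where is the argument clearest, and where does it need support?",
    "evidence":    "Which claims are well evidenced, and which need sources?",
    "referencing": "Are the citations complete and consistently formatted?",
}

def collect_feedback(reviewer: str) -> dict[str, str]:
    """Gather one free-text comment per criterion from a reviewer."""
    return {
        criterion: input(f"{reviewer} - {prompt}\n> ").strip()
        for criterion, prompt in FRAMEWORK.items()
    }

if __name__ == "__main__":
    comments = collect_feedback("Peer reviewer 1")
    for criterion, comment in comments.items():
        print(f"{criterion}: {comment}")
```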

Again, from within the school sector, an often-employed technique is simply to allocate each student one or more peer reviewers for their work in progress or finished draft, and to ask them to share the work with those people with access rights set to 'comment only'.

Peer reviewers - having taken part in work to improve their ability to give useful feedback, and to make sure they understand enough about the assessment rubric to steer comments towards improvements that will lead to gains - leave comments designed to guide their partner to improve. In both of the schools that I know use this approach routinely, it has been one of a number of strategies that have led to significant improvements in the quality of written work.
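Where the shared documents live in Google Drive, even the allocation step can be scripted. Below is a minimal sketch using the Drive API v3; the credentials file, document ID and reviewer addresses are hypothetical placeholders.

```python
# Minimal sketch: grant allocated peer reviewers comment-only access
# to a draft in Google Drive (Drive API v3). All IDs and addresses
# below are hypothetical placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical credentials file
    scopes=["https://www.googleapis.com/auth/drive"],
)
drive = build("drive", "v3", credentials=creds)

reviewers = ["peer1@school.example", "peer2@school.example"]
for email in reviewers:
    drive.permissions().create(
        fileId="DRAFT_FILE_ID",  # hypothetical document ID
        body={"type": "user", "role": "commenter", "emailAddress": email},
        sendNotificationEmail=True,
    ).execute()
```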

Reference:

Black, P. and Wiliam, D. (1998) Inside the Black Box: Raising Standards through Classroom Assessment. London: King's College London.
