Categories: Feature descriptions, General Usage

Mission accomplished ….

We’ve finished processing the exams in my organisational unit for this exam diet. It all went well. So we’re done. Finally! Here are the stats on what the system achieved (with the help of 48 combined cores of processing and two additional staff doing processing for around the last five weeks, working together to support the exams team). The system was also used by another organisational unit, as their default option.

I’ve learned plenty about our behind-the-scenes exam processes and the limitations of PDF – this was definitely the right solution for us, for the emergency situation we were in, despite the immense amount of work to create it. It de-risked the process by keeping everyone’s work safe from edit conflicts between students and staff, and between staff members assessing it, as well as being backwards-compatible with the old Mk1 Human so we could revert to the pain of a fully manual process at any time if we hit insurmountable problems (which we didn’t in the end).

Working with the PDF ecosystem reminded me why I prefer browser-based solutions – you can interact with the user and keep them on track.

Me

An online electronic marking system is the way to go for us in the future. That’s possible for us now that we’ve taken that first step. Using a digital process that mapped exactly onto our existing workflow avoided a possible barrier to adoption of a new approach. Now our staff have seen the benefit of a digital workflow, we can have a chat about adopting a fully online system – especially since colleagues ran limited trials with other systems (interestingly, using this system to pre-process files before feeding them into the competition). The issues I ran into with the PDF ecosystem itself mean I don’t think a better user experience is possible overall, no matter how good the underlying library you use to work with PDF – you are still at the mercy of how the viewing tools used by staff interpret the PDF specification. We topped out on the quality of that experience, so it’s time to move back to my favourite domain – the browser.

So, as you might guess, eyeing up the browser takes us smack bang into the territory already served by some existing tools. To compete or not to compete? This was not an easy decision because there is a really strong ethical benefit to operating assessment software from an open-source academic team. It provides an opportunity to fund the development from teaching income, and avoid the allure of monetising the student data. By the way, the reason that is even an issue for the edtech sector is that if you have shareholders, you have a legal obligation to do right by them, and that makes it rather difficult to leave money on the table (money you might need to stay in business, to grow, or to make a suitable exit for the founders). We can get around that by using a different funding model.

On the other hand, is it worth the effort to develop a tool which doesn’t monetise student data, but still (only) achieves the same (limited) academic outcomes? And how do you feel about that when you step back and realise you’d really like to see a bunch of change in the way things are done in that educational sector? Is that a reason to aim for a seat at the table now, just so you can try and make change later? Or would that be so much effort you’d never get to the transformative stuff? And is the context in which you’d be operating sufficiently flexible that you can slot the transformative stuff into whatever you are doing, or might it not fit?

There’s a really excellent discussion of the difference between migration online, and transformation online, in assessment systems here – an article suggested to me by Jen Ross, during a conversation we were having with Tim Fawns, who also added great stuff to my thinking (which I am not re-iterating here simply because it needs a proper writing out!)

There is no bald statement of the right path in that paper, but there is certainly a strong indication of the risk of getting bogged down in migration instead of doing transformation. What sort of development journey will evaluative judgement, contract grading, accept-revise, student-in-control grading, and various other practices need in order to flourish – and what role will digital assessment play – if any? Can you make a logical sequence of technical development moves to get from online “commodity” grading to the transformative stuff? Or are they paths that do not cross? In which case, what happens if you try to start with migration? What if you think there might be a way to do all of this without any traditional grading at all? (Health warning: people of a traditional disposition, look away now. Too late? Did I already say that? Sorry not sorry). Ok that’s clearly an extreme provocation. But it is a thought process I went through to offset the dangers of sunk-cost fallacy and train-track thinking (i.e. trying to avoid the pitfall of “I’m doing this, so I shall do more of this”).

How to resolve this in the case of future tools in the gradex ecosystem? Since academics who care about teaching, care about improving the experience for students, it seems a greater good can be served by focusing immediately on new practices that lead to better experiences for students and putting up with short term ethical concerns around any usage of existing commercial tools that fulfil a need. So gradex development work is going to go on hold, while I bury myself in my keyboard again to deliver an open-source remote laboratory infrastructure over the coming months. I’ve a few other things up my sleeve as well … but more about those in good time!

Categories: Feature descriptions, General Usage

Order that tab

So I’ve learned an awful lot more about PDF than I ever expected to. One of the traps is that tab order isn’t respected by every viewer, but it is in Adobe Reader, so I’ve refined the design some more to take advantage of that.
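For anyone curious what that looks like under the hood: tab behaviour comes down to the page’s /Tabs entry plus the order of the widgets in the page’s /Annots array. Here’s a minimal sketch (in Python with pypdf, not the gradex code itself, and with made-up file names) of stamping an explicit tab-order hint onto every page of a form:

```python
# Minimal sketch: add an explicit tab-order hint to each page of a form PDF.
# Viewers that honour it (Adobe Reader does) will tab in row order; the
# fallback is simply the order of the widgets in the /Annots array.
from pypdf import PdfReader, PdfWriter
from pypdf.generic import NameObject

reader = PdfReader("marking_form.pdf")   # hypothetical input file
writer = PdfWriter()
writer.append(reader)

for page in writer.pages:
    # "/R" = row order: left to right within a row, rows top to bottom.
    page[NameObject("/Tabs")] = NameObject("/R")

with open("marking_form_tabbed.pdf", "wb") as f:
    writer.write(f)
```

Adobe Reader honours this; other viewers vary, which is exactly the trap mentioned above.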

Feel free to drop me a line for a copy of the doc itself!

admin@gradex.io

Categories: Feature descriptions, Technical Usage

Comments – yes please!

Image: a wall filled with sticky notes, with two people reaching in and moving individual notes (CC0 from unsplash.com, apparently).

This post is actually about two different types of comments …

  • feedback from a colleague
  • how we can use/handle pop-up comments

The post title reflects me realising that I wanted to support pop-up comments in the pdf-handling bit of the gradex™ tool…

1. Feedback on the marking process

Overall, I think it’s cool. As long as we can get students to upload stuff that is readable I think it should be possible to mark at pretty much the same speed as normal.

An Anonymous Colleague

Awesome, I am pleased with that. Avoiding time wasted with the tool is one of the main goals – use effort for marking, not marking up.

And here is one that should be suitable to follow through as a worked example for the web page.
Page 1 – normal marked page, one comment and addition should be correct
Page 2 – marked page but should have an adding up error (to show how this will be moderated)
Page 3 – I’ve checked the box at the top to indicate that I can’t read it (assuming that is what the box is for)

… The Same Colleague

Here’s a look at their marked-up file (images only):

This is a great example of how marking will get done – careful consideration, some half marks, a simulated adding mistake (it happens in real life from time to time, so we have processes to catch it) and a duff page getting flagged (that page wasn’t duff, but we’re using our imagination).

2. Handle those pesky virtual post-it comments

Seeing the use of pop-up comments in this marking demo made me realise I wanted to handle them in the gradex tool. They reduce your cognitive load (you can park some parts of your thoughts on paper to focus on other bits). It’s unlikely that students will use comments on their scanned exam scripts, but they might. Markers definitely will. For marking in PDF, though, comments do produce some difficulties: they stay live and editable after the marker has finished with them, and their pop-ups can open over content that matters.

That means I need to handle them myself, and flatten them in some way, so that they cannot be edited in a rogue non-compliant editor.

It just so happens that both comments in this marked document popped open to cover parts of the marking area where there could be important information. So I can’t just flatten them where the document says to put them, because they would permanently obscure whatever sits underneath. I thought through a bunch of options:

  • stashing the comment text within the document, then duplicating the comment in a new pop-up
  • putting comments in a special part of the sidebar
  • putting comments in a special header or footer
  • finding out which editors respect the readOnly flag and putting the rest on a “do not use” list.

These approaches all have wrinkles or gremlins. The least worst option is to stash comments in a reserved area so that nothing is overwritten. So I have made a space to investigate putting comments in the sidebar, once I have prefill text fields implemented in a general way. For now – I’m writing them to the bottom of the page. I’m also putting in a yellow numbered marker where the comment was located, because position is important context.
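To make the approach concrete, here’s an illustrative sketch (Python with pypdf and reportlab, not the actual gradex implementation; file names are invented) of pulling the pop-up comments off each page, dropping a numbered yellow marker where each one sat, writing the text along the bottom of the page, and then discarding the live annotations so nothing stays editable:

```python
# Sketch of the chosen approach: extract pop-up ("/Text") comment annotations,
# mark where each one sat, write the text along the bottom edge, then flatten
# by merging the overlay and dropping the live annotations.
from io import BytesIO
from pypdf import PdfReader, PdfWriter
from reportlab.pdfgen import canvas

reader = PdfReader("marked_script.pdf")          # hypothetical input file
writer = PdfWriter()

for page in reader.pages:
    comments = []
    annots = page.get("/Annots")
    for annot in (annots.get_object() if annots else []):
        obj = annot.get_object()
        if obj.get("/Subtype") == "/Text" and obj.get("/Contents"):
            rect = obj["/Rect"]                  # [x1, y1, x2, y2] in PDF points
            comments.append((float(rect[0]), float(rect[3]), str(obj["/Contents"])))

    if comments:
        w, h = float(page.mediabox.width), float(page.mediabox.height)
        buf = BytesIO()
        overlay = canvas.Canvas(buf, pagesize=(w, h))
        for i, (x, y, text) in enumerate(comments, start=1):
            overlay.setFillColorRGB(1.0, 0.85, 0.0)       # yellow marker
            overlay.circle(x, y, 8, stroke=0, fill=1)     # where the comment sat
            overlay.setFillColorRGB(0, 0, 0)
            overlay.drawString(x - 3, y - 4, str(i))
            # stack the comment texts along the bottom edge of the page
            overlay.drawString(20, 12 * i, f"[{i}] {text}")
        overlay.save()
        buf.seek(0)
        page.merge_page(PdfReader(buf).pages[0])
        if "/Annots" in page:
            del page["/Annots"]                  # drop the live, editable comments

    writer.add_page(page)

with open("marked_script_flattened.pdf", "wb") as f:
    writer.write(f)
```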

These two pages (the two with comments) don’t look that different with the comments on. There is a chance of the flattened comments colliding with student work at some point … hence the plan to shift them into the sidebar if possible. Meanwhile, even if it is not as pretty as the grids, at least we’ve captured and preserved the comments. Phew.

Postscript (sorry not sorry)

The more you dig into PDF, the more you start to have thoughts about its clunkiness compared to web markup – these are well articulated by others (like this). The TLDR is that we’d do things differently now that we worry about security more than performance – and avoid structures that permit infinite cycles.

You know it’s a serious issue when the CEO of a software house is dropping comments like this in their code (presumably from a code review):

GH: Are we fully protected against circular references? (Add tests).

But it’s not bad for a library that’s older than the average age of the students sitting the exams. And the structure diagrams are pretty – if you aren’t thinking about parsing it!
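For a flavour of what that defensiveness looks like in practice, here’s a generic sketch (Python with pypdf, hypothetical file name) of the visited-set bookkeeping you need before you dare walk a PDF object graph:

```python
# PDF objects can reference each other in cycles, so any recursive traversal
# needs a visited set. Illustrative only, using pypdf's generic object types.
from pypdf import PdfReader
from pypdf.generic import ArrayObject, DictionaryObject, IndirectObject

def walk(obj, visited=None):
    """Visit every object reachable from obj exactly once."""
    if visited is None:
        visited = set()
    if isinstance(obj, IndirectObject):
        key = (obj.idnum, obj.generation)
        if key in visited:          # already seen: a cycle, stop here
            return
        visited.add(key)
        obj = obj.get_object()
    if isinstance(obj, DictionaryObject):
        for value in obj.values():
            walk(value, visited)
    elif isinstance(obj, ArrayObject):
        for value in obj:
            walk(value, visited)

reader = PdfReader("exam_script.pdf")   # hypothetical input
walk(reader.trailer["/Root"])
```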

PDF graph structure – image seemingly from the author of the linked blog, Guillaume Endignoux.

Categories: Feature descriptions

Marking Flows & Ladders

Like all good things, names have a way of sticking: you sometimes struggle to get past the ones you liked at various points in the process, so they stay even when the design aesthetic has moved on.

You can tell where the name “ladder” came from when you look at the earliest version shown to colleagues for their feedback (red for marker, green for moderator):

Then I started branching out a bit with the design “flair,” but still handling the box geometry calculations by hand. It was clear that novelty would be limited by how long it took to wield the ruler on screen to line things up and encode the coordinates, but things did start to look a little smarter:

Then I realised that these could be made a bit more self-documenting in their design, if I could just put boxes wherever I wanted. It would also mean I could take feedback from colleagues without wincing about how long it would take to recalculate where the boxes go. It was like going back to my first interactive educational computer program (on Venn diagrams, written for a primary school science fair on an IBM XT). Some ~30 years later, we can do better than that!

I set myself the challenge of being able to parse the SVG output from Inkscape and autogenerate the acroforms for this sample page with three different size boxes that don’t line up with each other. There’s a video of that in action here, and a picture below:
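The parsing itself is conceptually simple. As a rough sketch of the idea (Python rather than the real parser, with invented file names, and ignoring Inkscape’s unit conversions and layer transforms), you pull each rect out of the SVG, flip the y axis because SVG measures from the top-left while PDF measures from the bottom-left, and emit an AcroForm text field per box:

```python
# Minimal sketch: turn <rect> elements from an Inkscape SVG into AcroForm
# text fields. SVG y runs down from the top-left; PDF y runs up from the
# bottom-left, so the y coordinate has to be flipped.
import xml.etree.ElementTree as ET
from reportlab.pdfgen import canvas

SVG_NS = "{http://www.w3.org/2000/svg}"
PAGE_W, PAGE_H = 595, 842              # A4 in points; an assumption for the sketch

tree = ET.parse("sidebar_layout.svg")  # hypothetical Inkscape export
c = canvas.Canvas("sidebar_form.pdf", pagesize=(PAGE_W, PAGE_H))

for i, rect in enumerate(tree.iter(f"{SVG_NS}rect")):
    x = float(rect.get("x"))
    y = float(rect.get("y"))
    w = float(rect.get("width"))
    h = float(rect.get("height"))
    c.acroForm.textfield(
        name=rect.get("id", f"box{i:02d}"),   # the SVG id becomes the field name
        x=x,
        y=PAGE_H - y - h,                     # flip: SVG top edge -> PDF bottom edge
        width=w,
        height=h,
        borderWidth=1,
    )

c.save()
```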

Then I got to work on some more expressive sidebars, with an option to include moderation or not. Good user interfaces (even if asynchronous, like this one) should ideally be self-documenting. So the vertical lines show that you might indicate the scan was not good, and not do any further marking. Or you might fill in some sub-totals so you can keep track, then tot them up, put them in the Q box, and then tick the ‘page marked’ box. That’s something we can track automagically to make sure every page gets seen.

After marking, our office might select this paper for moderation:

… or they might not. So we can make that more obvious by putting in a visually different sidebar, so we’re not left wondering whether the moderator left some green boxes blank by accident:

Either way it will be checked afterwards in the office. At this point, it is more likely the entries will be keyed in. Then we can extract the marks and make reports. This sort of approach is likely to be faster than the paper process, and can give you more confidence (e.g. run reports that make sure every page got checked).
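As a flavour of what those reports could look like (a sketch only; the field names q1, q2, page-marked-… are invented for the example, and numeric entries are assumed), you can read the filled values straight back out of each script’s form fields:

```python
# Sketch of the reporting idea: read the filled form values back out of each
# marked script and check that nothing was missed.
from pathlib import Path
from pypdf import PdfReader

for script in sorted(Path("marked").glob("*.pdf")):    # hypothetical folder
    fields = PdfReader(script).get_fields() or {}
    # question mark boxes assumed to be named q1, q2, ... and hold numeric text
    total = sum(
        float(f.get("/V") or 0)
        for name, f in fields.items()
        if name.startswith("q")
    )
    # 'page marked' tick boxes assumed to be named page-marked-01, page-marked-02, ...
    missed = [
        name for name, f in fields.items()
        if name.startswith("page-marked") and f.get("/V") != "/Yes"
    ]
    print(f"{script.name}: total={total}, unticked pages: {missed or 'none'}")
```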

If the paper wasn’t moderated, we don’t want the blue bar hanging off in space, so we move it over to the left – that dynamic X-spacing is another reason for the X in pdf.gradex™.
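The spacing itself is trivial once you think of each sidebar as being appended at the current right-hand edge. A toy illustration (the widths are invented):

```python
# Toy illustration of the "dynamic X" idea: each new sidebar starts at the
# current right-hand edge, so skipping the moderation stage just means the
# next bar slides left.
ORIGINAL_WIDTH = 595                      # A4 portrait page width, in points

def sidebar_x(previous_sidebar_widths):
    """X position for the next sidebar to be appended."""
    return ORIGINAL_WIDTH + sum(previous_sidebar_widths)

print(sidebar_x([90, 70]))    # marked then moderated -> checking bar at x=755
print(sidebar_x([90]))        # not moderated -> checking bar slides to x=685
```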

You can imagine further stages for resolving issues, and re-checking. What we do after that are topic(s) for a further post.

Categories: Feature descriptions

Example marking page

This is a proof-of-concept example hot off the press – the first step in marking is to append a sidebar with a marking ladder for subtotals, and a marking flow diagram for the total marks for each question that will be formally captured. Or, if the image is unusable, you can flag it for further attention by your support staff, and they will be alerted when they process your marked scripts. If the image is good, you can TAB from box to box, entering marks, leaving your brainpower to focus on the marking task itself rather than how to get the marks onto the page. If you have a tablet or pen display, you can freehand annotate, and these annotations will be safely flattened into images and preserved by the system without risk of erasing them at future steps.

You can download the actual example here. You can edit and save using Adobe Reader, then reload and keep going later. Other editors have variable support: for example, you can edit and save in Edge, but not in Chrome (it lets you edit but not save). If you’re using Chrome, like me, feel free to try out the editing side in the browser, but download the form and use a better editor before doing any work you want to keep!

To support additional steps in the process – e.g. for moderating and checking – additional sidebars can be added with different content, designs, colours, names – examples to follow soon!

With multiple people involved in the marking process, each may have a different preferred way of working, so we can mix and match by flattening any annotations that are made on the script.

The script “freezing” uses ghostscript to render the original image in the PDF so that it bakes in any annotations that the student has made (e.g. erasing mistakes or adding extra work). This way, the marker cannot accidentally undo the electronic edits the student made to their own work.
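For anyone wanting to try the same trick, this is roughly what the freeze step amounts to (a sketch, not necessarily the exact gradex invocation; the pdfimage24 output device needs a reasonably recent Ghostscript, and the file names are invented):

```python
# Re-render every page of a PDF as a flat image inside a new PDF, which bakes
# in any annotations the student made, so later edits cannot disturb them.
import subprocess

def freeze(src: str, dst: str, dpi: int = 300) -> None:
    subprocess.run(
        [
            "gs",
            "-dBATCH", "-dNOPAUSE", "-dSAFER",
            "-sDEVICE=pdfimage24",     # each output page is a rendered image
            f"-r{dpi}",
            "-o", dst,
            src,
        ],
        check=True,
    )

freeze("student_script.pdf", "student_script_frozen.pdf")   # hypothetical names
```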

It’s even conceivable to mix in printing, marking and then scanning, although it will require some additional steps in the workflow to match up returning scripts with the metadata in the electronic version (QR codes are one possibility here – topic for a future post).