Reimagine Learning

Writing Tasks

The case for including writing tasks in an assessment strategy has some very powerful arguments on its side. For starters, writing is itself a fundamental skill that is important in a wide variety of settings. Written communication is required for virtually every role in the modern economy, and so it makes sense to want to know whether students, colleagues, and job applicants can write effectively.

Higher-order thinking skills

In addition, writing tasks are good ways to engage many important higher-order skills, such as creativity and critical thinking. While multiple choice questions and their cousins can, in principle, be used to measure the ability to evaluate and critique arguments, identify assumptions, and so on, these skills are often better measured through writing. If we want to know how a person thinks, reading what they wrote is a good way to find out. Similarly, while writing is not the only way to express creativity, it is a pretty good medium for doing so. Traditional item types, by contrast, are simply not up to the job in this regard. Picking the correct choice out of four enough times can tell us whether you can add fractions, recognize a comma splice, and spot logical flaws, but it cannot measure your ability to come up with something truly novel.

Time spent grading

So far, this might seem like a pretty good case for using essays as much as possible, but there are two important reasons why this doesn’t happen. The first is that grading essays is very time-consuming. If you spend 5 minutes reading each essay and have 30 students, then that’s 2.5 hours of essay grading for just that one assignment. Having more students, more time per essay, and more essay tasks increases this burden further. Most of the traditional question types, by contrast, can be graded automatically, enabling instant results and instant feedback. So while instructors often want the benefits of writing tasks, they may not want them enough to spend hours grading papers when automatic grading is available.
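To make that arithmetic concrete, here is a minimal sketch in Python of how the grading burden scales with class size, minutes per essay, and number of assignments. The function name and the second set of numbers are purely illustrative, not from the original.

```python
def grading_hours(num_students, minutes_per_essay, num_assignments=1):
    """Total hours spent grading essays, given class size,
    minutes spent per essay, and number of essay assignments."""
    return num_students * minutes_per_essay * num_assignments / 60

# The example from the text: 30 students, 5 minutes per essay, one assignment.
print(grading_hours(30, 5))      # 2.5 hours

# Hypothetical larger load: 90 students, 8 minutes per essay, 4 assignments.
print(grading_hours(90, 8, 4))   # 48.0 hours
```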

Essay-grading technology

While there are some technologies that can grade essays automatically, in many cases these technologies involve more work than grading the essays one by one. While I was at a major publisher, I worked on a project to prepare essays for automated scoring. Dozens of people were involved in a process that took over six months for just a handful of essay prompts, and even then we couldn't use all of them because the data wasn't good enough. That kind of process can sometimes be worth it, but in most cases adding technology to essay grading adds to the effort required.

Consistency and fairness

The second major reason why writing tasks don't get used more often is the concern over fairness and consistency of grading. With traditional item types, there's no subjective judgment involved in grading. As long as the questions are well-written, you just have to check whether the student picked the correct answer or not. With essays, however, human judgment always comes into play, and the decisions get harder to make when those super-important higher-order thinking skills are involved.

We can all agree on the largest prime number less than 100, the capital of New York, and the subject of a sample sentence, but it can be much harder to decide whether an essay has analyzed a question thoroughly, made a clear and persuasive argument, or proposed a creative idea.

Training and rubrics

Training essay graders, using "anchor papers" (sample essays that have already been scored), and writing clear rubrics (detailed descriptions of the kinds of essays that should receive each possible score) can all help make essay scoring more consistent and fair. In the end, as with other question types, writing tasks involve tradeoffs. There's more work involved, but when you really need to measure writing skill, or other skills that are best measured through writing, there's no substitute.