Where Is Our Outrage Over Non-Writing Writing Assessment?

When someone enters the teaching profession as an English language arts teacher, it is with eyes wide open. One of the biggest challenges these teachers face is learning to manage the paper load; essays traded back and forth between teacher and student come with the territory. ELA teachers are known for taking stacks of papers with them on vacations. Report card seasons are brutal, and marathon essay-grading sessions at the end of each semester are common. While teachers do become more efficient at grading essays over time, evaluating writing still consumes much of their time.

Despite the time and effort involved in evaluation, ELA teachers continue to require students to write. It remains one of the most important ways students show how they understand logic and organization and how they connect with literature. In writing, students show more than knowledge of the rules; they demonstrate that ability within a context. Because most of the writing in ELA is literature based, students also show how they critique, make inferences, and develop arguments.

ELA teachers have students write, and then they assess the strengths and weaknesses of that writing. The assessment event does not end there, however, for they then have students edit and revise their work. These teachers understand how critical this step is. It allows them to see even deeper into students' command of the English language, from mechanics and organization to vocabulary and sentence construction. Beyond all of these benefits, ELA teachers understand that the writing process is a powerful way to see how student thinking is changing.

Cue music. Enter the large-scale standardized assessment developers.

Ever fixated on statistics that indicate validity, reliability, item difficulty, and the like, the large-scale standardized assessment developers immediately face new realities. Before they can generate numbers to crunch, they must take into account the process of building high-quality, trait-specific rubrics. They identify the need to establish inter-rater reliability and the importance of preventing evaluator fatigue. These assessment developers quickly understand that evaluating actual writing takes a great deal of human input. With increased human input come increased costs, for example in training and monitoring. And with humans evaluating writing, test results cannot be provided to users immediately. The developers arrive at the conclusion that properly evaluating student writing takes far more time and money than they are willing to invest.

And so it takes very little time, if there is any real deliberation at all, for these standardized test developers to decide to forgo actual writing in order to assess writing. The final product? Non-writing writing assessments.

Every time I read that paragraph, I chuckle.

I would rather not spend any more time thinking about non-writing writing assessments, for a host of reasons. Unfortunately, these tests are being used at an increasing rate and for higher stakes, even though we know better. For that reason, I raise two very basic questions.

What is really being measured in non-writing writing tests?

Doing a quick scan of a few standardized testing sites, including statewide assessments, I noticed that non-writing writing assessments go by a variety of names, from "language" and "English/language arts" assessments to "writing" and "communication arts" tests. These tests are made up of multiple-choice items that ask students to choose options showing they understand the rules and procedures of writing. My personal favorites include "Which of the following sentences uses a comma correctly?", "Which of the following sentences is written in past tense?", and "Which of the following is an adverbial clause?"

In addition to punctuation and grammar, the non-writing writing tests claim to assess students' composition skills and command of writing structure. For example, one assessment given widely across the nation evaluates these skills by having students answer questions such as "Which words can we use to make the sentence more interesting?" and "Which of the following would be used to develop this idea into a poem?" Because students are forced to choose among a set of options, "interesting" is defined by the test maker, and a correct response means either that the student guessed what the test maker would consider correct or that the student got the item right by chance. The question about creating a poem, art in one of its highest forms, is reduced to a set of steps predetermined by the developer. It is probably safe to assume that creativity, style, voice, and the ELA teacher are nowhere to be found in this process.

What information does the non-writing writing assessment actually provide?

The current "must give" assessment provides a continuum of scores that range across grade levels. The reports list students' scores alongside their current grade level and then categorize those scores into five possibilities: far below basic, below basic, basic, proficient, and advanced. I have yet to find how these labels were determined or what they actually mean.

In addition to that not-so-informative report, the assessment developer provides other reports that claim to offer a window into student potential. Seriously. Teachers, parents, and students are able to track "growth" within and across years. Even more amazing, these results allow them to compare a student's progress with that of other students in a district, building, or classroom. For the ELA teacher, these non-writing writing assessment reports supposedly help adjust curriculum and instruction to meet students' needs. I need someone to show me how this is possible.

I know little about language acquisition, and I can only imagine how our collective wisdom grew as humans began to exchange words. The Socratic method continues to be a powerful way to evaluate higher-order thinking and problem-solving skills, and it has done so successfully for over 2,000 years. Compare that to the standardized test, which first appeared in the US only about 100 years ago and, not-so-coincidentally, was followed a short time later by the first standardized multiple-choice assessments given to students.

The written language stores collective wisdom. It creates a space for solving world problems and for starting world wars. The writing process is complex, messy work. Evaluating that process is no less arduous. The written work is powerful and serves so many purposes, from documenting history to considering possibilities. Pushing our students to become strong writers has so many benefits, for them and for us.

Where is our outrage over writing assessments that do not assess writing? What will it mean for the future if we continue this madness, making claims on language usage that are not based on using language?

As I say so often these days, Maya Angelou taught us that we do what we know to do, and when we know better, we do better. We must do better.
