I was recently engaged in a lively #edchat on Twitter, and one of my Tweeps asked if the term data means the same thing as the term information. This is the gist of my not-so-reverent response:
Once upon a time, we made good decisions based on solid evidence. Then one day, someone said, "Data." The End.
Oh, how we are enamored with data. We love collecting them, talking about them, and using them to drive decision making. We aggregate, analyze, and map them. We are so in love with data that we use them to rationalize every major decision made in education, from the classroom to the boardroom.
Unfortunately, our love affair and extended honeymoon with data have blinded us to their realities and limitations. We have yet to wake up in the middle of an item analysis and ask ourselves, "What in the world have we done?" We need to separate ourselves from the allure of data and the presumed answers they provide, and step back to look at data with fresh eyes.
For example, we might consider the actual word data. For the record, and through my former-English-teacher's lens, the word data is plural; the singular form is datum, or data point. Growing up, I remember pronouncing the word as DA-ta. By the time I was in my doctoral program in the 1990s, I was guided to pronounce the word as DAY-ta. Although I have no empirical evidence to support this, I privately theorize that our switch from DA-ta to DAY-ta happened at about the same time we replaced the word test with the word assessment.
Fresh eyes would also allow us to take data down from their glamorous and powerful podium and recognize them for what they are - numbers, characters, and other bits of information. On their own, and without context, they have no meaning. As an example, consider this data point: 15. When 15 is given as an age in years, we might picture a teenage boy or girl. Add different context, such as the age of a grandmother's home computer, and we get a different visual: an enormously large, heavy, and slow PC connected to a dot matrix printer. Stripped down and unplugged, data begin to lose their charm.
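If it helps to see that idea in working form, here is a minimal sketch in Python (the labels are invented for illustration, not drawn from any real assessment system): the datum 15 acquires meaning only when context is attached to it.

```python
# A bare datum, stripped of all context.
value = 15

# The same value, wrapped with context, tells two very different stories.
observations = [
    {"value": 15, "unit": "years", "describes": "a student's age"},
    {"value": 15, "unit": "years", "describes": "the age of grandma's home PC"},
]

for obs in observations:
    print(f"{obs['value']} {obs['unit']}: meaningful only as {obs['describes']}")
```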
Seeing data without their star power would also force us to ask questions about the substance and quality of the data, something we rarely do, especially when talking about assessment results. Before we jump into comparisons and trends, we must begin an analysis. This analysis does not have to be a sophisticated or complex manipulation of the numbers in order to make meaning; in fact, most of that sort of analysis is provided to us by the test's originator. Instead, the first stage of our analysis is asking questions about both the test and the data (the test results).
Some initial questions might be:
- What is the purpose of this assessment? Given the purpose, does the assessment measure what it intends to measure? How do you know?
- What additional information might be needed to meet the purpose of the assessment?
- How is this assessment related to the standards we have set in this building or district? How are the standards aligned?
The answers to these initial questions might keep data at an appropriate status. For example, as consumers of the test, you might find that it does not have clearly articulated standards. If so, you might resist putting much, if any, weight on the test results as they relate to your classroom or building. After all, if you don't know what the test is measuring, its results cannot give you relevant instructional information. You might also find that the assessment does not live up to its stated purpose and that you need additional data before making assumptions about its results. If so, you will be reluctant to read too much into the results until you have supplemental information that provides a more complete picture of student learning.
The most important finding would be realizing that the assessment is completely unrelated to what is happening in your classroom and in your building. If that is true, then the assessment may be serving a different purpose altogether. In this day of educator evaluation, that possibility is the most dangerous of all.
As we become consumers of assessments, we realize that we have the power and the ability to decide what data are needed to inform our practice and increase student learning. It is likely that as we become more confident in our ability to evaluate the quality and substance of data, we will become proactive in choosing assessments that provide the information we need to be better educators.
As we learn to identify what kind of data are needed to help us improve what we do, we can move from assessment victim to assessment consumer to assessment developer. The results of these assessments will provide us with data that are finally worthy of our love.