The first college class I ever took was Introduction to Statistics. It met at 8 am MWF and was full of freshmen like me who didn’t know better than to register for an 8 am class. Lessons were delivered in a dimly-lit lecture hall and revolved around bulleted PowerPoint slides. The professor was a dry septuagenarian with a slow southern drawl. All of this is to say… I didn’t learn that much.
Sure, I know the difference between median and mode and I can stumble my way through an explanation of standard deviation or bell curves. But I am by no means a stats geek. I don’t get excited by data the way my engineer husband does (the man monitors our solar panel production gleefully and makes spreadsheets of household expenses with gusto). Math is a foreign tongue in which I can order a coffee and ask for the restroom but have never achieved fluency. But this was never a cause for concern for me. After all, I was an English teacher. I like to read and write and debate. What use do I have for math? I pay someone to do my taxes and there is a calculator on my iPhone.
Turns out, the joke’s on me. Just as I entered the teaching profession, the teaching profession entered a new era: an era saturated by statistics and measurement. Data meetings, data walks, and data walls became part of our vernacular. Then came blended learning.
In just the past few years, educational software and digital tools have begun to spit out types and amounts of data that would have been unfathomable even a decade ago. Computer-based assessment has led to a level of nuance far beyond what was discernible through paper-based chapter tests and weekly spelling quizzes. But with the growing availability of data comes a demand for more and more sophisticated analysis.
We’re not just talking about reviewing a report to determine if students are on, above, or below grade level. We’re talking about students’ time on task, rate of growth, and level of mastery in specific domains (or even on specific standards). In order to make sense of the data, teachers must understand if the program is adaptive, if the scale is vertical, and if scores are norm- or criterion-referenced. They’re using scale conversion charts and triangulating multiple measures. I, and most of the teachers I work with, see the value of this bounty of information. But many of these educators also have a niggling thought in the back of their minds that sounds something like, “Am I doing this right?” Or maybe even, “Is this really my job?”
I wrestle with these questions often in my new role at the Highlander Institute, managing the EdTechRI Testbed. And, for me, the answer is yes. This project matches teams of teachers from around the state who are interested in piloting new educational software with edtech vendors who are interested in getting feedback on their products. Over the course of a 12-week trial, teachers get access to the selected software, as well as some basic training and support in its use. In exchange, they agree to complete teacher and student surveys on their experience and to open their doors to trained observers who look for changes in classroom practice. At the conclusion of the trial, we analyze various data sources and issue a report on the impact of the software on classroom practice.
While the classroom support looks fairly similar to the work I’ve always done with teachers through the Highlander Institute, this project has also required me to step into some unfamiliar (and sometimes uncomfortable) territory. With the help of my colleagues with psychometric training, I have been learning the language of statistics and educational research. Our chats are now peppered with references to “n” size, reliability, regression, and bivariate analysis. I combine this growing analytic know-how with my understanding of the best practices of teaching and learning to, hopefully, help teachers achieve a level of personalization that was not possible (or manageable) without these tools.
Sure, at times I find myself thinking (or shrieking) “I don’t know this stuff! I’m a teacher!” But I try to remember the wise words of growth mindset guru Carol Dweck, who says that the most important word for a learner is “yet.” Allow me to reframe that thought: “I don’t know this stuff, yet.”
Like most modern professionals, we teachers must be willing to evolve and expand our skill sets. I, for one, wouldn’t want a retro surgeon or pilot who dismissed advances in modern technology in favor of doing things “the old-fashioned way.” There is enormous potential in educational software that can shine a light on student learning in a way that was never before possible. And there is potential in projects like the EdTechRI Testbed that can move us out of the realm of strictly gut feelings and anecdotes and to a place where school administrators are making purchasing decisions based on sound information (that includes more than just student achievement measures). But to get there, there are going to be some growing pains as we figure out how to do this work.
When it comes to the increasing role of data and measurement in education, I believe that, like other things that are good for us (yoga, green juice), a little is better than none at all. So we start small and commit to pushing ourselves a bit further each time until we reach a sweet spot where the data are truly informing and enhancing our practice. After all, the role of an educator will keep changing, but it will always require us to be learners.