Student-facing devices have the potential to dramatically change the frequency and effectiveness of classroom formative data collection. Schools that find themselves with a wealth of technology are in a position to leverage the growing number of online adaptive and non-adaptive assessment software products available on the market today.
The time-honored image of students sharpening their pencils before a test will soon be replaced by images of students facing computers with headphones in place. The beauty of these new computer-based systems is that teachers no longer need to spend their valuable time grading and evaluating the tests their students take, because the data is pushed directly to their accounts to review from anywhere.
The pundits believe that this new influx of teacher time and formative data will give teachers more room to customize and personalize their face-to-face instruction. That is possible, but taking the diagnostic onus out of the hands of teachers and placing it in the hands of computers can lead to two counterproductive outcomes.
First, these student-facing assessment systems are built on algorithms or teacher-created content supplied by a third party. Without seeing the algorithm, or without knowing how each individual item is vetted, how can we as teachers fully trust the decisions they are guiding us to make?
The Institute recently had the privilege of viewing a premier Pearson product, and we were confused by several of the pre-assessment questions. For example, one question showed a cow alongside four words, and the student was asked to choose the word that matched the picture (bow, vow, cow, and plow).
This was a student-facing question, and there was no indication of which particular skill was being tracked in the backend of the system. It could have been at least three or four different skills, which makes us wonder what actual data the child's teacher would receive in her reports section if that student made an incorrect click.
Did the student miss the question because they misread the initial sound? Did they think the cow with its harness was being used to plow a field and choose plow instead? Was the child a native Spanish speaker who thought, "Cow is 'vaca' in Spanish, so I'll choose the one that starts with 'v'"? Or maybe the child just clicked a random word because he was bored out of his mind and this was the fifth word-identification question he'd seen.
For the most part, students are doing their best work and teachers are gleaning important data from the backend reporting features in these programs. But if a program tells us that a student can't read initial sounds when really they can, how is this progress or a time-saver?
Our second concern is that when we take grading out of the hands of teachers, they become absent from an incredibly analytical aspect of classroom practice. Grading, while tedious, gives the teacher quiet space not only to reflect on each student's work but also to reflect on each individual student.
Going from paper to paper, a teacher may remember that she needed to follow up with a student's mom about a behavior issue, or that another student needed a new book in his book bin. Grading allows us a chance to think student by student, which is an unbelievably important by-product of what could otherwise be described as a mostly torturous activity.