What does assessment look like in a workshop classroom?
Amy: Interestingly, I just saw a tweet by @triciabarvia yesterday that said, “Growth, when ss do something better than they did before (as a result of assignment/lesson), that’s success.” In a way, that’s really what assessment should be, right? We should be noting growth and improvement, and celebrating successes. I think we forget this sometimes, especially in regard to what we view as assessments and students call grades. Too often we overlook the learning and focus on the 78 or the C+. (That is all most of my students focus on.)
Assessment in my classroom drives my students crazy. I think they feel like I am that elusive balloon they cannot pin down and pop. Poor darlings. I refuse to give in to the grades routine. I want to see improvement. I want to see them take the skill I know I’ve taught and then apply it — better yet, go beyond and do something clever with it. So the answer to the question “What does assessment look like?” is kind of tricky. I think too often teachers spend time assessing student work in ways that are not meaningful. We can waste a lot of time.
But if we’d plan a little differently and make assessment a natural and moving part of our learning journey, we would save time scoring work and enjoy talking to our students more. At least that is always my goal.
In my classroom, assessment takes the shape of written work: play in notebooks or on notecards, exit slip quickwrites, and sticky note conferences, plus, of course, major writing tasks with formal and informal conferences and oral and written feedback. Assessment also shapes itself into lots of student self-evaluation: “Look at the instructions, the model, the expectations. Did you meet them? Why or why not?”
If I didn’t have to take grades I wouldn’t, but I assess everything all the time.
Shana: I go back and forth when it comes to how I value process vs. product vs. what Tom Romano always called “good faith effort.” There are students who have mastered skills important for reading and writing, but who haven’t fully committed in terms of being vulnerable and trying to advance their skills. Then there are those students who explore powerful themes and show amazing growth in their abilities, but still can’t spell or use commas to save their lives.
Because I wrestle so frequently with this dilemma, I end up grading not on where each student is in relation to our academic content standards, nor on where each student is in relation to their peers, but rather on how far they’ve come since the start of the unit/school year/week. And I do this by doing what Kelly Gallagher does: “a lot of fake grades.” 🙂
Amy: Me, too!
How do you design assessments?
Shana: I used a lot of Scantrons my first year of teaching. Those kinds of assessments were pretty easy to make, if time consuming: long, multiple-choice tasks topped off by a few essays. After a year or two of fighting with the Scantron machine and my conscience when I noticed that oftentimes a kid’s essay was much stronger than his multiple-choice section (or vice versa), I threw tests out the window. I haven’t given a test in five years.
Now, I focus on designing assessments that are as unique as the units we work through. A unit of study revolving around the exploration of complex themes cannot, in my opinion, be assessed well using a multiple-choice test. So instead of trying to gauge a student’s interaction with Macbeth that way, I use Socratic Seminars. Or projects. Or reflections. Or presentations.
Amy: Yeah, I tossed out the multiple-choice tests at the same time I introduced choice independent reading. Even the shorter texts we read together as a class are worthy of much more than a Scantron. Harkness discussions and writing about our thinking allow students room to stretch a bit and show us what they really know.
I design assessments by thinking through my endgame. Of course, teaching AP Lang means I must prepare my students for that exam each spring. I know my students have to write convincing arguments, synthesize sources into their arguments, deconstruct other writers’ arguments, and read critically — sometimes some pretty old texts. To design assessments means to start there and then work backwards into the instruction.
For example, I know my students must be able to synthesize sources into their arguments. So I may decide on a couple of assessments — say a Socratic Seminar where students discuss three related texts. They must come to the discussion armed with questions and annotated texts, ready to join the conversation. Then, I may challenge students to find three more texts related somehow to the first three. Now, we move into writing an argument in which they must synthesize at least three of the texts. That essay becomes another important assessment because all along the way I’ve taught mini-lessons: academic research via databases, proper citation, embedding quotes, transitions between paragraphs, interesting leads, combining sentences, etc.
Everything I teach as a mini-lesson gets a matching mini-assessment somewhere along the line to learning. By the time I read those synthesis essays, I have a pretty good idea of which of my students understand and can apply each of those skills. I’ve conferred with them and retaught as necessary, and when I read that final paper I rarely have any surprises.
I think maybe one of the reasons I love a workshop pedagogy so much is because of the grading. It is just not the same as it was when I taught in a traditional model — it is better!
Do you have topic ideas you would like us to discuss? Please leave your requests here.
Tagged: #3TTWorkshop, Assessment, formative assessment, pedagogy, process over product, Readers Writers Workshop, summative assessment
I have to agree with Amy–I think the most valuable, rich kind of assessment of a student’s understanding is talk. Sometimes it’s in one-on-one conversations between just the two of us, and sometimes it’s in a more socially constructivist setting like a Socratic Seminar. Either way, what you say about hard-copy assessment methods is true–those methods will just reinforce what we might learn through talk.
Thanks for this thinking!
I just had a conversation last period with my department chair about multiple choice tests. In general, I have been opposed to them, but in 2014, I saw Grant Wiggins speak at the NJEA convention, and he altered my perspective a bit. Wiggins, the greatest proponent of authentic assessment, actually endorsed the occasional multiple choice test! He discussed the idea of a “conflating variable”: if the focus in a given assessment is centered on reading comprehension, then a student’s inability to clearly convey her understanding in writing might hinder our ability to determine if she understood what she read. Writing, in this case, is the conflating variable. I have actually reintroduced the occasional objective reading assessment, even though I still feel conflicted. Sometimes, maybe I am just convincing myself, I feel like I do get decent insight into whether or not a student understood or misread a passage. And I get this information in an efficient format that is not time-consuming to grade. However, well-written critical-thinking objective tests (which may be an oxymoron) are really difficult to find, so they are very time-consuming to create. And even then, I’m not totally confident in the questions I created on my own. Still, on the whole, my student results have been consistent with what I observe in other measures. But again, maybe that’s just my confirmation bias influenced by the quickness and ease of running Google Form responses through Flubaroo, just like my math and history counterparts get to do!
I appreciate you sharing the thinking of Wiggins here. I respect his work! I can see where the writing might not always be the best indicator of a student’s comprehension. Personally, I think when students can talk about their learning, that’s the best assessment, but I also know that discussions are not always possible — sometimes even few and far between. I think my biggest issue with multiple choice tests is twofold: 1) students have to take way too many of them and often do not take them seriously, and 2) many teachers make the test about the content of a book instead of the thinking around the book (I have been guilty of this in years past).
You say something important here especially: “Still, on the whole, my student results have been consistent with what I observe in other measures.” It’s those other measures, and I imagine, like all good teachers, you put many other measures into play when you assess your students. When we mix it up and keep the kids’ best interest in mind, we all win!
Thank you for this comment. You got me thinking.