I just returned from my second Western Conference on Science Education, an emerging science education research conference in Ontario hosted at the University of Western Ontario. It has become one of my favorite conferences because it mixes interesting boots-on-the-ground research with a nice relaxed atmosphere. This year’s theme was applying novel methods for student assessment.
At the last WCSE, I found a presentation on collaborative testing particularly intriguing. This approach appears to be growing in popularity in Canada, with at least two science education research groups presenting results this year and others talking informally about their experiences. The basic idea is that students first take a test individually, then retake it in small groups. Most of their points come from their individual score (say 75–85%); the remainder comes from their group score, which in most implementations can only increase, not decrease, their individual score. Interestingly, this sets up a situation in which the students with the highest grades may have less incentive to argue strongly for an answer they know is correct.
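Under these rules, the group stage can only raise a student’s final mark. Here is a minimal sketch of one way such a weighting could be implemented; the 85/15 split, the function name, and the exact combining rule are my own assumptions for illustration, not details from any of the presentations:

```python
def combined_score(individual, group, individual_weight=0.85):
    """Combine two-stage test scores, both given as fractions in [0, 1].

    Hypothetical scheme: the group portion only counts when it helps,
    so the final score can never fall below the individual score.
    """
    if not (0.0 <= individual <= 1.0 and 0.0 <= group <= 1.0):
        raise ValueError("scores must be fractions between 0 and 1")
    # The group stage cannot hurt: fall back to the individual score
    # whenever the group did worse.
    group_component = max(individual, group)
    return individual_weight * individual + (1 - individual_weight) * group_component
```

Under these assumptions, a student who scores 80% alone in a group that scores 90% ends up with 0.85 × 0.80 + 0.15 × 0.90 = 81.5%, while a group score of 50% would leave that same student at their original 80%.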
One research team (led by a very impressive undergraduate, Sonya Sabourin, from the University of Ottawa) presented data from two different institutions showing that some factors one might expect to influence group-test success actually did not. As in previous studies, students’ scores were on average higher on the group test than on the individual test, and having a high-achieving student in a group helped the group score higher. However, whether the groups remained fixed for the entire semester or changed each class had little effect, nor did the heterogeneity of individual scores within each group.
A second team’s results (presented by Jane Maxwell from the University of British Columbia) added data on the effectiveness of group testing. Not only did students’ scores increase on average on the group test compared to the individual test, but students also reported that the format helped them learn concepts and review material. The tests also enabled the instructors to drop two days’ worth of material from the lectures. Perhaps the most interesting part of this presentation, though, was the scratch-off cards, made by Epstein Education, used to conduct the tests.
The scratch-off cards themselves were the subject of another presentation, in which Aaron Slepkov and Ralph Shiell from Trent University went into more detail on how to use them effectively. Each scratch card comes with 10 lines of A–E answer spots, covered with a scratching surface similar to a lottery ticket. The student reads the question on a separate sheet and then scratches off what they think is the correct answer on the card. If they are right, they reveal a star underneath and receive full points for the question. If they are wrong, they pick their second choice and scratch that one off. A correct answer on the second try is worth 1/2 of the points; on the third try, 1/4. After that, the student can keep scratching until they find the correct answer but won’t receive any points. Since there is no way to unscratch an answer, the cards make it easy to give partial credit for a multiple-choice question.
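The point schedule described in the talk decays with each scratch. A minimal sketch of that schedule; the full, 1/2, 1/4, then zero sequence is from the presentation, but the function name and the `full_points` parameter are my own framing:

```python
def scratch_card_points(scratches_to_star, full_points=1.0):
    """Points earned when the star is revealed on the Nth scratch.

    Schedule from the talk: 1st scratch -> full points, 2nd -> 1/2,
    3rd -> 1/4, and any later scratch -> 0 (though the student still
    ends up finding the correct answer).
    """
    if scratches_to_star < 1:
        raise ValueError("the star is found on the 1st scratch at the earliest")
    schedule = {1: 1.0, 2: 0.5, 3: 0.25}
    return full_points * schedule.get(scratches_to_star, 0.0)
```

For example, a four-point question answered correctly on the second scratch would earn `scratch_card_points(2, full_points=4.0)`, i.e. 2 points.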
The key point is that students are discouraged from guessing (because they lose points) but are encouraged to keep trying until they get it right. Because students know the correct answer after completing each question, they can learn even from a summative assessment. This also means you can ask a series of related questions in which the answer to later questions depends on answering previous ones correctly, without multiple jeopardy, that is, without a student losing points on the whole group of questions just because they couldn’t answer the first one. Both presenters emphasized how much students enjoy the scratch-card tests. When the cards are used in group testing, students high-five and cheer when they reveal the star on the first scratch, injecting a lot of energy into the classroom during what is usually a pretty low-energy activity.
The final talk I saw was one of the keynotes, by Kimberly Tanner from San Francisco State University. She presented a research technique her group pioneered that uses card sorting to reveal how novices and experts organize biological knowledge. Tanner’s group wanted a way to measure how students’ ability to “think like a biologist” improves as they progress from their freshman year through graduation. To address this higher-level skill set, she wanted a test that could measure how students connect the information they have learned in their biology classes, and how similar those connections are to experts’. She hypothesized that beginners would group biological questions according to surface features like taxa, while expert biologists would group them according to deeper features such as “evolution” or “structure-function”. Following earlier work in psychology and physics education, Tanner’s team created a set of cards containing biology problems and asked students to sort the cards into piles and label the piles. Encouragingly, she found that students in her program did indeed progress from sorting on surface features as freshmen to sorting on deeper features as graduating seniors. I was particularly excited because this card-sorting technique looks powerful, and we use similar sorting challenges in our Evolutionary Evidence lab, exploring the different patterns one would expect if species diversification is governed by descent with modification vs. by intelligent design.
The next WCSE will be in the summer of 2017. If you’re interested in a friendly meeting with immediately useful techniques for changing your teaching, I’d certainly suggest checking it out.