OOS 38-3 - Long-term skill retention in undergraduate biology students

Thursday, August 9, 2012: 8:40 AM
A106, Oregon Convention Center
Joseph Dauer1, Tammy Long2, Kristen M. Kostelnik2, Patrycja A. Zdziarska2 and Neelima Wagley2, (1)School of Natural Resources, University of Nebraska - Lincoln, Lincoln, NE, (2)Plant Biology, Michigan State University, East Lansing, MI
Background/Question/Methods

Instructors often interpret high scores on a final exam as evidence that students have mastered the skills and content of a course. We reformed 3 sections of a majors’ biology course (N=517) to make model construction a central pedagogical component, and students repeatedly practiced applying modeling skills to multiple biological systems. We conducted talk-aloud interviews 2.5 years after course completion with a random sample of 30 students (10 per tertile of incoming GPA, distributed equally among the 3 sections). During the interview, students constructed a model similar to the one they built on the final exam. All models were analyzed for biological correctness and level of complexity (from linear to fully interconnected). In a second interview task, students ranked sample models that varied systematically across 3 levels of biological correctness and 3 levels of complexity, ordering them along spectra of overall quality (“best” to “worst”), correctness, and complexity. We then analyzed differences in correctness and complexity between the models students constructed in interviews and their rankings of the sample models.

Results/Conclusions

In interviews, students’ models were significantly less correct and less complex (more linear) than their final exam models (p<0.001 and p<0.004, respectively). Interestingly, middle-tertile students performed significantly better (p<0.001) on their interview models than lower- and higher-tertile students. In the ranking task, students ranked the sample models with the highest biological correctness as both “best” and “most correct”. The “best” and “most correct” sample models were always more biologically correct than students’ own interview and final exam models (p<0.001). The odds of a student selecting a low-correctness sample model as “best” were significantly lower than the odds of selecting a high-correctness sample model (p<0.001). Complexity showed a different pattern: students were significantly more likely to select middle-complexity models as “best”, placing high-complexity models toward the “worst” end of their spectrum (p<0.001). Students stated that correctness matters more than complexity when judging the best model; however, they could not themselves construct a biologically correct model. These results raise the question of whether mastery of a skill (e.g., systems-model construction) affects students’ long-term learning differently than other pedagogical approaches do. Moreover, they suggest that repeated practice constructing models in successive courses, rather than passive observation of models, is essential to maintaining this important skill.