Wednesday, August 6, 2008 - 2:50 PM

COS 64-5: Multiple forms of assessment to analyze undergraduate students’ understanding of evolution by natural selection

Elena Bray Speth and Diane Ebert-May, Michigan State University.

Background/Question/Methods

Evolution by natural selection is one of the foundations and unifying themes of all biological sciences. Yet incoming college students seem to struggle with some of the basic concepts underlying evolutionary theory. This study describes student learning of evolution by natural selection in a large-enrollment, first-semester introductory biology course. It is widely recognized that active-learning pedagogy enhances student performance and promotes learning with understanding. Student understanding, however, is a complex and multidimensional achievement, which presents a significant challenge for both instruction and assessment. How do we know whether our instruction is effective in promoting students’ understanding?

The question addressed in our study is how to obtain reliable and consistent measurements of students’ understanding of evolutionary concepts. Our assumption is that using multiple types of assessment provides a more powerful tool for analyzing the richness and complexity of student learning outcomes than any single type of assessment. We analyzed student learning in two sections of the course that implemented a learner-centered instructional design, including a sequence of active-learning activities about evolution. Among these activities, we tested the efficacy of Avida-ED, software designed to help undergraduate students learn about evolution and the nature of science.

Results/Conclusions

The forms of assessment used included, but were not limited to, multiple representations of concepts, use and interpretation of models, multiple-choice test items (from the Concept Inventory of Natural Selection), and an open-ended essay question. Results from the analysis of four courses prior to this study confirmed the assumption that a single type of assessment provides incomplete data about student learning, and reinforced the idea that the design of a reliable scoring rubric significantly affects the quality and richness of the results. We used data collected from four large-enrollment courses, for biology majors and non-majors, to develop and refine a single rubric that we can apply to multiple assessments. From the analysis of students’ responses to test items, we derived five major concepts underlying evolution by natural selection: variation among individuals of a population, origin of variation, inheritance of genetically determined traits, differential fitness, and change in a population. The rubric we devised allows us to trace these five concepts across a variety of assessments. Because the rubric is quantitative, we can apply a variety of statistical methods (including analysis of covariance, multiple regression, and principal components analysis) to data obtained from multiple assessments of students’ understanding of evolution.
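
As a rough illustration of how such rubric data could be organized and analyzed, the sketch below scores the five concepts for simulated students and runs a principal components analysis in Python. The column names, the 0–2 score scale, the assessment labels, and the simulated data are illustrative assumptions only; they are not the study’s actual rubric or results.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    # Hypothetical rubric data: one row per student response, one column
    # per concept, each concept scored 0 (absent) to 2 (fully correct).
    concepts = ["variation", "origin_of_variation", "inheritance",
                "differential_fitness", "population_change"]

    rng = np.random.default_rng(0)
    scores = pd.DataFrame(rng.integers(0, 3, size=(200, len(concepts))),
                          columns=concepts)
    # Label which assessment each response came from (illustrative labels).
    scores["assessment"] = rng.choice(["CINS", "essay", "model"], size=200)

    # Principal components analysis on the concept scores, to see which
    # combinations of concepts account for most of the variation in
    # students' responses across assessments.
    pca = PCA(n_components=2)
    components = pca.fit_transform(scores[concepts])
    print(pca.explained_variance_ratio_)

Because each assessment yields the same five concept scores, analyses of this kind can compare how a given concept is expressed across assessment types, which is the motivation for using a single rubric throughout.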