Bryce Battisti, University of California
Student responses to multiple-choice items are typically analyzed using metrics calculated under Classical Test Theory (CTT). Metrics such as quartiles, means, standard deviations, difficulty, and reliability can provide a wealth of information about how well students answer particular items and how well items discriminate between students. However, it is difficult, using CTT, to graphically display responses to all options within an item. Item Characteristic Curves (ICCs), calculated using Item Response Theory (IRT), are well suited to characterizing patterns in responses to options within items on tests that measure understanding of scientific principles. IRT estimates an examinee's ability to answer an item correctly based on the difficulty of the item and the examinee's pattern of responses to each of the other items on the test. Examinees' performance on each item is plotted on an ICC against the spectrum of abilities in the population of examinees. ICCs display a trace line for each item option and show that, on typical items, the probability of choosing the correct answer increases with level of understanding. For less typical items (i.e., those designed to detect misconceptions), ICCs reveal the conceptual detours some students take on the way from misconception to scientifically accepted conception. ICCs allow for a detailed and comprehensive analysis of the patterns of examinee responses, providing a window into the process of conceptual change.
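To make the per-option trace lines concrete: curves of this kind are commonly modeled with Bock's nominal response model, in which the probability of selecting option k at ability θ is a softmax over option-specific slopes and intercepts, P_k(θ) = exp(a_k θ + c_k) / Σ_j exp(a_j θ + c_j). The abstract does not name a specific model, so the Python sketch below is only one plausible illustration; the slope and intercept values for the hypothetical four-option item are invented and are not taken from the study.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical parameters for one four-option item under Bock's nominal
# response model: slope a_k and intercept c_k per option. Values are
# illustrative (they sum to zero, a common identifiability constraint)
# and are not fitted to real response data.
slopes = np.array([-0.8, -0.3, 1.5, -0.4])    # option discriminations a_k
intercepts = np.array([0.2, 0.5, 0.0, -0.7])  # option intercepts c_k

theta = np.linspace(-3, 3, 200)  # ability scale, in standard deviations

# P_k(theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)
logits = np.outer(theta, slopes) + intercepts  # shape (200, 4)
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)      # softmax over options

# One trace line per option, as on an ICC plot.
for k, label in enumerate(["A", "B", "C (keyed)", "D"]):
    plt.plot(theta, probs[:, k], label=f"option {label}")
plt.xlabel("ability (theta)")
plt.ylabel("P(choose option)")
plt.title("Item characteristic curves: one trace line per option")
plt.legend()
plt.show()

With these made-up values, the keyed option's trace line rises monotonically with ability, while some distractors peak at intermediate abilities before fading, the kind of pattern the abstract attributes to items designed to detect misconceptions.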