Matched Question Analysis

The matched question analysis provides an estimate of the amount of learning after adjusting for guessing.

Matched question analysis uses data from students who took both the pretest and the posttest to calculate learning values grouped by question; students who took only one of the two exams are removed from the analysis. This is the most important analysis file for assessment and pedagogical improvement, as it provides estimates of the percent of students who learned a given question. These values can be compared over time as instructors make changes to their course or program.
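As a sketch of the matching step, using hypothetical per-question scores keyed by student ID (the real output file's identifiers and layout will differ):

```python
# Hypothetical per-question scores keyed by student ID (1 = correct,
# 0 = incorrect); real identifiers and file layout will differ.
pretest = {1: 1, 2: 0, 3: 1}
posttest = {2: 1, 3: 1, 4: 0}

# Matched question analysis keeps only students who took BOTH exams;
# students 1 and 4 are dropped here.
matched = sorted(pretest.keys() & posttest.keys())

# Raw learning types among matched students:
#   pl: wrong -> right, nl: right -> wrong, rl: right -> right.
m = len(matched)
pl = sum(pretest[i] == 0 and posttest[i] == 1 for i in matched) / m
nl = sum(pretest[i] == 1 and posttest[i] == 0 for i in matched) / m
rl = sum(pretest[i] == 1 and posttest[i] == 1 for i in matched) / m
```

With these toy numbers, one matched student moved from wrong to right (pl = 0.5) and one retained a correct answer (rl = 0.5).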

If you are new to this analysis, focus your attention on $\hat\gamma$ (gamma) and $\hat\gamma/(1-\hat\mu)$ (gamma gain). In simple terms, gamma is the proportion of students who learned the material (as opposed to merely answering the question correctly). Higher is better, but comparing different questions can be problematic because they can be at different levels of difficulty. The gamma gain estimate ($\hat\gamma/(1-\hat\mu)$) is the proportion of students who did not already know the material who learned it. In addition to these measures of learning, 'raw' learning values are included in the output file; if you are new to this analysis, these can be ignored.

Formally, $\hat\gamma$ (gamma), $\hat\alpha$ (alpha), and $\hat\mu$ (mu) correspond to 'corrected' measurements of the learning types when factoring in the number of students guessing; these adjustments assume that the probability of correctly guessing can be estimated, which is more reasonable in higher-stakes testing environments. $\hat\gamma$ is corrected positive learning, $\hat\alpha$ is corrected negative learning, $\hat\mu$ is corrected pretest stock knowledge (corrected retained plus corrected negative learning), and flow is the corrected pretest/posttest delta ($\hat\gamma-\hat\alpha$). The following equations are used to find the corrected values:

$$
\begin{aligned}
\hat \mu &= \frac{\hat{\text{nl}}+\hat{\text{rl}}-1}{n-1}+\hat{\text{nl}}+\hat{\text{rl}} \\
\hat \gamma &= \frac{n \left(\hat{\text{nl}}+\hat{\text{pl}}\, n+\hat{\text{rl}}-1\right)}{(n-1)^2} \\
\hat \alpha &= \frac{n \left(\hat{\text{nl}}\, n+\hat{\text{pl}}+\hat{\text{rl}}-1\right)}{(n-1)^2}
\end{aligned}
$$

where $\hat{\text{pl}}$ (positive learning), $\hat{\text{rl}}$ (retained learning), and $\hat{\text{nl}}$ (negative learning) refer to the raw learning type values and $n$ is the number of answer options. It is important to use these corrected values, as the raw scores can be sensitive to the percent of the class guessing. Smith and Wagner (2018) detail this adjustment.
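The equations above can be sketched as a small function; the raw learning-type proportions below are hypothetical, chosen only to illustrate the arithmetic:

```python
def corrected_learning(nl, pl, rl, n):
    """Guessing-corrected estimates from raw learning-type proportions.

    nl, pl, rl: raw negative, positive, and retained learning proportions;
    n: number of answer options (so 1/n is the assumed chance of a
    correct guess). Implements the three equations above.
    """
    mu = (nl + rl - 1) / (n - 1) + nl + rl
    gamma = n * (nl + pl * n + rl - 1) / (n - 1) ** 2
    alpha = n * (nl * n + pl + rl - 1) / (n - 1) ** 2
    return mu, gamma, alpha

# Hypothetical values: 5% negative, 40% positive, 30% retained learning,
# on a question with 4 answer options.
mu, gamma, alpha = corrected_learning(nl=0.05, pl=0.40, rl=0.30, n=4)
flow = gamma - alpha           # corrected pretest/posttest delta
gamma_gain = gamma / (1 - mu)  # share of non-knowers who learned
```

Note that a corrected estimate such as $\hat\alpha$ can come out slightly negative, since it is an estimate rather than a direct count.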

R=nl^+pl^+rl^12pl^+(nl^+rl^1)(1/n+1)R = \frac{\hat {\text{nl}}+\hat{\text{pl}}+\hat{\text{rl}}-1}{2 \hat{\text{pl}}+(\hat{\text{nl}}+\hat{\text{rl}}-1) (1/n+1)}

Gamma gain ($\hat\gamma/(1-\hat\mu)$) and $R$ (the column R) were introduced by Smith and White (2021). $R$ compares the sensitivity of the gamma and gamma gain estimators to probability misspecification. An $R$ value between -1 and 1 indicates the gamma gain estimator is less sensitive to probability misspecification. An $R$ value greater than 1 or less than -1 indicates the gamma estimator is less sensitive.
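As a sketch, $R$ can be computed directly from the raw values using the equation above (the numbers are the same hypothetical proportions as before):

```python
def r_sensitivity(nl, pl, rl, n):
    # R from the equation above: |R| < 1 means gamma gain is less
    # sensitive to a misspecified guessing probability; |R| > 1 means
    # gamma is less sensitive.
    return (nl + pl + rl - 1) / (2 * pl + (nl + rl - 1) * (1 / n + 1))

r = r_sensitivity(nl=0.05, pl=0.40, rl=0.30, n=4)  # about 20 here
```

For these (hypothetical) values $|R| > 1$, so by the rule above the gamma estimator would be the less sensitive of the two.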

Columns ending in 'Zero' indicate that the probability of guessing is determined by assuming that true negative learning is zero instead of using the supplied value. For these columns, $\hat\alpha$ is set to zero in the equations above, which allows the system to solve for the implied probability of correctly guessing. This implied probability is then used to calculate $\hat\gamma$ (column GammaZero), $\hat\gamma/(1-\hat\mu)$ (column GammaGainZero), and $R$ (column RZero). These columns are useful when the supplied probability of correctly guessing could be substantially incorrect, as in low-stakes exams or when incentives manipulate a student's propensity to guess.
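One way to read the 'Zero' computation, derived purely from the equations above (the output file's exact implementation may differ): setting $\hat\alpha = 0$ in the corrected-alpha equation reduces it to $\hat{\text{nl}}\, n+\hat{\text{pl}}+\hat{\text{rl}}-1 = 0$, which can be solved for an implied $n$ (and hence an implied guessing probability $1/n$) that is then plugged back into $\hat\gamma$:

```python
def gamma_zero(nl, pl, rl):
    # Setting alpha-hat = 0 in the corrected-alpha equation reduces to
    # nl*n + pl + rl - 1 = 0; solving for n gives an implied number of
    # "effective" answer options, with implied guessing probability 1/n.
    n = (1 - pl - rl) / nl
    # Plug the implied n back into the corrected-gamma equation.
    return n * (nl + pl * n + rl - 1) / (n - 1) ** 2

# Same hypothetical proportions as before; the implied n works out to 6
# (implied guessing probability 1/6, up to float rounding).
g0 = gamma_zero(nl=0.05, pl=0.40, rl=0.30)
```

This sketch only covers GammaZero; GammaGainZero and RZero follow by reusing the same implied probability in the gamma gain and $R$ formulas.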