AIME SIGIMIE Speaker Series: Predicting Item Difficulty for Pretest Items in Large-Scale Assessments
December 10 @ 4:00 pm – 5:00 pm EST
This session is presented by the AIME SIGIMIE.
Pretesting items in large-scale assessments requires substantial time, resources, and financial investment. Accurately predicting item difficulty before any administration can streamline pretesting, reduce costs, and provide earlier feedback to item writers, supporting targeted revisions that better align with intended difficulty. Effective difficulty prediction methods, therefore, have the potential to improve both efficiency and quality in item development and pool management.
In this study, we evaluate a two-step hybrid framework for predicting the IRT-based difficulty of pretest items in large-scale literacy assessments. First, we fine-tune transformer-based models on item text under multiple input configurations that vary the inclusion of stimuli, stems, options, and associated skills. Second, we feed the transformer predictions, together with handcrafted linguistic features, into traditional machine learning models. Using real-world pretest data, we then examine the gains over a baseline model that uses linguistic features alone, the differences among transformer models, and the contribution of each input source. Our results indicate that hybrid models consistently improve predictive accuracy while maintaining interpretability, offering a practical companion tool for large-scale assessment programs.
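To picture the second stage of the framework, here is a minimal Python sketch under stated assumptions: the stage-one transformer predictions are assumed to be precomputed, the data are synthetic placeholders, and gradient boosting stands in for the unspecified "traditional machine learning models." None of the names or values below come from the study itself.

```python
# Hypothetical sketch of stage 2 of the two-step hybrid framework:
# combine stage-1 transformer difficulty predictions with handcrafted
# linguistic features in a traditional ML model, and compare against a
# linguistic-features-only baseline. All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items = 200

# Placeholder inputs per pretest item:
#   transformer_pred - difficulty predicted by a fine-tuned transformer (stage 1)
#   linguistic_feats - handcrafted features, e.g. word count, sentence length
transformer_pred = rng.normal(0.0, 1.0, size=(n_items, 1))
linguistic_feats = rng.normal(0.0, 1.0, size=(n_items, 3))
# Synthetic IRT difficulty targets for illustration only.
irt_difficulty = (transformer_pred[:, 0]
                  + 0.3 * linguistic_feats[:, 0]
                  + rng.normal(0.0, 0.3, size=n_items))

# Hybrid model sees transformer predictions plus linguistic features;
# the baseline sees linguistic features only.
X_hybrid = np.hstack([transformer_pred, linguistic_feats])
X_baseline = linguistic_feats

hybrid = GradientBoostingRegressor(random_state=0)
baseline = GradientBoostingRegressor(random_state=0)

hybrid_rmse = -cross_val_score(
    hybrid, X_hybrid, irt_difficulty,
    scoring="neg_root_mean_squared_error", cv=5).mean()
baseline_rmse = -cross_val_score(
    baseline, X_baseline, irt_difficulty,
    scoring="neg_root_mean_squared_error", cv=5).mean()
print(f"baseline RMSE: {baseline_rmse:.3f}  hybrid RMSE: {hybrid_rmse:.3f}")
```

In this arrangement, the transformer's prediction enters the second-stage model as just one more feature, so standard feature-importance tools still apply to the handcrafted linguistic features, which is one way the abstract's claim of "maintaining interpretability" can be read.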
About the Presenter
- YoungKoung Kim (College Board)
Meeting Details
https://umn.zoom.us/j/98791553326?pwd=TojUjPWwpzw7RaZ6Lc9UzWiAgyX3J3.1
Meeting ID: 987 9155 3326
Passcode: 3HkRqw

