Leveraging Auxiliary Data from Similar Problems to Improve Automatic Open Response Scoring
As computer-based learning platforms have become ubiquitous in educational settings, there is a growing need to provide teachers with better support in assessing open-ended questions. Particularly in mathematics, teachers often rely on open-ended questions, prompting students to explain their reasoning or thought processes, to assess students' understanding of content beyond what is typically achievable through other types of problems. In recognition of this, the development and evaluation of automated assessment methods and tools has been the focus of numerous prior works, which have demonstrated the potential of such systems to help teachers assess open-ended work more efficiently. While promising, many of the existing methods and systems require large amounts of student data to make reliable estimates, and the amount of available data may vary in real-world applications. In this work, we explore whether an automated scoring model trained for a single problem can benefit from auxiliary data collected from other, similar problems, addressing this "cold start" problem. We further examine how factors such as sample size and the degree of similarity of the utilized problem data affect model performance. We find that using data from similar problems not only improves predictive performance by increasing sample size, but also leads to greater overall model performance than using data solely from the original problem when sample size is held constant.
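The core idea of pooling auxiliary data from a similar problem with a target problem's small "cold start" sample can be sketched as below. All responses, scores, and model choices here are illustrative placeholders and assumptions, not the study's actual dataset or scoring pipeline.

```python
# Hedged sketch: training one open-response scorer on a small target-problem
# sample augmented with responses from a similar problem. The data and the
# TF-IDF + logistic regression model are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small "cold start" sample of responses to the target open-ended problem.
target_responses = [
    "I added the two numbers to get the total",
    "I subtracted because the problem asked for the difference",
    "seven",
    "I guessed",
]
target_scores = [1, 1, 0, 0]  # toy labels: 1 = full credit, 0 = no credit

# Auxiliary responses collected from a similar problem.
aux_responses = [
    "I combined both amounts to find the sum",
    "I took one amount away from the other",
    "twelve",
    "no idea",
]
aux_scores = [1, 1, 0, 0]

# Pool the target and auxiliary data before fitting a single scorer.
X = target_responses + aux_responses
y = target_scores + aux_scores

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X, y)

# Score two unseen responses to the target problem.
preds = model.predict(["I added the amounts together", "dunno"])
print(list(preds))
```

In a real evaluation one would compare this pooled model against a model fit on target-problem data alone, holding total sample size constant, which is the comparison the abstract describes.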
Permanent link to this page: https://digital.wpi.edu/show/n296x2467