Improving Automated Assessment for Student Open-responses in Mathematics
Open-ended questions in mathematics are commonly used by teachers to monitor and assess students' deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes and strategies students adopt in formulating their responses. While these responses help inform teachers about their students' progress and understanding, the variation among them can make it difficult and time-consuming for teachers to manually read, assess, and provide feedback on student work. For this reason, there has been a growing body of research on AI-powered tools to support teachers in this task. This work builds upon prior work that presents a model designed to help automate the assessment of student responses to open-ended questions in mathematics through sentence-level semantic representations. We conduct an error analysis of this model to examine characteristics of student responses that may be considered in further improving the method. We find that the model performs poorly in the presence of mathematical terms and images in student responses. We then introduce a model as a step toward improving the method's handling of mathematical terms, and we find that this new model outperforms the previously published benchmarks across three different metrics.
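To illustrate the general idea behind assessment via sentence-level semantic representations, the sketch below scores a new response by comparing it against previously graded responses and assigning the grade of the most similar one. This is a minimal, hypothetical sketch: it uses a toy bag-of-words vector and cosine similarity in place of the learned sentence embeddings and model described in the work, and the example responses and grades are invented for illustration.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'. The actual method uses learned
    sentence-level semantic representations; this stand-in only
    illustrates the similarity-based scoring idea."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(response, graded):
    """Assign the grade of the most semantically similar
    previously graded response (a nearest-neighbor heuristic)."""
    vec = embed(response)
    best_text, best_grade = max(graded, key=lambda g: cosine(vec, embed(g[0])))
    return best_grade

# Hypothetical teacher-graded examples (grade on an illustrative scale).
graded = [
    ("the slope is rise over run", 4),
    ("i guessed", 1),
]
print(score("slope equals rise divided by run", graded))
```

A nearest-neighbor scheme like this also hints at the failure mode the abstract reports: mathematical notation (e.g. "y = mx + b") and embedded images carry little signal in a purely text-based representation, so responses relying on them are matched poorly.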
Permanent link to this page: https://digital.wpi.edu/show/n296x2424