Transforming science education through research-driven innovation
Constructed-response assessments, in which students use their own language to demonstrate knowledge, are widely viewed as providing greater insight into student thinking than multiple-choice assessments. Historically, however, constructed-response assessments were expensive and time-consuming to score. Recent advances in technology and measurement research are making them a feasible option for educational settings. Lexical analysis and machine-learning technologies allow researchers to use computers to score student and teacher writing. The goal is to develop computer models that score written responses with the same levels of accuracy and reliability as expert human scorers.
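The core idea, automated scoring whose results are validated against expert human raters, can be illustrated with a deliberately simplified sketch. All names and data below are hypothetical, and real systems train statistical models on large human-scored corpora rather than fixed keyword rubrics; the sketch only shows the shape of the workflow: score responses lexically, then check chance-corrected agreement with human scores.

```python
# Minimal sketch (hypothetical data): lexical scoring of constructed
# responses, plus Cohen's kappa as an agreement check against humans.

def lexical_score(response, rubric_terms, threshold=2):
    """Score 1 (correct) if the response uses at least `threshold`
    rubric-aligned terms, else 0. A stand-in for a trained model."""
    words = set(response.lower().split())
    hits = sum(1 for term in rubric_terms if term in words)
    return 1 if hits >= threshold else 0

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary score lists."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n    # observed agreement
    pa1 = sum(a) / n                              # rater A's rate of 1s
    pb1 = sum(b) / n                              # rater B's rate of 1s
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)        # agreement by chance
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

rubric = {"evaporates", "condenses", "vapor", "cycle"}
responses = [
    "the water evaporates into vapor and then condenses as rain",
    "the water just disappears into the air",
]
machine = [lexical_score(r, rubric) for r in responses]
human = [1, 0]  # hypothetical expert scores for the same responses
print(machine, cohens_kappa(machine, human))  # → [1, 0] 1.0
```

A kappa near 1.0 would indicate the model matches human scorers well beyond chance; in practice this comparison is run on held-out responses the model never saw during training.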
BSCS Science Learning is leveraging these technologies in two research projects: PCK*lex and ArguLex.
The first project, PCK*lex, explores the measurement of teachers’ pedagogical content knowledge (PCK), the type of teacher knowledge that bridges content knowledge and how to teach that content effectively in classrooms. It builds on several STeLLA studies that have measured PCK as an outcome of professional learning, as well as on the work of the 2012 BSCS PCK Summit, which brought together researchers from around the world to develop a consensus model of PCK. The product of the PCK*lex project will be a computer scoring instrument that measures teachers’ PCK. The instrument will analyze teachers’ written descriptions of the instructional practices they observe in a video analysis task, in which teachers view carefully selected clips from science lessons that strategically illustrate content-specific pedagogical moves. The computer scoring instrument will accurately reflect the time-consuming process of human scoring and will be available online. It will provide rapid PCK scores for research and evaluation purposes as well as formative feedback for teacher educators, professional learning providers, and teachers themselves. This project is a collaboration with the AACR group at Michigan State University.
Following in the footsteps of PCK*lex is ArguLex, a project that applies similar technologies to measuring students’ ability to engage in scientific argumentation. Explanation and argument are essential practices in the Next Generation Science Standards (NGSS). However, these new standards will have a meaningful impact only if they are accompanied by high-quality assessments closely aligned with a three-dimensional vision for teaching and learning science. Such assessments require a shift away from reliance on the efficiency and affordability of multiple-choice items and toward more subjective, written tasks aligned to NGSS performance expectations. The goal of the ArguLex project is to use automated analysis and machine-learning techniques to develop an efficient, valid, and reliable measure of students’ placement on a learning progression for argumentation. We are also interested in whether the computer scoring models are more or less biased against English language learners than humans scoring the same data (relative linguistic bias), and in the capacity of automated scoring to differentiate between linguistic fluency and argumentation ability.
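The relative-bias question can be illustrated with a toy comparison (all data below is hypothetical): compute machine–human scoring agreement separately for each subgroup and look for a gap between groups.

```python
# Sketch of a relative-bias check (hypothetical scores): compare
# machine-human agreement for ELL vs. non-ELL student subgroups.

def agreement(machine, human):
    """Fraction of responses where machine and human scores match."""
    return sum(m == h for m, h in zip(machine, human)) / len(machine)

# Hypothetical binary scores (1 = proficient argument, 0 = not).
scores = {
    "ELL":     {"machine": [1, 0, 0, 1, 0], "human": [1, 0, 1, 1, 0]},
    "non_ELL": {"machine": [1, 1, 0, 1, 0], "human": [1, 1, 0, 1, 0]},
}
for group, s in scores.items():
    print(group, agreement(s["machine"], s["human"]))
# → ELL 0.8, non_ELL 1.0
```

A consistently lower agreement rate for one subgroup would flag the model for closer review, for example to check whether it penalizes non-standard phrasing rather than weaker argumentation.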
This material is based upon work supported by the National Science Foundation under Grant Nos. 1437173 and 1561150. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.