Digital Module 32



Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores 


Module Overview

Most individuals who take, interpret, design, or score tests are aware that examinees do not always give full effort when responding to items. However, many are not aware of how pervasive the issue is, what its consequences are, or how to address it. In this digital ITEMS module, Dr. James Soland helps fill these gaps in the knowledge base. Specifically, the module enumerates how frequently behaviors associated with low effort occur and some of the ways those behaviors can distort inferences based on test scores. The module then explains some of the most common approaches for identifying low effort and correcting for it when examining test scores. Brief discussion is also devoted to how these methods align with, and diverge from, those used to deal with low respondent effort in self-report contexts. Data and code are provided so that readers can implement some of the described methods in their own work.

Meet the Instructor

James Soland

Assistant Professor, University of Virginia

Jim Soland is an Assistant Professor of Research, Statistics, and Evaluation at the University of Virginia (UVA) and an affiliated research fellow at NWEA, an assessment nonprofit. His work focuses on how measurement decisions impact common uses of scores. Particular areas of interest include understanding how scoring choices affect inferences related to program evaluation and modeling student growth, as well as quantifying and addressing disengagement on tests and surveys. His work has been featured by the Collaborative for Academic, Social, and Emotional Learning (CASEL) and the Brookings Institution. Recently, he has collaborated on research examining the effects of COVID-19 on student learning, which was highlighted in the New York Times. Prior to joining NWEA, Jim completed a doctorate in Educational Psychology at Stanford University with a concentration in measurement. Jim has also served as a classroom teacher, a policy analyst at the RAND Corporation, and a Senior Fiscal Analyst at the Legislative Analyst's Office (LAO), a nonpartisan organization that provides policy analysis to support the California Legislature and the general public.




INTRODUCTION

Upon completion of this ITEMS module, learners should be able to:
  • Provide examples of why addressing test effort is important to valid uses of test scores.
  • Describe how measurement experts often define/operationalize low effort.
  • Implement basic techniques used to separate effortful and noneffortful responses.
  • Fit IRT models that account for noneffortful responses.
  • Consider applications of these, and other, methods for identifying and addressing low respondent effort in self-report contexts.
  • Understand extensions of these basic approaches for identifying and addressing low examinee effort.

Module Content

SECTION 1

Why should we worry about low examinee effort?

Upon completion of this section, learners should be able to:
  • Understand the prevalence of low examinee effort
  • Identify examinee/item characteristics associated with low effort
  • Articulate ways that low effort can affect parameter estimation
  • Articulate ways that low effort can affect common test-based inferences (see the sketch after this list)
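
On the last point, consider a minimal Python sketch (the variable names and parameter values below are illustrative assumptions, not the module's materials). When a share of responses are rapid guesses on four-option items, observed scores systematically understate what examinees actually know, even though abilities are unchanged:

import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 1000, 40

# True abilities and item difficulties on a logit scale (Rasch-like model).
theta = rng.normal(0, 1, n_examinees)
b = rng.normal(0, 1, n_items)

# Probability of a correct response under full effort.
p_effort = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))

# Suppose 15% of responses are rapid guesses on 4-option items (p = .25).
guess_mask = rng.random((n_examinees, n_items)) < 0.15
p_observed = np.where(guess_mask, 0.25, p_effort)
x = (rng.random((n_examinees, n_items)) < p_observed).astype(int)

# The observed mean proportion-correct falls below the full-effort benchmark.
print("mean p under full effort:", p_effort.mean().round(3))
print("observed mean p with guessing:", x.mean().round(3))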

Video: 

Interactive Learning Check:
Section_1_-_Learning_Check.pptx
Please right-click the file, click "Save Link As...", and open it with PowerPoint on your PC.

SECTION 2

How is low effort defined and operationalized in testing contexts?

Upon completion of this section, learners should be able to:
  • Define common approaches to identifying low effort
  • Understand the central role of item response times in many approaches to identifying low effort (a sketch follows this list)
  • Identify sources of meta-data other than response times that can be useful in identifying low effort
  • Articulate tradeoffs between response times and other sources of meta-data for identifying low effort
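
To make the response-time idea concrete, here is a minimal Python sketch in the spirit of Wise and Kong's response-time effort (RTE) index, the proportion of an examinee's responses classified as solution behavior. The response times and per-item thresholds below are made up for illustration, not taken from the module:

import numpy as np

def rte(rt, thresholds):
    """Response-time effort: the share of an examinee's responses whose
    response time exceeds the item's rapid-guessing threshold."""
    solution = rt > thresholds[None, :]   # True = solution behavior
    return solution.mean(axis=1)

# Toy data: response times (seconds) for 5 examinees on 4 items,
# with hypothetical per-item thresholds already chosen.
rt = np.array([[12.0, 30.1,  2.0, 18.5],
               [ 1.1,  1.4,  0.9,  2.2],
               [25.3, 40.0, 15.8, 22.1],
               [ 3.0, 28.0,  1.5, 19.0],
               [14.2,  2.1, 16.4, 21.7]])
thresholds = np.array([5.0, 6.0, 4.0, 5.5])

# The second examinee's RTE of 0.0 signals pervasive rapid guessing.
print(rte(rt, thresholds))
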
Video: 

Interactive Learning Check:
Section_2_-_Learning_Check.pptx
Please right-click the file, click "Save Link As...", and open it with PowerPoint on your PC.

SECTION 3

Setting Response Time Thresholds

Upon completion of this section, learners should be able to:
  • Define and implement threshold-setting methods (a sketch follows this list)
  • Articulate why the method chosen likely depends on the intended use of scores
  • Understand limitations and strengths of each approach
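
As a concrete example, the sketch below implements one common threshold-setting method in Python: a normative threshold in the spirit of Wise and Ma's NT10 rule (each item's threshold is 10% of its median response time, capped here at 10 seconds), alongside the simplest alternative, a single fixed threshold for all items. The simulated response times and the choice of cap are illustrative assumptions:

import numpy as np

def nt10_thresholds(rt, cap=10.0):
    """Normative-threshold method: set each item's rapid-guessing threshold
    at 10% of that item's median response time, up to a cap."""
    return np.minimum(0.10 * np.median(rt, axis=0), cap)

def fixed_threshold(rt, seconds=3.0):
    """Simplest alternative: one fixed threshold (e.g., 3 s) for all items."""
    return np.full(rt.shape[1], seconds)

# Toy response times (seconds): 500 examinees by 20 items.
rng = np.random.default_rng(0)
rt = rng.lognormal(mean=3.0, sigma=0.6, size=(500, 20))

print(nt10_thresholds(rt)[:5])
print(fixed_threshold(rt)[:5])
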
Video: 

Interactive Learning Check:
Section_3_-_Learning_Check.pptx
Please right-click the file, click "Save Link As...", and open it with PowerPoint on your PC.

SECTION 4

Addressing Low Effort

Upon completion of this section, learners should be able to:
  • Outline ways to prevent low effort before or as it happens
  • Define person- versus item-level filtering
  • Gain a conceptual understanding of how several baseline IRT models for addressing low effort are specified (see the scoring sketch after this list)
  • Identify which IRT model might be best in your context
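
To make filtering concrete, the Python sketch below illustrates item-level filtering at the scoring stage: responses flagged as rapid guesses are treated as missing when estimating ability under a 2PL with known item parameters. The item parameters, responses, and effort flags are made-up examples, and this is a simplified illustration rather than the module's code; the effort-moderated IRT model of Wise and DeMars instead builds the effort flags into the response model itself:

import numpy as np
from scipy.optimize import minimize_scalar

def map_theta(x, effort, a, b, prior_sd=1.0):
    """MAP ability estimate under a 2PL with known item parameters,
    treating responses flagged as noneffortful (effort = 0) as missing."""
    def neg_log_post(theta):
        p = 1 / (1 + np.exp(-a * (theta - b)))
        ll = effort * (x * np.log(p) + (1 - x) * np.log(1 - p))
        return -(ll.sum() - 0.5 * (theta / prior_sd) ** 2)
    return minimize_scalar(neg_log_post, bounds=(-4, 4), method="bounded").x

a = np.array([1.2, 0.8, 1.5, 1.0, 1.1])         # discriminations
b = np.array([-0.5, 0.0, 0.4, 1.0, -1.2])       # difficulties
x = np.array([1, 0, 1, 0, 1])                   # scored responses
effort = np.array([1, 1, 1, 0, 1])              # 0 = flagged rapid guess

print(map_theta(x, effort, a, b))                # filtered estimate
print(map_theta(x, np.ones_like(effort), a, b))  # naive, unfiltered estimate
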
Video: 

Interactive Learning Check:
Section_4_-_Learning_Check.pptx
Please right-click the file, click "Save Link As...", and open it with PowerPoint on your PC.

SECTION 5

How is low effort defined and operationalized in self-report contexts?

Upon completion of this section, learners should be able to:
  • Name common approaches to identifying low effort on surveys (one common screen is sketched after this list)
  • Identify tradeoffs involved in using each approach
  • Compare these approaches with the response-time approaches used for achievement tests
  • Know how to use multiple approaches in tandem to improve the validity of inferences
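
As one concrete survey-side screen, the long-string index flags respondents who give the same answer many times in a row (straightlining). A minimal Python sketch with made-up Likert data follows; the cutoff for flagging a run as suspicious is a judgment call and is not shown here:

import numpy as np

def longest_string(responses):
    """Long-string index: the longest run of identical consecutive
    answers each respondent gives."""
    runs = []
    for row in responses:
        best = run = 1
        for prev, cur in zip(row, row[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        runs.append(best)
    return np.array(runs)

# Toy Likert data (1-5) for 4 respondents on 10 items.
r = np.array([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3],   # straightlining
              [2, 4, 1, 5, 3, 2, 4, 1, 3, 5],
              [4, 4, 5, 4, 3, 3, 2, 4, 4, 4],
              [1, 2, 2, 2, 5, 4, 4, 3, 2, 1]])
print(longest_string(r))   # [10  1  3  3] -- flag unusually long runs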


Interactive Learning Check:
Section_5_-_Learning_Check.pptx
Please right-click the file, click "Save Link As...", and open it with PowerPoint on your PC.

SECTION 6

Activity

Data and Syntax:
Demo.zip
Please right-click the file, click "Save Link As...", and open it on your PC.

Downloadable Content

Click to view, or right-click and choose "Save Link As..."