Digital Module 35


Through-Year Assessment


Module Overview

Through-year assessments are, in general, assessments administered in multiple parts and at different times over the course of a school year (Lorié et al., 2021; Dadey & Gong, 2023). They are also referred to as instructionally embedded, through-course, or periodic assessments. A number of through-year assessment models exist, and they have gained favor for a variety of reasons, including their potential to inform subsequent instruction, to align more closely with and respond to curricula and instruction, to provide proximal measures of learning, and to measure student progress or growth more sensitively than typical year-end summative assessments (ATLAS, 2020; Clark et al., 2017; Gong, 2021; NWEA, 2020; Wise, 2011).

The varied uses of through-year assessment systems, however, have raised questions about the criteria used to evaluate their technical quality. Criteria related to the definition of assessment targets for status versus growth, content domain sampling, the relationship among the multiple forms or tests, and the aggregation and weighting of results, for example, have implications for an assessment's design and for the validity of interpretations of its results (Gong, 2021; Wise, 2011). In this digital ITEMS module, Dr. Nathan Dadey, Dr. Brian Gong, Yun-Kyung Kim, and Dr. Edynn Sato present information about through-year assessment, including discussion of major test design elements and considerations, key challenges that threaten assessment validity and utility, recommended methods to address these challenges, and considerations for implementation. Examples and discussion of through-year assessment models are also provided.
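To make the point about aggregation and weighting concrete, the following minimal sketch (a hypothetical illustration, not material from the module) shows how different weighting schemes over the same set of through-year administrations can yield different annual summary scores; all scores, weights, and names are invented:

# Hypothetical illustration (not from the module): how the weighting of
# per-administration results changes a through-year annual summary score.
def aggregate(scores, weights):
    """Return the weighted average of per-administration scale scores."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

# Invented scale scores for one student across three administrations
# (e.g., fall, winter, spring).
scores = [480, 500, 530]

# Equal weights treat each administration as an equivalent sample of the domain.
equal = aggregate(scores, [1, 1, 1])          # 503.3

# Weighting the final administration most heavily emphasizes end-of-year status.
spring_heavy = aggregate(scores, [1, 1, 3])   # 514.0

print(f"Equal weights: {equal:.1f}; spring-weighted: {spring_heavy:.1f}")

Because the two schemes support different inferences (a more growth-sensitive composite versus an end-of-year status measure), the choice of weights is a design decision with direct implications for the validity of score interpretations.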

Meet the Instructors

Nathan Dadey

Nathan Dadey, Ph.D. is a Senior Associate at the Center for Assessment. His work focuses on the design, scaling, and use of educational assessments, particularly assessments used for accountability purposes, and addresses issues that threaten the validity of operational assessment and accountability programs. Dr. Dadey uses psychometric and statistical methods to address practical problems, including issues related to combining interim assessment data, the dimensionality of alternate assessments, subscores, and vertical scales. He aims to produce methodological and applied work that contributes to improved understanding and use of assessment results in policy contexts.



Yun-Kyung Kim

Yun-Kyung Kim is a doctoral student at the University of California, Los Angeles in the Social Research Methodology division, with an emphasis on advanced quantitative methods. She received a Master's degree in Educational Measurement and Evaluation from Seoul National University. Ms. Kim has been involved in the design and implementation of measurement and evaluation studies, applying latent variable models including item response theory models. Her research focuses on latent variable models for longitudinal data and on inferences about growth, change, and future trajectories, mainly in the contexts of English language proficiency assessment and admissions.

Brian Gong

Brian Gong, Ph.D. is co-founder and Senior Associate at the Center for Assessment. He has been involved in creating policies, models, and criteria for promoting validity, reliability, and credibility in both assessments and accountability systems. Dr. Gong has helped develop solutions including educationally valuable and technically defensible state accountability systems and innovative assessments (e.g., science performance, writing portfolios, learning progressions, growth, non-cognitive, and comprehensive assessment systems). He was a member of the committee tasked with revising the Standards for Educational and Psychological Testing and was co-author of a content methodology for implementing the CCSSO Criteria for Procuring and Evaluating High-Quality Assessments.

Edynn Sato

Edynn Sato, Ph.D. is CEO of Sato Education Consulting LLC and Director of Psychometrics & Research for WIDA at the University of Wisconsin-Madison. She is an experienced Peer Reviewer of State Assessments for the U.S. Department of Education, serves on several technical advisory committees, recently was co-Principal Investigator of a grant and lead designer for an alternate English language proficiency assessment, and has contributed substantively to the development of a number of other large-scale assessments for accountability. Her research focuses on academic English language processes, opportunity structures and cultural responsiveness, and assessment fairness and validity, with particular interest in alternative and innovative ways to measure what multilingual learners and students with disabilities know and can do.

Abstract

INTRODUCTION

Upon completion of this ITEMS module, learners should be able to:

  • Define relevant key terminology and underlying concepts related to through-year assessment.
  • Understand the current range of purposes and uses, design elements, key challenges, and recommended methods and approaches for addressing those challenges in support of an assessment's validity and utility.

Video:
  

Module Content

SECTION 1

Context

Upon completion of this section, learners should be able to:
  • Become familiar with Federal requirements for state assessments for accountability
  • List desired areas for the improvement and innovation of assessment that have implications for a through-year assessment model
  • Define “through-year assessment”
  • Distinguish through-year assessment from other types of assessment (e.g., interim, formative, summative)

Video: 

Interactive Learning Check:
Learning_Check_-_Section_1.pptx
Please right-click the file, choose "Save Link As...", and open it with PowerPoint on your PC.
SECTION 2

Major Design Elements

Upon completion of this section, learners should be able to:
  • Understand that through-year assessments involve multiple inferences about students for multiple purposes. 
  • Characterize key distinctions in how the content domain can be “structured” across assessment “modules”. 
  • Examine practical issues related to administration.
  • Articulate an overall framing for score aggregation as well as provide detail on a variety of approaches.

Video:

Interactive Learning Check:
Learning_Check_-_Section_2.pptx
Please right-click the file, choose "Save Link As...", and open it with PowerPoint on your PC.

SECTION 3

Examples, Key Challenges, and Methods to Address the Challenges

Upon completion of this section, learners should be able to:
  • Become familiar with examples of through-year assessments currently in use or in development
  • Discuss the examples in terms of major design elements and considerations
  • List key challenges to the development and use of through-year assessments
  • Discuss methods to address the challenges in support of the assessment’s validity

Video: 

Interactive Learning Check:
Learning_Check_-_Section_3.pptx
Please right-click the file, choose "Save Link As...", and open it with PowerPoint on your PC.

SECTION 4

Considerations for Implementation

Upon completion of this section, learners should be able to:
  • Identify key considerations for successful implementation in addition to technical design
  • Identify how aspects discussed earlier in the module support needed evaluation and continuous improvement



Interactive Learning Check:
Learning_Check_-_Section_4.pptx
Please right-click the file, choose "Save Link As...", and open it with PowerPoint on your PC.

SECTION 5

Summary and Relevant Areas of Research

Video:



Downloadable Content

Click to view, or right-click and choose "Save Link As..."