Relating Classroom and Summative Assessment to the Curriculum

President's Message

Wilson, M. (2016, December). NCME Newsletter, 24(4), 1-4.

In the previous two president’s messages, I have (a) made a case for the importance of classroom assessment as a site for educational measurement research and development, and as a principal context in which educational measurement can be a positive influence for educational success, and (b) delineated two types of uses of summative assessment: information uses and signification uses. (See the June 2016 and September 2016 NCME Newsletters if you missed them.) In this message, I will turn my attention to the relationship between classroom and summative assessment and the curriculum.

At the end of the previous message, I described Paul Black’s “vicious triangle,” which illustrates the way that teachers’ plans for their students’ learning can be squeezed between the demands of the curriculum and the summative assessments that are used for evaluative purposes. This can have multiple harmful effects, including the replacement of teaching the curriculum with teaching the test, and related reductions in student engagement and teacher morale. The central issue is that the assessments (both classroom and summative) need to be working in concert with the curriculum. When that coherence breaks down, or, as is sometimes the case, was never there in the first place, the sorts of negative outcomes mentioned in the previous message can occur (Wilson, 2004).

Thus, we must develop ways of understanding and expressing the structure of both curriculum and assessments together (a) so that the curriculum can be used to define the goals of the assessment (i.e., constructs to be assessed), and (b) so that the results of the assessments can be interpreted directly in terms of those curriculum constructs. In my view, this is best achieved through the construction of learning progressions (also known as learning trajectories) that articulate student development through the curriculum in terms of the main content areas and reasoning and other disciplinary practices involved. One description of the concept of a learning progression is as follows:

Learning progressions are descriptions of the successively more sophisticated ways of thinking about an important domain of knowledge and practice that can follow one another as children learn about and investigate a topic over a broad span of time. They are crucially dependent on instructional practices if they are to occur. (Corcoran, Mosher, & Rogat, 2009, p. 37)

This idea of a progression of sophistication in student ways of thinking can be combined with the psychometric concept of a set of unidimensional constructs to create a roadmap of students’ development (Black, Wilson, & Yao, 2011). An illustration of this is provided in Figure 1, where the succession of higher and larger clouds represents the complexities of student thinking in the learning progression, and the vertical bars represent the psychometric constructs designed to act like lines of longitude, mapping out the main constructs in the learning progression. Within the constructs, there are levels that delineate different degrees of sophistication of the thinking within each construct. These would not function like lines of latitude unless they were coordinated across the constructs (although that would be possible, too). Note that the little figure in the bottom left-hand corner represents the curriculum developer and/or assessment developer who is creating this learning progression. For some examples of such learning progressions, see Brown, Nagashima, Fu, Timms, and Wilson (2010—scientific reasoning); Lehrer, Kim, Ayers, and Wilson (2014—statistics and modeling); Osborne, Henderson, MacPherson, and Yao (2016—scientific argumentation); and Wilson, Scalise, and Gochyyev (2015—ICT literacy). A simple unidimensional example involving buoyancy can be found in Kennedy and Wilson (2007).

Figure 1. Representation of a set of constructs mapping a learning progression.
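To make this structure concrete, here is a small, purely illustrative sketch (in Python) of one way the layout in Figure 1 might be encoded: a set of named constructs, each carrying an ordered sequence of levels from least to most sophisticated. Only the consumer in social networks construct and its level names come from the work cited here; the second construct and the student profile are hypothetical.

```python
# Purely illustrative sketch: encoding the structure shown in Figure 1 as a set
# of named constructs, each with an ordered sequence of levels (least to most
# sophisticated). Only "Consumer in social networks" and its levels come from
# the article; the second construct and the student profile are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Construct:
    name: str
    levels: tuple  # ordered from least to most sophisticated

    def level_index(self, level: str) -> int:
        """Position of a level within this construct's ordering (0 = lowest)."""
        return self.levels.index(level)


progression = [
    Construct("Consumer in social networks",
              ("Emerging consumer", "Conscious consumer", "Discriminating consumer")),
    Construct("Producer in social networks",   # hypothetical second construct
              ("Emerging producer", "Conscious producer", "Discriminating producer")),
]

# A student's (hypothetical) profile is then simply a level on each construct,
# which is the form in which classroom and summative results can be reported.
student_profile = {
    "Consumer in social networks": "Conscious consumer",
    "Producer in social networks": "Emerging producer",
}

for construct in progression:
    level = student_profile[construct.name]
    print(f"{construct.name}: level {construct.level_index(level) + 1} ({level})")
```

The point of such an encoding is simply that the same construct-and-level vocabulary can serve both the curriculum developer and the assessment developer, which is the coherence argued for below.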

Armed with such a map of student development, both the curriculum developer and the assessment developer can build a coordinated system of instruction and assessment, and the resulting coherence between the two can lead to greater usefulness of the assessments to instruction, and, thus, to a greater possibility of students achieving success (Wilson & Sloane, 2000). In terms of developing such a roadmap, we have found that, although curriculum ideas must be posited first, of course, it is essential that the assessment perspective be brought into consideration as early as possible, and that it is also important to incorporate actual data from (perhaps initial versions of) the assessments into the curriculum development process.

As an example of an empirical representation of one such construct, see Figure 2. This Wright map illustrates a construct called consumer in social networks (see Wilson et al., 2015, for more detail). The construct is shown vertically, with more sophisticated thinking towards the top. The histogram of x’s at the side represents student locations; the numbers on the right represent score levels for items (e.g., 44.2 locates the threshold between Categories 1 and 2 for the three-level item 44); and the right-hand labels show the three levels: emerging consumer, conscious consumer, and discriminating consumer. This map can then be used to design assessments for classroom purposes (e.g., diagnosing individual students’ levels of performance and, with the augmentation of student-fit statistics, checking for students with interestingly different response patterns) as well as for summative purposes (e.g., interpreting average gains by students in different classes), and also to relate the results from these two levels to each other.
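To show how the pieces of such a map fit together, here is a minimal sketch (in Python, with entirely hypothetical numbers, not the estimates reported in Wilson et al., 2015) of how a Wright map is laid out: person ability estimates and item threshold estimates from a Rasch-family model are plotted on a shared logit scale, with the level boundaries drawn across both.

```python
# Minimal sketch of a Wright map layout, with hypothetical values only: person
# ability estimates and item threshold estimates from a Rasch-family model are
# placed on the same logit scale, so student locations can be read directly
# against item thresholds and level boundaries.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
person_abilities = rng.normal(loc=0.0, scale=1.0, size=300)  # logits (hypothetical)

# Thresholds labelled "item.step"; e.g., "44.2" marks the boundary between
# categories 1 and 2 of item 44 (values here are placeholders).
item_thresholds = {"44.1": -1.3, "44.2": 0.4, "45.1": -0.6, "45.2": 1.1, "46.1": 0.0}

# Hypothetical cut points separating the three levels described in the text.
level_boundaries = {
    "Emerging / Conscious consumer": -0.8,
    "Conscious / Discriminating consumer": 0.9,
}

fig, (ax_persons, ax_items) = plt.subplots(1, 2, sharey=True, figsize=(6, 6))

# Left panel: sideways histogram of person locations (the column of x's in Figure 2).
ax_persons.hist(person_abilities, bins=25, orientation="horizontal")
ax_persons.set_xlabel("Students")
ax_persons.set_ylabel("Logit scale")

# Right panel: each item threshold plotted at its estimated difficulty.
for label, location in item_thresholds.items():
    ax_items.plot(0.5, location, marker="o")
    ax_items.annotate(label, (0.6, location), va="center")
ax_items.set_xticks([])
ax_items.set_xlim(0, 2)

# Dashed lines marking the (hypothetical) boundaries between performance levels.
for name, cut in level_boundaries.items():
    ax_persons.axhline(cut, linestyle="--", linewidth=0.8)
    ax_items.axhline(cut, linestyle="--", linewidth=0.8)
    ax_items.annotate(name, (1.0, cut), va="bottom", fontsize=8)

fig.suptitle("Sketch of a Wright map (hypothetical values)")
plt.show()
```

Because persons and thresholds share one scale, a student’s location can be read directly against the item thresholds and level bands, which is what makes the same map serviceable for both classroom and summative interpretations.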

One concern that can be raised is that, by adopting such structures for curricula, we would be constraining the choices of schools and teachers regarding their curriculum content planning. There are two points to note about this. The first is that this is true in the same sense that adopting standards is constraining, but it is indeed a choice that most educators are comfortable with (and, one might add, this constraint is somewhat stronger, due to the inherent ordering of the constructs, although such ordering is very common in standards documents). The second is that adopting a particular structure still leaves much room for adopting and adapting a variety of instructional practices and for accommodating specific educational contexts and strategies, again, just as with the adoption of standards.

I will continue this story, connecting back to classroom assessment, in my next newsletter message.

Figure 2. Wright map for consumer in social networks.

References

Black, P., Wilson, M., & Yao, S. (2011). Road maps for learning: A guide to the navigation of learning progressions. Measurement: Interdisciplinary Research and Perspectives, 9, 71-123.

Brown, N. J. S., Nagashima, S. O., Fu, A., Timms, M. J., & Wilson, M. (2010). A framework for analyzing scientific reasoning in assessments. Educational Assessment, 15(3-4), 142-174.

Corcoran, T., Mosher, F. A., & Rogat, A. (2009, May). Learning progressions in science: An evidence-based approach to reform (CPRE Research Report No. RR-63). New York, NY: Center on Continuous Instructional Improvement, Teachers College, Columbia University.

Kennedy, C. A., & Wilson, M. (2007). Using progress variables to map intellectual development. In R. W. Lissitz (Ed.), Assessing and modeling cognitive development in schools: Intellectual growth and standard setting. Maple Grove, MN: JAM Press.

Lehrer, R., Kim, M.-J., Ayers, E., & Wilson, M. (2014). Toward establishing a learning progression to support the development of statistical reasoning. In A. Maloney, J. Confrey, & K. Nguyen (Eds.), Learning over time: Learning trajectories in mathematics education (pp. 31-60). Charlotte, NC: Information Age Publishing.

Osborne, J. F., Henderson, J. B., MacPherson, A., & Yao, S.-Y. (2016). The development and validation of a learning progression for argumentation in science. Journal of Research in Science Teaching, 53(6), 821–846.

Wilson, M. (Ed.). (2004). Towards coherence between classroom assessment and accountability. 103rd Yearbook of the National Society for the Study of Education, Part II. Chicago, IL: University of Chicago Press.

Wilson, M., Scalise, K., & Gochyyev, P. (2015). Rethinking ICT literacy: From computer skills to social network settings. Thinking Skills & Creativity, 18, 65-80.

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208.