Building Bridges: Improving Test Optional Practice with Research

By Megan Welsh posted 07-30-2020 21:28


Jinghua Liu, Enrollment Management Association

I use Google Alerts to track updates on topics including college admissions, standardized testing, and holistic enrollment. Recently, my mailbox has been jammed every day with headlines such as “Yale has adopted a one-year test-optional policy for first-year applicants in the 2020-2021 admissions cycle;” “In response to COVID-19, Penn will not require applicants to submit the SAT, ACT, or SAT Subject Tests for the 2020-21 application cycle;” “For first-year applicants in the 2020-21 admission cycle, Brown is now test optional. This change is for the 2020-21 academic year only.” The College Board recently announced that an at-home SAT will not be offered this fall and that in-person test taking will be limited by social distancing regulations, and it asked colleges and universities to provide flexibility regarding when and whether applicants need to submit SAT scores. All of these headlines point to a possible acceleration in the test-optional movement.

According to FairTest, the National Center for Fair and Open Testing, as of June 11 more than 1,200 accredited, four-year colleges and universities had adopted test-optional policies for the fall 2021 admission cycle. This number should be interpreted as a very generous upper bound on the number of accredited colleges and universities that are truly test optional. According to FairTest data, if we count only institutions that do not use tests in admissions for any students in any programs, the number of test-optional institutions drops below 800. If we further eliminate schools implementing only temporary measures, fewer than 700 schools are truly test optional. Still, that number is not trivial.

The California Institute of Technology’s announcement on June 7 that it would “stop considering applicants’ ACT and SAT scores during the next two admissions cycles” has pushed the concept of test-optional admission to its extreme: test-blind. Caltech is not the first institution to adopt a test-blind policy (Sarah Lawrence College may have been the first institution in recent history to be test-blind, although it is now test optional). At present, only a handful of institutions decline to consider college entrance exam scores even when submitted.

Psychometricians and researchers could argue that standardized test scores do a decent job of predicting first-year college GPA and other outcomes (as evidenced by the College Board’s most recent SAT predictive validity report). The University of California Task Force found that “standardized test scores aid in predicting important aspects of student success, including undergraduate grade point average (UGPA), retention, and completion. At UC, test scores are currently better predictors of first-year GPA than high school grade point average (HSGPA), and about as good at predicting first-year retention, UGPA, and graduation.”

Despite the empirical evidence, however, here is the reality: the perceived importance of standardized test scores in college admissions decision making has been declining over the last decade. NACAC conducts annual Admission Trends Surveys. One of the survey questions asks college and university admissions officers to attribute a level of importance to each factor used to make admissions decisions for first-time freshmen. The top four admission factors have been consistent over the years: grades in college prep courses, strength of curriculum, admission test scores (SAT, ACT), and grades in all courses. Results for this survey question from 2007 to 2018 are plotted in Figure 1.

Figure 1. Trend analysis: Percent of colleges attributing “Considerable Importance” to factors in admission decisions - academic achievement


What has not been consistent is the relative position of these four factors. As can be seen from Figure 1, grades in college prep courses were consistently rated as the top factor in the admission decision-making process, followed by strength of curriculum, admission test scores, and overall GPA. Around 2014, this trend started to change: the rated importance of overall GPA began climbing, matching grades in college prep courses in 2016 and surpassing them in subsequent years. In 2007, 52% of colleges and universities that participated in the survey rated overall GPA as of “considerable importance;” in 2018, this percentage jumped to 75%. In contrast, the importance of admission test scores has consistently fallen: the percentage of respondents rating college admission tests as having considerable importance dropped from 59% in 2007 to 46% in 2018, a decrease of 13 percentage points. I wonder what Figure 1 will look like after NACAC’s next round of survey results is available.

Thus, here is the reality we have to face, whether we like it or not: more and more colleges and universities are going test-optional, either as a direct response to COVID-19, or as an approach to promote college accessibility. As practitioners and researchers in the fields of education and measurement, what should we do?

We could adopt a two-fold approach. On the one hand, we could continue to show the value-add of test scores in admission; on the other hand, we could anticipate the eventual demise of college entrance exams and help colleges and universities to prepare for a new era without test scores in admission. Here are some suggestions.

  1. Analyze current admissions practices.

Colleges and universities can perform a thorough analysis of their current admission process:

  1. Consider each piece of information required and its corresponding role in admission; what do admissions officers expect each piece of information to tell about a student’s academic achievement, college readiness and character traits?

  2. Specifically, what role do test scores play in the current process; how much do test scores weigh in decision making; what is the current predictive admission model with test scores included, and what is the current yield model with test scores included?

  3. Presumably, without test scores, other factors including high school GPA, course rigor, grades in college prep courses, an essay or writing sample, teacher/counselor recommendations, students’ demonstrated interests, extracurricular activities, subject test scores (AP, IB), interviews (if applicable), and so on, will all contribute to determining a student’s academic performance. What is the relative importance of these factors compared to test scores when making admission decisions? More importantly, given the inherent inequities in the K-12 system, what can we do to prevent a perpetuation of those inequities in higher education?

  2. Explore new practices using existing data to get prepared.

  1. Using existing data, explore predictive admission models that do not include test scores. In addition to overall GPA, course rigor, and college prep course grades, what other factors can reflect academic performance and readiness? Compare the new model to the model that includes test scores; note the differences and make adjustments.

  2. Assume that without test scores, colleges and universities will pay even more attention to high school GPA. Is it possible to develop a matrix or a rubric, for example, based on the rigor of the curriculum, which could place GPAs from a wide range of high schools’ report cards and transcripts onto a relatively more objective and interchangeable common scale?

  3. Collect validity data after students are enrolled. Calculate correlation coefficients with and without test scores. Adjust predictive models based on empirical data.
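Steps 1 and 3 above can be sketched with a simple model comparison. The sketch below is a minimal illustration using synthetic data and ordinary least squares via NumPy; the variables, effect sizes, and data are invented assumptions for demonstration, not real admissions data or any institution’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicant data (illustrative only -- not real admissions data).
n = 500
hs_gpa = rng.normal(3.3, 0.4, n)                 # high school GPA
rigor = rng.normal(0.0, 1.0, n)                  # course-rigor index (standardized)
test = 0.5 * hs_gpa + rng.normal(0, 0.5, n)      # test score, correlated with GPA
fy_gpa = 0.6 * hs_gpa + 0.2 * rigor + 0.2 * test + rng.normal(0, 0.3, n)

def r_squared(X, y):
    """Fit OLS via least squares and return R^2 for the given predictors."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Compare the predictive model with and without test scores.
r2_with = r_squared(np.column_stack([hs_gpa, rigor, test]), fy_gpa)
r2_without = r_squared(np.column_stack([hs_gpa, rigor]), fy_gpa)

print(f"R^2 with test scores:    {r2_with:.3f}")
print(f"R^2 without test scores: {r2_without:.3f}")
print(f"Incremental validity of test scores: {r2_with - r2_without:.3f}")
```

Once students enroll and first-year GPAs become available, the same comparison can be rerun on empirical data, and the model adjusted accordingly.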

  3. Consider character attributes in college admission.

The idea of measuring character (sometimes referred to as “noncognitive traits”) and using it to predict college success has a long history, as Zwick pointed out in her book Who Gets In?. Independent schools and colleges value positive character attributes such as resilience and intellectual curiosity and include the evaluation of character attributes as an important part of holistic admission. NACAC and the Character Collaborative jointly conducted a survey of secondary school counselors and college admission officers. The survey asked respondents to indicate the level of importance given to various factors in admission decisions, including the role of positive character traits.

The survey results are depicted in Figure 2. As discussed above, indicators of academic performance, including grades in all courses, grades in college prep courses, strength of curriculum, and admission test scores, were ranked as the most important factors. After this cluster, positive character attributes received the highest rating among the remaining factors: approximately 26% of college admission officers and secondary school counselors rated positive character attributes as having considerable importance.


Figure 2. Percent of colleges attributing “Considerable Importance” to factors in admission decisions, with positive character attributes included


The Character Collaborative conducted a follow-up study to explore how institutions incorporate character into the admission process. One major finding was that there is no standard tool among institutions to measure character traits. College admission officers look for traits such as resilience, kindness, and service to others by examining applicants’ personal statements, recommendations, interviews, and other submitted information. 

There is a need for a standardized tool to measure character attributes. The Enrollment Management Association (EMA) launched an innovative online tool, the Character Skills Snapshot, to assess seven essential character skills of students seeking entrance to independent schools in grades 6–12. Could such a standardized measurement tool be developed and used in higher education admissions?

  4. Provide a tool to capture multiple data points and to improve decision-making consistency.

The Character Collaborative’s follow-up study also found that in the process of looking for character attributes, admission officers use various tools for capturing information (e.g., narratives, rubrics, team ratings), and colleges use various processes to incorporate academic and character data into the admission decision. There is no standard decision logic across institutions.

Hence there is a need for a standardized tool that tracks a large number of variables and uses them consistently in decision making. Shu and Kuncel demonstrated experimentally that when decision makers were given summary anchors, decision consistency and accuracy improved. One way to create a decision aid that accommodates the multiple data sources used in admissions is the integration grid.

An integration grid is a decision aid matrix that can accommodate a large number of variables used in decision making. Table 1 is a mockup of a grid[i]. Each competency dimension is on the Y axis, while the information sources are on the X axis. Strong sources of information receive two “X”s while weaker ones receive a single “X.” For instance, standardized test scores and GPA are strong indicators of academic skills, hence each receives two “X”s; extracurricular activities are a weak indicator of persistence and drive, hence receiving one “X.” Users can decide the weighting that serves them best. The column to the far right is the index for the corresponding competency dimension. Such a grid can be filled out for each candidate, and all candidates’ indices can then be put on a common scale for direct comparison. A heat map can be generated as well (e.g., indices of “1” or “2” could flag areas where the applicant may need assistance if admitted and enrolled).

Table 1

Integration Grid Example
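The grid mechanics described above can be sketched in code. This is a minimal illustration, not Kuncel’s actual instrument; the competency dimensions, information sources, weights, and the 1–5 rating scale are all assumptions made for the example.

```python
# Integration grid sketch: each competency dimension maps information
# sources to weights -- 2 for a strong indicator ("XX"), 1 for a weak one ("X").
GRID = {
    "academic skills":     {"test scores": 2, "gpa": 2, "essay": 1},
    "persistence & drive": {"recommendations": 2, "extracurriculars": 1},
    "service to others":   {"essay": 1, "extracurriculars": 1, "recommendations": 1},
}

def dimension_indices(ratings):
    """Combine per-source ratings (assumed 1-5 scale) into a weighted
    average index for each competency dimension."""
    indices = {}
    for dim, sources in GRID.items():
        total = sum(w * ratings.get(src, 0) for src, w in sources.items())
        weight = sum(sources.values())
        indices[dim] = round(total / weight, 2)
    return indices

# One candidate's ratings for each information source (illustrative).
candidate = {"test scores": 4, "gpa": 5, "essay": 3,
             "recommendations": 4, "extracurriculars": 2}
idx = dimension_indices(candidate)

# "Heat map" step: low indices flag dimensions where the applicant
# may need support if admitted and enrolled.
flags = [dim for dim, value in idx.items() if value <= 2]
print(idx, flags)
```

Because every candidate’s indices land on the same scale, direct comparison across applicants follows immediately, which is the consistency benefit the grid is meant to provide.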


  5. Promote good practice.

Some colleges and universities have temporarily made college entrance exams optional as a direct response to COVID-19; fall 2021 will be the first cycle in which they do not require SAT/ACT scores. Other colleges and universities have a much longer history with test-optional policies and have accumulated rich experience. For example, Bowdoin College has more than five decades of admission experience without a test requirement. The methodology it uses, the data points it collects, and the process it uses to blend the evaluation of academic performance and character traits could benefit other practitioners.

Recently, I spoke with Whitney Soule, Senior Vice President and Dean of Admissions and Student Aid at Bowdoin College. She expressed concern that the ride to college admission and enrollment for the next few years is going to be bumpy; it is likely that some schools are not prepared to approach admission without admissions tests. In an earlier Inside Higher Ed article, Soule was quoted as saying, “It’s one thing to choose to become test optional, but it’s an entirely separate project to figure out what the testing actually provided – what it corresponded to, or not, elsewhere in the process. Were there gaps in the process, even with testing, that absolutely need to be understood and addressed now without testing?” As education and measurement professionals and practitioners, our job is to help schools explore and identify those gaps, and then provide the tools and information to fill them.

[i] Kuncel, N. (2018). Integration grids: Examples and structure [Unpublished manuscript]. Department of Psychology, University of Minnesota.