The College Puzzle Blog
Dr. Michael W. Kirst

Michael W. Kirst is Professor Emeritus of Education and Business Administration at Stanford University, where he has served on the faculty since 1969.
Dr. Kirst received his Ph.D. in political economy and government from Harvard. Before joining the Stanford faculty, he held several positions with the federal government, including Staff Director of the U.S. Senate Subcommittee on Manpower, Employment and Poverty. He is a former president of the California State Board of Education. His book From High School to College, written with Andrea Venezia, was published by Jossey-Bass in 2004.



My blog discusses the important and complex subjects of college completion, college success, student risk factors (for failing), college readiness, and academic preparation. I will explore the pieces of the college puzzle that heavily influence, if not determine, college success rates.

Limits of Tests for Assessing College Readiness

Can any test of high school students predict college success and indicate adequate college preparation? Probably not, if the only data used for prediction is a single test such as the ACT or SAT.
Any single test has inherent problems, as the National Research Council's report Lessons Learned About Testing enumerates. Below are some relevant direct quotes from the NRC report:

There is measurement error related to the fact that the questions on a test are only a sample of all the knowledge and skills in the subject being tested – there will always be students who would have scored higher if a particular test version had included a different sample of questions that happened to hit on topics they knew well.

Other examples of factors that contribute to measurement error are students’ lucky guesses, physical condition or state of mind, motivation, and distractions during testing, as well as scoring errors. Therefore, a test score is not a perfect reflection of student achievement or learning.
One common problem is the tendency to use single, inexact measures to make very important decisions about individuals.

Testing professionals advise that when making high-stakes decisions it is important to use multiple indicators of a person’s competency, which enhances the overall validity (or defensibility) of the decisions based on the measurements. It also affords the test taker different modes of demonstrating performance.

High Stakes (1999) concludes that tests should be used for important decisions about individual students only after implementing changes in teaching and curriculum that ensure that students have been taught the material on which they will be tested.
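The measurement-error point above can be illustrated with a small simulation. This is a hypothetical sketch, not data from the NRC report: the true score, error size, and scale are invented, but the logic mirrors the quoted argument that one test sitting is a noisy sample while multiple indicators shrink the error.

```python
import random

random.seed(42)

# Invented parameters for illustration only: a student's "true" achievement
# is fixed, but each test sitting adds random measurement error from
# question sampling, lucky guesses, state of mind, and so on.
TRUE_SCORE = 500        # assumed true achievement on an SAT-like scale
ERROR_SD = 30           # assumed standard error of measurement

def observed_score(true_score=TRUE_SCORE, error_sd=ERROR_SD):
    """One test administration = true score + random error."""
    return true_score + random.gauss(0, error_sd)

# A single sitting can land well above or below the true score...
single = observed_score()

# ...while averaging many independent measures shrinks the error.
many = [observed_score() for _ in range(100)]
average = sum(many) / len(many)

print(f"one sitting:    {single:.0f}")
print(f"average of 100: {average:.0f}")
print(f"true score:     {TRUE_SCORE}")
```

No single observed score is a "perfect reflection of student achievement," which is exactly why testing professionals recommend multiple indicators for high-stakes decisions.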

A major rebuttal to Professor Ericksson's contention that tests are the best predictor of college success is provided by University of California studies of grade point average at UC (UCGPA). These studies use student transcripts over several years to determine whether high school grade point average (HSGPA) or the admissions tests used by UC (SAT I and SAT II) best predicts UCGPA. The three studies listed below find the opposite of what Professor Ericksson contends.

1. Geiser with Studley (2002). This research, published in the peer-reviewed journal Educational Assessment, focuses on the relative predictive validity of the SAT I and SAT II examinations at the University of California (UC) for the years 1996-1999. The authors found that the SAT II tends to be a better predictor of first-year UC GPA than does the SAT I. Although the evidence is mixed, the research also demonstrates that HSGPA tends to perform better than SAT I and SAT II scores at predicting UC GPA. The research does not address the issue of the relative predictive validity of HSGPA versus the combination of SAT I and SAT II when both are used simultaneously to predict UC GPA.

2. Update of Geiser/Studley results (2004). The Geiser and Studley methodology was used to analyze data for 2000-2002 that have become available since the original study was conducted. In these analyses, HSGPA is an unambiguously better predictor of first-year UC GPA than is the SAT I or the SAT II. Furthermore, the updated analysis directly compares the predictive power of HSGPA with that of a combination of SAT I and SAT II scores, and finds that HSGPA is a better predictor of college completion and college success.

3. Burton and Ramist (2001). In a research report published by the College Board, these authors review more than a dozen predictive validity studies that were conducted on different data sets by various authors. In most of the studies, the high school record was a better predictor of college success than the SAT.
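The core question in all three studies is one of predictive validity: which measure correlates more strongly with later college GPA? The toy sketch below uses entirely synthetic data (the cohort, noise levels, and correlations are invented, not taken from the UC studies) to show how such a comparison is computed: if HSGPA is a less noisy proxy for underlying preparation than a test score, it will correlate more strongly with college GPA.

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic cohort: college GPA is driven by an unobserved "preparation"
# factor. HSGPA is constructed with less noise than the test score, mirroring
# (not reproducing) the studies' finding. All parameters are invented.
n = 5000
prep = [random.gauss(0, 1) for _ in range(n)]
college_gpa = [p + random.gauss(0, 0.8) for p in prep]
hsgpa = [p + random.gauss(0, 0.6) for p in prep]   # less noisy proxy
sat = [p + random.gauss(0, 1.2) for p in prep]     # noisier proxy

print(f"HSGPA vs college GPA: r = {pearson(hsgpa, college_gpa):.2f}")
print(f"SAT   vs college GPA: r = {pearson(sat, college_gpa):.2f}")
```

On this synthetic data the HSGPA correlation comes out higher, which is the pattern the Geiser/Studley analyses report for actual UC transcripts.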


Copyright 2006 My College Puzzle