The College Puzzle Blog
Dr. Michael W. Kirst

Michael W. Kirst is Professor Emeritus of Education and Business Administration at Stanford University, where he has been on the faculty since 1969.
Dr. Kirst received his Ph.D. in political economy and government from Harvard. Before joining the Stanford faculty, he held several positions with the federal government, including Staff Director of the U.S. Senate Subcommittee on Manpower, Employment and Poverty. He is a former president of the California State Board of Education. His book From High School to College, written with Andrea Venezia, was published by Jossey-Bass in 2004.


My blog discusses the important and complex subjects of college completion, college success, student risk factors (for failing), college readiness, and academic preparation. I will explore the pieces of the college puzzle that heavily influence, if not determine, college success rates.

The Disjuncture Between K–12 and Higher Education

One reason for inadequate college preparation and completion is the weak connection between K–12 and postsecondary education, which sends unclear signals to students and leaves academic standards unarticulated across the two systems.

The origin of the disjuncture between lower and higher education in the United States stems, in part, from the laudable way the nation created mass education systems for both K–12 and higher education. In Europe, in contrast, the higher grades of secondary education were designed for an elite group who would be going on to universities, and European universities have long played a major role in determining the content of the secondary school curriculum and both the content and format of secondary school examinations. For example, professors at British universities like Oxford and Durham grade the A levels taken by students during their last year of secondary education, and these essay exams figure crucially in a student’s chances for university admission.

Over time, the chasm between lower and higher education in the United States has grown greater than that in many other industrialized nations (Clark, 1985), but at one time U.S. colleges and universities did play an important role in the high schools. In 1900, for example, the College Board set uniform standards for each academic subject and issued a syllabus to help students prepare for college entrance subject-matter examinations. (Prior to that, each college had its own entrance requirements and examinations.) Soon after, the University of California began to accredit high schools to make sure that their curricula were adequate for university preparation.

In the postwar years, however, the notion of K–16 academic standards vanished. “Aptitude” tests like the SAT replaced subject-matter standards for college admission, and secondary schools added elective courses in nonacademic areas, including vocational education and life skills. Today, K–12 faculty and college faculty may belong to the same discipline-based professional organizations, but they rarely meet with one another. K–12 policymakers and higher education policymakers cross paths even less often.


Limits of Tests for Assessing College Readiness

Can any test of high school students predict college success and indicate adequate college preparation? Probably not, if the only data used for prediction is a single test like the ACT or SAT.
There are inherent problems with any single test, as the National Research Council's report Lessons Learned About Testing enumerates. Below are some relevant direct quotes from the NRC report:

There is measurement error related to the fact that the questions on a test are only a sample of all the knowledge and skills in the subject being tested – there will always be students who would have scored higher if a particular test version had included a different sample of questions that happened to hit on topics they knew well.

Other examples of factors that contribute to measurement error are students’ lucky guesses, physical condition or state of mind, motivation, and distractions during testing, as well as scoring errors. Therefore, a test score is not a perfect reflection of student achievement or learning.
One common problem is the tendency to use single, inexact measures to make very important decisions about individuals.

Testing professionals advise that when making high-stakes decisions it is important to use multiple indicators of a person’s competency, which enhances the overall validity (or defensibility) of the decisions based on the measurements. It also affords the test taker different modes of demonstrating performance.

High Stakes (1999) concludes that tests should be used for important decisions about individual students only after implementing changes in teaching and curriculum that ensure that students have been taught the material on which they will be tested.
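The NRC's point about question sampling can be illustrated with a small simulation (a hypothetical sketch of my own, not taken from the report): even a student whose true mastery of a subject never changes will earn visibly different scores depending on which questions happen to appear on a given test form.

```python
import random

random.seed(1)

# Hypothetical subject domain of 1,000 topics; this student has
# mastered 70% of them. (Illustrative numbers, not real test data.)
domain = [True] * 700 + [False] * 300

def administer_test(num_items=40):
    """Score one test form: a random sample of items from the domain."""
    items = random.sample(domain, num_items)
    return 100 * sum(items) / num_items  # percent correct

# Give the same student 1,000 different test forms.
scores = [administer_test() for _ in range(1000)]
print("true mastery: 70%")
print(f"observed scores range from {min(scores):.0f}% to {max(scores):.0f}%")
```

The spread between the lowest and highest observed scores is entirely measurement error from item sampling; none of it reflects any change in what the student knows.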

A major rebuttal to Professor Ericksson’s contention that tests are the best predictor of college success is provided by UC studies of Grade Point Average at UC (UCGPA). These studies use student transcripts over several years to determine whether high school Grade Point Average (HSGPA) or the admissions tests used by UC (SAT I and SAT II) best predict UCGPA. The three studies listed below find the opposite of what Professor Ericksson contends.

1. Geiser with Studley (2002). This research, published in the peer-reviewed journal Educational Assessment, focuses on the relative predictive validity of the SAT I and SAT II examinations at the University of California (UC) for the years 1996-1999. The authors found that the SAT II tends to be a better predictor of first-year UC GPA than does the SAT I. Although the evidence is mixed, the research also demonstrates that HSGPA tends to perform better than SAT I and SAT II scores at predicting UC GPA. The research does not address the issue of the relative predictive validity of HSGPA versus the combination of SAT I and SAT II when both are used simultaneously to predict UC GPA.

2. Update of Geiser/Studley results (2004). The Geiser and Studley methodology was used to analyze data for 2000-2002 that have become available since the original study was conducted. In these analyses, HSGPA is an unambiguously better predictor of first-year UC GPA than is the SAT I or the SAT II. Furthermore, the updated analysis directly compares the predictive power of HSGPA with that of a combination of SAT I and SAT II scores, and finds that HSGPA is a better predictor of college completion and college success.

3. Burton and Ramist (2001). In a research report published by the College Board, these authors review more than a dozen predictive validity studies that were conducted on different data sets by various authors. In most of the studies, the high school record was a better predictor of college success than the SAT.
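The kind of comparison these studies run can be sketched on synthetic data (hypothetical numbers of my own construction, not UC data): fit a one-predictor linear model of first-year college GPA on HSGPA and on a test score, then compare how much variance each explains. The coefficients below simply bake in the studies' finding for illustration.

```python
import random

random.seed(0)

# Synthetic students: college GPA is driven more strongly by HSGPA
# than by test score (an assumption mirroring the studies' finding).
n = 500
hsgpa = [random.uniform(2.0, 4.0) for _ in range(n)]
sat = [random.gauss(1000 + 150 * (g - 3.0), 120) for g in hsgpa]
college_gpa = [0.7 * g + 0.0005 * s + random.gauss(0, 0.3)
               for g, s in zip(hsgpa, sat)]

def r_squared(x, y):
    """Variance in y explained by a one-predictor linear fit on x
    (the squared Pearson correlation)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy ** 2 / (sxx * syy)

print(f"R^2 of HSGPA:      {r_squared(hsgpa, college_gpa):.2f}")
print(f"R^2 of test score: {r_squared(sat, college_gpa):.2f}")
```

On data generated this way, HSGPA explains more of the variance in college GPA than the test score does, which is the pattern the Geiser/Studley analyses report for actual UC transcripts.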


ACT Report Recommends ACTion to Increase College Completion

A 2007 study released by ACT, the largest college admissions testing company, identified substantial mismatches between high school course content and what college teachers want students to know. The study, “Aligning Postsecondary Expectations,” underlines how college completion and college success can be undermined by unaligned content and skills within seemingly college-preparatory courses.

I offer the following recap of quotes and insights from the report.

First, the national survey of 35,665 educators tells us what postsecondary institutions believe is important and necessary for their entering students to know and what middle and high school teachers are teaching. It focuses, therefore, on identifying the gap between postsecondary expectations and high school practice.

For example, high school teachers in all content areas (English/writing, reading, mathematics and science) tend to rate content and skills as “important” or “very important” more often than did their postsecondary or remedial counterparts. It may be that the extensive demands of state standards are forcing high school teachers to treat all content topics as important, sacrificing depth for breadth.

Postsecondary instructors ranked mechanics more frequently among the most important groups of skills for success in an entry level, credit-bearing postsecondary English/writing course, while high school teachers’ rankings of these strands were generally lower.

High school mathematics teachers gave more advanced topics greater importance than did their postsecondary counterparts. In contrast, postsecondary and remedial-course mathematics instructors rated a rigorous understanding of fundamental underlying mathematics skills and processes as being more important than exposure to more advanced mathematics topics.

High school science teachers consistently rated science content as more important to student success than science process/inquiry skills. These responses are in direct contrast to those of middle school and postsecondary science teachers, who consistently rated science process skills higher in importance than science content.

The survey responses of postsecondary English/writing instructors suggest that high school language arts teachers should focus more on punctuation and grammar skills to better prepare their students for college-level expectations in college composition courses.

(Aligning Postsecondary Expectations and High School Practice: The Gap Defined. Policy Implications of the ACT National Curriculum Survey® Results 2005–2006. Iowa City: ACT, 2007.)


Copyright 2006 My College Puzzle