Assessing the Value of a Harvard Education

When Harvard's Economics Department wanted to improve its huge introductory course, Social Analysis 10, researchers conducted a careful study comparing course objectives to actual results. The analysis showed that students retained the course's broad ideas but not its specific details. As a result, Economics Department instructors revised their teaching strategies, eliminating from the course much of the technical terminology that students were forgetting anyway.

This study exemplifies a growing national trend toward critical evaluation of higher education. At Harvard and other universities, educators are trying to assess whether higher education is achieving its goals and what those goals should be.

As the nation reconsiders the return it expects from its investment in education, many politicians have urged universities to make sure they are successfully teaching their students. But as assessment efforts have grown in the past few years, one question scholars have yet to answer is what constitutes a good education. Is it mastering a narrow body of knowledge, such as the terminology of economics, or developing broad interdisciplinary skills, such as the concepts now emphasized in Ec 10?

At Harvard this issue, among others, is being addressed by sixty leading educators who meet once a month as part of a Seminar on Assessment. These scholars, who began meeting in September, are assessing the effectiveness of various aspects of the University, ranging from specific academic programs to the role of extracurricular activities in student life.

The Seminar, which was established by President Bok last spring, is part of an explosion of interest in educational evaluation. Fueled by several recent reports criticizing the nation's colleges and universities for failing to evaluate the effectiveness of their programs, these assessment efforts range from open-ended, scholarly projects like Harvard's to highly specific, state-mandated standardized tests.

Those critical reports, including Involvement in Learning from the U.S. government's Office of Research and Development of Education (ORDE) and the Carnegie Foundation's College, have "put assessment front and center" on the higher-education agenda, according to Cliff Adelman, senior associate at ORDE.

Critical evaluation of educational programs is relatively new, educators say. Colleges have traditionally done very little empirical self-examination. Instead, they have often based program evaluations on impressions and intuition rather than hard data. "The time faculties and administrators spend working together on education is devoted almost entirely to considering what their students should study rather than how they can learn more effectively or whether they are learning as much as they should," Bok wrote in his book, Higher Learning, published in the fall.

The Harvard Seminar hopes to address Bok's concerns about Harvard's ability to criticize and evaluate itself.

But in order to evaluate teaching and extracurricular programs, the Seminar first needs to determine what constitutes a successful outcome, Light says, adding that it must then develop tests and other criteria for measuring those definitions of success.

Assessment techniques have been the focus of persistent debate among educators. Until recently, many experts believed that the broad objectives proposed by assessment advocates were unrealistic given current research tools. "To study how well students are being educated you need a huge amount of data," said Kenneth C. Green, associate director of UCLA's Cooperative Institutional Research Program (CIRP).

Because of the vast amount of information needed to analyze higher-education outcomes, most early assessment programs measured student knowledge with standardized tests, such as the Florida university system's "rising junior exam," which was required of all students before they could enroll as third-year undergraduates. These basic-skills tests are easy to administer but have drawn wide criticism from educators, who argue that they trivialize higher education by failing to test higher-order skills.

"Such exams emphasize the acquisition of facts and the mastery of simple skills, [but]...are not suited to measuring how clearly students think about social justice, how deeply they can appreciate a painting or a literary text, how much they have gained in intellectual curiosity, [or] how far they have come in understanding their own capacities and limitations," Bok writes.

In response to this criticism, and to complaints that the exams test students rather than institutions, assessors at Harvard and elsewhere are seeking to diversify their tools. The University of Tennessee at Knoxville, for example, uses the College Outcome Measurement Project (COMP) exam to test students' ability to clarify values, solve higher-order problems, and communicate. Seniors are also asked to evaluate their college experience and education in a thorough survey.

"Test scores may tell you what students are learning, but not how they're getting that information," says Trudy W. Banta, a professor at Knoxville's Learning Research Center (LRC).

The University of Tennessee has identified a number of critical problems as a result of the LRC assessment.
