Dan Chambliss, a former colleague of mine at Hamilton, has written a valuable article for the Center for Inquiry in the Liberal Arts. The article reminds us of the difference between the way that most faculty experience academic life and the way students experience it and suggests some ways that we can incorporate a more systematic understanding of student perception into our planning and assessment models.
As a sociologist, Dan has spent much of his professional life rigorously studying things that many of us know intuitively at some level, but often don't act upon. Research by sociologists and anthropologists has confirmed that students exist in a culture that runs parallel to that of faculty and administrators, but only occasionally intersects with it.
Students and faculty also approach academic disciplines with different expectations. Faculty, for instance, typically place the psychology department among the natural sciences; most psychologists themselves do, and many fiercely advance a scientific agenda and image for their discipline. But most freshmen (reasonably) expect psychology to explain parental divorce, boyfriend problems, and why roommates fight. When they discover that hypothesis testing often figures more prominently than people, many students drop psychology.
Perhaps the most important point in the article for me, though, was the clear identification of the problem of reasoning from "organizational collectivities":
the success of individual students doesn’t directly reflect the success of classes, departments, programs, or institutions, since individual experience cannot automatically be inferred from the behavior of collectivities.
With all the emphasis on assessment of student outcomes, most of us still fall back on reasoning from collectivities as a way of judging and publicizing our quality. Individual student learning is a complex interaction of all the academic and nonacademic experiences of a whole human being, and measuring the effectiveness of courses, departments, and professors will never give the deep insight into individual learning that we would need to make our universities truly liberating. Most universities have the expertise among their social science faculty to do this kind of research at a much more sophisticated level.
Even though we have the expertise, few of our universities are using it to plan policy, because of the natural limitations of our own humanity. All of us tend to focus most directly on the contribution that we make to the institution:
As the paid employees of academic institutions, then, we all concentrate on our formal, institutionalized, organized efforts to help our students. So it’s not surprising that when we try to measure what happens, we measure our own efforts: what buildings are newly opened; what programs are designed and initiated; what’s in the course catalogue; the classes we teach and how many students are in them; even how successful those classes are.
Dan lays out specific guidelines for doing policy research:
- Start by sampling actual students and looking at their entire transcripts. Even small random samples of transcripts can give "startling" insights into the actual academic lives of your students. (A minimal sampling sketch follows this list.)
- Look hard at the academic lives of all your students, not just the award winners. How did the bottom half of the class get there? Does the institution bear any responsibility, or is it purely the students' lack of talent or achievement?
- “Finally, remember that departmental or program-level assessment, so politically feasible and apparently efficient, may easily be irrelevant to student outcomes.”
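The first guideline is easy to prototype. Here is a minimal sketch in Python of what transcript sampling might look like, assuming a hypothetical `transcripts.csv` with one row per course enrollment; the file name and the `student_id`, `term`, `course`, and `grade` columns are my invention, not anything from the article. It draws a small simple random sample of students and prints each one's complete transcript, so you read academic careers whole rather than in departmental slices:

```python
import csv
import random
from collections import defaultdict

SAMPLE_SIZE = 10  # even small samples can be "startling", per the article

# Hypothetical input: one row per course enrollment.
# Assumed columns: student_id, term, course, grade
transcripts = defaultdict(list)
with open("transcripts.csv", newline="") as f:
    for row in csv.DictReader(f):
        transcripts[row["student_id"]].append(row)

# Sample whole students, not rows: the unit of analysis
# is the individual academic career, not the enrollment.
sampled_ids = random.sample(sorted(transcripts),
                            k=min(SAMPLE_SIZE, len(transcripts)))

for sid in sampled_ids:
    records = sorted(transcripts[sid], key=lambda r: r["term"])
    # Crude department tag, e.g. "PSYC 101" -> "PSYC"
    depts = {r["course"].split()[0] for r in records}
    print(f"Student {sid}: {len(records)} courses across {len(depts)} departments")
    for r in records:
        print(f"  {r['term']}  {r['course']}  {r['grade']}")
```

The design point matches the article's argument: the sampling unit is the individual student, not the course or the department, so whatever patterns emerge are patterns in actual academic lives rather than in organizational collectivities.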
Lots to think about here…
While it is true that most institutions of higher learning (as well as public K-12 schools) judge their collective success by how many students graduate, we really do ourselves an injustice when we do not look deeper at the individuals who make up the masses.
Dan makes excellent suggestions for going beyond the surface of a school's success; however, I would also seek information on what the student did after leaving the college/university (or, in my field, the K-12 school). Did the student immediately seek employment or enroll for more education? In addition, knowing why the student chose the path he/she did becomes important for policy makers and planners as well.
I feel that K-12 schools and colleges/universities do not collect sufficient student outcome data because they lack the staff to collect and maintain it. While collecting data after a student graduates, whether at the high school or the college level, is difficult, the effectiveness of an institution is partially tied to the student outcomes it generates.