The use—or misuse—of student ratings of instruction (SRIs) in faculty evaluation is a frequent topic in higher education news. Unfortunately, popular press articles on the topic often garner more attention than the vast empirical literature. A recent review of the research by Linse (2017) pointed to common misperceptions about SRIs, offering administrators evidence-based advice on appropriate SRI interpretation.
Albert Einstein is credited with the observation that “Not everything that counts can be counted, and not everything that can be counted counts.” Perhaps nowhere is this principle truer than when trying to evaluate the work of faculty members and administrators in higher education. Yet the difficulty of the task rarely stops anyone from trying to count the uncountable and assess the unassessable.
At Georgia College & State University, each academic unit was tasked in 2011 with developing a new faculty-evaluation system. We were instructed to create an instrument with both qualitative and quantitative components. It took a year of discussion, compromise, and eventual consensus, but we finally moved forward with a test run. After three years of use, the instrument has been tweaked somewhat but has worked tremendously well. What follows is the process and product that came from our journey to develop a systematic evaluation instrument that would enable us to determine to what degree faculty performance aligned with the values of the academic unit.
While a necessary and worthy milestone, earning promotion and tenure is not the end goal of an academic career. During the pretenure years, a faculty member is gearing up for growth in the areas (e.g., teaching and research) defined by the institution to meet the mark for tenure. Ideally, the latter part of the pretenure period is one in which the quality of the work is on the rise, a reputation is emerging, and the products of success (e.g., presentations, exhibitions, publications, proposals for and success in funding) are generated at an increased pace. At the time of dossier submission, there should be a record with a definitive upward trajectory. The challenge at this point is to capitalize on the momentum created and begin planning for the next step: promotion to full professor. Not all cases, however, proceed this way.
More than a decade ago, Thomas Tobin, coauthor of the new book Evaluating Online Teaching: Implementing Best Practices, was hired to teach a business English and communications class in a hybrid format. When the time came for evaluation, he received a very thorough assessment based on the chair’s observation of the face-to-face portion of his class, but the section of the evaluation instrument meant for the online component was left completely blank. “The department chair eventually confessed that because he had not himself taught using the institution’s LMS, he didn’t feel qualified to rate Tom’s use of its tools,” the book explains. Evaluating the online component of the class was simply not something the administrator was equipped to do.
The problems inherent in evaluating online teaching arise understandably. “Deans, department chairs, faculty members, and students rate and evaluate teaching at their institutions mostly through home-grown processes and forms,” write the authors. “Although these are often constructed to help observers and raters to provide meaningful information, it is often the case that even now [years after Tobin’s experience], little training is provided for those using the evaluation instruments.” Many institutions find that one size cannot fit all.