Conversations about Course Ratings: Encouraging Faculty to Make Changes
Talking with faculty about end-of-course ratings is generally a high-stakes conversation: merit raises, promotions, or permanent contracts are on the line, or at least hovering in the background of the exchange. Most chairs, program coordinators, or division heads would like to use the conversation for more formative purposes—to engage…
Rewarding Excellent Support for Non-tenure-track Faculty, Part II
The annual Delphi Award presents a $15,000 cash award to each of two applicants who have worked to support non-tenure-track, contingent, and/or adjunct faculty. In the first installment of this article, we examined how California State University, Dominguez Hills, supports its non-tenure-track faculty. We continue with the next institution recognized…
Rewarding Excellent Support for Non-tenure-track Faculty, Part I
For decades, campuses have hired increasing numbers of non-tenure-track faculty. Adjunct faculty now make up more than 52 percent of faculty nationally, and full-time non-tenure-track faculty another 18 percent, with all types of non-tenure-track faculty (NTTF) accounting for 70 percent of faculty…
Pitfalls of Using Student Comments in the Evaluation of Faculty
The use—or misuse—of student ratings of instruction (SRIs) in faculty evaluation is a frequent topic in higher education news. Unfortunately, popular press articles on the topic often garner more attention than the vast empirical literature. A recent review of the research by Linse (2017) pointed to common misperceptions about SRIs, offering administrators evidence-based advice on appropriate SRI interpretation.
Clock Time Versus Piece Work in Higher Education
Albert Einstein is credited with the observation that “Not everything that counts can be counted, and not everything that can be counted counts.” Perhaps nowhere is this principle truer than when trying to evaluate the work of faculty members and administrators in higher education. Yet the difficulty of the task rarely stops anyone from trying to count the uncountable and assess the unassessable.
Developing a New Faculty-Evaluation System
At Georgia College & State University, each academic unit was tasked in 2011 with developing a new faculty-evaluation system. We were instructed to create an instrument that had both qualitative and quantitative components. It took a year of discussion, compromise, and eventual consensus, but we finally moved forward with a test run. After three years of use, the instrument has been tweaked somewhat but has worked tremendously well. What follows is the process and product that came from our journey to develop a systematic evaluation instrument that would enable us to determine to what degree faculty performance aligned with the values of the academic unit.
After Promotion and Tenure: Maintaining Faculty’s Upward Trajectory
While a necessary and worthy milestone, earning promotion and tenure is not an end goal of an academic career. During the pretenure years, a faculty member is gearing up for growth in the areas (e.g., teaching, research, and service) defined by the institution to meet the mark for tenure. Ideally, the latter part of the pretenure period is one where the quality of the work is on the rise, a reputation is emerging, and the products of success (e.g., presentations, exhibitions, publications, proposals for and success in funding) are generated at an increased pace. At the time of dossier submission, there should be a record with a definitive upward trajectory. The challenge at this point is to capitalize on the momentum created and begin planning for the next step, promotion to full professor. Not all cases, however, proceed this way.
Evaluating Online Teaching: Implementing Best Practices
More than a decade ago, Thomas Tobin, coauthor of the new book Evaluating Online Teaching: Implementing Best Practices, was hired to teach a business English and communications class in a hybrid format. When the time came for his review, he received a very thorough evaluation based on the chair’s observation of the face-to-face portion of his class, but the section of the evaluation instrument meant for the online component was left completely blank. “The department chair eventually confessed that because he had not himself taught using the institution’s LMS, he didn’t feel qualified to rate Tom’s use of its tools,” the book explains. Evaluating the online component of the class was simply not something the administrator was equipped to do.
The problems inherent in evaluating online teaching arise understandably. “Deans, department chairs, faculty members, and students rate and evaluate teaching at their institutions mostly through home-grown processes and forms,” write the authors. “Although these are often constructed to help observers and raters to provide meaningful information, it is often the case that even now [years after Tobin’s experience], little training is provided for those using the evaluation instruments.” Many institutions find that one size cannot fit all.