May 12th, 2017

Developing a New Faculty-Evaluation System

At Georgia College & State University, each academic unit was tasked in 2011 with developing a new faculty-evaluation system. We were instructed to create an instrument that had both qualitative and quantitative components. The plan we developed was put in place in 2012.

As an academic unit (School of Health & Human Performance) within a college (Health Sciences), our college-level tenure and promotion document had already defined the evaluation categories (teaching, scholarship, service) for tenure-track faculty, and we felt it was in everyone’s best interest to maintain consistency and use the same definitions. For lecturers/instructors, our academic unit decided these evaluations would focus only on teaching (because this is university policy) and that teaching would be defined in the same way.

Our next task was to decide what we wanted this instrument to look like (rubric, checklist, narrative, etc.). We decided on rubrics that used a rating system and a narrative component (see Figure 1 for a modified scholarship rubric).

Figure 1. Scholarship Rubric
E = Excellent = 4; G = Good = 3; NI = Needs Improvement = 2; P = Poor = 1

Scholarship Category: Professional Research

COHS T&P Critical Element Equivalent: Development and dissemination of knowledge through any of Boyer's four forms of scholarship. Knowledge may take the form of empirical, historical, basic, applied, conceptual, theoretical, or philosophical scholarship.

Identification of Faculty Activities, with Evidence of These Activities:

  • Peer-reviewed or edited work such as authored or edited books, book chapters, journal articles, and monographs
    Evidence: copies of work or written verification of acceptance of work for publication
  • Reviewed or invited presentations such as invited keynotes, posters or oral presentations at professional conferences, and public lectures
    Evidence: copy of program, copy of conference literature
  • Grants for research projects
    Evidence: written documentation of grant submission and/or award, plus copy of grant
  • Authoring/producing creative works and the scholarship of teaching and learning (SoTL)
    Evidence: assessment of SoTL activities; developing and testing instructional materials; advancing learning theories through classroom research

Evaluation of Weighted Role: E / G / NI / P

Narrative:

It took a year of discussion, compromise, and eventual consensus, but we finally moved forward with a test run. After three years of use, the instrument has been tweaked somewhat but has worked remarkably well. What follows is the process and product of our journey to develop a systematic evaluation instrument that would enable us to determine how closely faculty performance aligned with the values of the academic unit.

First step of the process

Using the Yearly Faculty Evaluation Percentage Table (Figure 2), faculty members fill in a percentage for each category (excluding the administration category unless the faculty member is a program coordinator) based upon their projections of what they would like to accomplish professionally during the calendar year (January 1–December 31). These are typically submitted at the beginning of the spring semester (January). Faculty are given an opportunity to reexamine their percentage choices in August, in case they feel modifications need to be made (these need director/chair approval).

Figure 2. Yearly Faculty-Evaluation Percentage Table (Faculty Choice)

January 1–December 31
Faculty Name: ____________                 Percent = Your Decision
Teaching: at least 50 percent        = ____50 percent____
Scholarship: 20–40 percent           = ____20 percent____
Service*: 20–40 percent              = ____20 percent____
Administration*: 5–10 percent        = ____10 percent____
Total = 100 percent

  1. Each category (teaching, scholarship, service) must have a value (in multiples of five) within the provided ranges.
  2. Teaching must be a minimum of 50 percent.
  3. All values added together should total 100 percent.
  *Administration value is a percentage of the overall service category.
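The rules above are simple enough to express as a quick check. The following Python sketch is purely illustrative (the article describes a paper form, not software); the function name and the decision to count the administration share toward the 100 percent total are assumptions based on the worked example in Figure 2:

```python
def validate_percentages(teaching, scholarship, service, administration=0):
    """Hypothetical check of the Figure 2 rules. Administration applies only
    to program coordinators and is a share of the service category."""
    # Rule 1: every value must be a multiple of five within its range
    if any(v % 5 != 0 for v in (teaching, scholarship, service, administration)):
        return False
    if not (20 <= scholarship <= 40 and 20 <= service <= 40):
        return False
    # Rule 2: teaching must be at least 50 percent
    if teaching < 50:
        return False
    # Administration, when chosen at all, must fall in the 5-10 percent range
    if administration and not (5 <= administration <= 10):
        return False
    # Rule 3: all values together should total 100 percent
    # (assumption: the Figure 2 example counts administration in the total)
    return teaching + scholarship + service + administration == 100

print(validate_percentages(50, 20, 20, 10))  # the Figure 2 example: True
```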

Second step of the process

Throughout the year, faculty write narratives in the spaces provided on each rubric in an effort to keep their materials updated. When it is time for faculty to prepare their documentation for submission, they must also complete the Individual Faculty Report (IFR) Faculty & Director Evaluation (see Figure 3, second column, for teaching example). Each faculty member decides what weight to assign in each category within the teaching, scholarship, and service areas.

Figure 3. IFR Faculty & Director Evaluation (teaching example)

(The Weight column is chosen by the faculty member; the Evaluation Score is assigned by the director/chair; Composite = chosen percent x evaluation score.)

Categories          Weight (percent)   Evaluation Score (1–4)   Composite
Content Delivery    25
Course Design       25
Course Expertise    25
Course Management   25
Total               100

The completed rubrics (narrative for each area), the IFR Faculty & Director Evaluation Form (Figure 3, second column), a Word-document copy of the faculty member's most recent vita, and the Yearly Faculty Evaluation Percentage Table comprise the faculty-evaluation packet that is submitted to the director/chair.

Third step of the process

Using the faculty member’s vita and information provided on each rubric, the director/chair evaluates all the documentation and assigns an Evaluation Score (see Figure 3, third column) in each category of each area (teaching, scholarship, service). It is a sliding numerical score (1–4) that corresponds to the evaluation classification (poor/needs improvement/good/excellent) from across the top of each rubric (refer to Figure 1).

The Composite Score (Figure 4, fourth column) for each category is calculated by multiplying the faculty member's chosen percentage by the director/chair's evaluation score. The Total Composite Score (bottom row) for each area (teaching, scholarship, service) is derived by adding the composite scores in that area.
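As a minimal sketch of that arithmetic (illustrative Python only; the function name is hypothetical and not part of the actual instrument), an area's Total Composite Score is the weight-times-score sum over its categories:

```python
def area_composite(categories):
    """categories: (weight_percent, evaluation_score) pairs for one area.
    The weights within an area must total 100 percent."""
    assert sum(weight for weight, _ in categories) == 100
    # Each composite = weight (as a fraction) x the chair's 1-4 score
    return sum(weight / 100 * score for weight, score in categories)

# The four teaching rows from the worked example in Figure 4:
teaching = [(25, 3.0), (25, 3.0), (25, 4.0), (25, 4.0)]
print(area_composite(teaching))  # 3.5
```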

Figure 4. Individual Faculty Report (IFR) Faculty and Director

(Weight is chosen by the faculty member; the Evaluation Score is assigned by the director/chair; Composite Score = chosen percent x evaluation score.)

Teaching
Categories            Weight (percent)   Evaluation Score (1–4)   Composite Score
Content Delivery      25                 3.0                      0.75
Course Design         25                 3.0                      0.75
Course Expertise      25                 4.0                      1.00
Course Management     25                 4.0                      1.00
Total                 100                                         3.5

Scholarship
Categories                          Weight (percent)   Evaluation Score (1–4)   Composite Score
Professional Research               40                 2.0                      0.80
Professionalism in Academic Field   60                 3.0                      1.80
Total                               100                                         2.6

Service
Categories                     Weight (percent)   Evaluation Score (1–4)   Composite Score
Institutional or USG Service   30                 4.0                      1.20
Professional Service           10                 4.0                      0.40
Community Service              60                 4.0                      2.40
Total                          100                                         4.0

Choose your percentage in each category. No category can be lower than 10 percent, and each category must have a percentage. Percentages may only be multiples of five (10 percent, 15 percent, 20 percent, 25 percent, etc.). The categories within each area (teaching, scholarship, service) should total 100 percent.

These three total composite scores are then transferred onto the Yearly Faculty Evaluation Percentage Table (see Figure 5) and multiplied by the percentages chosen at the beginning of the year, producing the Totals column. Adding these rows yields the overall quantitative score, the Overall Total Composite Score.
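As an illustrative check of that roll-up (hypothetical Python; not part of the actual instrument), the worked example's overall score can be reproduced as follows. One assumption is flagged in the code: the service share used here is 30 percent, the chosen 20 percent plus the 10 percent administration component that the form counts within service, since that is the reading that reproduces the published total of 3.47:

```python
# Percent of time from the Yearly Faculty Evaluation Percentage Table,
# with the administration share folded into service (assumption: 20 + 10)
weights = {"teaching": 0.50, "scholarship": 0.20, "service": 0.30}
# Total Composite Scores carried over from each area's rubric table
scores = {"teaching": 3.5, "scholarship": 2.6, "service": 4.0}

overall = sum(weights[area] * scores[area] for area in weights)
print(round(overall, 2))  # 3.47
```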

Figure 5. Overall/Total Composite

Category/Role                       Percent of Time                  Composite Score     Totals
                                    (Yearly Faculty Percent Table)   (each category)
Teaching (not less than 50 percent) 50                               3.5                 1.75
Scholarship                         20                               2.6                 0.52
Service*                            20                               4.0                 1.20
Administration*                     10
Total Composite Score               100                                                  3.47

*Coordinator role is a component of the service category.

Chair Narrative

Teaching

Scholarship

Service

Fourth step of the process

This information provides the director/chair with a final quantitative evaluation determination (3.47 in the previous example). The director/chair also writes a summative narrative and includes that with the final evaluation (refer to Figure 5). Each faculty member receives an electronic copy of his or her respective evaluation prior to setting up a one-on-one meeting with the director/chair. This process has allowed us to provide both quantitative and qualitative feedback to faculty, and it has brought greater clarity to tenure, promotion, and merit-pay decisions.

As a positive side note, our institution has recently brought back merit pay. This instrument has been extremely helpful with simplifying that process as well. There are no across-the-board merit raises, so this evaluation system has enabled the director/chair to use the quantitative component as a means of developing merit-pay guidelines.

I have found this approach to be fair and as equitable as humanly possible when I evaluate someone else’s performance. Faculty members do not always like the work that goes into it, but they realize that the more specific they are with their materials, the better they are able to represent themselves professionally—and that is advantageous for the tenure and promotion process. We will continue to examine this process each year to make sure it is still working for us and, if not, make changes that will better reflect our evolution as an academic unit.

 
Lisa M. Griffin is director of the School of Health and Human Performance at Georgia College & State University.

 

Reprinted from Academic Leader, 31.7 (2015): 4, 5. © Magna Publications. All rights reserved.