Faculty development is a nationwide phenomenon that emerged from the academic accountability movement in the early 1970s, yet rarely was there interest in evaluating the effectiveness of this effort—until now.

Faculty developers across the nation are working to develop methods for evaluating their services. In 2010, the 35th Annual Professional and Organizational Development (POD) Network Conference identified assessing the impact of faculty development as a key priority. It was this growing demand that spawned my interest in conducting a 2007 statewide and a 2010 nationwide investigation of faculty development evaluation practices in the U.S. This article describes how to develop a customized evaluation plan based on your program’s structure, purpose, and desired results, drawing on contemporary practices identified through that research.

First, working definitions of “evaluation,” “assessment,” and “program” need to be established, since these terms often cause confusion. “Evaluation” is defined as judging the effectiveness of various services to determine value and improvements. “Assessment” means determining the level to which the center achieved its specific outcomes, similar to academic program assessment. Directors of faculty development centers appear to be more interested in measuring for improvement and merit (i.e., evaluation) than in designing and measuring program outcomes and indicators (i.e., assessment). “Program” is often used in reference to a faculty development center and the center’s themed offerings, such as a mentoring program, grant program, instructional program, and consultation services. Therefore, “program” will refer to the center’s services and themed offerings in this article.

Before an evaluation plan can be developed, it is important to carefully examine three situational factors unique to your faculty development center: its structure, purpose, and evaluation mind-set. First, consider the structure. There are four faculty development structures typically found at universities: (1) a large, centralized, university-funded program with a full-time director and staff; (2) a smaller, low-budget program with a faculty member acting as a part-time director supported by a part-time administrative assistant; (3) a dean or department head organizing events loosely based on strategic planning; and (4) no formal structure, with faculty instead responsible for their own development (Minter, 2009). Evaluation is possible with any staff size, yet its extent will vary accordingly.

Second, consider the purpose of the center. For example, is it designed to meet the needs of faculty or the institution, or to promote academic quality? Programs focused on the needs of faculty tend to evaluate faculty behavior. Those targeting institutional needs extend measurements to impacts on the institution, and those that focus on academic quality tend to emphasize evaluation of faculty and student learning outcomes.

Third, the evaluator’s mind-set needs close examination. The research indicated this area to be the most impactful element in evaluation planning. Those who believed that program evaluation was beneficial and informative, and that it improved practices, were more likely to implement a routine, systemized, in-depth evaluation process. Those who believed it was an act of accountability, difficult to do, or done only for resource requests oftentimes created an evaluation system based on reports of faculty participation and satisfaction.

The next step is to consider the level to which each program should be evaluated, keeping in mind the impact of the situational factors. The research identified six evaluation levels: (1) participation, (2) satisfaction, (3) learning, (4) impact on teaching, (5) impact on student learning outcomes, and (6) impact on the institution. Think of each level as concentric rings (like ripples) emanating from a center point representing a single program or service. Determine the level to which each program needs to be evaluated, based on the center’s purpose and the program’s desired outcome or goal. Then determine the timing for each level of evaluation. Consider staging out and staggering the evaluation process, especially if staff levels are low or workload is high. For example, gather participation, learning, and satisfaction data for your mentoring program until a significant number of faculty members have participated; at that point, evaluate the impact the program had on the attendees’ teaching practices. Another option is to gather participation, learning, and satisfaction data for new workshops, then stop gathering satisfaction data after receiving high ratings from three consecutive sessions. Or, if a program is designed for high impact on student learning outcomes and institutional change, such as a large course redesign program, plan to gather baseline and follow-up measures on a term or annual basis. Additionally, if multiple programs are going to be evaluated for impact on teaching or beyond, stagger the process so an in-depth evaluation is done on one program per year.
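
One practical way to keep track of these decisions is to record them in a simple planning structure that staff can update each year. The sketch below is a minimal illustration in Python, not a tool drawn from the research; the program names, target levels, and years are hypothetical examples of the staging and staggering described above.

```python
# Minimal sketch of a staggered evaluation plan. Program names, routine
# levels, in-depth levels, and years are hypothetical examples.

# The six evaluation levels, listed for reference.
EVALUATION_LEVELS = [
    "participation",               # Level I
    "satisfaction",                # Level II
    "learning",                    # Level III
    "impact_on_teaching",          # Level IV
    "impact_on_student_learning",  # Level V
    "impact_on_institution",       # Level VI
]

# Each program lists the routine levels gathered every term and the single
# in-depth level reserved for its scheduled year (one program per year).
plan = {
    "mentoring_program": {
        "routine_levels": ["participation", "satisfaction", "learning"],
        "in_depth_level": "impact_on_teaching",
        "in_depth_year": 2013,
    },
    "course_redesign_program": {
        "routine_levels": ["participation", "satisfaction"],
        "in_depth_level": "impact_on_student_learning",
        "in_depth_year": 2014,
    },
    "new_workshop_series": {
        "routine_levels": ["participation", "satisfaction", "learning"],
        "in_depth_level": None,  # satisfaction only until ratings stay high
        "in_depth_year": None,
    },
}

def evaluations_due(year):
    """Return the evaluation levels due for each program in a given year."""
    due = {}
    for program, details in plan.items():
        levels = list(details["routine_levels"])
        if details["in_depth_year"] == year:
            levels.append(details["in_depth_level"])
        due[program] = levels
    return due

if __name__ == "__main__":
    for program, levels in evaluations_due(2013).items():
        print(f"{program}: {', '.join(levels)}")
```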

As you develop your evaluation plan, remember that program evaluation should be tailored to measure your level of interest specific to each individual program. The research showed that measuring out to the institutional level was typically reserved for high-impact programs specifically designed to increase student progression and retention. Measuring out to the student learning level was typically seen in high-impact programs designed to improve student learning, such as grant programs, learning community programs, and intensive instructional improvement initiatives.

Once the evaluation levels and timing are determined for each program, decide on the evaluation methods to be used. Keep in mind that multiple measures increase reliability, and efficient strategies lead to a greater likelihood of implementation. The table below lists methods and strategies used for measuring the six levels at various colleges and universities involved in the research.

Evaluation Levels, Methods, and Strategies

Level I: Participation (Who is participating in the faculty development programs and services?)

Methods:
  • Track attendance (by individual, program, department, or school); see the sketch following this table
  • Track usage of online resources

Strategies:
  • Use online registration
  • Develop customized databases
  • Use Google Analytics for website usage

Level II: Satisfaction (What was the participant’s level of satisfaction?)

Methods:
  • Satisfaction surveys at the event
  • Annual campus-wide satisfaction surveys
  • Focus groups
  • Advisory board reviews of resources

Strategies:
  • Gather and log data with student response devices (aka clickers)
  • Create a standardized online survey through SurveyMonkey®
  • Create an automated system for emailing the survey link
  • Hold focus groups for select programs only

Level III: Learning (Did the participants learn?)

Methods:
  • Application exercises or assessments at the event
  • Open-ended surveys
  • Post-event follow-up surveys
  • Pre-post assessments

Strategies:
  • Same as Level II strategies
  • Embed questions into satisfaction surveys

Level IV: Impact on Teaching (Did participants change their attitudes or practices as a result of the program?)

Methods:
  • Require grant reports to include changes in teaching
  • Require a publication or presentation reporting changes in teaching for funded projects
  • Solicit self-reports of changes in teaching through surveys, emails, interviews, or focus groups
  • Student ratings
  • Student surveys or in-class feedback
  • Pre-post classroom observations
  • Pre-post reviews of teaching products
  • Faculty-created critical incident analyses
  • Teaching portfolios
  • Experimental studies

Strategies:
  • Combine satisfaction and behavioral change inquiries into one online survey
  • Gather pre-post data for consultation services using a coding system for anonymity
  • Ask the faculty member to provide measures indicating a need for consultation and use them as pre-post measures
  • Data mine select items on student ratings
  • Set up blogs for faculty to record critical incident analyses or other reflective writing
  • Set up e-portfolios for faculty projects receiving funding
  • Combine survey efforts with other institutional surveys to reduce survey fatigue
  • Use experimental designs for only high-impact or high-cost programs

Level V: Impact on Student Learning Outcomes (Did student learning outcomes change as a result of the program?)

Methods:
  • Require grant reports to include changes in student learning
  • Require a publication or presentation that reports changes in student learning for funded projects
  • Solicit self-reports of changes in student learning through surveys, emails, interviews, or focus groups
  • Student surveys or in-class feedback
  • Pre-post measures of student performance
  • Pre-post measures of student progression
  • Experimental studies

Strategies:
  • Combine behavioral change and student learning outcome inquiries into one online survey
  • Gather pre-post data for consultation services using a coding system for anonymity
  • Ask the faculty member to provide measures indicating a need for consultation and use them as pre-post measures
  • Set up blogs or discussion boards for faculty to record changes in student learning
  • Set up e-portfolios that include student learning outcomes for faculty projects receiving funding
  • Combine survey efforts with other institutional surveys to reduce survey fatigue
  • Use experimental designs for only high-impact or high-cost programs

Level VI: Impact on the Institution (Was there an institutional change as a result of the program?)

Methods:
  • Pre-post measures of attrition, retention, or graduation rates
  • Experimental studies
  • Data mining of NSSE (National Survey of Student Engagement) results

Strategies:
  • Work with the office of assessment
  • Use experimental designs for only high-impact or high-cost programs
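
As an illustration of how little technology Level I requires, the sketch below (referenced in the Level I row above) summarizes attendance by department from a registration export. It is a minimal Python example; the file name and the "program" and "department" column headings are assumptions, since registration tools label their exports differently.

```python
# Minimal sketch of Level I participation tracking: summarize attendance
# by department from a registration export. The file name and the
# "program" and "department" columns are assumed for illustration.
import csv
from collections import Counter

def attendance_by_department(csv_path, program=None):
    """Count attendees per department, optionally for a single program."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if program is None or row["program"] == program:
                counts[row["department"]] += 1
    return counts

if __name__ == "__main__":
    for dept, n in attendance_by_department("registrations.csv").most_common():
        print(f"{dept}: {n}")
```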

Last, once the evaluation levels, methods, and timing are determined for each program or service, identify who will be responsible for gathering the data and when the analysis will occur. Setting aside time for an annual review of the data and implementation of any resulting changes is common practice.

Evaluation of faculty development programs can be done in an efficient and effective manner by developing a systemized plan designed for staggered, staged-out evaluation that considers staff time and available technology. Following this evaluation approach will lead to feasible, purposeful, and informative data that can help you determine whether faculty development really is making a difference.

References:

Hines, S. R. (2009). Investigating faculty development program assessment practices: What’s being done and how can it be improved? Journal of Faculty Development, 23(3), 5-19.

Hines, S. R. (in press). How established centralized teaching and learning centers evaluate their services. In J. Miller & J. E. Groccia (Eds.), To Improve the Academy (Vol. 30). San Francisco: Jossey-Bass.

Minter, R.L. (2009). The paradox of faculty development. Contemporary Issues in Education Research, 2(4), 65-70.

Sue Hines is the director of faculty development and an assistant professor at Saint Mary’s University of Minnesota and teaches in SMU’s doctor of education in leadership program.