© Springer International Publishing Switzerland 2016
Lawrence M. Gillman, Sandy Widder, Michael Blaivas MD and Dimitrios Karakitsos (eds.), Trauma Team Dynamics, DOI 10.1007/978-3-319-16586-8_37


37. Program Evaluation and Assessment of Learning



Vicki R. LeBlanc (1, 2) and Walter Tavares (3, 4, 5)


(1) University of Ottawa Skills and Simulation Centre, Ottawa, ON, Canada
(2) Department of Innovation in Medical Education, Faculty of Medicine, University of Ottawa, 451 Smyth Rd. RGN 2211, Ottawa, ON, K1H 8M5, Canada
(3) Wilson Centre, Toronto, ON, Canada
(4) McMaster University, Hamilton, ON, Canada
(5) School of Community and Health Studies, Centennial College, 755 Morningside Ave., Toronto, ON, Canada, M1K 5E9

 



 

Vicki R. LeBlanc (Corresponding author)

Keywords
Outcomes-based program evaluation · Process-based program evaluation · Assessment of learning · Validity · Reliability


The use of simulation modalities for health professions education is both resource and time intensive. As such, there is an expectation to demonstrate that the program offered had the desired outcome, in terms of participant experiences, impact on the learning or behavior of the participants, and desired institutional or patient-related outcomes. The goal of this chapter is to present a brief overview of program evaluation (outcomes-based and process-based). It will conclude with a more focused discussion of the assessment of learners, as this is a common focus of interest, not only for program evaluation but also for those interested in using simulation to determine their learners' levels of ability.


Program Evaluation


Because simulation-based education is time and resource intensive, there is often a need to demonstrate that the resources and time allotted to simulation-based courses or programs are effective in meeting the desired learning objectives. Program evaluation is the systematic investigation of a program's worth [1]. It is the process of determining whether the program or course that has been designed works (or not) and whether (and how) it needs to be modified or improved. More broadly, program evaluation is the systematic collection, analysis, and use of information to answer questions about projects, policies, and programs, particularly about their effectiveness and efficiency.

There are several foci for program evaluation, including but not limited to:



  • Assessment of the program’s cost and efficiency.


  • Assessment of program design.


  • Assessment of the program’s outcome (i.e., what it has actually achieved).


  • Assessment of the program’s impact on learning.


  • Assessment of how the program is being implemented (i.e., is it being implemented according to plan?).

The two main approaches to program evaluation are outcomes-based and process-based evaluation. Program evaluation is often conceptualized as occurring at the end of the program (outcomes-based), to determine whether the desired outcomes came to be. However, program evaluation can also occur at several stages of a program (process-based), with the goal of determining how a program was implemented, as well as how and why it did or did not work as intended [2].


Outcomes-Based Program Evaluation


Outcomes-based program evaluation is aimed at answering the question of whether the program brings about the desired outcomes, generally defined by the course and learning objectives [2]. This is the type of program evaluation that is most familiar to health professions educators. Several models have been used to guide outcomes-based program evaluation in health professions education.

The Kirkpatrick framework involves four levels designed as a sequence of ways to evaluate programs [3]. The four levels of the Kirkpatrick framework are:



  • Learner reactions: How the participants thought and felt about the program.


  • Learning: The increase in knowledge and/or skill, or the changes in attitudes, that occurred as a result of the program.


  • Behavior: The transfer of the knowledge, skills, or attitudes to the work or clinical setting.


  • Results: The tangible results of the program in terms of costs, quality, or efficiency.

A similar framework, specifically applied to the health professions, was proposed by Miller [4]. Like the Kirkpatrick framework, it describes several levels at which skills and performance can be assessed. The four levels of the Miller framework are:



  • Knows: Whether the student’s knowledge has increased. This is considered the foundation of clinical skills, but is not sufficient to demonstrate changes in clinical skills and performance itself. It is generally measured with written exams.


  • Knows How: Whether a student can apply the knowledge learned. This is generally assessed using tests involving clinical problem solving.


  • Shows How: Whether the student can demonstrate a change in clinical skills and performance. This is generally assessed with behavioral examinations such as OSCEs and simulation modalities.


  • Does: Whether the learner applies the improved clinical skills and knowledge during daily patient care. This is generally assessed with direct observation in real clinical contexts.


Program Evaluation Designs


In addition to determining the level of outcomes at which they are going to evaluate their program, educators also need to determine the design by which they will assess whether the desired improvement in knowledge, skills, performance, or patient care has occurred.
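
To make the first of these decisions concrete, the short sketch below (in Python, purely for illustration) records which Miller level each planned measure of a hypothetical simulation course targets and flags any level left uncovered. The level names come from the framework above; the course and the instruments named are assumptions, not examples from this chapter.

# Illustrative sketch only: the level names are Miller's; the course,
# instruments, and structure below are hypothetical.
MILLER_LEVELS = ["Knows", "Knows How", "Shows How", "Does"]

# Hypothetical evaluation plan for a simulation-based trauma course.
evaluation_plan = {
    "Knows": "Written knowledge exam",
    "Knows How": "Clinical problem-solving cases",
    "Shows How": "Simulated trauma resuscitation scenarios (OSCE-style)",
    "Does": None,  # no workplace-based observation planned yet
}

# Flag framework levels that the plan does not yet cover.
missing = [level for level in MILLER_LEVELS if not evaluation_plan.get(level)]
print("Miller levels without a planned measure:", missing or "none")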

The most straightforward method for conducting a program evaluation is to measure the impact of the program on the desired outcome measures after the program has been delivered. The limitation of this approach is that it does not provide any information regarding a learner's level of knowledge or performance before the course. As such, it does not allow the educator to determine whether a learner's current level of performance or knowledge is due to the program itself, to baseline levels before the course, or to other factors that occurred in parallel to the course.

One way to determine whether improvements occurred through the program is to use a pre–post design, in which the outcome of interest is measured before the program and then once again after its completion. While this design allows educators to determine whether knowledge or skills improved from baseline, it does not eliminate the possibility that the improvements were due to other factors that occurred in parallel to the program, or to an increase in performance merely due to repeated testing.
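
As an illustration of the analysis a pre–post design supports, the sketch below compares hypothetical pre- and post-course scores for the same learners using a paired t-test. The scores, the group size, and the use of NumPy and SciPy are assumptions for illustration only.

# Minimal sketch of a pre-post comparison on hypothetical course scores.
# Requires numpy and scipy; the data below are invented for illustration.
import numpy as np
from scipy import stats

# One entry per learner: knowledge-test scores before and after the course.
pre = np.array([52, 61, 48, 70, 55, 63, 58, 66])
post = np.array([68, 72, 60, 81, 62, 75, 70, 79])

gain = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test

print(f"Mean gain: {gain.mean():.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant gain shows improvement from baseline, but, as noted above,
# it cannot rule out retesting effects or events parallel to the course.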

The next level of program evaluation design is a true experimental design, with pre-course and post-course assessments and a comparison group that is identical to the learners in the program but does not receive it. This design is considered superior for program evaluation, as it allows educators to separate the influence of baseline ability and parallel events from that of the program itself. However, this type of design is resource and time intensive, and it may raise ethical concerns if a learning opportunity is withheld from the learners in the control group. This last concern can be overcome with a crossover design, in which the students in the control group are offered the learning intervention after the post-course measures have been obtained.
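
Extending the same hypothetical example, the sketch below contrasts the pre-to-post gains of a group that received the program with those of a comparison group that did not, using an independent-samples t-test on the gain scores. All data and group labels are invented; a real evaluation might instead use an analysis of covariance or a mixed-effects model.

# Minimal sketch of a pre-post design with a comparison (control) group.
# Requires numpy and scipy; all data below are invented for illustration.
import numpy as np
from scipy import stats

# Pre- and post-course scores for learners who received the program ...
trained_pre = np.array([54, 60, 49, 68, 57, 62])
trained_post = np.array([70, 74, 63, 80, 69, 76])

# ... and for a comparison group tested at the same times without the program.
control_pre = np.array([55, 59, 50, 67, 56, 61])
control_post = np.array([58, 61, 54, 69, 57, 63])

trained_gain = trained_post - trained_pre
control_gain = control_post - control_pre

# If the program, rather than retesting or parallel events, drives the
# improvement, the trained group's gains should exceed the control group's.
t_stat, p_value = stats.ttest_ind(trained_gain, control_gain)
print(f"Mean gain (trained): {trained_gain.mean():.1f}")
print(f"Mean gain (control): {control_gain.mean():.1f}")
print(f"Independent t-test on gains: t = {t_stat:.2f}, p = {p_value:.3f}")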

In designing their outcomes-based program evaluation, educators are often challenged by the difficulty of obtaining outcome measures at the level of "behavior" or "does," as well as by the demands of creating rigorous experimental designs. One question often asked is how strong the need is to prove that a program delivers the desired outcomes. For a program that is already supported and established, and that other educators have shown to work, it is generally acceptable to focus on learner feedback and pre- and post-course knowledge and skills assessments, mostly as a form of continuous monitoring of the quality of the program. If a program is novel, and potentially contested and challenged, educators will need to focus on behavioral or patient outcomes data and will need to move towards a true experimental design with pre- and post-course measures and a comparison group. If one objective is to publish the results of the program evaluation in a peer-reviewed journal, it is worth noting that the standards for publication have been consistently increasing over the years: a design with a control group or comparison training method, using validated measures of performance, is generally required for publication in peer-reviewed journals targeted to health professions educators.


Process-Based Program Evaluation


Process-based program evaluation, in addition to assessing the measurable outcomes of a program, is also geared towards fully understanding how the program was implemented, as well as how and why it did or did not have the desired outcomes [5]. This type of program evaluation looks beyond what a program is supposed to do, to evaluate how the program is being implemented and to determine whether the components critical to its success are, or have been, implemented [2].

This type of program evaluation is an ongoing process in which multiple measures are used to evaluate how the program is being developed and delivered:
