Use of Program Evaluation in Health Information Management Educational Programs

by Jennifer Peterson, PhD, RHIA, CTR

Abstract

Health information management educational programs are required to participate in a variety of evaluation and assessment activities. Programs accredited by the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) are required to undergo ongoing program assessments and evaluation. Many educational programs will also undertake evaluation and assessment activities required by their institutions or carried out on an independent basis. Many times, data are gathered for these activities, the analysis is completed, a report is written and submitted to the oversight body, and then the report is carefully filed away somewhere in the health information program files. Programs carrying out evaluation and assessment activities in this manner are missing valuable data that can be used to improve the program and its student learning outcomes. This article provides an overview of the purpose of program evaluation and assessment that points to the value of the findings from such activities. The article offers suggestions for the effective use of program evaluation and assessment findings as well as additional evaluative methods that can be used to supplement this information to provide a truly meaningful and useful evaluation. Health information management educational programs are gathering valuable information through their evaluation and assessment activities. This article encourages programs to use the data to make programmatic improvements to ensure the best student outcomes possible.

Keywords: evaluation; assessment; health information education; CAHIIM; accreditation; programmatic improvement

Health information management (HIM) educational programs accredited by the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) face many requirements for programmatic evaluation, assessment, and review. Maintenance of compliance with CAHIIM accreditation standards, preparation of the CAHIIM Annual Program Assessment Report, and annual program evaluation planning are evaluation and assessment activities required for accreditation. In addition to these requirements, many educational programs undergo internal program reviews through their colleges or universities and independently conduct additional evaluations and assessments for their own use. While these efforts may seem overwhelming and, at times, may feel like paper-pushing exercises, they should be seen as learning opportunities that can help to improve educational programs. The data and results from such activities can provide valuable insights. Such findings can reveal what is and is not working in a program, where the program is meeting student learning outcomes and where it is not, and, most importantly, where the program has room for improvement. Ongoing program evaluation and improvement will result in students who are better prepared to join the HIM workforce as new graduates.

Purpose of Evaluation and Assessment

Historically, accreditation was developed to enable the assignment of value to a degree and to indicate a level of quality held by an educational institution or program. Government assessment guidelines have been put into place to require a level of accountability for funding of educational institutions or programs. Internal evaluation and assessment programs have been developed to ensure internal accountability for the use of resources or to ensure conformity to institutional standards. However, evaluation and assessment can and should be used to improve student learning and outcomes through changes to curriculum design and program methodology.

Although the purpose of higher education is the subject of much debate, one goal is to prepare students to be successful in careers and useful to society. In higher education’s quest to meet this goal, evaluation and assessment can be an invaluable tool. According to Tyler, one of the first experts to tie outcomes to objectives, “it should be clear that evaluation then becomes a process for finding out how far the learning experiences as developed and organized are actually producing the desired results and the process of evaluation will involve identifying the strengths and weaknesses of the plans.”1 Tyler’s efforts to tie objectives to outcomes and methodology reflect the real purpose of evaluation and assessment, namely, using the results of such tools to make curricular and other improvements and thereby improve outcomes. The results of evaluation and assessment should thus be used to evaluate the learning methods and curriculum and to make needed improvements to meet the goals of student learning and student preparedness. Grayson points to the use of program evaluation as an important component in the evaluation and assessment process: “Professional program evaluation is to be methodologically systematic, addressing questions that provide information about the quality of a program in order to assist decision-making aimed at program improvement, development, or accountability and to contribute to a recognized level of value.”2 Without evaluation and assessment, program leaders can only assume a certain level of quality of their educational program. Evaluation and assessment activities provide sound data that can be used to clearly see what outcomes the program is producing. “Assessment, to be useful, needs the collection of evidence that allows judgments to be made regarding the achievement of SLOs [student learning outcomes].”3

A higher education institution or program is driven by its mission, goals, and objectives.

The mission defines the institution’s or program’s stated purpose and the anticipated result (goals or outcomes) of the experiences that occur during the program. What occurs during the program is what we refer to as the curriculum. . . . The activities that comprise the curriculum are driven by the outcomes—one should be able to discern an alignment between these activities and the outcomes to which they are contributing.4

 

This alignment between the mission, curriculum, and outcomes is the key to the success of an educational program.5 The process of evaluation and assessment fits into this equation as the measure of alignment between these three components. For example, if an educational program’s mission states that graduates will be adequately prepared to begin in an entry-level Registered Health Information Administrator (RHIA) position, the curriculum must provide activities that enable students to learn entry-level HIM skills. Student learning outcomes, such as passing the RHIA exam and performing skills required of an entry-level HIM professional, close the loop and demonstrate that the curriculum allowed the program to meet its goals and mission. Evaluation and assessment of this process and of student outcomes allow for the discovery of areas in which the program is not meeting anticipated goals and objectives, or is not meeting them at the desired level. These areas can serve as the focus for changes and improvements to the program. Curricular and programmatic changes can then be made in the areas identified as needing improvement. Future evaluation and reassessment can determine if the changes result in closer alignment with the desired outcomes. With the collection of data and information, evaluation and assessment provides evidence that mere supposition cannot. “When the evidence points to a weakness in the achievement of an outcome, it . . . inform[s] the stakeholders where action can be taken to strengthen the outcome.”6

Use of Program Evaluation Methods in General

Formal evaluation and assessment is the key to improving educational programs. “Traditionally, university functions are comprehensively assessed in informal ways. . . . Informal ways are fallible.”7 To meaningfully evaluate and assess a program, formal evaluation and assessment techniques are needed. Formal evaluation and assessment “reveals . . . not only the quality of operations but their complexity.”8 A formal evaluation and assessment can reveal findings from multiple sources that can be used to analyze that complexity. Results from evaluation and assessment can provide the opportunity for triangulation, which can “strengthen our confidence in our conclusions and recommendations.”9 Triangulation is the use of multiple data sources to enable confirmation of findings and adds to the “trustworthiness of our analysis.”10 Evaluation and assessment can also be done with tools such as benchmarking to examine comparisons or trends over time, providing valuable data that can be used for improvement.11 The variety of evaluation and assessment activities required for most CAHIIM-accredited programs can easily be used as the basis for a formal evaluation and assessment plan. A formal evaluation and assessment process provides the basis for review of the alignment of a program’s mission, goals, and outcomes, with resultant findings pointing to areas of misalignment that can be improved upon.

Use of Program Evaluation in Health Information Management Educational Programs

As outlined previously, program review, evaluation, and assessment are integral parts of efforts to ensure that an educational program meets its mission, goals, and outcomes and adequately prepares students for their careers. Accreditation is one form of educational program evaluation and assessment and is a key component of education and credentialing in the HIM profession. In fact, educational program accreditation is directly tied to professional credentialing criteria. To be eligible to take the credentialing exams for the HIM profession (Registered Health Information Technician [RHIT] and RHIA), an individual must have graduated from a CAHIIM-accredited educational program. “CAHIIM is an independent accrediting organization whose Mission is to serve the public interest by establishing and enforcing quality Accreditation Standards for Health Informatics and Health Information Management (HIM) educational programs.”12

Accreditation of HIM educational programs by CAHIIM is designed to ensure quality of the educational program as well as to serve other functions such as aiding in student transfer between universities or student progression to higher degree programs and ensuring that educational program curricula meet current professional needs. Graduates of such programs can also point to this accreditation to assure employers or graduate schools that they have received a quality HIM education. The CAHIIM accreditation process is based on standards regarding the program’s mission, goals, assessment, and outcomes; the program director and faculty; the program’s resources; and the program’s educational curriculum. To achieve accreditation, the educational program must meet and demonstrate compliance with the CAHIIM standards. These standards include ongoing improvement processes, a faculty development plan, an up-to-date curriculum, entry-level student learning outcomes, qualified faculty, and appropriate student resources. Programs must address these standards through completion of a self-study as well as during a site visit by CAHIIM. HIM educational programs are required to select goals related to students and graduates as a part of their self-study; however, there are no required goals that must be met, and there are no penalties for not meeting the selected goals. For example, a program may set a goal of an 85 percent passage rate on the RHIA exam, but if only 75 percent of the students who take the exam pass, the lower passage rate does not result in a penalty related to CAHIIM accreditation. The program, however, is encouraged to use this information for programmatic improvement to meet that goal in the future or to realign goals as needed. CAHIIM’s focus is on the general standards, not necessarily the individual program goals. After the accreditation review, a program may be granted full accreditation (either initial or continuing), be granted probationary accreditation, or have its accreditation withheld or withdrawn.13

Accreditation and ongoing review and improvement are required by CAHIIM and AHIMA for graduates to be eligible for certification. HIM educational programs may also undergo department- or university-specific evaluations and assessments or may take further steps to review, evaluate, or assess the program. HIM programs should be encouraged to use the evaluation and assessment techniques available to them, or to develop their own, to add to or enhance the data collected for CAHIIM and for internal purposes. These various evaluation and assessment techniques can be integrated into a formal plan. Such a plan will enable a program to collect a variety of data that can be used, as discussed, in the process of triangulation, to see a full picture of the program, student learning within the program, and the areas in which improvement is needed.

A program may, for example, use metrics such as RHIA exam pass rates as outcome measures and set its own goals and action plans related to these metrics. However, RHIA exam pass rates do not provide a full picture of the effectiveness of the program. The RHIA exam content is not tied directly to the CAHIIM curriculum competencies, and some graduates do not take the exam if it is not required for their job. The use of such metrics in combination with other data collected for CAHIIM or other evaluation and assessment processes, however, can present a more complete picture. Data from surveys of student, graduate, and employer perceptions can be triangulated with RHIA exam pass rates to determine outcomes that can be tied back to the goals of the educational program. As noted, alignment of a program’s mission, goals, and outcomes can result in identification of areas needing improvement and lead to ongoing improvement of an educational program.

Evaluation Methodology

For a program evaluation to be useful and meaningful, a variety of data collection methods may be needed. Although CAHIIM-required data collection and institutional reviews may provide basic information, programs may need to gather additional data to truly identify areas for improvement or to answer questions about the quality of the program. The first step in an effective program evaluation is the identification of areas to evaluate and/or questions to answer through the evaluative process. The ongoing changes in the CAHIIM-required curriculum for HIM academic programs make this an ideal time to conduct program evaluation and assessment. Academic program faculty may be asking if they have made appropriate changes that meet the CAHIIM curriculum competencies as well as adequately and effectively prepare students for the current job market. CAHIIM-required data that have been collected for accreditation or annual reviews may pique interest in evaluation of specific areas of the HIM program. Institutional program reviews or evaluations may point to areas that should be looked at more closely. The program may be interested in student satisfaction. The data collected for CAHIIM or for other existing evaluation and assessment processes should be supplemented with additional data to create a utilization-focused evaluation (UFE) that can be helpful in identifying areas for improvement and helping the program meet its goals and objectives.

The UFE theoretical framework is useful in such a situation because it focuses on use of the findings to make positive changes in the program. In conjunction with this framework, use of the concept of evaluation capacity building is helpful because it can ensure that the faculty, department chair, and other stakeholders understand the evaluative process and are ready to use the results for positive change. Patton’s UFE framework14 focuses on the use of evaluation information. Evaluation and assessments completed under Patton’s theoretical framework use a “targeted group of stakeholders whom it empowers to determine the evaluation questions and information needs.”15 In such evaluations, “the primary intended users of the evaluation”16 determine the evaluative criteria. The theory behind this high level of stakeholder involvement is that when the stakeholders are included in the design of the evaluation, the results will be more useful and more likely to be used. The purpose is to “give them [the stakeholders] the information they need to fulfill their objectives.”17 This theoretical framework can be quite valuable in educational program improvement because “this approach is geared toward maximizing evaluation impacts [and] fits well with the key principle of change.”18 The weaknesses of this theoretical framework include the possibility of stakeholder turnover and the potential for stakeholders to “look for evidence to confirm [their] preconceptions and biases.”19

The personalized nature of UFE can result in faster and more easily accepted changes and improvements. “The crucial point is that evaluators must determine and focus their studies on intended evaluation uses and produce and report findings that an identified group of intended users can and probably will value and apply to program improvement.”20 The goal of this theoretical framework is to use the evaluation findings to improve educational programs, thus resulting in improved programmatic outcomes and student learning.

While objective measures such as student grades, student completion rates, and graduate RHIA examination pass rates can be used in evaluations, many times subjective data may be needed for a meaningful evaluation. For example, student preparedness for the field may mean one thing for one student, something different for a second student, and something else entirely for an employer. A useful evaluation therefore goes beyond the mere collection of metrics to truly evaluate the program or programmatic elements under review. While HIM programs undergo accreditation and possibly other types of internal program review, those program evaluations frequently focus on structure and process and do not focus as much on outcomes, such as student learning and preparedness for the field. A meaningful evaluation therefore may include not only these evaluative results but also information on the perceptions of students, recent graduates, and employers on the HIM program or specific aspects of the program. For example, effectiveness in adequately preparing students cannot be measured by grades alone. While particular students may receive good grades on assignments in classes, they may not be adequately prepared to function in the workforce. Therefore, a meaningful evaluation in this scenario would delve into the review of the HIM program through a utilization-focused evaluative methodology designed to determine the students’ readiness for the professional world based on their education in the program as a whole.

Data Collection

A variety of methods should be considered for such an evaluation, including surveys and interviews as well as review of various documents and metrics. While CAHIIM accreditation documents, other evaluation reports, enrollment rates, retention rates, and RHIA exam pass rates will be used, other data may be needed as well. Depending on the focus of the evaluation or the questions to be answered, faculty and staff should identify sources of meaningful information, and these should be targeted in the collection of evaluation data. Data can be collected from current students, recent graduates, employers, and others. Information can be collected from these groups in a variety of ways that will elicit valuable quantitative as well as qualitative data.

To assess students’, recent graduates’, or employers’ perceptions of the HIM program or HIM programmatic elements, surveys could be administered to these groups. These surveys should be carefully worded to elicit useful information for the program’s evaluation. Pilot testing of surveys and survey questions can help ensure that the questions will be reliable and valid. While surveys may already be completed for CAHIIM accreditation purposes, additional survey questions may need to be added to existing surveys or new surveys may need to be developed to ensure collection of the data that are needed for program evaluation.
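
As an illustration, the sketch below shows one way a program might check the internal consistency of a set of pilot survey items by computing Cronbach’s alpha in Python. The item responses are hypothetical placeholders; a program would substitute its own pilot data, and this check complements, rather than replaces, review of question wording and content validity.

```python
# A minimal sketch (with hypothetical data) of an internal-consistency
# check on pilot survey items using Cronbach's alpha.
from statistics import pvariance

# Rows = pilot respondents; columns = survey items (hypothetical 5-point Likert ratings).
responses = [
    [4, 5, 4, 3],
    [3, 4, 4, 4],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])                  # number of items
items = list(zip(*responses))          # transpose: one tuple of responses per item
item_variances = [pvariance(item) for item in items]
total_scores = [sum(row) for row in responses]

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / pvariance(total_scores))
print(f"Cronbach's alpha for the pilot items: {alpha:.2f}")
```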

In addition, to gather more in-depth data from employers, interviews could be held with targeted employers who host many professional practice students or who hire many new graduates from the program. The employer interviews would allow for the collection of more in-depth data regarding the employers’ experiences with the program’s students and graduates and the employers’ perceptions of the program and its graduates. Again, interview questions should be carefully worded to ensure that they will elicit the information needed, and pilot testing of questions should be completed before the full interviews are conducted.

Interviews could also be held with select graduates. Interviewing new professionals who graduated from the program within the last few years would be an excellent way to gather information regarding their experiences in the program, their perceptions of career readiness, and their suggestions for program improvement. Again, interviews would allow the program evaluators to delve deeper into the graduates’ perceptions and obtain in-depth data in this area.

Focus groups could also be held to obtain more in-depth data from the program’s students regarding their perceptions of the HIM program. The focus group could be guided by basic questions designed to start a deeper discussion of the students’ perceptions. Completing such focus groups in the students’ final year or in the semester after at least one professional practice experience would result in the most useful information.

Accreditation and internal program review reports should be included as an integral part of the evaluation and should be reviewed for specific information related to the focus of the evaluation. Program metrics, such as enrollment trends, retention rates, and time to graduation, should also be used for review of specific information. Pass rates for the RHIA exam could be analyzed as a measure of student preparedness. Through the review of multiple types of data, including these documents as well as the surveys, interviews, and focus groups, the evaluators can use triangulation to analyze the data and to determine areas needing improvement or answers to the program’s questions.
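
To make such triangulation concrete, the sketch below shows one simple way to organize findings: a matrix that lines up what each data source indicates about each evaluation question, so that convergence or divergence across sources is easy to see. The questions, sources, and findings shown are hypothetical placeholders.

```python
# A minimal sketch of a triangulation matrix: for each evaluation question,
# findings from each data source are recorded side by side so that
# agreement or disagreement across sources is visible. All entries are hypothetical.
triangulation = {
    "Are graduates prepared for entry-level HIM work?": {
        "RHIA pass rates": "trending upward over three years",
        "employer interviews": "graduates strong in privacy, weaker in coding",
        "graduate surveys": "most feel prepared; several want more coding practice",
    },
    "Does the course schedule support timely completion?": {
        "time-to-graduation metric": "median slightly above four years",
        "student focus groups": "conflicts between required courses reported",
    },
}

for question, evidence in triangulation.items():
    print(question)
    for source, finding in evidence.items():
        print(f"  {source}: {finding}")
```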

Data Analysis

The constant comparative method outlined by Glaser (1965) is suitable for analysis of the data collected in such an evaluation.21 Interviews and focus groups can be audio recorded and transcribed so that responses can be coded to identify themes. Broad categories can be identified through this initial coding, followed by identification of specific details within those categories. Eventually, this process will yield themes that can be analyzed, and the coded data can be used to summarize the overarching themes found across the data set. The constant comparative method allows for a thorough, organized approach to understanding the data collected through the various methodologies used in such an evaluation. Other qualitative data collected during the evaluation should be analyzed in a similar manner, and, eventually, qualitative data from multiple sources can be analyzed together to identify common themes.
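
For programs that want to organize their coded transcripts programmatically, the sketch below shows one minimal way to tally hand-assigned codes and see which recur across data sources. The codes and sources are hypothetical, and the analytic judgment of constant comparison remains with the evaluators; the code only helps organize the evidence.

```python
# A minimal sketch for organizing qualitative codes once transcripts have been
# hand-coded. The (source, code) pairs below are hypothetical; tallying codes
# across sources helps surface candidate themes for further comparison.
from collections import Counter, defaultdict

coded_segments = [
    ("graduate_interview", "wants more coding practice"),
    ("graduate_interview", "felt prepared for RHIA exam"),
    ("employer_interview", "wants more coding practice"),
    ("student_focus_group", "course scheduling concerns"),
    ("student_focus_group", "wants more coding practice"),
]

by_code = Counter(code for _, code in coded_segments)
sources_per_code = defaultdict(set)
for source, code in coded_segments:
    sources_per_code[code].add(source)

# Codes that recur across several source types are candidates for broader themes.
for code, count in by_code.most_common():
    print(f"{code}: {count} segments across {len(sources_per_code[code])} source types")
```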

Program metrics, exam pass rates, and closed-ended survey questions can be analyzed using basic descriptive statistics, such as means, modes, standard deviations, and frequencies, to determine trends and variations. These data can be tied into the constant comparative analysis through triangulation to gain a deeper picture of the program and to determine whether the program is meeting its current goals and desired outcomes. These metrics can be used to evaluate basic program outcomes and trends over time, and they can be compared with and combined with qualitative data to further evaluate relationships between the groups’ perceptions and quantitative outcomes. Such an evaluation can provide valid and trustworthy insights because the data are collected from a variety of sources.
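
As a simple illustration, the sketch below computes the kinds of descriptive statistics described above using only the Python standard library. The pass rates and survey responses are hypothetical placeholders for a program’s own data.

```python
# A minimal sketch of a descriptive summary of program metrics and a
# closed-ended survey item; all values are hypothetical placeholders.
from statistics import mean, stdev, mode
from collections import Counter

rhia_pass_rates = {2019: 0.78, 2020: 0.82, 2021: 0.75, 2022: 0.85}   # by cohort year
satisfaction = [5, 4, 4, 3, 5, 4, 2, 4, 5, 3]                        # 5-point Likert item

print(f"Mean RHIA pass rate: {mean(rhia_pass_rates.values()):.2%}")
print(f"Pass-rate std. dev.: {stdev(rhia_pass_rates.values()):.2%}")
print(f"Change, first to most recent year: "
      f"{rhia_pass_rates[2022] - rhia_pass_rates[2019]:+.0%}")

print(f"Satisfaction item mean: {mean(satisfaction):.1f}, mode: {mode(satisfaction)}")
print("Response frequencies:", dict(sorted(Counter(satisfaction).items())))
```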

Some faculty members undergoing program evaluation offer students extra-credit points to participate in surveys or focus groups. This approach may increase student participation but could decrease anonymity. Program faculty and staff should consider the goal of student participation when making such decisions. Extra-credit or classroom points may increase participation, but anonymity may increase honesty in student responses.

Evaluation and Use of Results

A well-designed evaluation similar to that described previously can provide the program with valuable information. The information collected can be triangulated and analyzed to determine areas of needed improvement or alignment of the program with its goals and objectives.

The most important concept at this point is the use of the results. The results should be closely analyzed to determine what parts of the program are effective and where improvements should be made. Through the use of such evaluative methods, programs may find, for example, that they need to make curricular changes to ensure student preparedness for the local job market, add review sessions to assist students in preparation for the RHIA exam, or revise course schedules to increase student satisfaction. Key stakeholders should be involved in the analysis of the data and in the identification of areas for improvement. The results may point to specific programmatic improvements that should be made, or the program faculty and staff may need to conduct further research to identify the improvements needed to meet the program’s goals and objectives. Stakeholder involvement is important in obtaining support for improvement activities and changes to the program. Making sure that faculty and staff understand the data and areas for improvement is key to gaining their buy-in for the improvement implementation process.

As with any improvement process, changes should be followed by further monitoring to ensure true program improvement. This model provides ongoing program improvement based on data and input from a wide variety of sources, which, in turn, can enable programs to truly meet their goals.

The results of such an evaluation are extremely important in the current HIM academic environment. The latest CAHIIM curricular changes have only recently been implemented, and additional curriculum changes are forthcoming; the new competencies allow for a focus on local needs. Thus, formal evaluation of the effectiveness of HIM programs is both warranted and necessary for ongoing improvement of HIM academic programs.

Conclusions

Thoughtful use of the various evaluation and assessment processes required by CAHIIM, as well as those required by colleges, universities, and individual programs, can be combined with other program evaluation processes to result in valuable data that can be used to improve the program. Many methods of evaluation can lead to useful programmatic data. This paper provides an example of one evaluation method, a utilization-focused evaluation. Regardless of the specific method used, the use of the results to improve educational programs is essential. While certain evaluation types may have a specific focus on quality improvement, many types of evaluation can be used for such purposes. Reviewing evaluation and assessment results and providing those results to key stakeholders are the first steps. Using the results to determine areas of needed improvement and change, implementing those changes, and monitoring the results of those changes are further steps needed to ensure true program improvement. Evaluation and assessment reports placed on a shelf and never reviewed or used will not improve an educational program or ensure its effectiveness. True review and use of the results will provide insight into areas that need improvement and the means to truly achieve programmatic goals. By using evaluation tools for improvement, programs can best ensure that their students understand the HIM field and are prepared for successful careers in HIM.

 

Jennifer Peterson, PhD, RHIA, CTR, is an assistant professor in the Health Information Management Program at Illinois State University in Normal, IL.

 

Notes

  1. Tyler, Ralph W. Basic Principles of Curriculum and Instruction. Chicago, IL: University of Chicago Press, 1949, 105.
  2. Grayson, Thomas E. “Program Evaluation, Performance Measures, and Evaluability Assessment in Higher Education.” In Charles Secolsky and D. Brian Denison (Editors), Handbook on Measurement, Assessment, and Evaluation in Higher Education. 2nd ed. New York, NY: Routledge, 2018, 457.
  3. Judd, Thomas, and Bruce Keith. “Implementing Undergraduate Student Learning Outcomes Assessment at the Program and Institutional Levels.” In Charles Secolsky and D. Brian Denison (Editors), Handbook on Measurement, Assessment, and Evaluation in Higher Education. 2nd ed. New York, NY: Routledge, 2018, 75.
  4. Ibid., 70.
  5. Ibid., 71.
  6. Ibid., 75.
  7. Stake, Robert E., Gloria Contreras, and Isabel Arbesu. “Assessing the Quality of a University, Particularly Its Teaching.” In Charles Secolsky and D. Brian Denison (Editors), Handbook on Measurement, Assessment, and Evaluation in Higher Education. New York, NY: Routledge, 2018, 44.
  8. Ibid.
  9. Judd, Thomas, and Bruce Keith. “Implementing Undergraduate Student Learning Outcomes Assessment at the Program and Institutional Levels,” 77.
  10. Miles, Matthew B., A. Michael Huberman, and Johnny Saldana. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: Sage, 2014, 299.
  11. Guthrie, Lou A., and Jeffrey A. Seybert. “Benchmarking in Community Colleges.” In Charles Secolsky and D. Brian Denison (Editors), Handbook on Measurement, Assessment, and Evaluation in Higher Education. New York, NY: Routledge, 2018, 115.
  12. Commission on Accreditation for Health Informatics and Information Management Education. “Welcome to CAHIIM.” Available at http://www.cahiim.org, para. 1.
  13. Commission on Accreditation for Health Informatics and Information Management Education. Standards and Interpretations for Accreditation of Baccalaureate Degree Programs in Health Information Management. 2012. Available at http://www.cahiim.org/documents/2012_HIM_Bacc_Stndrds.pdf.
  14. Patton, Michael Quinn. Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage, 2012.
  15. Stufflebeam, Daniel L., and Chris L. S. Coryn. Evaluation Theory, Models, and Applications. San Francisco, CA: Jossey-Bass, 2014, 215.
  16. Patton, Michael Quinn. Essentials of Utilization-Focused Evaluation, 66.
  17. Stufflebeam, Daniel L., and Chris L. S. Coryn. Evaluation Theory, Models, and Applications, 215.
  18. Ibid., 218.
  19. Patton, Michael Quinn. Essentials of Utilization-Focused Evaluation, 25.
  20. Stufflebeam, Daniel L., and Chris L. S. Coryn. Evaluation Theory, Models, and Applications, 404.
  21. Glaser, Barney G. “The Constant Comparative Method of Qualitative Analysis.” Social Problems 12, no. 4 (1965): 437.
