EHR Usability on Mobile Devices

by Ryan Sandefer, MA, CPHIT; Danika Brinda, MA, RHIA, CHPS; Janelle Wapola, MA, RHIA; Shirley Eichenwald Maki, MBA, RHIA, FAHIMA; and David Marc, MBS

Abstract

Currently, minimal requirements exist for assessing the usability of electronic health record (EHR) systems. Usability requirements are especially lacking for the increasing use of mobile devices to access EHRs. Therefore, the authors investigated the usability of three commercially available certified ambulatory EHR systems as accessed on mobile devices. The study used the System Usability Scale (SUS) among a sample of college-level health professions students. Twenty-seven students participated in the study. Two-way analysis of variance (ANOVA) with bootstrap resampling demonstrated that the EHR system specifically built for mobile devices had significantly higher usability scores, that graduate students rated the usability of EHR systems higher than undergraduate students did, and that there was a significant relationship between the evaluator’s prior use of EHR systems and usability scores. These findings suggest that usability requirements should extend to all EHR platforms and that students should be trained on and exposed to EHRs early in their education.

Introduction

The adoption of electronic health record (EHR) systems has the potential to improve the quality, effectiveness, and efficiency of healthcare; advance research; and engage patients and families in their healthcare.1 The EHR incentive program created through the American Recovery and Reinvestment Act (ARRA) provides monetary incentives to eligible healthcare professionals and hospitals for purchasing “certified” EHR technology and using the technology to meet a host of technical objectives that constitute “meaningful use.” Notably absent from the Stage 1 certification criteria and meaningful use objectives are any requirements related to the usability of the EHR software. The exclusion of usability requirements is a major shortcoming of the incentive program because the usability of these systems is crucial to their success. Improved usability can reduce errors, thereby leading to improved patient safety and increased efficiency while also enabling clinicians to spend more time with patients.2 Meaningful use Stage 2 criteria included a requirement for vendors to conduct usability testing.

Since the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act, usability has received considerable attention from policy makers, federal agencies, the vendor community, professional associations, and various advocacy groups. The recent testimony of Farzad Mostashari, the national coordinator for health information technology, illustrates the focus on the topic: “The goal is clear . . . there should be a greater emphasis on usability, a better science around how to measure usability, and an improved ability of providers to use usability in their purchasing decisions.”3 The implementation and meaningful use of EHRs are fundamental for health information management (HIM) professionals, but the usability of these systems is of critical importance across all health professions. As the healthcare industry increases its adoption of EHR systems, the use of mobile devices by clinicians will increase in tandem.

According to results of the Healthcare Information and Management Systems Society (HIMSS) 2012 Mobile Technology Survey, clinicians in the United States make extensive use of mobile technology. The findings show that 80 percent of physicians and 73 percent of nonphysician clinicians use mobile technology to “facilitate” patient care. The vast majority of mobile technology currently being provided to clinicians falls into either the laptop or mobile computer cart categories (89 percent), while about 50 percent of respondents reported providing clinicians with mobile phones or tablets. About 75 percent of respondents reported that they plan to expand the number and types of mobile devices provided to clinicians for facilitating patient care. Perhaps more importantly, 19 percent of respondents reported that the mobile technology does not fit into their workflow.4 Clearly, usability of EHR systems on mobile devices is an increasingly important factor in product selection.

Background

Usability, according to the International Organization for Standardization (ISO) definition adopted by the National Institute of Standards and Technology (NIST), is “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”5 HIMSS defines usability as having nine attributes: simplicity, naturalness, consistency, forgiveness and feedback, effective use of language, efficient interactions, effective information presentation, preservation of context, and minimization of cognitive load.6

The US Department of Health and Human Services developed a website (usability.gov) to share resources to assist individuals and organizations in promoting usability and user-centered design. The website defines usability as a combination of factors, including intuitive design, ease of learning, efficiency of use, memorability, error frequency and severity, and subjective dissatisfaction.7

Electronic health records, mobile technologies, and usability are on the cutting edge of health informatics and information management. Research has demonstrated a need for adopting usability and human factors principles in the design of EHR systems.8–10 The landmark Institute of Medicine report titled To Err Is Human identified the need for well-designed EHR systems to improve patient safety.11 Poor usability of EHR systems has been identified as a barrier to EHR adoption.12, 13 Gans et al. found that usability and productivity concerns outranked cost as barriers to EHR adoption.14 Multiple studies by the Agency for Healthcare Research and Quality (AHRQ) and HIMSS have reported the need for developing usable EHR systems by adopting strategies such as a user-centered design approach.15–17 DesRoches et al. summarized the issue in stating, “Improving the usability of electronic health records may be critical to the continued successful diffusion of the technology.”18

However, current evidence suggests that usability and human factors principles are not commonly used by certified EHR vendors at any point throughout the product design, development, and use life cycle. “Formal usability assessments, such as task-centered user tests, heuristic evaluations, cognitive walkthroughs, and card sorts, are not a common activity during the design and development process for the majority of vendors. Lack of time, personnel, and budget resources were cited as reasons for this absence.”19 The Institute of Medicine report titled Health IT and Patient Safety: Building Safer Systems for Better Care reported that “[w]hile many vendors already have some types of quality management principles and processes in place, not all vendors do and to what standard they are held is unknown.”20

Zhang and Walji have identified fourteen usability principles for the design of electronic health records,21 but the question facing the EHR industry is how best to evaluate the usability of these systems. There is no shortage of methods—both quantitative and qualitative—for evaluating the usability of technologies.

Because of concerns regarding the effective design of EHR systems and the implications for patient safety, productivity, fatigue, error, and user satisfaction, usability has emerged as a candidate for inclusion in the EHR incentive program’s requirements for EHR certification testing. In fact, NIST guidelines listed usability testing as a certification requirement in the final rule22 for Stage 2 of the EHR incentive program, specifically “safety-enhanced design” as demonstrated by incorporating all of the data elements defined in the Customized Common Industry Format Template for EHR Usability Testing (NISTIR 7742).23 Therefore, the Office of the National Coordinator for Health Information Technology (ONC) has now embedded elements of user-centered design (UCD) directly into the EHR incentive program. The testing criteria for the 2014 EHR certification program provide specific guidance on the capabilities that EHR technology must exhibit.24 In order for an EHR system to be considered a certified EHR under the ONC’s 2014 certification program and therefore be eligible for incentive payments, vendors will be required to demonstrate that their systems incorporate UCD by meeting the eight certification criteria that are most related to patient safety.

To comply with the new usability requirements, vendors will be required to show evidence that they used established UCD evaluation methods to design and test their software. The final rule25 is not prescriptive as to any specific methods, but points developers to established methods as defined in guidelines such as ISO 9241-11, ISO 13407, ISO 16982, and NISTIR 7741.26 Developers will be required to document how UCD processes were applied to EHR systems before they are deemed certified, and the documentation must contain all of the “data elements defined in the Customized Common Industry Format Template for EHR Usability Testing (NISTIR 7742).”27, 28

The inclusion of this criterion did not come without strong opposition from the EHR vendor community, as noted in the summary of comments within the final rule:

We note, however, that of all of the proposed certification criteria, this one appeared to be the most polarizing. Provider organizations, hospitals, and consumer advocates supported its inclusion in certification and most (but not all) EHR technology developers expressed some form of opposition—with concern about the public availability of user-centered design testing results.29

This final rule is the beginning of what is sure to be a movement toward requiring additional usability testing within the EHR incentive program. Considerable work is being conducted at the federal level to develop usability guidelines focused specifically on EHRs.

NIST, ONC, and AHRQ are also collaborating on a multiyear project to develop evaluation methods specifically for the usability of EHR systems. The project involves short- and long-term goals, including a plan for performance-based usability testing with pass/fail criteria.30

Given the increasing importance of EHR usability testing and clinicians’ rising use of mobile technologies (tablets in particular) to facilitate patient care, our study employed established use cases and a validated usability instrument to assess three commercially available, certified EHR products on Apple iPads.

Methods

The study aimed to assess the usability of three EHR systems from the perspective of health science students at the College of St. Scholastica in Duluth, Minnesota. All health science students studying on the Duluth campus were e-mailed and invited to participate in the research study. Those students who volunteered for the study (n = 27) were accepted as participants. Participation included signing a consent form, viewing a short (approximately 10-minute) demonstration of an EHR product for basic navigational purposes, completing a series of tasks related to the use case, and completing a survey regarding the usability of the software.

The research used an ambulatory care–based scenario developed by NIST specifically for usability testing, involving a patient with chronic, complex conditions being cared for by a nurse practitioner. The research team built the patient specifics of the scenario into each of the three certified EHR products, including patient age, race, diagnoses, problems, medications, and smoking status. The scenario involved a patient presenting to a primary care clinic for a recheck of weight and diabetes, and it included the tasks of reviewing the medication list, reviewing patient lab results, modifying active medications, prescribing new medications, updating the problem list, ordering a consult, and documenting a progress note (to access the entire scenario, see pages 88–90 of Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records [NISTIR 7804], available at http://www.nist.gov/customcf/get_pdf.cfm?pub_id=909701).31

Upon completion of the scenario, the participants were asked to complete a survey using the Qualtrics web-based survey software. The survey consisted of 20 items, including the 10 Likert-scale items from the System Usability Scale (SUS) published in the NIST publication Customized Common Industry Format Template for Electronic Health Record Usability Testing (NISTIR 7742),32 available at http://www.nist.gov/customcf/get_pdf.cfm?pub_id=907312. The purpose of the questionnaire was to evaluate the efficiency of system navigation, system design attributes, and end-user satisfaction.

In this study, participants used one of three different EHR systems, designated EHR 1, EHR 2, and EHR 3. The SUS was used to compare the usability of the EHR systems. The SUS yields a score ranging from 0 to 100 based on each participant’s responses, with higher scores indicating a more usable system.
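For reference, SUS scoring follows Brooke’s standard method: each odd-numbered (positively worded) item contributes the response minus 1, each even-numbered (negatively worded) item contributes 5 minus the response, and the raw total is multiplied by 2.5. The Python sketch below illustrates the computation; the example responses are hypothetical, not data from this study.

```python
# A minimal sketch of standard SUS scoring (Brooke's method); the sample
# responses below are hypothetical, not data from this study.

def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    raw = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1;
        # even-numbered (negatively worded) items contribute 5 - r.
        raw += (r - 1) if i % 2 == 1 else (5 - r)
    return raw * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```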

Because of a low sample size in each EHR group (EHR 1, 10 participants; EHR 2, 4 participants; and EHR 3, 10 participants), we employed a bootstrapping method in which we resampled the SUS scores 100 times with replacement for each EHR group. Two-way ANOVA and Tukey post hoc tests were used to compare the mean SUS scores among the three EHR groups, in separate analyses, by academic standing (graduate or undergraduate) and by history of EHR use (yes or no).
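The sketch below illustrates this analysis pipeline in Python with pandas and statsmodels. It is only an approximation under stated assumptions: the study does not report which statistical package was used, the scores shown are hypothetical, and the bootstrap is interpreted here as drawing 100 scores with replacement per EHR group.

```python
# A minimal sketch of the bootstrap-and-ANOVA procedure described above,
# using pandas and statsmodels. All scores below are hypothetical; the
# real analysis would start from the 24 observed SUS scores.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical observed SUS scores by EHR group and academic standing.
observed = pd.DataFrame({
    "sus": [82.5, 77.5, 70.0, 85.0, 62.5, 55.0, 90.0, 47.5, 72.5, 60.0],
    "ehr": ["EHR1", "EHR1", "EHR2", "EHR2", "EHR3",
            "EHR3", "EHR1", "EHR3", "EHR2", "EHR3"],
    "standing": ["grad", "ugrad", "grad", "ugrad", "grad",
                 "ugrad", "grad", "ugrad", "ugrad", "grad"],
})

# Bootstrap: draw 100 resamples with replacement within each EHR group.
boot = pd.concat(
    group.sample(n=100, replace=True, random_state=0)
    for _, group in observed.groupby("ehr")
)

# Two-way ANOVA: main effects of EHR type and academic standing,
# plus their interaction, on the bootstrapped SUS scores.
model = ols("sus ~ C(ehr) * C(standing)", data=boot).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post hoc test comparing mean SUS scores across EHR groups.
print(pairwise_tukeyhsd(boot["sus"], boot["ehr"]))
```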

Results

Of the 27 participants, 24 completed the questionnaire in its entirety, and therefore those 24 responses were used in the statistical analyses. The results of a two-way ANOVA revealed a main effect of EHR type (F = 48.12; p < .001) and academic standing (F = 4.96; p = .027) in addition to an interaction effect (F = 22.7; p < .001) when comparing SUS scores (see Figure 1). As shown in Table 1 and revealed by the Tukey post hoc test, there were significantly higher SUS scores for participants who used EHR 1 (p < .001) and EHR 2 (p < .001) when compared to EHR 3. Also, graduate students had higher SUS scores than undergraduate students (p = .04). There was an interaction by which graduate students tended to have higher SUS scores using EHR 2 compared to undergraduate students (p < .001).

We also compared SUS scores for each EHR group by history of EHR use. A two-way ANOVA revealed a main effect for EHR type (F = 71.71; p < .001) and whether participants had a history of EHR use (F = 73.66; p < .001) but no interaction effect (F = 1.20; p = .303) on SUS scores. A Tukey post hoc test revealed that SUS scores were significantly higher for EHR 1 (p < .001) and EHR 2 (p < .001) when compared to EHR 3. In addition, participants who did not have previous experience using an EHR had higher SUS scores than those who did have previous EHR experience (p < .001; see Table 2).

Discussion

To our knowledge, this is the first study to evaluate the usability of EHRs accessed via iPads by a group of health science students. Overall, the results of the study are in agreement with our predictions. We were able to show significant differences in the usability scores based upon academic standing (graduate vs. undergraduate) and previous experience using EHRs.

This research clearly demonstrates that certain EHR systems are more usable on iPads (handheld mobile devices) than others. The research shows a clear association between the design of the system and overall usability scores—the EHR system designed specifically for the iPad (EHR 1) had the highest overall usability score. The other two EHRs relied on third-party applications to access the EHR systems remotely. EHR 2 used LogMeIn to remotely access a desktop computer (a workaround, necessitated by the EHR’s reliance on Adobe Flash, for accessing the EHR from the iPad). EHR 3 used Citrix to create a virtual private network for remote access to the EHR system. Despite the fact that both of these systems used third-party applications to access the EHR remotely, one of them (EHR 2) had a significantly higher usability score than the other (EHR 3).

When access via iPads or other mobile technology is deemed a requirement of an EHR system that is being evaluated by an organization, it is important to thoroughly assess the supporting technology needed to access and effectively use the EHR on that platform. Both EHR 2 and EHR 3 are promoted as offering an “iPad EHR”; however, both products require the use, and potentially the purchase, of third-party software to access the EHR on the iPad. In addition, one of the third-party software products, LogMeIn, requires that the information be available and downloaded on the main computer to which the iPad is directly linked. The findings of this study suggest that remote access to an EHR system on mobile devices can negatively affect usability when compared to a dedicated mobile application.

There was a significant difference in usability scores based on students’ prior experience with EHR systems. Students who reported having prior experience with EHRs rated the usability of the systems significantly higher than those without prior experience. Thus, having foundational knowledge of EHR systems gives a student the understanding to better evaluate EHRs on the basis of perceptions of ease of navigation, system complexity, potential training needs, and system learnability. This finding demonstrates the need for academic programs to increase the level of student exposure to EHRs throughout the course of study. If future professionals are to possess the skills and competencies needed to evaluate systems effectively, whether as clinicians, technology super-users, or members of technology selection committees, they need to be exposed to these products as early as possible in their careers.

Undergraduate students, on average, rated the EHRs as less usable than graduate students did. This finding suggests that graduate students are less critical of the usability of EHRs, which could be partly explained by their greater maturity, experience with healthcare, and prior experience with EHR systems. Again, it is important to expose students to EHRs early and often to overcome the novelty of simply working with this type of new technology.

Conclusion

This study is useful on many levels. As described above, health informatics and information management students will be responsible for understanding certified EHR systems and must be proficient in usability concepts. UCD methods must be employed by vendors for their EHR systems to become certified under the 2014 certification standards for the EHR incentive program. However, the requirement is not prescriptive, so vendors have significant flexibility in how they interpret it in order to comply with the standards. The final rule cited above refers vendors to existing methods, but the bar has been set relatively low in this area. It is also important to remember that usability testing needs to be documented only for the version of the vendor’s product that is used for the actual certification test under the ONC’s EHR certification program. If a version of the product can be accessed by a mobile device, for example, that version does not technically require usability testing if it was not used for the certification test.

Nevertheless, it is critical that organizations and health informatics and information management professionals understand how to engage their vendors regarding the methods used to comply with the new requirements. The findings from this study indicate that in order to prepare graduates to meet these workforce needs, academic programs should increase the level of student exposure to EHRs throughout their course of study to provide students with experience that enhances their ability to evaluate the usability of EHR products.

 

Ryan Sandefer, MA, CPHIT, is chair and assistant professor in the Department of Health Informatics and Information Management at the College of St. Scholastica in Duluth, MN.

Danika Brinda, MA, RHIA, CHPS, is assistant professor in the Department of Health Informatics and Information Management at the College of St. Scholastica in Duluth, MN.

Janelle Wapola, MA, RHIA, is assistant professor in the Department of Health Informatics and Information Management at the College of St. Scholastica in Duluth, MN.

Shirley Eichenwald Maki, MBA, RHIA, FAHIMA, is emeritus faculty in the Department of Health Informatics and Information Management at the College of St. Scholastica in Duluth, MN.

David Marc, MBS, is a PhD student in health informatics at the University of Minnesota and is an adjunct faculty member in the Department of Health Informatics and Information Management at the College of St. Scholastica in Duluth, MN.

Notes

1. Corrigan, J. M. “Crossing the Quality Chasm.” In P. P. Reid, W. D. Compton, J. H. Grossman, and G. Fanjiang (Editors), Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington, DC: National Academies Press, 2005, 95–97.

2. Edwards, P. J., K. P. Moloney, J. A. Jacko, and F. Sainfort. “Evaluating Usability of a Commercial Electronic Health Record: A Case Study.” International Journal of Human-Computer Studies 66, no. 10 (2008): 718–28.

3. Terry, K. “Mostashari: ‘Usability,’ Better Decision-Making, Key EHR Success.” FierceHealthIT. May 9, 2011. Available at http://www.fiercehealthit.com/story/mostashari-usability-key-ehr-standards/2011-05-09.

4. HIMSS. 2nd Annual HIMSS Mobile Technology Survey. HIMSS Analytics. 2012. Available at http://www.himssanalytics.org/research/AssetDetail.aspx?pubid=81559&tid=131.

5. National Institute of Standards and Technology (NIST). “Usability.” Available at http://www.nist.gov/healthcare/usability/ (accessed July 6, 2013).

6. HIMSS. “EHR Usability Basics.” Available at http://www.himss.org/resourcelibrary/TopicList.aspx?MetaDataID=1719&navItemNumber=17121 (accessed April 15, 2013).

7. Department of Health and Human Services. “Usability Evaluation.” Usability.gov. Available at http://www.usability.gov/what-and-why/usability-evaluation.html (accessed July 6, 2013).

8. Edwards, P. J., K. P. Moloney, J. A. Jacko, and F. Sainfort. “Evaluating Usability of a Commercial Electronic Health Record: A Case Study.”

9. Karsh, B. T. “Beyond Usability: Designing Effective Technology Implementation Systems to Promote Patient Safety.” Quality and Safety in Health Care 13, no. 5 (2004): 388–94.

10. Scanlon, M. C., and E. M. Densmore. “Human Factors and Children: Implications for Patient Safety.” In R. Tartaglia, S. Bagnara, T. Bellandi, and S. Abolino (Editors), Healthcare Systems Ergonomics and Patient Safety. London: Taylor & Francis, 2005, 127.

11. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 1999.

12. Middleton, B., M. Bloomrosen, M. A. Dente, B. Hashmat, R. Koppel, J. M. Overhage, et al. “Enhancing Patient Safety and Quality of Care by Improving the Usability of Electronic Health Record Systems: Recommendations from AMIA.” Journal of the American Medical Informatics Association (2013). doi:10.1136/amiajnl-2012-001458.

13. Blumenthal, D. “Launching HITECH.” New England Journal of Medicine 362, no. 5 (2010): 382–85.

14. Gans, D., J. Kralewski, T. Hammons, and B. Dowd. “Medical Groups’ Adoption of Electronic Health Records and Information Systems.” Health Affairs 24, no. 5 (2005): 1323–33.

15. Belden, J. L., R. Grayson, and J. Barnes. Defining and Testing EMR Usability: Principles and Proposed Methods of EMR Usability Evaluation and Rating. Chicago, IL: Healthcare Information and Management Systems Society, 2009.

16. Armijo, D., C. McDonnell, and K. Werner. Electronic Health Record Usability: Interface Design Considerations (AHRQ Publication No. 09(10)-0091-2-EF). Rockville, MD: Agency for Healthcare Research and Quality, October 2009.

17. McDonnell, C., K. Werner, and L. Wendel. Electronic Health Record Usability: Vendor Practices and Perspectives (AHRQ Publication No. 09(10)-0091-3-EF). Rockville, MD: Agency for Healthcare Research and Quality, 2010.

18. DesRoches, C. M., E. G. Campbell, S. R. Rao, K. Donelan, T. G. Ferris, A. Jha, et al. “Electronic Health Records in Ambulatory Care—A National Survey of Physicians.” New England Journal of Medicine 359, no. 1 (2008): 50–60.

19. McDonnell, C., K. Werner, and L. Wendel. Electronic Health Record Usability: Vendor Practices and Perspectives (AHRQ Publication No. 09(10)-0091-3-EF), p. 6.

20. Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press, 2012, p. 9.

21. Zhang, J., and M. F. Walji. “TURF: Toward a Unified Framework of EHR Usability.” Journal of Biomedical Informatics 44, no. 6 (2011): 1056–67.

22. Schumacher, R. M., and S. Z. Lowry. Customized Common Industry Format Template for Electronic Health Record Usability Testing (NISTIR 7742). Gaithersburg, MD: National Institute of Standards and Technology, 2010.

23. Department of Health and Human Services. “Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology; Final Rule.” 45 CFR Part 170. Federal Register 77, no. 171 (September 4, 2012): 54187. Available at http://www.gpo.gov/fdsys/pkg/FR-2012-09-04/pdf/2012-20982.pdf.

24. Office of the National Coordinator for Health Information Technology. Test Procedure for §170.314(g)(3) Safety-enhanced Design. 2014 Edition. Approved Test Procedure Version 1.3. March 29, 2013. Available at http://www.healthit.gov/sites/default/files/170.314g3safetyenhanceddesign_2014_tp_approved_v1.3_0.pdf.

25. Department of Health and Human Services. “Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology; Final Rule.”

26. Schumacher, R. M., and S. Z. Lowry. NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records (NISTIR 7741). Gaithersburg, MD: National Institute of Standards and Technology, 2010.

27. Department of Health and Human Services. “Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology; Final Rule.”

28. Schumacher, R. M., and S. Z. Lowry. Customized Common Industry Format Template for Electronic Health Record Usability Testing (NISTIR 7742).

29. Department of Health and Human Services. “Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology; Final Rule.”

30. National Institute of Standards and Technology (NIST). “Usability Framework.” Available at http://www.nist.gov/healthcare/usability/framework.cfm (accessed April 15, 2013).

31. Lowry, S. Z., M. T. Quinn, M. Ramaiah, R. M. Schumacher, E. S. Patterson, R. North, et al. Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records (NISTIR 7804). Gaithersburg, MD: National Institute of Standards and Technology, 2012.

32. Schumacher, R. M., and S. Z. Lowry. Customized Common Industry Format Template for Electronic Health Record Usability Testing (NISTIR 7742).


Ryan Sandefer, MA, CPHIT; Danika Brinda, MA, RHIA, CHPS; Janelle Wapola, MA, RHIA; Shirley Eichenwald Maki, MBA, RHIA, FAHIMA; and David Marc, MBS. “EHR Usability on Mobile Devices.” Educational Perspectives in Health Informatics and Information Management (Summer 2013): 1-11.
