New AHRQ-Funded Report Provides Snapshot of Electronic Health Record (EHR) Vendor Usability Processes and Practices

Received via email on May 27, 2010: “The Agency for Healthcare Research and Quality (AHRQ) has issued a new report that focuses on assessing and improving the state of usability in Electronic Health Record (EHR) systems. Key recommendations from the project’s expert panel include establishing usability/information design of EHRs as an essential part of the certification requirements for EHRs, basing certification on a practical and fair process of usability evaluation, and designing certification programs for EHR usability in a way that focuses on objective and important aspects of system usability. Select to access the report and learn more about the panel’s recommendations (PDF file).

“AHRQ is working closely with the National Institute of Standards and Technology (NIST) and the Office of the National Coordinator to address the recommendations identified in its research. In June, AHRQ plans to award a follow-on project for the development, testing, and dissemination of an easy-to-use, objective, and evidence-based toolkit that healthcare organizations can use to evaluate critical aspects of their EHR systems’ usability, accessibility, and information design. In addition, NIST is currently seeking applications for development of an EHR usability evaluation framework and will host a meeting, titled “Health Care IT Usability: Strategy, Research, Implementation,” in Gaithersburg, MD, on July 13, 2010.”

The report was written by Cheryl McDonnell, Kristen Werner, and Lauren Wendel, and was prepared by James Bell Associates and the Altarum Institute. Suggested citation: McDonnell C, Werner K, Wendel L. Electronic Health Record Usability: Vendor Practices and Perspectives. AHRQ Publication No. 09(10)-0091-3-EF. Rockville, MD: Agency for Healthcare Research and Quality. May 2010.

Excerpted from the PDF report:
Electronic Health Record Usability:
Vendor Practices and Perspectives
May 2010

Executive Summary 

One of the key factors driving the adoption and appropriate utilization of electronic health record (EHR) systems is their usability. (1) However, a recent study funded by the Agency for Healthcare Research and Quality (AHRQ) identified information about current EHR vendor usability processes and practices during the different phases of product development and deployment as a key research gap. (2) To address this gap and identify actionable recommendations to move the field forward, AHRQ contracted with James Bell Associates and the Altarum Institute to conduct a series of structured discussions with selected certified EHR vendors and to solicit recommendations based on these findings from a panel of multidisciplinary experts in this area.

The objectives of the project were to understand these vendors’ processes and practices with regard to:

     •  The existence and use of standards and “best practices” in designing, developing, and deploying products.

     •  Testing and evaluating usability throughout the product life cycle.

     •  Supporting postdeployment monitoring to ensure patient safety and effective use.

In addition, the project solicited the perspectives of certified EHR vendors with regard to the role of certification in evaluating and improving usability.

The key findings from the interviews are summarized below.

     •  All vendors expressed a deep commitment to the development and provision of usable EHR product(s) to the market.

     •  Although vendors described an array of usability engineering processes and the use of end users throughout the product life cycle, practices such as formal usability testing, the use of user-centered design processes, and specific resource personnel with expertise in usability engineering are not common.

     •  Specific best practices and standards of design, testing, and monitoring of the usability of EHR products are not readily available. Vendors reported use of general (software) and proprietary industry guidelines and best practices to support usability. Reported perspectives on critical issues such as allowable level of customization by customers varied dramatically.

     •  Many vendors did not initially address potential negative impacts of their products as a priority design issue. Vendors reported a variety of formal and informal processes for identifying, tracking, and addressing patient safety issues related to the usability of their products.

     •  Most vendors reported that they collect, but do not share, lists of incidents related to usability as a subset of user-reported “bugs” and product-enhancement requests. While all vendors described a process, procedures to classify and report usability issues of EHR products are not standardized across the industry.

     •  No vendors reported placing specific contractual restrictions on disclosures by system users of patient safety incidents that were potentially related to their products.

     •  Disagreement exists among vendors as to the ideal method for ensuring that usability standards and best practices are evaluated and communicated across the industry as well as to customers. Many view the inclusion of usability in product certification as part of a larger “game” for staying competitive, but also as potentially too complex or something that will “stifle innovation” in this area.

     •  Because nearly all vendors view usability as their chief competitive differentiator, collaboration among vendors with regard to usability is almost nonexistent.

     •  To overcome competitive pressures, many vendors expressed interest in an independent body guiding the development of voluntary usability standards for EHRs. This body could build on existing models of vendor collaboration, which are currently focused predominantly on issues of interoperability.

Based on the feedback gained from the interviews and from their experience with usability best practices in health care and other industries, the project expert panel made the following recommendations:

     •  Encourage vendors to address key shortcomings that exist in current processes and practices related to the usability of their products. Most critical among these are a lack of adherence to formal user-centered design processes and a lack of diversity in end users involved in the testing and evaluation process.

     • Include in the design and testing process, and collect feedback from, a variety of end-user contingents throughout the product life cycle. Potentially undersampled populations include end users from nonacademic backgrounds with limited past experience with health information technology and those with disabilities.

     •  Support an independent body for vendor collaboration and standards development to overcome market forces that discourage collaboration, development of best practices, and standards harmonization in this area.

     •  Develop standards and best practices in use of customization during EHR deployment.

     •  Encourage formal usability testing early in the design and development phase as a best practice, and discourage dependence on postdeployment review supporting usability assessments.

     •  Support research and development of tools that evaluate and report EHR ease of learning, effectiveness, and satisfaction both qualitatively and quantitatively.

     •  Increase research and development of best practices supporting designing for patient safety.

     •  Design certification programs for EHR usability in a way that focuses on objective and important aspects of system usability.

Background

Encouraged by Federal leadership, significant investments in health information technology (IT) are being made across the country. While the influx of capital into the electronic health record (EHR)/health information exchange (HIE) market will undoubtedly stimulate innovation, there is the corresponding recognition that this may present an exceptional opportunity to guide that innovation in ways that benefit a significant majority of potential health IT users.

One of the key factors driving the adoption and appropriate utilization of EHR systems is their usability. (1) While recognized as critical, usability has not historically received the same level of attention as software features, functions, and technical standards. A recent analysis funded by the Agency for Healthcare Research and Quality (AHRQ) found that very little systematic evidence has been gathered on the usability of EHRs in practice. Further review established a foundation of EHR user-interface design considerations, and an action agenda was proposed for the application of information design principles to the use of health IT in primary care settings. (2), (3)

In response to these recommendations, AHRQ contracted with James Bell Associates and the Altarum Institute to evaluate current vendor-based practices for integrating usability during the entire life cycle of the product, including the design, testing, and postdeployment phases of EHR development. A selected group of EHR vendors, identified through the support of the Certification Commission for Health Information Technology (CCHIT) and AHRQ, participated in semistructured interviews. The discussions explored current standards and practices for ensuring the usability and safety of EHR products and assessed the vendors’ perspectives on how EHR usability and information design should be certified, measured, and addressed by the government, the EHR industry, and its customers. Summary interview findings were then distributed to experts in the field to gather implications and recommendations resulting from these discussions.

Vendor Profiles

The vendors interviewed were specifically chosen to represent a wide distribution of providers of ambulatory EHR products. There was a representation of small businesses (fewer than 100 employees), medium-sized businesses (100-500 employees), and large businesses (more than 500 employees). The number of clinician users per company varied from 1,000 to over 7,000, and revenue ranged from $1 million to over $10 billion per year. The EHR products discussed came on the market in some form between the mid-1990s and 2007. All vendors except one had developed their EHR internally from the ground up, with the remaining one internally developing major enhancements for an acquired product. Many of these products were initially designed and developed based on a founding physician’s practice and/or established clinical processes. All companies reported that they are currently engaged in ground-up development of new products and/or enhancements of their existing ambulatory products. Many enhancements of ambulatory products center on updates or improvements in usability. Examples of new developments include changes in products from client-based to Web-based EHRs; general changes to improve the overall usability and look and feel of the product; and the integration of new technologies such as patient portals, personal health records, and tablet devices.

The full list of vendors interviewed and a description of their key ambulatory EHR products are provided in Appendixes I and II. The following discussion provides a summary of the themes encountered in these interviews.

Standards in Design and Development

End-User Involvement

All vendors reported actively involving their intended end users throughout the entire design and development process. Many vendors also have a staff member with clinical experience involved in the design and development process; for some companies the clinician was a founding member of the organization. Workgroups and advisory panels are the most common sources of feedback, with some vendors utilizing a more comprehensive participatory design approach, incorporating feedback from all stakeholders throughout the design process. Vendors seek this information to develop initial product requirements, as well as to define workflows, evaluate wireframes and prototypes, and participate in initial beta testing. When identifying users for workgroups, advisory panels, or beta sites, vendors look for clinicians who have a strong interest in technology, the ability to evaluate usability, and the patience to provide regular feedback. Clinicians meeting these requirements are most often found in academic medical centers. When the design concerns an enhancement to the current product, vendors often look toward users familiar with the existing EHR to provide this feedback.

“We want to engage with leadership-level partners as well as end users from all venues that may be impacted by our product.”

Design Standards and Best Practices

A reliance on end-user input and observation for ground-up development is seen as a requirement in the area of EHR design, where specific design standards and best practices are not yet well defined. Vendors indicated that appropriate and comprehensive standards were lacking for EHR-specific functionalities, and therefore they rely on general software design best practices to inform design, development, and usability. While these software design principles help to guide their processes, they must be adjusted to fit specific end-user needs within a health care setting. In addition to following existing general design guidelines such as Microsoft Common User Access, Windows User Interface, Nielsen Norman Group, human factors best practices, and releases from user interface (UI) and usability professional organizations, many vendors consult with Web sites, blogs, and professional organizations related to health IT to keep up to date with specific industry trends. Supplementing these outside resources, many vendors are actively developing internal documentation as their products grow and mature, with several reporting organized efforts to create internal documentation supporting product-specific standards and best practices that can be applied through future product updates and releases.

“There are no standards most of the time, and when there are standards, there is no enforcement of them. The software industry has plenty of guidelines and good best practices, but in health IT, there are none.”

Industry Collaboration

As these standards and best practices are being developed, they are not being disseminated throughout the industry. Vendors receive some information through professional organizations and conferences, but they would like to see a stronger push toward an independent body, either governmental or research based, to establish some of these standards. An independent body would be required because all vendors reported usability as a key competitive differentiator for their product; this creates a strong disincentive for industrywide collaboration. While all were eager to take advantage of any resources commonly applied across the industry, few were comfortable with sharing their internally developed designs or best practices for fear of losing a major component of their product’s competitiveness. Some vendors did report that they collaborate informally within the health IT industry, particularly through professional societies, trade conferences, and service on committees. For example, several vendors mentioned participation in the Electronic Health Record Association (EHRA), sponsored by the Healthcare Information and Management Systems Society (HIMSS), but noted that the focus of this group is on clinical vocabulary modeling rather than the usability of EHRs. Some interviewees expressed a desire to collaborate on standards issues that impact usability and patient safety through independent venues such as government or research agencies.

“The field is competitive so there is little sharing of best practices in the community. The industry should not look toward vendors to create these best practices. Other entities must step up and define [them] and let the industry adapt.”

Customization

In addition to the initial design and development process, vendors commonly work with end users to customize or configure specific parts of the EHR. Vendors differed in the extent to which they allowed and facilitated customization and noted the potential for introducing errors when customization is pursued. Most customizations involve setting security rules based on roles within a clinic and the creation of document templates that fit a clinic’s specific workflow. Many vendors view this process as a critical step toward a successful implementation and try to assist users to an extent in developing these items. While some vendors track these customizations as insight for future product design, they do not view the customizations as something that can be generalized to their entire user base, as so many are context specific. The level of customization varies according to vendor since vendors have different views about the extent to which their product can or should be customized. Vendors do not routinely make changes to the code or underlying interface based on a user request; however, the level to which end users can modify templates, workflows, and other interface-related characteristics varies greatly by vendor offering.

“You cannot possibly adapt technology to everyone’s workflow. You must provide the most optimum way of doing something which [users] can adapt.”
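
As a concrete illustration of the two customization surfaces the report describes, here is a minimal Python sketch of how role-based security rules and clinic-specific document templates might be represented. The structure and names are hypothetical assumptions for illustration only and are not drawn from any vendor's product.

```python
# Hypothetical clinic configuration illustrating the two customization types
# described above; the structure and all names are invented for illustration.
CLINIC_CONFIG = {
    "roles": {
        "physician":  {"can_sign_notes": True,  "can_edit_templates": True},
        "nurse":      {"can_sign_notes": False, "can_edit_templates": False},
        "front_desk": {"can_sign_notes": False, "can_edit_templates": False},
    },
    "templates": {
        # A clinic-specific visit-note template mirroring the local workflow.
        "well_child_visit": ["chief complaint", "growth chart",
                             "immunizations", "plan"],
    },
}

def allowed(role: str, action: str) -> bool:
    """Evaluate a role-based security rule from the clinic's configuration."""
    return CLINIC_CONFIG["roles"].get(role, {}).get(action, False)

print(allowed("nurse", "can_sign_notes"))      # False
print(allowed("physician", "can_sign_notes"))  # True
```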

Usability Testing and Evaluation

Informal Usability Assessments

Formal usability assessments, such as task-centered user tests, heuristic evaluations, cognitive walkthroughs, and card sorts, are not a common activity during the design and development process for the majority of vendors. Lack of time, personnel, and budget resources were cited as reasons for this absence; however, the majority expressed a desire to increase these types of formal assessments. There was a common perception among the vendors that usability assessments are expensive and time consuming to implement during the design and development phase. The level of formal usability testing appeared to vary by vendor size, with larger companies having more staff and resources dedicated to usability testing while smaller vendors relied heavily on informal methods (e.g., observations, interviews), which were more integrated into the general development process. Although some reported that they conduct a full gamut of both formal and informal usability assessments for some parts of the design process, most reported restricting their use of formal methods to key points in the process (e.g., during the final design phase or for evaluation of specific critical features during development).

“Due to time and resource constraints, we do not do as much as we would like to do. It is an area in which we are looking to do more.”

Measurement

Functions are selected for usability testing according to several criteria: frequency of use, task criticality and complexity, customer feedback, difficult design areas, risk and liability, effects on revenue, compliance issues (e.g., Military Health System, HIPAA [Health Insurance Portability and Accountability Act], and American Recovery and Reinvestment Act requirements), and potential impacts on patient safety. The most common or frequent tasks and tasks identified as inherently complex are specifically prioritized for usability testing. Neither benchmarks and standards for performance nor formalized measurements of these tasks are common in the industry. While some vendors do measure number of clicks and amount of time to complete a task, as well as error rates, most do not collect data on factors directly related to the usability of the product, such as ease of learning, effectiveness, and satisfaction. Many vendors reported that the amount of data collected does not allow for quantitative analysis, so instead they rely on more anecdotal and informal methods to ensure that their product works more effectively than paper-based methods and to inform their continuous improvements with upgrades and releases.

“Testing is focused more on functionality rather than usability.”
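
To make the measures named above concrete, here is a minimal sketch of computing clicks, error count, and time on task from a hypothetical event log. It is not any vendor's actual instrumentation; the event fields and task name are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    task_id: str      # task under test, e.g., "order-entry"
    kind: str         # "task_start", "click", "error", or "task_end"
    timestamp: float  # seconds since the session began

def task_metrics(events, task_id):
    """Compute clicks, errors, and time on task for a single task."""
    evts = [e for e in events if e.task_id == task_id]
    starts = [e.timestamp for e in evts if e.kind == "task_start"]
    ends = [e.timestamp for e in evts if e.kind == "task_end"]
    return {
        "clicks": sum(1 for e in evts if e.kind == "click"),
        "errors": sum(1 for e in evts if e.kind == "error"),
        "time_on_task": ends[0] - starts[0] if starts and ends else None,
    }

log = [
    Event("order-entry", "task_start", 0.0),
    Event("order-entry", "click", 2.1),
    Event("order-entry", "error", 5.4),
    Event("order-entry", "click", 6.0),
    Event("order-entry", "task_end", 9.8),
]
print(task_metrics(log, "order-entry"))
# {'clicks': 2, 'errors': 1, 'time_on_task': 9.8}
```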

Observation

Observation is the “gold standard” among all vendors for learning how users interact with their EHR. These observations usually take place within the user’s own medical practice, either in person or with software such as TechSmith’s Morae. (4) Vendors will occasionally solicit feedback on prototypes from user conferences in an informal, lab-like setting. These observations are typically used to gather information on clinical workflows or process flows, which are incorporated into the product design, particularly if the vendor is developing a new enhancement or entire product.

“[Methods with] low time and resource efforts are the best [to gather feedback]; wherever users are present, we will gather data.”

Changing Landscape

While informal methods of usability testing seem to be common across most vendors, the landscape appears to be changing toward increasing the importance of usability as a design necessity. Multiple vendors reported the current or recent development of formal in-house observation laboratories where usability testing could be more effectively conducted. Others reported the recent completion of policies and standards directly related to integrating usability more formally into the design process, and one reported a current contract with a third-party vendor to improve usability practices. While it is yet to be seen if these changes will materialize, it appeared that most respondents recognized the value of usability testing in the design process and were taking active steps to improve their practices.

Postdeployment Monitoring and Patient Safety

Feedback Solicitation

Vendors are beginning to incorporate more user feedback into earlier stages of product design and development; however, most of this feedback comes during the postdeployment phase. As all vendors interviewed are currently engaged in either the development of enhancements of current products or the creation of new products, the focus on incorporating feedback from intended end users at all stages of development has increased. Many of the EHRs have been on the market for over 10 years; as a result, many vendors rely heavily on this postdeployment feedback to evaluate product use and inform future product enhancements and designs. Maintaining contact with current users is of high priority to all EHR vendors interviewed and in many ways appeared to represent the most important source of product evaluation and improvement. Feedback is gathered through a variety of sources, including informal comments received by product staff, help desk support calls, training and implementation staff, sales personnel, online user communities, beta clients, advisory panels, and user conferences. With all of these avenues established, vendors appear to attempt to make it as easy as possible for current users to report potential issues as well as seek guidance from other users and the vendor alike.

“A lot of feedback and questions are often turned into enhancements, as they speak to the user experience of the product.”

Review and Response

Once the vendors receive both internal and external feedback, they organize it through a formal escalation process that ranks the severity of the issue based on factors such as number of users impacted, type of functionality involved, patient safety implications, effects on workflow, financial impact to client, regulation compliance, and the source of the problem, either implementation based or product based. In general, safety issues are given a high-priority tag. Based on this escalation process, priorities are set, resources within the organization are assigned, and timelines are created for directly addressing the reported issue. Multiple responses are possible depending on the problem. Responses can include additional user training, software updates included in the next product release, or the creation and release of immediate software patches to correct high-priority issues throughout the customer base.

“Every suggestion is not a good suggestion; some things do not help all users because not all workflows are the same.”
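
As an illustration of the escalation process described above, the sketch below ranks a reported issue with a weighted score over the factors the report lists. The weights, the 0-3 rating scale, and the thresholds are invented for illustration and do not reflect any vendor's actual process.

```python
# Escalation factors follow the report; weights, rating scale, and
# thresholds are assumptions for illustration only.
WEIGHTS = {
    "users_impacted":   3,  # number of users impacted
    "patient_safety":   5,  # patient safety implications
    "workflow_impact":  2,  # effects on workflow
    "financial_impact": 2,  # financial impact to client
    "regulatory":       4,  # regulation compliance
}

def severity_score(ratings: dict) -> int:
    """Weighted sum of 0-3 analyst ratings for each escalation factor."""
    return sum(WEIGHTS[f] * ratings.get(f, 0) for f in WEIGHTS)

def triage(ratings: dict) -> str:
    """Map a reported issue to one of the response channels the report names."""
    score = severity_score(ratings)
    if ratings.get("patient_safety", 0) >= 2 or score >= 30:
        return "immediate software patch"
    if score >= 15:
        return "fix in next product release"
    return "additional user training / enhancement backlog"

# Example: a safety-relevant issue affecting many users is escalated at once.
print(triage({"users_impacted": 3, "patient_safety": 2, "workflow_impact": 1}))
# -> immediate software patch
```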

Patient Safety

Adoption of health IT has the potential for introducing beneficial outcomes along many dimensions. It is well recognized, however, that the actual results achieved vary from setting to setting, (5) and numerous studies have reported health IT implementations that introduced unintended adverse consequences detrimental to patient care practice. (6) Surprisingly, in many interviews patient safety was not initially verbalized as a priority issue. Initial comments focused on creating a useful, usable EHR product, not one that addresses potential negative impacts on patient safety. Vendors rely heavily on physicians to notice potential hazards and report these hazards to them through their initial design and development advisory panels and postdeployment feedback mechanisms. After further questioning specific to adverse events, however, most vendors did describe having processes in place for monitoring potential safety issues on a variety of fronts. Some vendors become aware of patient safety issues through user feedback collected from patient safety offices and field visits; others educate support staff as well as users on how to identify potential patient safety risks and properly notify those who can address the issue. Once patient safety issues are identified, vendors address them in various ways, including tracking and reporting potential issues online, using patient safety review boards to quantify risk, and engaging cognitive engineers to uncover root causes.

When asked about client contracts, no vendors reported placing specific contractual restrictions on disclosures by system users of patient safety incidents that were potentially related to the EHR products, sharing patient safety incidents with other customers or other clinicians, or publishing research on how the EHR system affects patient safety or their clinical operations.

“Physicians are very acutely aware of how technology is going to impact patient safety; that’s their focus and motivation.”

Role of Certification in Evaluating Usability

Current Certification Strategies

The issue of certification is one that elicited strong opinions from most vendors. Certification of any type represents an investment of time and money to meet standards originating outside the organization. For many vendors, particularly the smaller ones, this investment was seen as burdensome. Vendors commonly described the current CCHIT certification process as part of a larger “game” they must play in order to remain viable in the marketplace, not as a way to improve their product(s). Accounts of functions added specifically for certification but not used by customers were common, as were specific instances where vendors felt meeting certification guidelines actually reduced aspects of their product’s quality. As one vendor noted, sometimes providing the functionality for “checking the box” to meet a certification requirement involves a backward step and a lowering of a potentially innovative internal standard. As meaningful use has entered the picture, however, vendors are striving to provide their customers with products that will comply with this definition and plan to participate in any associated certifications.

“We don’t want to get dinged for an innovative standard that we’ve developed and [that] tested well with users because it doesn’t fit the criteria.”

Subjectivity

Interviewees held mixed opinions on whether the certification process can effectively evaluate the usability aspect of EHR performance. Without exception, participating vendors had concerns about the inherent subjectivity of usability evaluation, which can be strongly affected by the past EHR experience of the user, the context in which the product is used, and even the education and background of the evaluator. Suggested methods for overcoming these types of bias included certifying workflows rather than attempting to measure usability, comparing objective product performance (time and error rates) for specific tasks, and measuring usability based on end-user surveys instead of juror analysis.

“Some products may be strong, but due to the familiarity of jurors of a product or technology, some products may be overrated or underrated.”

Innovation

Several interviewees also expressed concern about the effect of usability certification on innovation within the EHR marketplace. This seemed to stem from experience with CCHIT’s feature- and function-based criteria. It was noted that in the developing EHR marketplace, current systems are striving to make significant changes in the way physicians practice care, which has inherent negative implications for perceived usability early in the product’s release. Guidelines or ratings that are too prescriptive may have the effect of forcing vendors to create technologies that more directly mirror current practices, a strategy that could limit innovation and the overall effectiveness of EHRs.

“Products are picked on the amount of things they do, not how well they do them. CCHIT perpetuates this cycle; if a product contains certain functions, it is placed among the elite. That has nothing to do with usability.”

Recognized Need

Despite these concerns, vendors recognized the role certification could play both as an indicator to support customers in selecting EHRs and as a method through which established standards could be disseminated across the industry. While there is unease about the details of how certification would be conducted, many vendors thought that some form of certification or evaluation had the potential to serve as a complement to what is now a predominantly market-driven issue. While each vendor viewed itself as a leader in the development of usable EHR systems and supported the role of consumer demand in continuing to improve product usability, vendors recognized that there could be utility in more standardized testing that could be evenly applied throughout the industry.

“Being aware of standards and guidelines is very important, but we also want to make sure we are not hamstrung by them.”

Conclusion

All vendors interviewed expressed a deep commitment to the continued development and provision of usable EHR product(s) to the market. Vendors believe that as features and functions become more standardized across the industry, industry leaders will begin to differentiate themselves from the rest of the market based on usability. Current best practices and standards of design, testing, and monitoring EHR product(s), particularly for usability, are varied and not well disseminated. While models for vendor collaboration for issues such as interoperability currently exist through EHRA and IHE (Integrating the Healthcare Enterprise), collaboration among vendors with regard to usability is almost nonexistent. Given the current move toward the adoption and meaningful use of health IT, and the role usability plays in realizing intended benefits, a transition from the current environment seems likely. This could be driven by many sources, including standards developed by academic research, certification required by government entities, collaboration through a nonprofit association such as EHRA or IHE, or simply market pressures demanding more usable offerings. Vendors recognize these pressures and the importance of usability to the continued success of their products. Disagreement exists as to the ideal method for ensuring that usability is evaluated and communicated across the industry as well as to customers. This disagreement exists even within companies, as well as across vendors. Regardless of this uncertainty, there is agreement that end users need to remain a central component within the development process, innovation needs to be encouraged, and usability needs to be a critical driver of efficient, effective, and safe EHRs.

Implications and Recommendations

The summary interview findings were distributed to selected experts in the field, who provided additional thoughts on the implications of these discussions and developed recommendations based on the discussions. A summary of these suggestions follows.

Standards in Design and Development

Increase diversity of users surveyed for predeployment feedback. While the use of subject-matter experts and inclusion of end-user feedback in the design and development process are beneficial and important approaches, the end-user selection process currently in use has a potential for bias. Vendors noted extensive use of volunteered feedback. Clinicians with a strong interest in technology, the ability to evaluate usability, and the patience to provide regular feedback are not representative of the typical end user. Additionally, as these types of clinicians are commonly found in academic medical centers, they may rely on residents or other trainees to do most of the work involving the EHR. Similar issues exist when soliciting input from users familiar with the existing EHR; these users have potentially learned, sometimes unconsciously, to work around or ignore many of the usability problems of the current system. To some extent, vendors must utilize this “coalition of the willing” to gather feedback, given the extremely busy schedules of most practicing clinicians. However, steps must be taken both in the vendor community and by independent bodies to encourage inclusion of a more diverse range of users in all stages of the design process. This more inclusive approach will ultimately support a more usable end product.

Support an independent body for vendor collaboration and standards development. Lack of vendor collaboration resulting from attempts to protect intellectual property and uphold a competitive edge is understandable. However, with the accelerated adoption timeframe encouraged by recent legislation and increasing demand, letting the market act as a primary driver to dictate usability standards may not ensure that appropriate standards are adopted. The user base currently has relatively limited abilities to accurately determine product usability before purchase and, if dissatisfied after purchase, may incur significant expense to explore more usable products. Simply deeming an EHR usable or not usable does not create or disseminate standards and best practices for design. The market can provide direction, but more must be done to document trends and best practices, create new standards for design, and regulate implementation across the industry.

Develop standards and best practices in use of customization during EHR deployment. Customization is often a key to successful implementation within a site, as it can enable users to document the clinical visit in a way that accommodates their usual methods and existing workflow. However, customization may also serve to hide existing usability issues within an EHR, prevent users from interacting with advanced functions, or even create unintended consequences that negatively impact patient safety. There is an additional concern that customization may negatively impact future interoperability and consistency in design across the industry. Customer demand for customization exists and some level of customization can be beneficial to supporting individual workflows; however, more work must be done to evaluate the level of customization that maximizes the EHR’s benefits and limits its risks.

Usability Testing and Evaluation

Encourage formal usability testing early in the design and development phase as a best practice. Usability assessments can be resource intensive; however, it has been demonstrated that including them in the design and development phase is more effective and less expensive than responding to and correcting items after market release. (7) Identifying and correcting issues before release also reduces help desk support and training costs. Vendors indicated an awareness of this tradeoff and a move toward investment in usability assessment up front. Further monitoring will be required to evaluate how the vendor community incorporates formal usability testing within future design and development practices.

Evaluate ease of learning, effectiveness, and satisfaction qualitatively and quantitatively. Observations are an important component of usability testing but are insufficient for assessment of the root cause of usability issues. Alternatively, quantitative data such as number of clicks, time to complete tasks, and error rates can help the vendor identify tasks that may present usability issues but must be further explored to identify underlying issues. A mix of structured qualitative and quantitative approaches, incorporating at minimum an assessment of the three basic factors directly contributing to product usability—ease of learning, effectiveness, and satisfaction—will serve to broaden the impact of usability assessments beyond the informal methods commonly employed today.
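
A minimal sketch of what such a mixed assessment could compute follows, assuming a hypothetical session format (task success, time on task, and a 1-5 satisfaction rating per session); none of this format is prescribed by the report.

```python
import statistics

def usability_summary(sessions):
    """Summarize the three basic usability factors from repeated test sessions.

    Each session is a dict such as:
      {"success": True, "time_on_task": 95.0, "satisfaction": 4}
    The session format and scoring are illustrative assumptions.
    """
    times = [s["time_on_task"] for s in sessions]
    return {
        # Effectiveness: share of sessions in which the task was completed.
        "effectiveness": sum(s["success"] for s in sessions) / len(sessions),
        # Ease of learning (crude proxy): drop in time on task across sessions.
        "learning_gain": times[0] - times[-1],
        # Satisfaction: mean of the per-session survey ratings.
        "satisfaction": statistics.mean(s["satisfaction"] for s in sessions),
    }

print(usability_summary([
    {"success": False, "time_on_task": 140.0, "satisfaction": 2},
    {"success": True,  "time_on_task": 90.0,  "satisfaction": 4},
    {"success": True,  "time_on_task": 75.0,  "satisfaction": 4},
]))
# effectiveness ~0.67, learning_gain 65.0, satisfaction ~3.33
```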

Postdeployment Monitoring and Patient Safety

Decrease dependence on postdeployment review supporting usability assessments. Usability issues are usually not simple, one-function problems, but tend to be pervasive throughout the EHR. So while small-scale issues are often reported and corrected after deployment, the identified issue may not be the primary determinant of a product’s usability. It is chiefly within the main displays of information that are omnipresent, such as menu listings, use of pop-up boxes, and the interaction between screens, that the EHR’s usability is determined. Even with the best of intentions, it is unlikely that vendors will be able to resolve major usability issues after release. By not identifying critical usability issues through a wide range of user testing during design and development, vendors are opening the door to potential patient safety incidents and costly postrelease fixes.

Increase research and development of best practices supporting designing for patient safety. Monitoring and designing for patient safety, like usability testing, appear to be most prevalent late in the design of the product or during its release cycle. Vendors’ heavy reliance on end users or advisory panels to point out patient safety issues in many ways mirrors the informal methods used to advance usability of their products. While patient safety similarly lacks specific standards for vendors to follow, vendors are currently collaborating on patient safety issues. These collaborations appear to be in their early stages, but they provide an opportunity to enhance vendor awareness and vendor response to potential patient safety issues within their products and improve their ability to incorporate patient safety much earlier in the design process. Further work must be done to directly connect design to patient safety and ensure that standards are created and disseminated throughout the industry.

Role of Certification in Evaluating Usability

Certification programs should be carefully designed and valid. Any certification or outside evaluation will initially be approached with questions as to its validity, and the concept of usability certification is no exception. Usability is a complex, multifaceted system characteristic, and usability certification must reflect that complexity. Further complicating this issue is the fact that vendors have already participated in a certification process that most did not find particularly valuable in enhancing their product. Driving the EHR market toward creation of usable products requires development of a process that accurately identifies usable products, establishes and disseminates standards, and encourages innovation.

References

(1). Belden J, Grayson R, Barnes J. Defining and Testing EMR Usability: Principles and Proposed Methods of EMR Usability Evaluation and Rating. Healthcare Information Management and Systems Society Electronic Health Record Usability Task Force. Available at: http://www.himss.org/content/files/HIMSS_DefiningandTestingEMRUsability.pdf. Accessed June 2009.

(2). Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Interface Design Considerations. AHRQ Publication No. 09(10)-0091-2-EF. Rockville, MD: Agency for Healthcare Research and Quality. October 2009.

(3). Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Evaluation and Use Case Framework. AHRQ Publication No. 09(10)-0091-1-EF. Rockville, MD: Agency for Healthcare Research and Quality. October 2009.

(4). TechSmith. Morae: usability testing and market research software. Available at: http://www.techsmith.com/morae.asp.

(5). Ammenwerth E, Talmon J, Ash JS, et al. Impact of CPOE on mortality rates—contradictory findings, important messages. Methods Inf Med 2006;45:586-93.

(6). Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.

(7). Gilb T, Finzi S. Principles of software engineering management. Reading, MA: Addison-Wesley Pub. Co.; 1988.

The report includes two appendices: Summary of Interviewed Vendors and Description of Electronic Health Records.

Vendors interviewed were
athenahealth, Inc. — athenaClinicals 9.15.1
Cerner Corporation — Cerner Millennium PowerChart/PowerWorks EMR 2007.19
Criterions, LLC — Criterions 1.0.0
e-MDs — e-MDs Solution Series 6.3
EHS — CareRevolution 5.3
GE Healthcare — Centricity Electronic Medical Record 9.2
NextGen — NextGen EMR 5.5
Veterans Administration — VistA

See PDF file for complete report.

Add’l $30.3 Mil to Fund Two New Beacon Health IT Communities

ONC to Add 16th and 17th Beacon Health IT Communities
Excerpted from ONC Beacon Communities Page
“The Beacon Community Cooperative Agreement Program provides communities with funding to build and strengthen their health information technology (health IT) infrastructure and exchange capabilities. These communities will demonstrate the vision of a future where hospitals, clinicians, and patients are meaningful users of health IT, and together the community achieves measurable improvements in health care quality, safety, efficiency, and population health.

“In May 2010, ONC made awards to 15 Beacon Communities. An additional $30.3 million is currently available to fund additional Beacon Community cooperative agreement awards. An announcement to apply was made on May 26, 2010.

“Beacon Communities will generate and disseminate evidence and insights that are applicable to the rest of the nation about the use of health IT resources to inform a range of specific clinical, care delivery, and other reforms that, together, can enable communities to achieve measurable and sustainable improvements in health care cost, quality, and population health. The Beacon Community Program will include $250 million in awards to 17 communities with an additional $15 million for technical assistance to help these communities succeed and to evaluate what works.”

Learn more about the Beacon Community Cooperative Agreement Program.

New Standards/Policy Workgroup for Electronic Enrollment in Gov’t Health & Human Services

Patient Protection and Affordable Care Act
Sets Up New
HIT Policy & Standards Committee
Enrollment Workgroup

Aneesh Chopra, Chief Technology Officer, White House Office of Science and Technology Policy, and Sam Karp of the California HealthCare Foundation have been named chair and co-chair, respectively, of the new Enrollment Workgroup of the Health IT Policy & Standards Committees. At their first public appearance as workgroup chairs, at the April 26, 2010 Health IT Standards Committee meeting, they planned to share two slides, available on the Web site of the Office of the National Coordinator for Health IT, which quote the relevant clause from the Patient Protection and Affordable Care Act and the recommendations for standards and protocols the workgroup is expected to develop:

Patient Protection and Affordable Care Act
§1561. HIT Enrollment, Standards and Protocols. Not later than 180 days after the enactment, the Secretary, in consultation with the HIT Policy and Standards Committees, shall develop interoperable and secure standards and protocols that facilitate enrollment in Federal and State health and human services programs through methods that include providing individuals and authorized 3rd parties notification of eligibility and verification of eligibility.

Develop recommendations for standards and protocols for electronic enrollment in Federal and state health and human service programs:
–Electronic matching across state and Federal data
–Retrieval and submission of electronic documentation for verification
–Reuse of eligibility information
–Capability for individuals to maintain eligibility information online
–Notification of eligibility
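
To make the workgroup’s charge concrete, here is a purely hypothetical sketch of what a minimal eligibility-notification record supporting the capabilities listed above might look like. The field names are invented for illustration and are not part of any standard the workgroup has published.

```python
from dataclasses import dataclass, field

@dataclass
class EligibilityNotification:
    # All field names are hypothetical illustrations of the §1561 capabilities.
    applicant_id: str   # identifier matched across state and Federal data
    program: str        # e.g., "Medicaid"
    eligible: bool      # outcome of the eligibility determination
    verified_documents: list = field(default_factory=list)  # documentation used for verification
    notify_parties: list = field(default_factory=list)      # individual and authorized third parties

def notification_message(rec: EligibilityNotification) -> str:
    """Render the eligibility notification for the individual and authorized parties."""
    status = "eligible" if rec.eligible else "not eligible"
    return (f"Applicant {rec.applicant_id} is {status} for {rec.program}; "
            f"notifying: {', '.join(rec.notify_parties)}.")

print(notification_message(EligibilityNotification(
    "A-1001", "Medicaid", True,
    verified_documents=["pay-stub.pdf"],
    notify_parties=["applicant", "authorized caseworker"],
)))
```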

NHIN 104: The Trust Fabric of the NHIN: Making Exchange a Good Choice: May 24

Monday, May 24, 2010   1:00 to 2:30pm ET
NHIN 104: The Trust Fabric of the NHIN:
Making Exchange a Good Choice

Excerpted from National eHealth Collaborative
COURSE DESCRIPTION:
Students will learn about: (1) the national-level model trust framework that has been developed to help enable the safe and secure exchange of electronic health information over the Internet; and (2) how the NHIN Exchange has established trust among its participants.

Presentation Slides
Recorded Webinar

The NHIN Workgroup of the HIT Policy Committee has recommended that it is the role of the government to “establish and maintain a framework of trust, including ensuring adequate privacy and security protections to enable health information exchange.” The Workgroup has found that there is a need for the adoption of an overarching trust framework at the national level that includes these essential elements:

–Agreed Upon Business, Policy and Legal Requirements/Expectations
–Transparent Oversight
–Accountability and Enforcement
–Identity and Authentication
–Identification of Minimum Technical Requirements

The Data Use and Reciprocal Support Agreement (DURSA) and the NHIN Coordinating Committee Operating Policies & Procedures will be discussed as examples of how the NHIN Exchange meets these trust requirements.

COURSE OBJECTIVES: By participating in this NHIN University class, participants will become familiar with:

–Key components of trust and why they are important
–How the NHIN Exchange trust fabric maps to the key trust components
–Efforts to harmonize the NHIN Exchange trust components with the rest of the NHIN ecosystem, such as state-level HIEs and the NHIN Direct project

DATE: Monday, May 24, 2010
TIME: 1:00 – 2:30 pm ET

FACULTY:
–Mariann Yeager – NHIN Policy & Governance Lead (Contractor), Office of the National Coordinator for Health IT
–Steve Gravely, JD – Troutman Sanders LLP

MODERATOR:
–Aaron Seib – NHIN Program Director, National eHealth Collaborative

Teachable Moment — EHR Implementation Video

Pittsburgh Regional Health Initiative (PRHI) Presents 
EHR Implementation in A Teachable Moment Video
PRHI EHR: A Teachable Moment
Frank Civitarese, DO, President of Preferred Primary Care Physicians (PPCP) and a Board Member of the Pittsburgh Regional Health Initiative (PRHI), is featured in a brief video about his practice’s EHR implementation, along with comments by Rick Schaeffer, VP & CIO, St. Clair Hospital, and Charity Dean, Office Manager in Dr. Civitarese’s practice.

PRHI is a Pittsburgh, PA-area “regional consortium of medical, business and civic leaders to address healthcare safety and quality improvement as a social and business imperative” using its community as a “demonstration lab.” PRHI “is a nonprofit operating arm of the Jewish Healthcare Foundation.”

“Preferred Primary Care Physicians consists of 34 board-certified physicians specializing in internal medicine and family practice. PPCP has 15 practice locations in the South Hills and two locations in Uniontown in Fayette County. In addition, PPCP offers state-of-the-art outpatient centers for cardiac testing, sleep disorders, and physical therapy.”

PRHI: EHR Implementation: A Teachable Moment Video
PRHI Teachable Moments Videos
PRHI Champions of Work Redesign
http://prhi.org

Evaluation of State HIE Coop Programs: Leadership Webinar

Webinar (Slides) with State HIE Participants May 20, 2010:
Evaluation of State Health Information Exchange Cooperative Agreements Program
NORC (National Opinion Research Center) at the University of Chicago made a presentation to State HIE leaders on May 20, 2010 on the evaluation project that the Office of the National Coordinator (ONC) for Health IT has hired it to perform. NORC will seek best practices and evaluate the progress of state programs against those best practices and the goals of the State HIE Cooperative Agreement Program.

Excerpted from State HIE Leadership Forum
Slide set PDF

Webinar with State HIE Participants
May 20, 2010

Agenda
• Introduce the evaluation team
• Provide an overview of the evaluation approach
• Discuss the evaluation technical assistance that the evaluation team will provide
• Provide an overview of the work that is being done to identify measures to track progress toward state goals

NORC Key Staff
• Prashila Dullabh (Project Director)
• Alana Knudson (Co-Project Director)
Other key staff, as well as subcontractors, were also listed.

Overview of the Evaluation
• ONC awarded NORC a contract in March 2010 to:
          Evaluate the State Health Information Exchange (HIE) Cooperative Agreement Program, focusing on systematic and timely feedback
          Identify best practices that successful states are taking to enable or facilitate exchange
          Analyze issues and challenges
          Assess the effectiveness of the Program in furthering information exchange capability within states

• Convene a Technical Advisory Group (TAG)
• Provide evaluation-related Technical Assistance (TA)

Research Domains
The evaluation team will explore the following key research questions:
1. What are the various approaches that states are taking to enable or facilitate health information exchange?
2. What progress have states made in supporting health information exchange?
3. How does the approach relate to the level of implementation progress?

Evaluation Activities
• Review of state plans
• Conduct initial case studies
• Conduct structured interviews
• Review and analysis of data from states
• Collect additional data

Case Studies
• Objective: Identify lessons learned and strategies for overcoming barriers to HIE
• After reviewing the state plans, the evaluation team will propose a selection of possible candidates for the TAG’s review and approval.
• Selection criteria include:
          Stage of development
          Best practices for rapid implementation
         Overall model, scope and level of stakeholder participation
         Innovative or different technical approach
         Services provided
         Geographic diversity/Approaches to interstate exchange
• A case study report of key findings will be produced and shared with the program managers

Case Studies Approach
• Case studies to involve:
         Site visits
         Analysis of background materials and developing profiles of selected states
         One-on-one discussions with HIE leadership and stakeholders
         Focus group discussions with users and non-users

Evaluation Technical Assistance
• The evaluation team will provide technical assistance to the states in their evaluation efforts by:
          Conducting webinars and teleconferences related to specific components of evaluation
          Holding evaluation seminars/workshops for state evaluation personnel to share ideas and listen to best practices
         Collating available tools and resources on evaluation
• What other approaches to technical assistance would be helpful?

Background on Performance Measures
• Reporting on performance measures is a program requirement and stipulated as part of the funding received by all states
• Purpose is to provide assessments on the overall progress and success of the Cooperative Agreement Program
• Focus is to develop measures that allow ONC to gain a deeper understanding of the current level of exchange within each state program and monitor changes over time
• Evaluation team to collaborate with ONC to develop baseline performance measures that relate to the development of HIE and complement the overall evaluation

Selection and Design
Factors considered:
• Relevance of the measure across or within each of the phases of HIE maturity
• Relevance of the measures to the proposed meaningful use criteria
• Availability of data sources which could populate each measure
• Feasibility for states to access data sources

Approach
• Assist ONC in the development of a draft set of process-based and outcome-based measures
• Refine the draft set of measures with input from states and ONC
• What are optimum approaches to involve the states in the development of the measures?

Usability in Health IT: Strategy, Research, and Implementation Workshop

Usability in Health IT: Strategy, Research, and Implementation
NIST/AHRQ/ONC Workshop July 13, 2010
National Institute of Standards and Technology
Gaithersburg, MD

This follows the pattern of previous workshops conducted by ONC (e.g., the Workforce Initiative) when ONC is in the early brainstorming stages of developing a systematic program to meet particular needs.

The remainder of this post is excerpted from NIST site.

Purpose:
To promote collaboration in health IT usability among Federal agencies, industry, academia, and others.

Goal: Bring together industry, academia, government, and others to prioritize, align and coordinate short, medium, and long term strategies and tactics to improve the usability of EHRs. 

Objectives:

  • Establish an immediate term set of actions to inform the national initiative to drive EHR adoption and meaningful use.
  • Develop a strategic approach to measure and assess use of EHRs, and impact of usability on their adoption and innovation.
  • Develop strategies to drive best practices and innovation to vendor products.
  • Inspire follow-on activities in the public and private sectors.

NIST “will be updating workshop information. Please check the website again soon.”
The site contains a PDF of the Preliminary Agenda (in HTML below), Roundtable Discussion Participants, and Acronyms.

Agenda
8:00 – 9:00 Registration / Coffee
9:00 – 9:30 Greetings / Introduction / Opening Remarks – (ONC)
Moderator – Janice (Ginny) Redish, PhD
9:30 – 10:30 Current State and Need for Action
–HITECH (ONC)
–Current State of EHRs (AHRQ)
–Current Federal and Private EHR Usability Initiatives: Government (ONC, NIST, AHRQ, FDA); Private (HIMSS, EHRA, Microsoft); Academia
–Meaningful use (AHRQ, FDA, Academia) – Standard Formats, PSO program, etc.
–Adoption (ONC, HIMSS, EHRA, Industry)
–Innovation (Industry, Academia)
–Q&A
10:30 – 10:45 Coffee Break
10:45 – 11:45 “Points of Pain” – Prevention of Cognitive Overload
–Current research (Academia)
–Prevention of Cognitive Overhead During Initial Adoption / Transition from Paper
–Prevention of Cognitive Overhead During Transition from systems in multiple settings (One User / Many Systems Issue)
–Insufficient System Feedback (Critical Issue on Alert Overload)
–Dense Displays of Data (Prevention of Excessive Complexity of System)
–Q&A
11:45 – 1:00 Lunch (NIST Cafeteria)
1:00 – 1:30 “Points of Pain” – Addressing EHR User Disparities
–Clinical Workforce characteristics and limitations (NIST, Access Board, Academia)
–Accessibility Issues – Low/Poor Vision; Mobility/Dexterity; Cognitive Disabilities
–English Proficiency
–Lower socioeconomic demographics – digital divide
–Q&A
1:30 – 1:45 Coffee Break
1:45 – 2:45 Usability Framework (NIST, AHRQ, Academia)
–Best practices and gaps based on experience from other industries / sectors
–Usability Standards Development (NIST)
–Measurement domains
–Objective measures of human performance
–Effectiveness
–Efficiency
–Additional measures
–User satisfaction
–User acceptance
–Ongoing Projects and Research Initiatives (AHRQ Toolkit, SHARP, NIST grants, Common Formats, etc.)
–Usability framework for product lifecycle
–Q&A
2:45 – 3:00 Coffee Break
3:00 – 4:00 Recommendations to support HITECH / Certification
–Accreditation Program, Certification
–Test Methods for Products and Users (Pass / Fail Criteria for Usability Standards)
4:00 – 4:45 Recommendations and Next Steps
Moderator: Janice (Ginny) Redish, PhD
–Research and Implementation
–Recommendations for Usability and Adoption
–Recommendations for Innovation
–Next steps

McClellan at Brookings: Making ‘Enhanced Use’ of Health Information Webcast

McClellan, Health IT Leaders Discuss More Effective Use of Health IT
in Half-Day Session with Far-Reaching Look Ahead
at Promotion, Models, and Policy Implications

Webcast, Podcast, Transcripts Available

In an excellent half-day session on May 14, 2010, Mark McClellan, MD, PhD, Director of the Engelberg Center for Health Care Reform at Brookings, led a series of discussions among Health IT leaders focusing on how to use the same data that is being collected in patient care, and will increasingly be collected, to help improve the health care system beyond the individual patient.

Brookings Events Page: “Making ‘Enhanced Use’ of Health Information”
Includes: Archived Webcast
Three Audio Sections
Issues Brief pdf (under Event Materials)
Transcripts

Summary
Starting off the discussion on promoting use of Electronic Health Records, Farzad Mostashari, Deputy Director for Policy and Programs, Office of the National Coordinator (ONC) for Health IT, said the ONC always starts from the end goal, laying out key principles that include keeping data as close to the source as possible and having data “collected once and used many times.” Asked how meaningful use was going, he answered with one word, “Fantastic,” and a broad smile, then pointed out that a focus on quality is the core of “meaningful use.” (See John Halamka’s blog for a list of the principles Mostashari laid out.)

When it comes to promoting the use of Electronic Health Records, John Halamka, CIO of Harvard Medical School and Beth Israel Deaconess Medical Center, and Amanda Parsons, who oversees New York City’s Primary Care Information Project (PCIP), agreed that “it’s about the workflow”: don’t disrupt the physician’s delivery of care to the patient, while at the same time changing the way physicians work and think so they take best advantage of the data and wisdom that electronic health records and information exchange can offer. Parsons stressed “don’t let the perfect get in the way of the good,” one of the constant refrains of EHR and Health IT evangelists.

The next panel, titled “Compelling Models of Enhanced Use of Health Information,” shared such models, including those of Geisinger Health System in Pennsylvania, described by James Walker, Geisinger’s chief health information officer; the multi-state metro Cincinnati HealthBridge, described by Robert Steffel, president and CEO of HealthBridge; the South Carolina HIE, described by David Patterson, who oversees the HIE along with the state’s Medicaid Director; the Wisconsin Health Exchange, described by Michael Raymer of Microsoft; and Kaiser Permanente’s Institute for Health Research, described by its senior director, John Steiner. Geisinger recently won a Beacon Community award from the ONC to extend the kind of Health IT structure it uses to support patients within its IDN to patients and physicians outside its delivery system.

“Implications for Policy” looked ahead with views from the White House Office of Science and Technology Policy; Carol Diamond of the Markle Foundation; Landen Bain of the Clinical Data Interchange Standards Consortium Healthlink Program; and Andrew Weber of the National Business Coalition on Health.

In answer to a question about what can be done on the policy end to help physicians think and work with their patients differently for enhanced use of Health IT tools, Diamond said, “The key from my perspective in terms of giving them the capacity to use these tools in a way that provides value to them is to not make quality and research a compliance exercise, but to make it part of the way care is delivered. And the only way I know how to do that is to give them the tools at the point of care while they’re with the patient and give them the flexibility to use those tools towards common goals.” Parsons agreed with another panelist when she added, “Frankly, it just has to be an alignment of health reform and reimbursement rate.”

Bain may have summed up the impact of the day’s discussions when he added that he was glad the conversation at Brookings had focused on workflow and business processes: “I really am encouraged that we’ve moved off of what I call data blindness, where all you can think about is just data and this abstract quality that you want to get a hold of.”

McClellan’s Issue Brief, “Using Information Technology to Support Better Health Care: One Infrastructure with Many Uses” (linked from the Brookings event page), provides an insightful perspective on Health IT and its impact on healthcare and health reform, as well as a good summary of his opening remarks.

Safeguarding Health Information: Building Assurance through HIPAA Security

2010 HIPAA Conference from NIST and OCR: 
Safeguarding Health Information: Building Assurance through HIPAA Security
May 11-12, 2010

PURPOSE:
The HHS Office for Civil Rights (OCR) enforces the HIPAA Privacy Rule, which protects the privacy of individually identifiable health information; the HIPAA Security Rule, which sets national standards for the security of electronic protected health information; the confidentiality provisions of the Patient Safety Rule, which protect identifiable information being used to analyze patient safety events and improve patient safety; and the Breach Notification regulations, which require HIPAA covered entities and their business associates to notify individuals when their health information is breached.

“NIST’s (National Institute of Standards and Technology) mission, as a non-regulatory federal agency within the U.S. Department of Commerce, is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

“This conference will provide a forum to discuss the current HIT security landscape, as well as practical strategies, tips, and techniques for implementing the requirements of the HIPAA Security Rule.”

AGENDA:
Click this link to view the final agenda with presentation summaries (updated May 7).

Presentations - 2010 HIPAA
The links below all open PDF versions of the presentations.

Tuesday, May 11 (Day 1):

Welcoming Remarks from OCR
Susan McAndrew – Deputy Director for Privacy, HHS Office for Civil Rights

Welcoming Remarks from NIST
William Barker – Chief Cybersecurity Advisor, NIST Information Technology Laboratory

Tips and Techniques for Conducting Risk Assessments
Pat Toth – NIST
Marissa Gordon-Nguyen – HHS/OCR

Keynote Address
Georgina Verdugo – Director, HHS Office for Civil Rights
Howard Schmidt – White House Cybersecurity Coordinator

Standards and Certification Interim Final Rule
Steve Posnack – HHS/ONC
Lisa Carnahan – NIST

Panel: Breach Notification
Christina Heide – Health Information Privacy Division, HHS/OCR
Cora Tung Han – Division of Privacy and Identity Protection, Federal Trade Commission (FTC)

Security of Health Devices
Elliot Sloane – Drexel University

Security Considerations for New Media and Healthcare
Sharon Finney – Corporate Data Security Officer, Adventist Health System

Update on OCR Enforcement of the Privacy and Security Rules
Marilou King – Civil Rights Division, HHS Office of General Counsel
David Holtzman – Health Information Privacy Division, HHS/OCR

Wednesday, May 12 (Day 2):

FTC Information Security
Alain Sheer – Attorney, Division of Privacy and Identity Protection, FTC

Strategies for Developing and Implementing Contingency Plans
David Holtzman – Health Information Privacy Division, HHS/OCR
Marianne Swanson – NIST

Logging and Auditing in a Healthcare Environment
Mac McMillan – Cynergistek, Inc

Panel: HIPAA Security Compliance: An Industry Perspective
Panel Slides
Sue Miller – WEDI
Lisa Gallagher – HIMSS
Robert Tennant – MGMA
Dan Rode – AHIMA

HIE Security Architecture
John Kelly – Director, eBusiness Architecture, Harvard Pilgrim Healthcare

Security Implementation Considerations for Mobile and Wireless Technologies
Matt Sexton – Booz Allen

Encryption Standards
Matt Scholl – Group Manager, Security Management and Assurance, Computer Security Division, NIST

HIPAA Security Standards: Guidance on Risk Analysis Issued by Office for Civil Rights

HIPAA Security Standards: Guidance on Risk Analysis
DRAFT Posted 5/7/10
The Office for Civil Rights (OCR) in the Department of Health and Human Services has issued the first in a series of guidances on the HIPAA Security Rule required by HITECH. In an article for HealthLeaders Media on May 12, 2010, Dom Nicastro quotes Frank Ruelas, director of compliance and risk management at Maryvale Hospital and principal of HIPAA Boot Camp in Casa Grande, AZ: “The guidance is an effective primer in that it summarizes basic information about the required risk analysis within the security rule that has existed since the early days of HIPAA,” though it is not a “one-size-fits-all blueprint.”

The document is available on the OCR site; it is reproduced below in HTML text, and a PDF version is also available.
Footnote references are numbered in bold italics within parentheses, such as (1), with the references listed at the end of the document.
“OCR encourages the public to offer feedback on this guidance. OCR staff will carefully review all public comments to determine how to improve these materials. Comments can be provided via the following e-mail address: OCRPrivacy@hhs.gov.”

Introduction

The Office for Civil Rights (OCR) is responsible for issuing annual guidance on the provisions in the HIPAA Security Rule. (1) (45 C.F.R. §§ 164.302 – 318.) This series of guidances will assist organizations (2) in identifying and implementing the most effective and appropriate administrative, physical, and technical safeguards to secure electronic protected health information (e-PHI). The guidance materials will be developed with input from stakeholders and the public, and will be updated as appropriate.

We begin the series with the risk analysis requirement in § 164.308(a)(1)(ii)(A). Conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule. Therefore, a risk analysis is foundational, and must be understood in detail before OCR can issue meaningful guidance that specifically addresses safeguards and technologies that will best protect electronic health information.

The guidance is not intended to provide a one-size-fits-all blueprint for compliance with the risk analysis requirement. Rather, it clarifies the expectations of the Department for organizations working to meet these requirements. (3) An organization should determine the most appropriate way to achieve compliance, taking into account the characteristics of the organization and its environment.

We note that some of the content contained in this guidance is based on recommendations of the National Institute of Standards and Technology (NIST). NIST, a federal agency, publishes freely available material in the public domain, including guidelines. (4) Although only federal agencies are required to follow guidelines set by NIST, the guidelines represent the industry standard for good business practices with respect to standards for securing e-PHI. Therefore, non-federal organizations may find their content valuable when developing and performing compliance activities.

All e-PHI created, received, maintained or transmitted by an organization is subject to the Security Rule. The Security Rule requires entities to evaluate risks and vulnerabilities in their environments and to implement reasonable and appropriate security measures to protect against reasonably anticipated threats or hazards to the security or integrity of e-PHI. Risk analysis is the first step in that process.

We understand that the Security Rule does not prescribe a specific risk analysis methodology, recognizing that methods will vary dependent on the size, complexity, and capabilities of the organization. Instead, the Rule identifies risk analysis as the foundational element in the process of achieving compliance, and it establishes several objectives that any methodology adopted must achieve.

Risk Analysis Requirements under the Security Rule

The Security Management Process standard in the Security Rule requires organizations to “[i]mplement policies and procedures to prevent, detect, contain, and correct security violations.” (45 C.F.R. § 164.308(a)(1).) Risk analysis is one of four required implementation specifications that provide instructions to implement the Security Management Process standard. Section 164.308(a)(1)(ii)(A) states:

RISK ANALYSIS (Required).
Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the [organization].

The following questions adapted from NIST Special Publication (SP) 800-66 (5) are examples organizations could consider as part of a risk analysis. These sample questions are not prescriptive and merely identify issues an organization may wish to consider in implementing the Security Rule:

     •  Have you identified the e-PHI within your organization? This includes e-PHI that you create, receive, maintain or transmit.
     •  What are the external sources of e-PHI? For example, do vendors or consultants create, receive, maintain or transmit e-PHI?
     •  What are the human, natural, and environmental threats to information systems that contain e-PHI?

In addition to an express requirement to conduct a risk analysis, the Rule indicates that risk analysis is a necessary tool in reaching substantial compliance with many other standards and implementation specifications. For example, the Rule contains several implementation specifications that are labeled “addressable” rather than “required.” (68 FR 8334, 8336 (Feb. 20, 2003).) An addressable implementation specification is not optional; rather, if an organization determines that the implementation specification is not reasonable and appropriate, the organization must document why it is not reasonable and appropriate and adopt an equivalent measure if it is reasonable and appropriate to do so. (See 68 FR 8334, 8336 (Feb. 20, 2003); 45 C.F.R. § 164.306(d)(3).)

The outcome of the risk analysis process is a critical factor in assessing whether an implementation specification or an equivalent measure is reasonable and appropriate.

Organizations should use the information gleaned from their risk analysis as they, for example:

     •  Design appropriate personnel screening processes. (45 C.F.R. § 164.308(a)(3)(ii)(B).)
     •  Identify what data to back up and how. (45 C.F.R. § 164.308(a)(7)(ii)(A).)
     •  Decide whether and how to use encryption. (45 C.F.R. §§ 164.312(a)(2)(iv) and (e)(2)(ii).)
     •  Address what data must be authenticated in particular situations to protect data integrity. (45 C.F.R. § 164.312(c)(2).)
     •  Determine the appropriate manner of protecting health information transmissions. (45 C.F.R. § 164.312(e)(1).)

Important Definitions

Unlike “availability”, “confidentiality” and “integrity”, the following terms are not expressly defined in the Security Rule. The definitions provided in this guidance, which are consistent with common industry definitions, are provided to put the risk analysis discussion in context. These terms do not modify or update the Security Rule and should not be interpreted inconsistently with the terms used in the Security Rule.

Vulnerability

Vulnerability is defined in NIST Special Publication (SP) 800-30 as “[a] flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.”

Vulnerabilities, whether accidentally triggered or intentionally exploited, could potentially result in a security incident, such as inappropriate access to or disclosure of e-PHI. Vulnerabilities may be grouped into two general categories, technical and nontechnical. Non-technical vulnerabilities may include ineffective or non-existent policies, procedures, standards or guidelines. Technical vulnerabilities may include: holes, flaws or weaknesses in the development of information systems; or incorrectly implemented and/or configured information systems.

Threat

An adapted definition of threat, from NIST SP 800-30, is “[t]he potential for a person or thing to exercise (accidentally trigger or intentionally exploit) a specific vulnerability.”

There are several types of threats that may occur within an information system or operating environment. Threats may be grouped into general categories such as natural, human, and environmental. Examples of common threats in each of these general categories include:

     •  Natural threats, such as floods, earthquakes, tornadoes, and landslides.

     •  Human threats, which are enabled or caused by humans and may include intentional (e.g., network and computer based attacks, malicious software upload, and unauthorized access to e-PHI) or unintentional (e.g., inadvertent data entry or deletion and inaccurate data entry) actions.

     •  Environmental threats, such as power failures, pollution, chemicals, and liquid leakage.

Risk

An adapted definition of risk, from NIST SP 800-30, is:

“The net mission impact considering (1) the probability that a particular [threat] will exercise (accidentally trigger or intentionally exploit) a particular [vulnerability] and (2) the resulting impact if this should occur . . . . [R]isks arise from legal liability or mission loss due to—

     1. Unauthorized (malicious or accidental) disclosure, modification, or destruction of information
     2. Unintentional errors and omissions
     3. IT disruptions due to natural or man-made disasters
     4. Failure to exercise due care and diligence in the implementation and operation of the IT system.”

Risk can be understood as a function of 1) the likelihood of a given threat triggering or exploiting a particular vulnerability, and 2) the resulting impact on the organization. This means that risk is not a single factor or event, but rather it is a combination of factors or events (threats and vulnerabilities) that, if they occur, may have an adverse impact on the organization.
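To make that function concrete, here is a minimal sketch using the qualitative ratings and example weights from the NIST SP 800-30 risk-level matrix; the weights and thresholds are illustrative assumptions, not requirements of the Security Rule.

```python
# Minimal sketch of risk as a function of threat likelihood and impact.
# Weights and thresholds follow the example matrix in NIST SP 800-30;
# they are illustrative assumptions, not Security Rule requirements.

LIKELIHOOD = {"low": 0.1, "medium": 0.5, "high": 1.0}
IMPACT = {"low": 10, "medium": 50, "high": 100}

def risk_score(likelihood: str, impact: str) -> float:
    """Combine the two factors: risk rises with either likelihood or impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: float) -> str:
    """Map the numeric score back to a qualitative level."""
    if score > 50:
        return "High"
    if score > 10:
        return "Medium"
    return "Low"

# A high-likelihood threat against a vulnerability with medium impact:
print(risk_level(risk_score("high", "medium")))  # -> "Medium"
```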

Elements of a Risk Analysis

There are numerous methods of performing risk analysis and there is no single method or “best practice” that guarantees compliance with the Security Rule. Some examples of steps that might be applied in a risk analysis process are outlined in NIST SP 800-30. (6)

The remainder of this guidance document explains several elements a risk analysis must incorporate, regardless of the method employed.

Scope of the Analysis

The scope of risk analysis that the Security Rule encompasses includes the potential risks and vulnerabilities to the confidentiality, availability and integrity of all e-PHI that an organization creates, receives, maintains, or transmits. (45 C.F.R. § 164.306(a).) This includes e-PHI in all forms of electronic media, such as hard drives, floppy disks, CDs, DVDs, smart cards or other storage devices, personal digital assistants, transmission media, or portable electronic media. Electronic media includes a single workstation as well as complex networks connected between multiple locations. Thus, an organization’s risk analysis should take into account all of its e-PHI, regardless of the particular electronic medium in which it is created, received, maintained or transmitted or the source or location of its e-PHI.

Data Collection

An organization must identify where the e-PHI is stored, received, maintained or transmitted. An organization could gather relevant data by: reviewing past and/or existing projects; performing interviews; reviewing documentation; or using other data gathering techniques. The data on e-PHI gathered using these methods must be documented. (See 45 C.F.R. §§ 164.308(a)(1)(ii)(A) and 164.316(b)(1).)
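As a rough sketch of what that documentation might look like (the record fields below are illustrative assumptions; the Security Rule prescribes no particular format):

```python
# Illustrative e-PHI inventory record; the field names are assumptions
# for demonstration only, not a format prescribed by the Security Rule.
from dataclasses import dataclass

@dataclass
class EphiLocation:
    system: str        # where the e-PHI resides (application, device, media)
    functions: list    # created, received, maintained, and/or transmitted
    source: str        # internal workflow, vendor, consultant, etc.
    gathered_via: str  # interview, documentation review, project review, ...

inventory = [
    EphiLocation("lab results database", ["maintained"], "internal",
                 "documentation review"),
    EphiLocation("claims clearinghouse feed", ["transmitted"], "vendor",
                 "interview"),
]
```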

Identify and Document Potential Threats and Vulnerabilities

Organizations must identify and document reasonably anticipated threats to e-PHI. (See 45 C.F.R. §§ 164.306(a)(2) and 164.316(b)(1)(ii).) Organizations may identify different threats that are unique to the circumstances of their environment. Organizations must also identify and document vulnerabilities which, if triggered or exploited by a threat, would create a risk of inappropriate access to or disclosure of e-PHI. (See 45 C.F.R. §§ 164.308(a)(1)(ii)(A) and 164.316(b)(1)(ii).)

Assess Current Security Measures

Organizations should assess and document the security measures an entity uses to safeguard e-PHI, whether security measures required by the Security Rule are already in place, and if current security measures are configured and used properly. (See 45 C.F.R. §§ 164.306(b)(1), 164.308(a)(1)(ii)(A), and 164.316(b)(1).)

The security measures implemented to reduce risk will vary among organizations. For example, small organizations tend to have more control within their environments and fewer variables (i.e., fewer workforce members and information systems) to consider when making decisions about how to safeguard e-PHI. As a result, the appropriate security measures that reduce the likelihood of risk to the confidentiality, availability and integrity of e-PHI in a small organization may differ from those that are appropriate in large organizations. (7)

Determine the Likelihood of Threat Occurrence

The Security Rule requires organizations to take into account the probability of potential risks to e-PHI. (See 45 C.F.R. § 164.306(b)(2)(iv).) The results of this assessment, combined with the initial list of threats, will influence the determination of which threats the Rule requires protection against because they are “reasonably anticipated.”

The output of this part should be documentation of all threat and vulnerability combinations with associated likelihood estimates that may impact the confidentiality, availability and integrity of e-PHI of an organization. (See 45 C.F.R. §§ 164.306(b)(2)(iv), 164.308(a)(1)(ii)(A), and 164.316(b)(1)(ii).)

Determine the Potential Impact of Threat Occurrence

The Rule also requires consideration of the “criticality,” or impact, of potential risks to confidentiality, integrity, and availability of e-PHI. (See 45 C.F.R. § 164.306(b)(2)(iv).) An organization must assess the magnitude of the potential impact resulting from a threat triggering or exploiting a specific vulnerability. An entity may use either a qualitative or quantitative method or a combination of the two methods to measure the impact on the organization.

The output of this process should be documentation of all potential impacts associated with the occurrence of threats triggering or exploiting vulnerabilities that affect the confidentiality, availability and integrity of e-PHI within an organization. (See 45 C.F.R. §§ 164.306(a)(2), 164.308(a)(1)(ii)(A), and 164.316(b)(1)(ii).)

Determine the Level of Risk

Organizations should assign risk levels for all threat and vulnerability combinations identified during the risk analysis. The level of risk could be determined, for example, by analyzing the values assigned to the likelihood of threat occurrence and resulting impact of threat occurrence. The risk level determination might be performed by assigning a risk level based on the average of the assigned likelihood and impact levels.

The output should be documentation of the assigned risk levels and a list of corrective actions to be performed to mitigate each risk level. (See 45 C.F.R. §§ 164.306(a)(2), 164.308(a)(1)(ii)(A), and 164.316(b)(1).)
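A minimal sketch of that averaging approach, assuming 1–3 ratings for likelihood and impact (the scales, thresholds, and corrective action shown are illustrative, not prescribed by the guidance):

```python
# Illustrative only: assign a risk level as the average of assumed
# 1-3 likelihood and impact ratings, then attach corrective actions.

def assign_risk_level(likelihood: int, impact: int) -> str:
    """Average two 1-3 ratings (1 = low, 3 = high) and map to a level."""
    avg = (likelihood + impact) / 2
    if avg >= 2.5:
        return "High"
    if avg >= 1.5:
        return "Medium"
    return "Low"

# One documented threat/vulnerability combination and its disposition.
entry = {
    "threat": "stolen laptop",                 # example combination
    "vulnerability": "unencrypted hard drive",
    "risk_level": assign_risk_level(likelihood=3, impact=3),
    "corrective_actions": ["deploy full-disk encryption"],  # assumed mitigation
}
print(entry["risk_level"])  # -> "High"
```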

Finalize Documentation

The Security Rule requires the risk analysis to be documented but does not require a specific format. (See 45 C.F.R. § 164.316(b)(1).) The risk analysis documentation is a direct input to the risk management process.

Periodic Review and Updates to the Risk Assessment

The risk analysis process should be ongoing. In order for an entity to update and document its security measures “as needed,” which the Rule requires, it should conduct continuous risk analysis to identify when updates are needed. (45 C.F.R. §§ 164.306(e) and 164.316(b)(2)(iii).) The Security Rule does not specify how frequently to perform risk analysis as part of a comprehensive risk management process. The frequency of performance will vary among covered entities. Some covered entities may perform these processes annually or as needed (e.g., bi-annual or every 3 years) depending on circumstances of their environment.

A truly integrated risk analysis and management process is performed as new technologies and business operations are planned, thus reducing the effort required to address risks identified after implementation. For example, if the covered entity has experienced a security incident, has had a change in ownership or turnover in key staff or management, or is planning to incorporate new technology to make operations more efficient, the potential risk should be analyzed to ensure the e-PHI is reasonably and appropriately protected. If existing security measures are not sufficient to protect against the risks associated with evolving threats or vulnerabilities, a changing business environment, or the introduction of new technology, then the entity must determine whether additional security measures are needed. Performing the risk analysis and adjusting risk management processes to address risks in a timely manner will allow the covered entity to reduce the associated risks to reasonable and appropriate levels. (8)
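One way to picture how those triggers might feed an integrated process, as a sketch (the trigger names mirror the examples above; the mechanism itself is an assumption):

```python
# Illustrative: prompt a fresh risk analysis when any triggering event
# occurs. The triggers mirror the guidance's examples; the mechanism
# is an assumption, not part of the Security Rule.

REASSESSMENT_TRIGGERS = {
    "security_incident",
    "ownership_change",
    "key_staff_turnover",
    "new_technology_planned",
}

def needs_reassessment(events: set) -> bool:
    """True if any recorded event should prompt a new risk analysis."""
    return bool(events & REASSESSMENT_TRIGGERS)

print(needs_reassessment({"new_technology_planned"}))  # -> True
```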

In Summary

Risk analysis is the first step in an organization’s Security Rule compliance efforts. Risk analysis is an ongoing process that should provide the organization with a detailed understanding of the risks to the confidentiality, integrity, and availability of e-PHI.

Resources

The Security Series papers available on the Office for Civil Rights (OCR) website, http://www.hhs.gov/ocr/hipaa, contain a more detailed discussion of tools and methods available for risk analysis and risk management, as well as other Security Rule compliance requirements. Visit http://www.hhs.gov/ocr/hipaa for the latest guidance, FAQs and other information on the Security Rule.

Several other federal and non-federal organizations have developed materials that might be helpful to covered entities seeking to develop and implement risk analysis and risk management strategies. The Department of Health and Human Services does not endorse or recommend any particular risk analysis or risk management model, nor does adherence to any or all of the standards contained in these materials prove substantial compliance with the risk analysis requirements of the Security Rule. Rather, the materials are presented as examples of frameworks and methodologies that some organizations use to guide their risk analysis efforts.

The National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, is responsible for developing information security standards for federal agencies. NIST has produced a series of Special Publications, available at http://csrc.nist.gov/publications/PubsSPs.html , which provide information that is relevant to information technology security. These papers include:

     •  Guide to Technical Aspects of Performing Information Security Assessments (SP 800-115)

     •  Information Security Handbook: A Guide for Managers (SP 800-100; Chapter 10 provides a Risk Management Framework and details steps in the risk management process)

     •  An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule (SP 800-66; Part 3 links the NIST Risk Management Framework to components of the Security Rule)

     •  A draft publication, Managing Risk from Information Systems (SP 800-39)

The Office of the National Coordinator for Health Information Technology (ONC) has produced a risk assessment guide for small health care practices, called Reassessing Your Security Practices in a Health IT Environment, which is available at this link (pdf).

The Healthcare Information and Management Systems Society (HIMSS), a private consortium of health care information technology stakeholders, created an information technology security practices questionnaire, available at http://www.himss.org/content/files/ApplicationSecurityv2.3.pdf. The questionnaire was developed to collect information about the state of IT security in the health care sector, but it could also be a helpful self-assessment tool during the risk analysis process.

The Health Information Trust Alliance (HITRUST) worked with industry to create the Common Security Framework (CSF), which is available at http://hitrustcentral.net/files. The risk management section of the document, Control Name: 03.0, explains the role of risk assessment and management in overall security program development and implementation. The paper describes methods for implementing a risk analysis program, including knowledge and process requirements, and it links various existing frameworks and standards to applicable points in an information security life cycle.

References

(1) Section 13401(c) of the Health Information Technology for Economic and Clinical Health (HITECH) Act.

(2) As used in this guidance the term “organizations” refers to covered entities and business associates. The guidance will be updated following implementation of the final HITECH regulations.

(3) The HIPAA Security Rule: Health Insurance Reform: Security Standards, February 20, 2003, 68 FR 8334.

(4) The 800 Series of Special Publications (SP) is available on the Office for Civil Rights’ website; see, in particular, SP 800-30, Risk Management Guide for Information Technology Systems (http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/securityruleguidance.html).

(5) See NIST SP 800-66, Section #4, “Considerations When Applying the HIPAA Security Rule.” Available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/nist80066.pdf.

(6) Available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/nist800-30.pdf.

(7) For more information on methods smaller entities might employ to achieve compliance with the Security Rule, see #7 in the Center for Medicare and Medicaid Services’ (CMS) Security Series papers, titled “Implementation for the Small Provider.” Available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/smallprovider.pdf.

(8) For more information on methods smaller entities might employ to achieve compliance with the Security Rule, see #6 in the Center for Medicare and Medicaid Services’ (CMS) Security Series papers, titled “Basics of Risk Analysis and Risk Management.” Available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/riskassessment.pdf .