Posts tagged with: Usability
Recently the Office of the National Coordinator for Health Information Technology (ONC) unveiled new certification and meaningful use requirements for electronic health records (EHRs), including safety-enhanced design requirements. According to these requirements, EHR vendors must include evidence of user-centered design and user test results in their certification submission, a potentially significant amount of work.
Though providers are not required to follow the Stage 2 requirements until 2014, vendors should take steps now to position their products for success and comply with these new guidelines. Here's a summary of what's involved, with excerpts from the official documentation.
NIST, in association with the ONC, has released an Electronic Health Record usability evaluation guide called Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records. This is expected to be the first step towards a series of criteria in the upcoming Meaningful Use Stage 2 program.
EHR Usability Workshop
To help vendors understand and proactively address the NIST EHR usability criteria, Macadamian is now offering a Meaningful Use Usability Workshop for developers of EMR solutions that focuses on the use error criteria established by NIST to improve patient safety. In this interactive workshop with key members of your product development team, Macadamian’s usability research experts will help you develop a formal usability assessment plan. Working together, you will explore and establish the key components of a usability action plan. These include:
A Usability Strategy: Macadamian will help you to develop a business case and concrete plan of action to meet the NIST requirements, with a view to upcoming meaningful use requirements. We will walk you through the 7 key criteria of the EUP [EHR Usability Protocol] and their implications for your product[s]. We will work with you to select and adapt the appropriate usage scenarios/workflows provided by NIST for your clinical users.
Patient Safety Goals and Benchmarks: Following software usability best practices, Macadamian will help you to uncover the specific usability goals that will underpin your products’ differentiation. Employing the 8 Use Error categories defined by NIST, we will describe how to set and attain achievable targets for usability. These goals will lay the foundation for benchmarking and tracking your product’s current usability status and future usability improvements.
Action Plan and Timelines: Macadamian will help your team determine next steps and progression timeline.
Register Today
Our one-day EHR Usability Workshops are offered at our new Silicon Valley lab, or can be held on-site at your facility. For more information and to register for a custom workshop, please contact:
Director, Healthcare Division
Many of our customers are starting to build a user experience design team. The challenge: if this is your first hire, and no one on the hiring team has a user experience design background, how do you make the right hire?
As someone who has hired dozens of interaction designers, user researchers, and visual designers over my career, I can tell you it isn’t easy. While experience certainly helps, with the following guidelines and an acute BS detector, you’ll do OK. In this two-part series, I’ll share my five tips to help you make your first UX hire a success:
One of our beliefs at Macadamian is that user experience and software quality are intertwined. To the end-customer, a poorly designed UI creates the impression that the product is of low quality, even if the code is well crafted and well tested. Likewise, poor engineering and quality control (bugs, crashes, etc.) create a very poor user experience. Now, the quality and the user experience design of the software embedded in everyday objects - cars, appliances, etc. - are influencing customers' perceptions of the overall quality of the product.
Witness this year's J.D. Power automotive quality ratings - the automotive industry's go-to guide on initial vehicle quality. Despite making huge advances in overall build quality, interior design, ride quality, and ergonomics, Ford dropped from a 5th place ranking to 23rd. Why? Customers complained that Ford's new in-vehicle entertainment systems, Sync and MyFord Touch, are buggy, overly complex, and hard to use. I'm one of those customers - my Ford Flex is a wonderful vehicle, but if I try to use the voice recognition to turn on the radio, Sync turns on the air conditioning. The drop in ratings shouldn't be a surprise to Ford or anyone in the industry - the same thing happened to BMW a few years ago with their much maligned iDrive system.
What's the lesson? Same as always:
- More overall attention to design - especially some concept testing with real users - would have uncovered most of the issues with the latest version of Sync.
- Err on the side of fewer features. Typically there are only a few your customers will really find valuable and use daily. Packing features into a product just because you think they are cool is so 90s.
- You always have to be willing to cut or delay a feature that simply isn't market-ready, no matter what you have invested and how much you've hyped it. It almost always comes back to bite you.
We've talked in the past about the importance of research in improving a product's user experience and usability. In the healthcare field this is especially critical. Thankfully, we see more and more companies engaging in getting user feedback. Unfortunately, since research is a highly specialized skill set, we are also seeing a proliferation of mistakes and misinterpretations in the more technical aspects of research, made by people who don't have specific training in usability research or a general foundation in experimental practices. Here are three of the most common.
Misunderstanding Statistical Significance
We often come across clients who either ask for more data points so the result is "statistically significant", or believe they have a meaningful result because it is "statistically significant". While research methods like surveys demand a certain amount of rigor with regard to statistical significance, when it comes to usability testing or observation of specific behaviours, statistical significance is not a sufficient criterion (although it can be related) for meaningful or "proper" research. So how many research participants are enough? Well, that depends... on the range of target user groups, the scope of activities or tasks you want to observe, how the results will be used, and how many rounds/iterations of research you will be conducting. If you are iteratively testing an application or web application and only have to worry about one or two user groups, several iterations with 5-6 users each time will detect most of the usability issues. However, if you are doing a benchmark study, you'll want to run a greater number of users, across all user groups, to ensure the results are comparable.
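The "5-6 users per iteration" rule of thumb comes from the widely cited Nielsen/Landauer problem-discovery model, which estimates the proportion of usability issues found by n participants as 1 - (1 - p)^n, where p is the average probability that a single participant encounters a given issue (about 0.31 is the commonly quoted figure, though it varies by product and task). As a rough sketch (the formula is from the literature; the function name and the sample values of n are ours for illustration):

```python
def proportion_found(n_users, p=0.31):
    """Estimated share of usability problems uncovered by n_users,
    per the Nielsen/Landauer problem-discovery model: 1 - (1 - p)**n.
    p is the average per-participant detection probability."""
    return 1 - (1 - p) ** n_users

if __name__ == "__main__":
    # With the commonly cited p = 0.31, five users already surface
    # roughly 84% of issues - hence the small-batch, iterate-often advice.
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> ~{proportion_found(n):.0%} of issues found")
```

Note the diminishing returns: going from 5 to 15 users buys relatively little within a single round, which is why budget is usually better spent on additional iterations (or additional user groups) than on more participants per round.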