June 11, 2014
In my work, I'm often asked for benchmarks. As an industrial engineer, the term benchmark means a standard of comparison. For example, an organization's full-time equivalents per adjusted occupied bed may be 5.23, and that may represent the 45th percentile. Benchmarks can tell us how we stack up compared with other organizations that may have business functions similar to ours. Are we doing better, the same or worse than our peers?
There is more to benchmarking than meets the eye. First, we have to define the terms that we are benchmarking. For example, what is a full-time equivalent, and how do we calculate it? A full-time equivalent converts worked hours into the number of full-time employees required to achieve those hours. I've used worked hours divided by 2,080. The 2,080 number comes from 40 hours per week multiplied by 52 weeks per year. Some people use other numbers that account for vacations and holidays.
Another question is whether to count salaried hours worked or salaried hours paid. Since salaried staff are paid the same amount regardless of the number of hours worked, most benchmarks use paid hours for salaried staff instead of actual hours worked. A clear definition is the essential basis of benchmarking.
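The FTE arithmetic described above can be sketched as a small function. This is an illustrative sketch, not any particular system's implementation; the function name and the staff-hours figures are hypothetical, and it assumes the 2,080-hour divisor (40 hours/week × 52 weeks/year) mentioned in the text.

```python
def fte(hours: float, annual_hours: float = 2080.0) -> float:
    """Convert total hours into full-time equivalents.

    2,080 = 40 hours/week x 52 weeks/year; some organizations
    use a lower divisor that accounts for vacations and holidays.
    """
    return hours / annual_hours

# Per the convention above, use worked hours for hourly staff and
# paid hours for salaried staff. These figures are hypothetical.
hourly_worked_hours = 52_000
salaried_paid_hours = 10_400

total_ftes = fte(hourly_worked_hours) + fte(salaried_paid_hours)
print(total_ftes)  # 30.0
```

Note how switching the divisor (say, to 1,950 to net out vacation and holidays) changes the FTE count, which is exactly why a shared definition matters before comparing numbers across organizations.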
There are different types of benchmarks as well. Financial, operational, satisfaction and quality are a few that come to mind. Each has many potential sources for data. For example, benchmarking financial and operational metrics may be done through systems such as the University HealthSystem Consortium (UHC) for academic medical centers, the Children's Hospital Association for pediatric organizations, and Thomson Reuters Action O-I. Which system is best depends on the type of organization. For example, it probably wouldn't be a good idea to benchmark a 20-bed critical access hospital against an academic medical center.
This brings me to the next layer of the benchmarking onion: with whom should we compare ourselves? According to its website, UHC represents 120 academic medical centers and 299 academic medical center affiliates. Choosing to benchmark with UHC is just the first step. Next, you have to select a compare group for the organization. For example, perhaps the organization is a pediatric academic medical center, which would narrow the compare group to about 50 facilities or fewer. Considering multiple compare groups is another strategy, especially for organizations contemplating new markets or service lines.
The fourth step is interpretation. Once the data has been defined, a system selected, and a compare group identified, the data is received. While interpreting the data doesn't require a degree in statistics, it does require a bit of thought and understanding of what went into the data. As an example, an organization may set a goal to be above the 90th percentile in all "key" metrics. However, being above the 90th percentile isn't always a good thing. In our earlier example of FTEs per adjusted occupied bed (FTEs/AOB), from a financial perspective this number should be lower, not higher. Metrics such as patient satisfaction should be higher, so a goal of the 90th percentile may be appropriate.
Not only do we need to know where we currently stand, how we compare to others, and the goal we want to accomplish, we also have to create a plan to get there. At this point, finding who is performing well and contacting those organizations are good ways to figure out how they are doing it. There's no need to reinvent the wheel. Let's share our successes, as a rising tide will lift all boats.
Amanda Mewborn is an industrial engineer, registered nurse and lean black belt who works as a senior healthcare operational planner at Perkins+Will. She focuses on improving healthcare efficiency, quality and experience.
Quality News Today is an ASQ member benefit offering quality-related news from around the world every business day.