Ideas for effective communication of statistical results
by Christine M. Anderson-Cook
Recently, I found myself in an all-day meeting, presenting and listening to other presentations that summarized findings from many studies. Most of the talks had a similar mandate: Describe the results of a statistical analysis using data collected through sampling or a designed experiment.
The presentations’ goals were to share findings and equip a team of managers to make decisions about recommended next steps for a complex project. As one of the presenters, I found it to be a challenge to prepare, and I felt constrained by the tight presentation timelines. It required careful thought to tell a coherent and relevant story that would be most helpful to the decision makers.
Other presenters also seemed to struggle with how best to deliver the message. As I reflected on a hectic day, several general themes emerged based on what had gone well and not so well. Because many of us find ourselves in these situations as part of our professional work, here are several recommendations for effectively presenting statistical results:
Determine in advance what type of information is expected and tailor your message to match these expectations.
In some presentations, the overall goal is to provide a definitive result (something along the lines of, "Based on this study, it is clear that this is the logical next step, and this is why"), while in other presentations, the goal is to provide information that facilitates an informed discussion (something such as "There are several logical alternatives that all might make sense depending on priorities. Here are the pros and cons of each …").
As a wise colleague once told me, never go into a meeting without knowing what message you want the audience to hear, and tailor the presentation to match that message. In several of the presentations, there were awkward moments when one of the decision makers asked, "So what should we do?" and the presenter clearly had never thought about what the answer might be. A standard adage of excellence in communication is to start preparing with the question, "What should the audience take away from the talk?" and build everything around that message. This cornerstone of a good presentation is particularly relevant in this context.
Connect to the practical implications of the analysis.
While the statistical details of the analysis are interesting to statisticians, everything that you present to decision makers should connect to the message they are most interested in: what decision to make. If the details do not naturally provide a context for the decision or caveats for how the decision might be affected, they should likely be omitted. What to discuss and how to present it should all pass the relevance test.
Prepare your primary script, but be prepared to adapt and respond to audience feedback.
Having just said that preparation is key, it is also essential to be comfortable with a shift in focus if questions arise. Backup materials with additional details for anticipated diversions allow timely discussion of key questions, so that decision makers hear the details they need to act on the results presented.
For this to happen, it is important to think through which triggers in the main slides could spawn additional discussion. Common sidebars include statistical technical details, issues with assumptions, details about how the data were collected (including any unusual occurrences) and connections to other studies. The ability to pull up a relevant slide to fill in a necessary element can keep the planned message on track.
Ensure that the analysis matches the study’s goals.
Clearly, this point must be considered much earlier than at the presentation stage, but a clear understanding of the mandate of the analysis will help to ensure that a suitable statistical analysis strategy is selected. For example, if we are comparing two populations, standard hypothesis testing¹ assumes that the population characteristics of interest are the same until there is sufficient evidence that they are different. Alternatively, an equivalence test² assumes that the characteristics are different until there is enough evidence to treat them as equivalent.
Choosing the one that makes the most sense for evaluating the two populations makes a critical difference in how we proceed. The type I and II error characteristics and the role of sample size vary dramatically under the two testing strategies. Presenting results goes much more smoothly if the right analysis strategy has been selected.
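To make the contrast between the two testing strategies concrete, here is a minimal sketch in Python (not from the column) comparing a standard two-sample t-test with a two one-sided tests (TOST) equivalence procedure. The simulated data and the equivalence margin `delta` are illustrative assumptions; in practice the margin comes from subject-matter knowledge.

```python
import numpy as np
from scipy import stats

# Illustrative data: two populations whose true means differ slightly (assumed values)
rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=1.0, size=30)
b = rng.normal(loc=10.1, scale=1.0, size=30)

# Standard hypothesis test: H0 says the means are EQUAL;
# a small p-value is evidence that they differ.
t_stat, p_diff = stats.ttest_ind(a, b)

# Equivalence test (TOST): H0 says the means differ by at least delta;
# a small p-value is evidence that they are practically equivalent.
delta = 0.5  # equivalence margin (assumed for illustration)
p_lower = stats.ttest_ind(a, b - delta, alternative="greater").pvalue  # H1: diff > -delta
p_upper = stats.ttest_ind(a, b + delta, alternative="less").pvalue     # H1: diff <  delta
p_equiv = max(p_lower, p_upper)

print(f"standard test p = {p_diff:.3f}  (H0: means equal)")
print(f"equivalence test p = {p_equiv:.3f}  (H0: means not equivalent)")
```

Note how the burden of proof flips: with small samples, the standard test tends to fail to reject (suggesting "no difference"), while the equivalence test tends to fail to reject as well (refusing to declare "equivalent") — which is exactly why sample size plays such different roles under the two strategies.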
Clearly describe how the data were collected, what analysis was used and assumptions for the method in the context of this particular study.
It’s tempting to jump right to the results of an analysis, but it is important to talk about the pedigree of the data.³ Often, there are nuances about how the data were collected that make an important difference when we want to interpret the results. Perhaps a classic example of this comes from survey sampling: Small differences in the sampling frame (the list of items from which the sample was selected) can substantially impact which segments of the target population may have been systematically missed.
The fundamentals of the analysis, as well as the assumptions of the approach, should all be itemized and discussed. Rarely do we have an ideal match of all analysis assumptions with what was present in our data. Discussing the mismatches between the two—and what is known about the robustness of the results when assumptions are violated—should be included to provide some context for possible issues with the analysis.
Any limitations of the analysis also should be discussed. Framing the assumptions of the method in the context of the application and providing visualizations of their assessment are key to giving decision makers a fair opportunity to evaluate any serious issues.
Be strategic about the sequence of presentation of material.
Remember getting tests back in school? The instructor sometimes would hand back the tests and then talk through the solutions. Alternatively, they first might talk through the solutions and then return the tests. Human nature is such that order makes a difference.
In the first case, many students would not actually hear much of the discussion about the solutions because they were so distracted by seeing their test scores that they stopped listening. Analogous to getting a test back, decision makers are primed to hear the bottom line. Hence, it is beneficial to talk about caveats, assumptions and fundamentals of the test before the audience is absorbed in processing what the results of the analysis mean for the path forward.
It is natural to talk about the background first rather than leaving these elements to a post-result footnote. When the overall recommendation or result is delivered, the limitations and caveats should be included so that the bottom line stays connected to any reservations you may have about the match of data to test.
Interpret the results in the context of the study without statistical jargon.
More than once in my all-day meeting, I heard the overall conclusion delivered as "And so we reject the null hypothesis." End of story. This presentation of results is predicated on a high level of comfort with the analysis setup—something that may represent a bold assumption for many managers—and also may contribute to relegating the statistician to the bearer of analysis results, instead of being considered a full team member. The results should be clearly stated—minus jargon—with careful interpretation of what the conclusions mean for the decision under consideration.
Discuss other factors and their potential impact on conclusions.
Rarely does a single study contain all of the information and contributing context for making the decision, and results often contain some shades of gray—not just a completely clear outcome. While there may be a single statistical conclusion, other mitigating factors such as cost, complications with implementing solutions and logistical constraints should all be presented to complement the statistical result of the study.
Not only will presenting the broader context provide a more balanced and realistic view of the decision to be made, it also will likely garner respect from the managers for the presenter’s grasp of the complexity of the choices available and how the analysis fits into the big picture.
Effective presentation of statistical results to those with less statistical training—including managers and decision makers—requires planning, anticipation and thoughtful delivery. In addition, talking about the statistics in an approachable and down-to-earth manner will make the message clearer. Hopefully, some of the ideas in this column might help to showcase your efforts and analyses.
- Douglas C. Montgomery and George C. Runger, Applied Statistics and Probability for Engineers, fourth edition, Wiley, 2007, pp. 354-360.
- Stefan Wellek, Testing Statistical Hypotheses of Equivalence and Noninferiority, second edition, Chapman & Hall/CRC, 2010, pp. 119-126.
- Ronald Snee and Roger Hoerl, "Inquire on Pedigree," Quality Progress, December 2012, pp. 66-68.
Christine M. Anderson-Cook is a research scientist in the statistical sciences group at Los Alamos National Laboratory in Los Alamos, NM. She earned a doctorate in statistics from the University of Waterloo in Ontario. Anderson-Cook is a fellow of both ASQ and the American Statistical Association.