Meta-summaries for COVID-19 vaccine safety research

November 21st, 2022

By Spencer Williams

How can we improve people's awareness of and trust in COVID-19 vaccine safety research?

It goes without saying that the COVID-19 pandemic has been a severe public health crisis. Even though safe, effective vaccines are available, a lack of trust in those vaccines has led to reduced uptake, which in turn has led to excess hospitalization and death [1]. To address this, we wanted to design ways of summarizing the state of research on COVID-19 vaccine safety in an informative and trustworthy way.

To do that, we needed to better understand the needs of vaccine-hesitant people. Past research has emphasized the importance of empathy, respect, and "facts over exhortations" [2] when communicating about vaccines, so we aimed to design research summaries that provided what various vaccine-hesitant groups wanted to know, in an easy-to-understand, straightforward way. We began with a set of co-design interviews (n = 22 vaccine-hesitant individuals recruited from MTurk), derived a list of design requirements to meet those needs, and iteratively designed vaccine research "meta-summaries" that provided transparent overviews of vaccine safety research, along with relevant metascientific information.

Here, I'll walk through the outcomes of those interviews and co-design sessions, our design itself, and its impact.

Co-design interviews

For details on the methods, you can check out the full paper, but in general, we started with a quick interview about participants' experiences looking for information on the vaccines, showed a few wireframe prototypes of potential research summaries, and had them sketch their own ideal designs.

First, we found that while some of our participants (a mix of fence-sitters and more hardline vaccine skeptics) saw the value in scientific studies as information sources, there were a number of barriers to using those studies to answer their questions about vaccine safety:

  • Papers were hard to access (they were often paywalled)
  • Papers were difficult to understand for non-experts
  • Papers were overly detailed or "boring," making it hard to extract key takeaways

In terms of participants' information needs, there were two main categories. First, participants who already had some trust in scientific institutions were mainly concerned with the quantity and consistency of vaccine research studies. How much data is there? Do the studies all tell the same story? This weight-of-evidence information was considered useful in making judgments about vaccine safety overall.

However, participants who had less trust in science were also concerned about whether the underlying studies were credible in the first place. For them, credibility signifiers could include things like funding source (pharmaceutical companies vs. government agencies vs. universities vs. non-profits...), country (some felt US-based studies were untrustworthy), or a given author's or lab's previous record of research (are they a domain expert? do they usually publish results in one direction or another?). These participants needed to know the studies themselves could be trusted before worrying about the quantity or consistency of the data.

By combining these findings with more low-level design details based on our participants' sketches, we developed a list of design requirements for meta-summaries of vaccine safety research:

  • Provide simple, concise, text-based summaries of the information.
  • Provide interactions for details-on-demand, to provide deeper insights for those who want to interrogate the literature and ensure credibility.
  • Visually convey the quantity of research.
  • Visually convey the consistency of research.
  • Provide key metascientific signals of credibility (e.g. funding source).
  • Signal that the research displayed is representative of the full body of COVID-19 vaccine safety research.

A meta-summary of COVID-19 vaccine safety research

Based on those design requirements, we developed interactive meta-summaries of COVID-19 vaccine safety research. We created two versions: one that summarized only vaccine safety research ("safety-only"), and one that also included information about vaccine efficacy ("full version"). In general, these designs followed the requirements above, providing high-level messages about the quantity and consistency of research, highlighting credibility signifiers (i.e., funding source), and providing details on demand for users who wanted to query the papers more deeply.

(If the Figma links go down at some point, see the full paper for details on these designs.)

Importantly, we were also careful to frame our design as providing useful information, but not telling users what we think they should do with that information. This approach has been shown to reduce psychological reactance, a phenomenon where people react negatively to messages if they feel their freedom is being threatened. Our messaging was adapted from prior work [3].

Evaluation

We tested both of our designs against a control condition using information from the CDC page about vaccine safety research (see full paper). Testing with a sample of unvaccinated participants on MTurk (n = 863), we found that those who saw our experimental conditions felt there was more vaccine safety research than they'd thought (p = .006, eta-squared = .012), that the research was more consistent than they'd thought (p < .001, eta-squared = .022), and that the research showed vaccines were safer than they'd thought (p < .001, eta-squared = .034), compared to those who saw the CDC version. Moreover, participants also trusted our meta-summaries more than the CDC information (p = .035, eta-squared = .008). In general, these were small but (usually) highly significant effects.
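For readers less familiar with eta-squared: it is simply the proportion of total variance in an outcome explained by condition, so eta-squared = .012 means about 1.2% of the variance. A minimal sketch of the computation for a one-way design (using hypothetical toy ratings, not our study's data):

```python
import numpy as np

def eta_squared(groups):
    """Eta-squared for a one-way design: SS_between / SS_total,
    i.e. the share of total variance explained by group membership."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Toy example: perceived-safety ratings under a control vs. an
# experimental condition (hypothetical numbers for illustration).
control = np.array([3.0, 4.0, 3.5, 4.5])
treatment = np.array([4.0, 5.0, 4.5, 5.5])
print(eta_squared([control, treatment]))
```

Values near zero mean condition explains almost none of the variance, which is why effects like ours read as small even when p-values are low in a large sample.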

However, there was no effect of our meta-summaries on participants' concern about COVID-19 vaccine safety overall, their trust in science overall, or their intention to get vaccinated.

Finally, we ran a follow-up study one week after deploying our first evaluation, where we asked participants how they may have thought about or used the information they'd seen over the course of that week. Overall, they were more likely to have discussed our meta-summaries with other people after reading them, compared with the CDC version (see the table below for the logistic regression).

Variable       Beta    SE     p        OR
Intercept     -1.28    0.17   < .001   —
Safety only    0.51    0.24   .031     1.67
Full version   0.40    0.24   .098     1.49
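As a sanity check on the table: in a logistic regression, each odds ratio is just the exponentiated coefficient, and the intercept can be converted into the baseline probability of discussion in the reference (CDC) condition. A quick sketch, assuming standard dummy coding with the control condition as the reference level:

```python
import math

def odds_ratio(beta):
    """Odds ratio for a logistic-regression coefficient."""
    return math.exp(beta)

def prob_from_logit(logit):
    """Convert a log-odds value into a probability."""
    return 1.0 / (1.0 + math.exp(-logit))

print(round(odds_ratio(0.51), 2))        # 1.67, the "safety only" OR
print(round(odds_ratio(0.40), 2))        # 1.49, the "full version" OR

# Baseline probability of discussing the information in the
# control (CDC) condition, from the intercept:
print(round(prob_from_logit(-1.28), 2))  # roughly 0.22
```

So, read directly from the coefficients, the safety-only condition raised the odds of discussing the information by about 67% relative to the CDC baseline.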

Closing thoughts

Overall, we were able to build meta-summaries of COVID-19 vaccine safety research that met the needs of vaccine-hesitant people more than the then-current CDC page did. Our approach improved people's awareness and understanding of that body of research, in a way they felt was more believable and accurate. There was even a trickle-down effect, where participants were more likely to talk to others about it too.

However, our meta-summaries didn't convince vaccine-hesitant people to get vaccinated, suggesting a need for future work. One problem was that our summaries didn't significantly affect participants' trust in the underlying science, which may be a key prerequisite for changing vaccination intent.

One promising future avenue for this kind of research could be to better empower people to assess the scientific literature themselves. Feelings of powerlessness have been shown to be significantly associated with conspiracy beliefs [4], suggesting that interventions to empower people may reduce their susceptibility to that kind of thinking. By guiding skeptical audiences to critically but competently evaluate the studies provided, or to use the metascientific information we provide as heuristics (e.g., for research quantity or credibility), it may be possible to further increase trust in and understanding of science using a metascience-based approach like ours.

In general, we have shown that direct summaries of scientific research can be a useful way of communicating about vaccine safety. Past work has suggested that scientific consensus messaging should focus more on why scientific consensus exists, rather than just asserting that it does (e.g. "97% of scientists believe in anthropogenic climate change") [5]. By providing a more transparent look at the weight of scientific evidence, we hope to build public understanding of controversial and/or important scientific topics in a clear and trustworthy way.

References

[1] Cuadros, D. F., Miller, F. D., Awad, S., Coule, P. & MacKinnon, N. J. Analysis of vaccination rates and new COVID-19 infections by US county, July–August 2021. JAMA Netw. Open 5(2), e2147915–e2147915 (2022)

[2] Palmedo, P. C., Rauh, L., Lathan, H. S. & Ratzan, S. C. Exploring distrust in the wait and see: Lessons for vaccine communication. Am. Behav. Sci. 1, 27642 (2021)

[3] Richards, A. S., Bessarabova, E., Banas, J. A. & Bernard, D. R. Reducing psychological reactance to health promotion messages: Comparing preemptive and postscript mitigation strategies. Health Commun. 37(3), 366–374 (2020)

[4] Douglas, K. M., Sutton, R. M. & Cichocka, A. The psychology of conspiracy theories. Curr. Dir. Psychol. Sci. 26(6), 538–542 (2017)

[5] Landrum, A. R. & Slater, M. H. Open questions in scientific consensus messaging research. Environ. Commun. 14(8), 1033–1046 (2020)