Article Text

Informed Health Choices media intervention for improving people’s ability to critically appraise the trustworthiness of claims about treatment effects: a mixed-methods process evaluation of a randomised trial in Uganda
Daniel Semakula,1,2 Allen Nsangi,1,2 Andrew Oxman,3 Claire Glenton,3 Simon Lewin,3 Sarah Rosenbaum,3 Matt Oxman,3 Margaret Kaseje,4 Astrid Austvoll-Dahlgren,5 Christopher James Rose,3 Atle Fretheim,3 Nelson Sewankambo1

Affiliations:
  1. Makerere University College of Health Sciences, Kampala, Uganda
  2. Institute of Health and Society, Faculty of Medicine, Universitetet i Oslo, Oslo, Norway
  3. Norwegian Institute of Public Health, Oslo, Norway
  4. Tropical Institute of Community Health and Development, Kisumu, Kenya
  5. East and South, Regional Centre for Child and Youth Mental Health and Child Welfare, Oslo, Norway

Correspondence to Andrew Oxman; oxman{at}online.no

Abstract

We developed the Informed Health Choices podcast to improve people’s ability to assess claims about the effects of treatments. We evaluated the effects of the podcast in a randomised trial.

Objectives We conducted this process evaluation to assess the fidelity of the intervention, identify factors that affected the implementation and impact of the intervention and could affect scaling up, and identify potential adverse and beneficial effects.

Setting The study was conducted in central Uganda in rural, periurban and urban settings.

Participants We collected data on parents in the intervention arm of the Informed Health Choices study, which evaluated an intervention to improve parents’ ability to assess claims about treatment effects.

Procedures We conducted 84 semistructured interviews during the intervention, 19 in-depth interviews shortly after, two focus group discussions with parents, one focus group discussion with research assistants and two in-depth interviews with the principal investigators. We used framework analysis to manage qualitative data, assessed the certainty of the findings using the GRADE-CERQual (Grading of Recommendations, Assessment, Development and Evaluations-Confidence in the Evidence from Reviews of Qualitative Research) approach, and organised findings in a logic model.

Outcomes Proportion of participants listening to all episodes; factors influencing the implementation of the podcast; ways to scale up; and any adverse and beneficial effects.

Results All participants who completed the study listened to the podcast as intended, perhaps because of the explanatory design and the recruitment of parents with positive attitudes. This was also likely facilitated by the research assistants who delivered the podcast, and by providing the participants with MP3 players. The podcast was reportedly clear, understandable, credible and entertaining, which motivated participants to listen and eased implementation. No additional adverse effects were reported.

Conclusions Participants experienced the podcast positively and were motivated to engage with it. These findings help to explain the short-term effectiveness of the intervention, but not the decrease in effectiveness over the following year.

  • process evaluation
  • fidelity
  • podcast
  • barriers
  • facilitators
  • scaling-up
  • adverse effects
  • critical appraisal
  • evidence-informed decision-making
  • edutainment
  • health communication
  • media interventions

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Strengths and limitations of this study

  • The study employed multiple methods, both quantitative and qualitative, which allowed us to understand the findings better.

  • Numerous interviews of different kinds (eg, short postepisode evaluation interviews, in-depth interviews and focus group discussions) provided rich data from which to draw conclusions.

  • We were not able to interview participants who dropped out of the main trial. There is a possibility that those who dropped out might have had different experiences.

Background

Claims about what we should do to improve or maintain our health are abundant in the mass media and elsewhere. Some are about the effects of contemporary medicines and surgical interventions, while others are about other types of interventions, such as traditional, alternative and palliative therapies. For example, there are numerous unfounded claims in the media that vaccines cause autism and a host of adverse effects, claims that herbal remedies have no adverse effects because they are ‘natural’, and claims that using antiretroviral drugs harms more than it helps. Most people lack the skills needed to critically appraise the trustworthiness of claims about the benefits and harms of treatments.1–4 For example, many people trust their own or acquaintances’ lived experiences with health and illness more than research evidence,5 and many commonly overestimate the benefits and underestimate the harms of treatments.6 7 People who are unable to critically assess treatment claims are prone to making inappropriate health choices or using interventions inappropriately. Indeed, many people make decisions based on untrustworthy claims every day. For example, exaggerated and unfounded fears about purported side effects have led to vaccine hesitancy and non-vaccination in many parts of the world.8–10 Acting on unreliable claims can result in unnecessary suffering and death,11 and in substantial resources being wasted on ineffective and sometimes harmful treatments.12 Conversely, failure to act on trustworthy information results in inefficient use of effective health services.13 A recent study found that patients who declined treatments with known effectiveness and safety profiles had comparatively lower survival rates.14 Unfortunately, many programmes simply tell people what to do, without empowering them to critically appraise health-related information. People need support to develop the skills necessary to critically assess the trustworthiness of claims about treatment effects and to make informed health choices.

To respond to this need, the Informed Health Choices (IHC) project15 16 developed and evaluated materials to enable people to understand and apply Key Concepts that are necessary for critically appraising claims about treatment effects and making informed health choices.15 16 By ‘treatment’ we mean any action intended to maintain or improve the health of individuals or communities.

As part of the IHC project, we prepared a podcast (box 1) to help improve people’s ability to assess the trustworthiness of claims about treatment effects.17 It was designed for the parents of primary school children. Each episode comprises a story (radio theatre) about a treatment claim, a message about one Key Concept that is important for assessing that claim, an explanation and an example illustrating the concept. The podcast was developed iteratively, using a human-centred design approach.18 We used feedback from the target audience on early versions to ensure that they experienced the podcast positively. The development process is described elsewhere.17

Box 1

The Informed Health Choices podcast

The Informed Health Choices (IHC) podcast was designed to teach the parents of primary school children to assess claims about treatment effects and to make informed health choices. Each episode included a short story with an example of a treatment claim, a simple explanation of a concept used to assess that claim, and another example of a claim illustrating the same concept with its corresponding explanation. In each story, there was a question about the trustworthiness of a claim, which was resolved by applying the relevant Key Concept.15 All episodes had a conclusion with a take-home message emphasising the concept. The examples used in the podcast were claims about treatments for health conditions such as malaria, diarrhoea and HIV/AIDS, and claims about some common practices, such as contraception, all of which were of interest to our target audience at the time.

The topics and claims were identified by scanning recent mass media reports and interviewing parents. There are eight main episodes in the series, covering the nine Key Concepts listed below. Each episode lasted about 5 min. One of the episodes (episode one) covered two closely related Key Concepts (1 and 9 below). Two additional episodes introduced the podcast and summarised the key messages from the first eight episodes, respectively. The final structure, content and presentation of each episode were developed using a human-centred design approach.17 This involved many iterations informed by feedback from various stakeholders, including parents in our target audience, on the content to be included and its presentation in each episode. Each episode of the podcast was produced in two languages: English and Luganda. Parents could choose to listen to the podcast in either language, according to their preference.

The nine Key Concepts included in the podcast:17 46

  • Treatments may be harmful. People often exaggerate the benefits of treatments and ignore or downplay potential harms. However, few effective treatments are 100% safe. (Included in Episode 1)

  • Personal experiences or anecdotes (stories about how a treatment helped or harmed someone) are an unreliable basis for determining the effects of most treatments. (Included in Episode 3)

  • A treatment outcome may be associated with a treatment, but not caused by the treatment. (Included in Episode 4)

  • How widely or how long a treatment is used is not a reliable indicator of how beneficial or safe it is. Treatments that have not been properly evaluated but are widely used or have been used for a long time are often assumed to work. Sometimes, however, they may be unsafe or of doubtful benefit. (Included in Episode 5)

  • Opinions of experts or authorities do not alone provide a reliable basis for deciding on the benefits and harms of treatments. Doctors, researchers, patient organisations and other authorities often disagree about the effects of treatments. This may be because their opinions are not always based on systematic reviews of fair comparisons of treatments. (Included in Episode 6)

  • Evaluating the effects of treatments depends on making appropriate comparisons. If a treatment is not compared to something else, it is not possible to know what would happen without the treatment, so it is difficult to attribute outcomes to the treatment. (Included in Episode 2)

  • Comparisons of treatments must be fair. Apart from the treatments being compared, the comparison groups need to be similar at the beginning of a comparison (ie, ‘like needs to be compared with like’). (Included in Episode 7)

  • The results of single comparisons of treatments (trials) can be misleading. A single comparison of treatments rarely provides conclusive evidence, and results are often available from other comparisons of the same treatments. These other comparisons may have different results or may help to provide more reliable and precise estimates of the effects of treatments. (Included in Episode 8)

  • Because treatments can have harmful effects as well as beneficial effects, decisions should not be based on considering only their benefits. Rather, they should be informed by the balance between the benefits and harms of treatments. Costs also need to be considered. (Included in all Episodes)

You can download the English version of the podcast via Soundcloud, or listen to it here: https://www.youtube.com/watch?v=_QVdkJIdRA8&list=PLeMvL6ApG1N0ySWBxPNEDpD4tf1ZxrBfv

Checklist

We also made a checklist summarising the key messages from the podcast.

In a randomised trial, we evaluated the effects of the IHC podcast on parents’ ability to assess claims about the benefits and harms of treatments.19 In a linked trial, we assessed the effectiveness of the IHC primary school resources in improving the ability of children in the fifth year of primary school (age 10–11) to assess treatment claims.20 Participants in the podcast trial and the process evaluation were parents of primary school children in schools in the central region of Uganda that participated in the IHC primary school resources trial. Results from both trials initially showed a large improvement in participants’ ability to assess the trustworthiness of treatment claims. However, follow-up assessments (described elsewhere) revealed that parents’ critical appraisal skills decayed substantially over the following year,21 whereas the children’s and their teachers’ abilities did not.22 In that study, skills retention (or decay) was assessed by comparing the intervention group’s scores immediately after the intervention with the same group’s scores a year later. These results are reported in greater detail elsewhere.21 The overall goal of the process evaluation was to provide information that could be used to explain the results observed in the trials (impact) and to identify other effects not reported in the trial. Whereas randomised trials are useful for answering questions about the effect of an intervention, they may not provide sufficient evidence about how an intervention works in a specific setting, why it does or does not produce its effects, and why interventions might work differently in different contexts. This is even more relevant for complex interventions like the IHC media resources, which have multiple interacting components. A process evaluation done alongside a randomised trial can provide useful evidence about the implementation process and other factors that help to explain the effects of an intervention.23 24 Some of the text in the background and methods sections of this manuscript reproduces information we reported in the protocol for this study, which is available elsewhere.18 We reuse it here for the benefit of readers who may not have access to that information.

The specific objectives of this process evaluation were to:

  1. Assess the fidelity of the intervention (whether it was delivered and used as intended).

  2. Identify factors affecting the implementation and impact, and potentially scaling up of the intervention.

  3. Identify other potential adverse and beneficial effects of the intervention.

The second objective above combines the second and third objectives in the study protocol.18

Methods

As described in detail in the study protocol, this was a multimethod study using both qualitative and quantitative data.18 Our approach is summarised in figure 1. The podcast trial employed 29 research assistants who visited the participants and played the podcast episodes at the participants’ preferred listening venue and time. Participants in the trial could choose whether to listen to the podcast in English or Luganda. At each visit, the research assistants played one or two episodes of the podcast. In addition, all participants were given the complete podcast on MP3 players to play at their convenience. In the podcast group, 288 of 334 (86%) participants completed the trial. In the control group, which listened to a series of public service announcements about health issues delivered in the same way, 273 of 341 (80%) participants completed the trial. Data for the process evaluation were collected from participants in the intervention group who completed the trial. The research assistants recorded when each participant completed listening to each episode, and the number of times each participant reported independently listening to each episode.

Figure 1

Schematic overview of the process evaluation.

Frameworks underlying this process evaluation

We used three frameworks to guide the collection and analysis of the data. We adapted Carroll and colleagues’ framework for implementation fidelity25 to explore factors related to fidelity (table 1). We developed a framework for factors that could affect the implementation, impact or scaling up of the intervention (table 2) by reviewing relevant frameworks for health promotion activities, mass media campaigns, health innovations, health education and guideline implementation,26–31 and the framework that we used in the process evaluation of the IHC primary school resources.32

Table 1

Considerations for assessing fidelity of the podcast

Table 2

Factors that could affect the impact of the podcast

We developed a list of potential adverse and beneficial effects for the third framework (table 3). That list was based on pilot and user testing of the podcast and the IHC primary school resources, discussions with other researchers about potential benefits and harms, and wider discussions about the benefits and harms of interventions to promote evidence-informed decision-making.

Table 3

Potential adverse and beneficial effects of the podcast

Qualitative data collection

We included participants who chose to listen to the podcast in either English or Luganda. To capture the opinions, views and experiences of a wide range of participants, we purposively sampled parents according to education level (primary, secondary and tertiary), and whether their children were in a school that was in the intervention or control arm of the IHC primary school trial.33

We used a variety of methods to collect data, including brief semistructured interviews during the intervention, in-depth post-intervention interviews, observations and focus group discussions. We pretested all data collection tools and research assistants received training on methods for qualitative data collection. We conducted mock interviews among investigators and research assistants to familiarise ourselves with the interview questions and to ensure consistency among interviewers and across questions.

Post-episode and post-intervention interviews with parents

At the end of each visit, the research assistants conducted brief semistructured interviews with parents. Using an episode evaluation form,33 they asked the parents for their immediate perceptions of the episode. After participants had listened to all of the episodes, we conducted in-depth interviews with some of them. These in-depth interviews were recorded and transcribed.

Observations

The research assistants delivering the podcast recorded observations made at each visit in a study log, which were discussed at weekly meetings. The principal investigators also kept a notebook where they recorded observations from field visits, informal consultations, weekly meetings and other contacts with participants and research assistants during and after the trial.

Focus group discussions with parents and research assistants

We conducted a series of focus group discussions, with four to six participants in each group. Each group was moderated by a facilitator using a guide33 and assisted by an observer who took notes. These were also recorded and transcribed. We conducted one focus group discussion with the research assistants to explore their experiences delivering the podcast and their interactions with parents.

Interviews with the lead investigators

DS and AN were responsible for implementing the intervention. Given the importance of their role in the trial and the process evaluation, two of the other investigators (CG and SL) interviewed them to explore their thoughts and experiences and how these may have influenced decisions they made in the process evaluation.

In total, we conducted 84 brief semistructured interviews at the end of visits during the intervention; 20 in-depth postintervention interviews; two focus group discussions with parents; one focus group discussion with research assistants; and two in-depth interviews with the principal investigators. The number of interviews was determined largely by pragmatic considerations. We made a judgement, based on the emerging data, about whether more interviews or focus groups were needed. In making this judgement, we considered the variation in issues emerging from the interviews and focus groups, and the extent to which we were able to explain these variations. We planned not to conduct more than 30 in-depth interviews and six focus group discussions, mainly because of time and resource constraints.34 35

Data analysis

To assess fidelity, we computed the proportion of participants who listened to each episode, among those who completed the evaluation tool used in the IHC podcast trial. We used logistic regression to explore the relationship between listening frequency and participants’ scores on the test used as the primary outcome measure in the trial.
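As an illustration, the sketch below shows one way such an analysis could be run. It is not the authors’ code: the variable names (passed, times_per_day, days_listened) are hypothetical placeholders, the data are simulated, and a binary passing-score outcome is assumed.

```python
# Minimal sketch (assumptions noted above) of a logistic regression of a
# binary passing-score outcome on self-reported listening frequency,
# using simulated placeholder data rather than the trial dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 288  # intervention-arm participants who completed the trial

# Hypothetical data frame: one row per participant.
df = pd.DataFrame({
    "passed": rng.binomial(1, 0.6, n),        # 1 = passing score (assumed)
    "times_per_day": rng.poisson(2.2, n),     # reported mean 2.2 per day
    "days_listened": rng.poisson(4.6, n),     # reported mean 4.6 days
})

# One single-predictor model per listening measure, mirroring the two
# odds ratios reported in the Results section.
for predictor in ["times_per_day", "days_listened"]:
    fit = smf.logit(f"passed ~ {predictor}", data=df).fit(disp=False)
    odds_ratio = np.exp(fit.params[predictor])
    ci_low, ci_high = np.exp(fit.conf_int().loc[predictor])
    print(f"{predictor}: OR {odds_ratio:.2f} "
          f"(95% CI {ci_low:.2f} to {ci_high:.2f})")
```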

To analyse the qualitative data, we used a framework thematic analysis approach, guided by the three frameworks described above.36 This approach includes four stages: familiarisation, coding, charting and interpretation of the data. We applied all three frameworks to the data described above. Two of the investigators (DS and AN) independently read and reread the transcripts from the interviews, focus groups and observations. They then coded the data until all the transcripts had been reviewed. For each framework, the definitions and boundaries of its factors were discussed among the investigators, and the framework was revised in line with categories that emerged from the data. We then charted the data by writing a summary that distilled the findings for each framework factor. Finally, using the summarised data, we explored the range and nature of the findings, grouped them into broader themes and looked for possible explanations.

We summarised the key findings and assessed our confidence in each important finding using the GRADE-CERQual approach, a transparent method for assessing confidence in evidence from reviews of qualitative research.37 GRADE stands for Grading of Recommendations, Assessment, Development and Evaluations; CERQual stands for Confidence in the Evidence from Reviews of Qualitative Research. When applying the GRADE-CERQual approach, we assessed four components: methodological limitations, coherence, data adequacy and relevance, as explained below.

  • Methodological limitations: ‘The extent to which there are concerns about the design or conduct of the primary studies that contributed evidence to an individual review finding’.

  • Coherence: ‘An assessment of how clear and cogent the fit is between the data from the primary studies and a review finding that synthesises that data. By “cogent”, we mean well supported or compelling’.

  • Data adequacy: ‘An overall determination of the degree of richness and quantity of data supporting a review finding’.

  • Relevance: ‘The extent to which the body of evidence from the primary studies supporting a review finding is applicable to the context (perspective or population, phenomenon of interest, setting) specified in the review question’.

Although CERQual has been designed for findings emerging from qualitative evidence syntheses, several components of the approach are suitable for assessing findings from a single study with multiple sources of qualitative data.
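To make the bookkeeping concrete, the sketch below shows one possible way to record these judgements in a structured form. It is illustrative only: the component names follow the CERQual approach, but the example finding, the concern levels and the simple downgrading rule are placeholder choices of ours, not the study’s actual assessments (table 4).

```python
# Illustrative sketch of recording GRADE-CERQual judgements per finding.
# The downgrading rule below is a simplistic placeholder, not part of
# the CERQual guidance itself.
from dataclasses import dataclass, field

COMPONENTS = ("methodological limitations", "coherence",
              "adequacy", "relevance")
CONCERN_LEVELS = ("no or very minor", "minor", "moderate", "serious")
CONFIDENCE = ("high", "moderate", "low", "very low")

@dataclass
class CerqualAssessment:
    finding: str
    concerns: dict = field(default_factory=dict)  # component -> concern level

    def overall_confidence(self) -> str:
        # Placeholder rule: start at high confidence and downgrade by
        # the most serious concern across the four components.
        worst = max(CONCERN_LEVELS.index(level)
                    for level in self.concerns.values())
        return CONFIDENCE[worst]

# Hypothetical example finding (not taken from table 4).
example = CerqualAssessment(
    finding="Participants experienced the podcast as clear and credible",
    concerns={c: "no or very minor" for c in COMPONENTS},
)
example.concerns["relevance"] = "minor"
print(example.overall_confidence())  # -> "moderate"
```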

We used a logic model to organise the findings of the process evaluation together with the findings of the trial. First, DS and AN organised the findings into chains of events that might have led to the outcomes of the trial and the additional outcomes that were explored (table 3). Findings and outcome measures were categorised as attributes of the intervention, effect modifiers, intermediate outcomes, and observed and potential effects. We then organised these elements into chains of events, discussed them and revised them iteratively until there was agreement on a final model.

Patient and public involvement

We had an advisory panel made up of members of the public who deliberated on and advised about different aspects of the study’s implementation. During the design of the intervention (the IHC podcast), members of the public provided feedback that we used to improve the design of the podcast. Some participants helped with recruitment by inviting their colleagues to recruitment meetings. The results of these studies will be disseminated to each group of parents at the schools from which they were recruited.

Results

The main findings, including our confidence in each finding, are summarised in table 4, and organised into a logic model in table 5.

Table 4

Summary of the main qualitative findings

Table 5

Logic model for the factors influencing implementation and effect of the intervention

Fidelity

Almost all participants (99.7%) who completed the trial listened to all the episodes as intended (online supplementary additional file 1). They listened to the podcast on their own an average of 2.2 times per day (SD 1.1) for an average of 4.6 days (SD 2.1) (figures 2 and 3). Participants’ scores on the test used to measure their ability to assess the trustworthiness of treatment claims were associated with the number of times per day (OR 1.3; 95% CI 1.2 to 1.4; p<0.001) and the number of days (OR 1.2; 95% CI 1.2 to 1.3; p<0.001) that they listened to the podcast on their own (figure 4).
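As a worked reading of these odds ratios, assuming a standard single-predictor logistic model with a binary passing-score outcome (our assumption; the paper reports the ORs without stating the model equation):

```latex
% Logistic model relating a passing score to listening frequency x:
\[
\operatorname{logit}(p) = \ln\frac{p}{1-p} = \beta_0 + \beta_1 x,
\qquad \mathrm{OR} = e^{\beta_1}.
\]
% With OR = 1.3 per additional listen per day, each extra daily listen
% multiplies the odds of a passing score by 1.3: odds of 1:1 become
% 1.3:1, ie a probability of 0.50 rises to 1.3/2.3 \approx 0.57.
```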

Supplemental material

Figure 2

Times per day that participants listened to the podcast.

Figure 3

Number of days that participants listened to the podcast.

Figure 4

Explanatory factors: test score by listening frequency.

Factors affecting the implementation, impact and scaling up of the intervention

Findings related to the intervention

All those interviewed described the podcast as valuable. They reported that it was informative and improved their knowledge about assessing health information, and their confidence in challenging wrong beliefs and claims about treatments. Some noted it gave them confidence to discuss health issues with health workers, while others described how it taught them to be more careful in making choices about treatments.

Almost all those interviewed described the podcast as clear. They attributed this to the language—including the dialect, the vocabulary, the presentation style, the familiar setting of the scenarios and illustrations used and the organisation of the content within each episode. Some participants noted that being able to listen to the podcast in Luganda was helpful because it was the language they understood best. They also noted that technical jargon was introduced and discussed in a manner that made it accessible to people with limited or no formal education, and to people without prior experience with the conditions being discussed. They mentioned that within each episode, the organisation of the content made it easier to follow the explanations and key messages, while the friendly demeanour of the characters in the stories made the podcast more understandable and enjoyable. The detailed explanations, the reiteration of the key messages at the end of each episode, and the length of the episodes all reportedly contributed to the podcast’s clarity.

All participants reported that the number of episodes in the podcast was appropriate. While the episodes varied in length, most participants described them as being of appropriate length, although some expressed discontent with episodes they perceived to be short. Research assistants, on the other hand, observed that participants sometimes became tired and seemed bored when they listened to long episodes.

Almost all participants reported that they were able to listen to all the episodes because the research assistants were diligent in visiting and playing the episodes to them. Some noted that the research assistants played back the episodes whenever the participants needed to listen to them again, while others reported asking the research assistants questions about the project, to which they got timely responses.

Participants reported that the organisation of the episodes made it easy for them to follow. This included how each episode could stand alone with a complete message; how the series starts with easier concepts; and how it includes recaps of previous episodes. These attributes reportedly made it easier for them to learn.

The majority of participants reported listening to at least two episodes per week for about 7 weeks using the portable media players. Most said this listening pattern was appropriate, even for those who had busy schedules. Some participants reported continuing to listen to the podcast on their own until they completed the test.

Potential effect modifiers

The participants’ education level ranged from no formal education to tertiary education, with the majority having completed no more than primary school. Most of those who had tertiary education were teachers.

Some participants reported that there were some messages that they would have understood better if they had had more knowledge of mathematics. Specifically, they reported difficulty understanding the Key Concept that small studies can be misleading. However, many of the parents whom we interviewed reported that their level of education did not have a big influence on how they understood the general message of the podcast. This is consistent with there not being a clear association between level of education and the size of effect immediately after the parents listened to the podcast.19

The schools facilitated meetings between parents and the research team. They provided meeting venues and took time off their school programme for the teachers and parents to meet the research team. This collaboration likely lent credibility to the project. It also facilitated engagement and recruitment of parents for the podcast trial. Some schools encouraged children to share with their parents what they were studying, which reportedly raised the parents’ curiosity and interest in the project. Many parents who attended meetings with the research team mentioned that they were eager to learn more about what their children were learning, and how they too could learn. However, many parents of children in the primary school trial, especially fathers, did not attend any meetings, and many who did attend declined to participate. The reasons for this are uncertain.

Almost all participants showed positive attitudes towards learning, science and critical thinking. A few participants, however, expressed discomfort with having to think a lot about treatment options and making choices.

Most participants reported that the portable media players and MP3 players facilitated listening at their convenience, and they did not have major problems using them. Some participants found it helpful for the research assistant to operate the portable media players for them.

Participants were informed that they would be listening to health messages. Some reported that they expected to listen to messages about common health conditions and how to manage them, rather than to messages about how to assess the trustworthiness of treatment claims. Nonetheless, we observed that most participants seemed to understand the purpose of the podcast after listening to it, and most listened to the entire podcast.

Some participants voiced strong beliefs about which treatments are effective, based mostly on their own personal experiences, and remained steadfast in those beliefs even after listening to the IHC podcast, even when those beliefs conflicted with a podcast message. Despite holding such conflicting beliefs, these participants continued to listen to the episodes until the end of the trial.

“Some claims are trustworthy. For example, if a child gets a burn, you simply have to get cooking oil, apply it to the burn wounds, then apply sugar. If you instead apply cold water the child’s burns wounds will develop blisters. If the blisters rupture, we usually apply ash from burnt sisal. That’s all you need to do, and the child will get better.”

Intermediary effects

Almost all participants described the podcast as appropriate for them. They described the stories, examples (conditions, treatments and claims) and the explanations as appropriate and relevant. A few participants—mostly those who had strong beliefs about the examples of treatments—reported that some of the content was inappropriate. For example, some participants argued that using cold water as a first aid treatment of burns was not right and that this example was not helpful.

Participants described the podcast as credible. They attributed this to the quality of the content, the research team and the podcast’s source (Makerere University, the largest and oldest medical school and health research institution in the country).

Most participants reported that they found it easy to listen to the podcast. They reported that it did not take a lot of time and they could listen to it at their own convenience, even while doing other daily activities.

Participants said the podcast was entertaining, informative and engaging. They described how the use of stories made the podcast attractive and non-threatening, and made the explanations easier to understand. Some noted that after listening to one message, the content of that message enticed them to listen to the next one. Participants said the quality of production was good, the content of the episodes was engaging and the song and stories were memorable.

Participants described several motivations for listening: that the podcast was valuable, entertaining and enjoyable and that the information in each episode was relevant and applicable to their lives. Some participants referred to their love for science and health information, or to their personal position and responsibilities in society. Some said they were learning new information and gaining new skills to enable them to understand health claims.

“What motivated me personally was that I was getting exposed to what I had not known before. Also, there was some information we were relying on which now I know was hearsay, but the episodes gave us new knowledge to reflect on what we were hearing.”

During the intervention, participants heard other messages on the radio or television, or by word of mouth. Their engagement with the podcast does not seem to have been affected by other competing messages. On the other hand, some participants reported listening more critically to other health messages.

“The other messages did not interfere with my listening. I had already learned through these messages how someone should arrive at a decision of what treatments to use, so every time we heard a message over the radio we compared what they were saying to what the IHC message said. We started asking if the messages on the radio were trustworthy or whether they were just interested in selling their medicines. So, we compared using the knowledge and skills we learned from the IHC messages.”

Most participants mentioned that the messages in the IHC podcast were not necessarily in conflict with other messages. However, they said the IHC podcast was different because it included sufficient background information that enabled one to learn how to make choices. In contrast, some other health messages were viewed as aiming to convince people to use a particular intervention:

“The difference was that in these (IHC) messages they would give us the good side and the bad side of using certain treatments and encourage us to decide on our own. Other health messages give you only the good side, that if you use this (treatment) you will get cured. These messages taught me that everything can have a good side and bad side. This led me to start thinking more deeply about certain information that we are always being given by others. Why do they only talk about the good side?”

Beneficial and adverse effects

As noted above, a potential effect of listening to the IHC podcast for some parents was being more critical and aware of unreliable health advice. Additionally, some participants reported having learnt to question more, and to think more critically about claims unrelated to health. Some participants mentioned that scientific information could potentially be in conflict with cultural or religious beliefs. However, no participant reported experiencing these conflicts as a result of listening to the IHC podcast. We elicited other potential additional effects using the probes in table 3. However, no other potential beneficial or adverse effects were reported by participants or observers in the trial.

Discussion

Factors that facilitated implementation and effectiveness

The podcast intervention had a large effect initially, with almost twice as many parents in the intervention group having a passing score on the test used to measure their ability to assess treatment claims, compared with the parents in the control group.19 After 1 year, the proportion of parents with a passing score on the same test decreased by one third.21 We found a number of factors that help to explain the initial effectiveness of the intervention. However, because we collected data for the process evaluation during and shortly after the trial, our findings do not help to explain the subsequent decrease.

Almost all participants who completed the study listened to all the episodes. This was, at least in part, because research assistants delivered the podcast to the parents on portable media players and listened to the podcast with them. In addition, providing the participants with MP3 players enabled the participants to listen to each episode more than once, and most participants did so. This almost certainly contributed to the initial effectiveness of the intervention. Moreover, we found associations between the number of times per day and the number of days that participants listened to the podcast and their initial test scores, suggesting a dose–response relationship.

More passive dissemination of the podcast would likely be less effective. On the other hand, the cost of passive dissemination would be substantially lower, and the effectiveness would likely be the same for those who chose to listen to the entire podcast.

Participants valued the podcast because it provided them with new knowledge and skills for assessing health information. They also felt that it was clear, understandable and well organised. Although some participants found some of the episodes too long and confusing, most found the length of the episodes appropriate. They also found the duration of the intervention (about 7 weeks) and the intensity (about two episodes per week) suitable. All these attributes of the intervention are likely to have contributed to its initial effectiveness.

Parents were motivated to participate by headteachers and teachers, whom they trusted. Some parents, whose children were in intervention schools in the IHC primary school trial,20 were motivated by wanting to learn what their children were learning.

For the most part, participants’ education level did not appear to affect their motivation, how they experienced the podcast, or the initial effectiveness of the intervention.19 However, it may have affected retention of what was learnt. Participants with tertiary education retained more of what they learnt than those with primary or no formal education. Many of the participants with tertiary education were teachers, and this might partially explain that finding. A large proportion of teachers in both intervention and control schools had passing scores on the test initially and after 1 year,22 compared with the parents overall.21

Participants had positive attitudes towards learning new information, science and critical thinking. Their positive attitudes likely contributed both to their participating in the trial and to the effectiveness of the intervention. Parents without similar attitudes would be less likely to listen to the podcast and less likely to benefit from listening.

Intermediary effects of the intervention, which contributed to its effectiveness, are largely related to the participants’ experience of the podcast. They found the podcast to be relevant, engaging, credible, easy to listen to and entertaining. These factors motivated them to listen to the podcast and to learn. Moreover, for at least some of the participants, it motivated them to think more critically about treatment claims that they encountered.

Factors that impeded implementation and effectiveness

We identified three factors that may have impeded implementation of the intervention and its effectiveness. First, few or no parents attended meetings or were recruited to participate at some schools. Although this did not affect the effectiveness of the intervention among participants, it is a major impediment to scaling up the intervention.

Second, many participants had prior beliefs about treatments that were in conflict with the key messages of the IHC podcast. Some of those beliefs persisted after listening to the episodes. Frequently, these conflicting beliefs were based on personal experiences of using a treatment (anecdotal evidence). This finding is similar to what was found in the process evaluation of the IHC primary school intervention.38 In that evaluation, the children’s conflicting beliefs were often based on personal experiences, whereas the teachers’ conflicting beliefs were more often based on tradition (treatments that had been widely used for a long time). It is uncertain whether those with conflicting beliefs were less likely to answer questions related to those Key Concepts (box 1) correctly than those who did not have conflicting beliefs. However, strongly held beliefs may be resistant to change, and this could make it difficult to learn new concepts that conflict with those beliefs.39–41

Third, some participants expected to hear messages about the causes and management of common health conditions, rather than messages about how to critically assess the trustworthiness of treatment claims. This could have influenced how they perceived and understood the IHC messages. While this was a problem during the development and early phase of the trial, most participants understood the purpose of the podcast after listening to it, and most listened to the entire podcast.

Factors that might influence scaling up

We identified the following factors that could facilitate scaling up the use of an educational podcast to enable parents to assess the trustworthiness of treatment claims:

  • A well-designed podcast may appeal to many people in the target audience and be convenient.

  • Introducing the IHC podcast through primary schools that are using the IHC primary school resources may be an effective strategy for disseminating the podcast to many parents and others in the community.

  • Ensuring that the podcast is relevant (by using claims that are relevant to the target audience to illustrate the Key Concepts) and that it is entertaining and easy to listen to (by pilot and user testing it) can help to motivate people in the target audience to listen to it.

We identified the following factors that could impede scaling up use of the podcast:

  • Delivery of the podcast by research assistants, which likely contributed to the effectiveness of the intervention, is not feasible on a large scale.

  • Providing parents with portable media players and MP3 players also likely contributed to the effectiveness of the intervention. Access to these devices may limit dissemination of the podcast.

  • The ability to reach parents through schools may depend on how much interest and enthusiasm is shown by head teachers and teachers. This, in turn, may depend on effective outreach to introduce the IHC podcast together with the IHC primary school resources into schools.

  • Many people in the target audience (parents of primary school children) did not attend recruitment meetings and many of those who did attend chose not to participate in the trial. This might be due to many parents not being interested initially in learning about health, science and critical thinking; busy work schedules; or problematic relationships between parents and school authorities.

Potential beneficial and adverse effects of the podcast

Some participants reported that listening to the IHC podcast led them to become more critical and aware of health advice that was given without a basis. This is consistent with the finding in the 1-year follow-up study that parents in the podcast group were more likely to have been sceptical of the last treatment claim that they had heard.21 However, the proportion of participants who responded that they had thought about the basis for the last claim they heard was lower in the podcast group than in the control group. The reasons for this are unclear, and it is uncertain how many participants became more critical of treatment claims initially. Nonetheless, whatever effect the intervention had on participants’ disposition to think critically about treatment claims initially, it appears unlikely to have had a long-term beneficial effect on the disposition of most participants.

Some participants mentioned that there might be a potential for scientific information to conflict with traditional cultural and religious beliefs. However, we did not observe any conflicts, and no participant reported having experienced any as a result of listening to the podcast.

Results in relation to findings from other studies

We found only one systematic review that explored factors influencing the impact of interventions for improving critical thinking. Abrami and colleagues found that instructional interventions for critical thinking can have a positive effect and that the content, style of teaching (pedagogy) and collaboration among learners can influence the impact.42 Our findings are consistent with those of Abrami and colleagues. Although our study did not allow for collaboration among learners as part of the intervention, we found that the nature of the content and how the intervention was delivered likely influenced the impact of the intervention.

Strengths and limitations

Strengths of this study include the use of multiple methods, including a survey of all of the participants in the intervention arm who completed the trial, observations, brief interviews, in-depth interviews and focus group discussions. We used the CERQual approach to make explicit judgements about our confidence in each finding (table 4), and our confidence in most of the findings was moderate or high. We organised those findings in a logic model (table 5), which helps to explain the initial findings of the trial, as well as potential facilitators and impediments to scaling up use of the podcast.

The largely positive findings reflect the value of the iterative, human-centred design approach that we used to develop the podcast.17 43 44 The design process identified and addressed problems with how people in our target audience experienced earlier versions of the podcast, resulting in a podcast that participants in the trial experienced positively.

An important limitation of this study is that all of the data were collected before the results of the 1-year follow-up study were available. Consequently, we did not ask questions specifically about why participants’ ability to think critically about treatment claims decreased substantially after 1 year. We used other available data about the nature of the intervention and how it was implemented to explain this observation. A better approach would have been to interview participants about skills decay after the data from the follow-up evaluation had been analysed.

Another important limitation of this study is that the investigators were responsible for both developing and evaluating the intervention. This could have led us to emphasise participants’ positive experiences of the intervention when collecting and analysing the data. In addition, the participants were aware that the lead investigators (DS and AN) were responsible for the intervention itself. There may therefore have been a social desirability bias, with participants providing responses they thought would please the investigators.45 We tried to address these biases by publishing the protocol for the process evaluation in advance,18 critically reviewing our interpretation of the data, facilitating reflection by interviewing the lead investigators and making it clear to the participants that we were evaluating the podcast and not them. Nonetheless, we cannot rule out that our interests as developers of the intervention influenced the findings of this process evaluation.

Our use of the GRADE-CERQual approach to assess our confidence in findings from an individual study rather than findings from a systematic review was novel, but worked reasonably well. However, the fact that we applied the approach to our own data was a limitation. In future assessments, we recommend that external assessors are involved.

Conclusions

The findings of this process evaluation support the value of the human-centred design approach used to develop the podcast, which contributed to the initial effectiveness of the podcast. However, they do not help to explain the decrease in the effectiveness of the intervention after 1 year. Future research should explore factors that may lead to the decay in the effectiveness of similar interventions over time and strategies to improve retention.

Acknowledgments

We are grateful to all of the parents who participated in the trial and particularly those who participated in the design and user testing of the intervention, the interviews and focus group discussions. We are also grateful to the research assistants for their observations and participation. We would like to thank Linda Biesty, Patricia Healy and Vanesa Ringle for feedback on a draft of this report. We are grateful for support for this research from the Global Health and Vaccination Research (GLOBVAC) programme of the Research Council of Norway, and to the English National Institute for Health Research for supporting Iain Chalmers and the James Lind Initiative. This work was also partially supported by a Career Development Award from the DELTAS Africa Initiative grant # DEL-15-011 to THRiVE-2. The DELTAS Africa Initiative is an independent funding scheme of the African Academy of Sciences (AAS)’s Alliance for Accelerating Excellence in Science in Africa (AESA) and supported by the New Partnership for Africa’s Development Planning and Coordinating Agency (NEPAD Agency) with funding from the Wellcome Trust grant # 107742/Z/15/Z and the UK government. The views expressed in this publication are those of the author(s) and not necessarily those of AAS, NEPAD Agency, Wellcome Trust or the UK government. We are also grateful to Martin Mutyaba, Esther Nakyejwe, Margaret Nabatanzi, Hilda Mwebaza, Peter Lukwata, Rita Tukahirwa, David Simbwa, Adonia Lwanga, Enock Steven Ddamulira and Solomon Segawa for their help with data management; and all the research assistants who helped with data collection and entry. We would also like to thank the Informed Health Choices advisory groups for their support and advice in implementing this project.

References

Footnotes

  • Twitter @Dansemakula, @AllenNsangi

  • Contributors DS is the principal investigator. He drafted the protocol with input from all the other coauthors and was responsible for planning and data collection. DS, AN, AO, AAD, CG, SL, MK, MO, SR, AF and NS participated in the planning of the study. DS and AN collected the data and led the data analysis; CJR conducted the quantitative data analysis; AO, CG, SL, MK, MO, SR, AAD, AF and NS participated in the analyses, interpretation and organisation of findings. DS wrote the first draft of the manuscript. All of the authors reviewed and commented on earlier drafts and contributed to the final manuscript. All authors reviewed and approved the final version of the manuscript.

  • Funding This trial was funded by the Research Council of Norway, Project number 220603/H10.

  • Competing interests DS is a medical doctor, epidemiologist and health services researcher. AN is a social scientist. Because we both developed the intervention and interviewed participants about it, our involvement in both processes might have influenced how we asked questions or how we interpreted the responses. It is not known whether this occurred or what effect it might have had on the participants’ responses.

  • Patient and public involvement statement Included within the text of the manuscript

  • Patient consent for publication Consent for publication of findings in reports and/or presentations was sought as part of the informed consent process for study participation. Additional consent for publication of participant-identifiable material was not required.

  • Ethics approval Participants who were invited to participate in the process evaluation were informed of the purpose of their participation before written permission was obtained. Participants in the trial consented for both the initial assessment and the 1-year follow-up at the beginning of the study. Only consenting participants were included. The study was approved by Makerere University Institutional Review Board and the Uganda National Council of Science and Technology as part of the Supporting Informed Healthcare Choices in Low-income Countries Project (Grant no. ES498037).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request. All data relevant to the study are included in the article or uploaded as supplementary information.