Knowledge user survey and Delphi process to inform development of a new risk of bias tool to assess systematic reviews with network meta-analysis (RoB NMA tool) ================================================================================================================================================================ * Carole Lunny * Areti Angeliki Veroniki * Brian Hutton * Ian White * JPT Higgins * James M Wright * Ji Yoon Kim * Sai Surabi Thirugnanasampanthar * Shazia Siddiqui * Jennifer Watt * Lorenzo Moja * Nichole Taske * Robert C Lorenz * Savannah Gerrish * Sharon Straus * Virginia Minogue * Franklin Hu * Kevin Lin * Ayah Kapani * Samin Nagi * Lillian Chen * Mona Akbar-nejad * Andrea C Tricco ## Abstract **Background** Network meta-analysis (NMA) is increasingly used in guideline development and other aspects of evidence-based decision-making. We aimed to develop a risk of bias (RoB) tool to assess NMAs (RoB NMA tool). An international steering committee recommended that the RoB NMA tool be used in combination with the Risk of Bias in Systematic reviews (ROBIS) tool (because ROBIS was designed to assess bias only) or other similar quality appraisal tools (eg, A MeaSurement Tool to Assess systematic Reviews 2 [AMSTAR 2]) to assess the quality of systematic reviews. The RoB NMA tool will assess NMA biases and limitations regarding how the analysis was planned, data were analysed and results were presented, including the way in which the evidence was assembled and interpreted. **Objectives** To conduct (a) a Delphi process to determine expert opinion on each item’s inclusion and (b) a knowledge user survey to widen the tool’s impact. **Design** Cross-sectional survey and Delphi process. **Methods** Delphi panellists were asked to rate whether items should be included. All agreed-upon items (agreement defined as 70%) were included in a second round of the survey.
We surveyed knowledge users’ views and preferences about the importance, utility and willingness to use the RoB NMA tool to evaluate evidence in practice and in policymaking. We included 12 closed and 10 open-ended questions, and we followed a knowledge translation plan to disseminate the survey through social media and professional networks. **Results** 22 items were entered into a Delphi survey; 28 respondents completed round 1 and 22 completed round 2. Seven items did not reach consensus in round 2. A total of 298 knowledge users participated in the survey (14% response rate). 75% indicated that their organisation produced NMAs, and 78% showed high interest in the tool, especially if they had received adequate training (84%). Most knowledge users and Delphi panellists preferred a tool to assess *both* bias in individual NMA results *and* authors’ conclusions. Response bias in our sample is a major limitation, as knowledge users working in high-income countries were over-represented. One of the limitations of the Delphi process is that it depends on the purposive selection of experts and their availability, thus limiting the variability in perspectives and scientific disciplines. **Conclusions** This Delphi process and knowledge user survey inform the development of the RoB NMA tool. * evidence-based practice * health care quality, access, and evaluation * health services research * methods #### WHAT IS ALREADY KNOWN ON THIS TOPIC * The development of new tools to inform evidence-based medicine requires the feedback of knowledge users and experts. * The purpose of the knowledge user survey and Delphi process was to ask respondents about the structure of a proposed tool for assessing biases in a network meta-analysis (NMA), and about the concepts related to bias in NMAs to potentially include in the tool.
#### WHAT THIS STUDY ADDS * The majority of knowledge users and Delphi respondents preferred a tool to assess the bias in an individual NMA’s results and in authors’ conclusions. * Delphi respondents agreed to potentially include 15 out of 22 concepts about bias in NMAs. #### HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY * The Delphi process and knowledge user survey inform the development of a new risk of bias NMA tool. ## Introduction Guidance on how to systematically develop quality and bias assessment tools is well established1 2; the process is multistaged and includes the involvement of knowledge users and experts. Engaging knowledge users in tool development is a key factor associated with knowledge translation and the reduction of research waste.3–8 Specifically, the benefits include: greater public acceptance9; identifying and prioritising topics for research10; providing feedback on the tool’s usability10; wider dissemination, uptake and communication of findings10 and increased likelihood of impact.10 11 Identifying an external group of experts to obtain a multitude of perspectives will produce a more valid tool than a judgement given by an individual expert, or by a group of experts heavily involved in the development process. Engaging with knowledge users and experts during development ensures that new tools will be relevant and applicable. The risk of bias in network meta-analysis (RoB NMA) tool project aims to develop the first tool to assess risk of bias in a review with network meta-analyses (NMAs). We intended the RoB NMA tool to be used in combination with ROBIS12 (which we recommend as it was designed to assess biases specifically) or other similar tools (eg, AMSTAR 213) to assess the quality of systematic reviews. The RoB NMA tool will assess NMA biases and limitations regarding how the analysis was planned, data were analysed and results were presented, including the way in which the evidence was assembled and interpreted.
Our proposed RoB NMA tool has several uses. It can help knowledge users: (i) decide whether to believe the results from a single NMA and (ii) choose between NMAs based on their risk of bias. Development of the tool follows five stages. In the first stage, we conducted and published a methodological review to identify items related to bias in NMAs14; second, the steering group made conceptual decisions about the type of tool that will be developed, and refined the items from the methodological review into concepts; third, a knowledge user survey was developed to solicit feedback on the structure of the tool from potential users; and fourth, expert opinion was obtained through a Delphi survey to select and define the concepts. The final and future stage will involve compiling the items into a tool and conducting pilot testing to refine the items in the tool. In this paper, we report on the knowledge user survey and Delphi process. We define ‘knowledge user’ as an individual who is likely to be able to use research results to make informed decisions about health policies, programmes and/or practices.15 A knowledge user can be, but is not limited to, a practitioner, a policy maker, an educator, a decision maker, a healthcare administrator, a community leader or an individual in a health charity, patient group, private sector organisation or media outlet.15 Our definition of experts is based on an individual’s scientific/professional expertise, in our case in methods for NMAs, bias in systematic reviews with or without NMAs and risk of bias tool development. The purpose of the survey was to ask knowledge users about the structure of our proposed tool and about their potential use of the tool in evidence-informed practice, policymaking, guideline development or research. We also aimed to conduct a multiround Delphi process to solicit expert opinion about concepts to potentially include in the tool.
## Methods ### Management of the project At the start of our project to develop a risk of bias tool for NMAs, we first convened a steering group of nine experts in NMA, bias and tool development (online supplemental appendix A).12 The steering group is responsible for the management of the project and has executive power over all decisions related to the proposed tool, which is still under development. ### Supplementary data [bmjebm-2022-111944supp001.pdf] ### Protocol We uploaded our study protocol to the Open Science Framework at [https://osf.io/da4uy/](https://osf.io/da4uy/). The knowledge user survey complied with the Checklist for Reporting Results of Internet E-Surveys (online supplemental appendix B).16 Important definitions are found in box 1. Box 1 ### Important definitions #### Network meta-analysis (NMA) We adopted a broad definition of an NMA as a method that aims to, or intends to, synthesise simultaneously the evidence from multiple studies investigating more than two healthcare interventions of interest. We used the Cochrane Handbook definition of an NMA: ‘*Any set of studies that links three or more interventions via direct comparisons forms a network of interventions. In a network of interventions there can be multiple ways to make indirect comparisons between the interventions. These are comparisons that have not been made directly within studies, and they can be estimated using mathematical combinations of the direct intervention effect estimates available’.* 37 A network is composed of at least three nodes (interventions or comparators), and these are connected (graphically depicted as lines/edges) when at least one study compares the two underlying interventions—that is, the direct comparisons.
Reviews that intend to compare multiple treatments with an NMA but then find that the expectations or assumptions are violated (eg, underlying assumptions of the method are not met), and hence an NMA is not possible or optimal, are also considered in our definition. #### NMA risk of bias assessment A risk of bias assessment would evaluate limitations in the way in which the NMA analysis was planned, analysed and presented. If these methods are inappropriate, the validity of the findings can be compromised.38 Our tool aims to assess bias in the individual results of the NMA, in the authors’ conclusions, or in both. #### Bias in results of an NMA NMA of effect estimates from primary studies can result in overestimation or underestimation of the effects of specific intervention comparisons.39 40 For example, Chaimani *et al* conducted a network meta-epidemiological study and found that, in the majority of the 32 networks they analysed, small studies tended to exaggerate the true effect estimate of the intervention, possibly due to small-study effects and publication bias.41 Our tool will focus on the results of an NMA (eg, network characteristics (including geometry, effect modifiers)).42 This is the approach taken in tools such as the Cochrane Risk of Bias 2 tool for assessing risk of bias in randomised trials.43 #### Bias in the conclusions of an NMA Bias may be introduced when interpreting the NMA results to draw conclusions. Conclusions may include ‘spin’ (eg, biased misrepresentation of the evidence, perhaps to facilitate publication) or erroneous interpretation of the evidence.44 Ideally, potential biases identified in the results of the NMA should be addressed appropriately when drawing conclusions. Similarly, a well-conducted systematic review draws conclusions that are appropriate to the included evidence and can therefore be free of bias even when the primary studies included in the review have high risk of bias.
A cross-sectional survey design was used for the knowledge user survey using Qualtrics.17 18 Unique site visitors were identified via IP address, and personal information was collected on a voluntary basis from respondents (no incentives were offered). Knowledge users and experts were identified using a purposive sampling strategy. ## Knowledge user survey ### Design An English-language survey with 22 questions (12 closed and 10 open-ended) was developed by the investigative team (online supplemental appendix C). Five authors piloted the survey and modified it iteratively to improve content validity. Respondents were allowed to skip questions they did not wish to answer. The knowledge user survey ran from 28 June to 1 August 2021. There were two parts to the survey: (1) demographic information and information about whether the knowledge users’ organisation used or produced NMAs; (2) purpose of the RoB NMA tool, namely whether knowledge users preferred to assess the bias in the results, in the authors’ conclusions of an NMA, or in both. Further sections asked about interest and engagement in development, piloting, dissemination and training. ### Email list development We created an email list of journal editors publishing NMAs, using one bibliometric study of NMAs.19 From this study, we extracted the journal names and the names of NMA authors. We also developed a list of organisations and institutions producing NMAs (online supplemental appendix D). We also included in the email list respondents from a UBC Methods Speaker Series on evidence synthesis methods ([https://www.ti.ubc.ca/2022/01/01/methods-speaker-series-2022/](https://www.ti.ubc.ca/2022/01/01/methods-speaker-series-2022/)). ### Dissemination All potential survey respondents were sent an email describing the purpose of the study, requesting their participation and providing a link to the survey (online supplemental appendix E).
A knowledge translation plan was followed to disseminate and advertise the survey (online supplemental appendix F). We used Twitter cards (ie, advertisements with pictures) and targeted hashtags to increase awareness of the survey (see the Twitter Campaign in online supplemental appendix G). In addition, we advertised through the e-newsletters of Knowledge Translation Canada, SPOR Evidence Alliance and Therapeutics Initiative. ### Data analysis Questionnaires that were terminated early, where respondents did not go through all questionnaire pages, were included in analyses, but those that were entirely blank were excluded. We measured the time respondents took to fill in a questionnaire regardless of whether it was complete. Descriptive statistics (counts and frequencies) were calculated for each closed-response question, with denominators taken as the number who provided a response to the question. One researcher coded the open-ended questions independently by identifying themes. Respondents’ comments on questions 9 and 10 were merged as they were similar in nature. ## Delphi process ### Design Our published methodological review14 identified 22 items related to bias in NMAs. These items were reworded into concepts by the steering committee because past Delphi panels tended to focus on an item’s wording rather than its main idea. The concepts and questions about the structure of the tool (domains, signalling questions, rating scales) were entered into a Qualtrics survey platform. Delphi panellists were asked to rate concepts on a 5-point Likert scale of importance from 1 (not important—should be dropped as a concept to consider) to 5 (very important—must be included) or unable to score.20 21 If respondents did not provide a rating, the concept was recorded as missing.
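The rating-and-consensus rule used in the Delphi rounds can be expressed compactly. The sketch below is an illustrative reconstruction, not the authors' analysis code: the concept names and ratings are hypothetical, while the agreement rule (at least 70% of those who answered scoring 4 or 5, with missing and 'unable to score' responses excluded from the denominator) is the one reported in the study.

```python
def consensus(ratings, threshold=0.70):
    """Apply the study's consensus rule to one concept.

    ratings: importance scores 1-5, with None for missing or
    'unable to score' responses (excluded from the denominator).
    Returns (reached_consensus, proportion_agreeing), where
    agreement means a score of 4 or 5.
    """
    answered = [r for r in ratings if r is not None]
    if not answered:
        return False, 0.0
    agreement = sum(1 for r in answered if r >= 4) / len(answered)
    return agreement >= threshold, agreement

# Hypothetical round 1 ratings for two concepts
round1 = {
    "network geometry reported": [5, 4, 4, 5, 3, 4, None],
    "transitivity assessed": [4, 3, 2, 5, 3, None, 3],
}
for concept, ratings in round1.items():
    included, prop = consensus(ratings)
    print(f"{concept}: {prop:.0%} agreement -> "
          f"{'include' if included else 'no consensus'}")
```

Concepts meeting the threshold would be carried forward, while those below it would go to the steering committee for further consideration, mirroring the decision criteria in table 1.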
Respondents were asked to comment on whether they preferred to modify or reword the concepts.22 Free-text comment boxes allowed experts to provide additional comments. Non-responders or those failing to complete each round were sent three email reminders.21 Respondents completed two survey rounds; a high level of agreement was defined as at least 70% of respondents scoring 4 or above on the 5-point Likert scale21 23 (table 1). After round 1, we generated reports of group versus individual responses. Respondents were also provided with anonymised free-text comments from the last round.24 View this table: [Table 1](http://ebm.bmj.com/content/28/1/58/T1) Table 1 Decision criteria for inclusion, exclusion and further consideration of potential concepts* ### Data analysis We reported the number of respondents completing each round. An overall response rate was calculated, as well as summary statistics for each concept. The qualitative data from the free-text questions were analysed through thematic analysis by one author (CL) and read by one or more of the other coauthors. The steering committee used their executive power to decide whether concepts excluded by the Delphi panel should be retained or not. ## Results ### Knowledge user survey #### Recruitment results A total of 2821 emails were sent out to advertise the survey; 87 failed to reach the recipients due to incorrect addresses, resulting in 2734 emails that reached the intended individuals (online supplemental appendix H). Most respondents completed the survey through our Qualtrics email survey link (n=390; response rate 14%), and 27 completed it through an anonymous link distributed over social media and e-newsletters. After removing duplicates (identified by IP address, n=33) and blank responses (n=86), a total of 298 responses were included in the analysis. Of the 298 respondents, 252 (85%) answered all the survey questions and 46 (15%) completed half of the questions.
The mean time to complete the survey was 2.27 min (SD 1.33). #### Characteristics of respondents Of the 298 respondents, 136 (45.6%) self-identified as a systematic review expert, 122 (40.9%) as a guideline developer and 98 (32.9%) as a healthcare professional (table 2). Half of the respondents had primary affiliations at a university (50.0%). Most respondents resided in North America (40.6%) and/or Europe (33.9%) (table 2). Three-quarters (75.1%) of respondents indicated that their organisation produced systematic reviews with NMAs, but only 54.2% of knowledge users said they used an NMA in their work (table 2). View this table: [Table 2](http://ebm.bmj.com/content/28/1/58/T2) Table 2 Characteristics of knowledge user respondents and familiarity with NMAs #### Interest and type of tool preferred Most knowledge users (84%) reported they would use the RoB NMA tool if they received adequate training on how to use it (figure 1). When asked about their level of interest in our tool, 182/298 (61.1%) had high interest, 53/298 (17.8%) had low interest and only one person had no interest. Many respondents said they would use the RoB NMA tool’s bias assessment when conducting an overview of reviews, health technology assessment (HTA) or guideline, and to distinguish between NMAs at higher or lower risk of bias. ![Figure 1](http://ebm.bmj.com/https://ebm.bmj.com/content/ebmed/28/1/58/F1.medium.gif) [Figure 1](http://ebm.bmj.com/content/28/1/58/F1) Figure 1 (A) Use of a risk of bias (RoB) network meta-analysis (NMA) tool in knowledge users’ work and (B) interest in a RoB NMA tool. The left panel (A) depicts the proportion of responses to the question of whether knowledge users would use our proposed RoB NMA tool to assess the NMA analysis results, the authors’ conclusions or both results and conclusions. The right panel (B) shows the proportion of responses to the question about interest in a tool for appraising RoB in NMAs.
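The recruitment figures reported above are internally consistent; the short sketch below simply reproduces that arithmetic from the numbers given in the text. It is purely illustrative, not the authors' analysis code.

```python
# Figures taken from the recruitment results reported in the text
emails_sent, bounced = 2821, 87
reached = emails_sent - bounced            # emails that reached recipients
via_email_link = 390                       # Qualtrics email survey link
via_anonymous_link = 27                    # social media / e-newsletters
duplicates, blanks = 33, 86                # removed before analysis
analysed = via_email_link + via_anonymous_link - duplicates - blanks
response_rate = via_email_link / reached
print(reached, analysed, f"{response_rate:.0%}")  # 2734 298 14%
```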
When we asked knowledge users about the type of tool that might be useful to them or their organisation, half of the respondents (145/298) reported they preferred a tool to assess *both* the bias in individual NMA results and authors’ conclusions (figure 2). ![Figure 2](http://ebm.bmj.com/https://ebm.bmj.com/content/ebmed/28/1/58/F2.medium.gif) [Figure 2](http://ebm.bmj.com/content/28/1/58/F2) Figure 2 Flow chart of emails sent and responses. Open-ended questions are summarised in online supplemental appendix I tables 1–4. We also report in online supplemental appendix I table 4 respondents’ interest in dissemination and engagement activities. The majority of respondents (153/231; 66%) said they would want to read the final study reports, receive updates (147/231; 64%) and receive training in using the new tool (140/231; 61%). ### Delphi survey #### Recruitment results The steering committee invited 53 experts to participate in the Delphi surveys; 19 emails failed for various reasons, resulting in 37 emails that reached the intended individuals. Of these, 28 completed round 1 and 22 completed round 2 (flow chart in online supplemental appendix J). The response rate of panellists participating in our study was 28/37 (75.7%) in round 1 and 22/28 (78.6%) in round 2. #### Characteristics of round 1 respondents Of the 28 round 1 respondents, 15 (53.6%) self-identified as statisticians, 10 (35.7%) as academics and 4 (14.3%) as systematic review specialists or scientists, epidemiologists or graduate students/postdoctoral researchers (table 3). More than half of the respondents had a primary affiliation at a university (68%). Most respondents resided in Europe (53.6%) or North America (39.2%) (table 3). Most (96.4%) respondents indicated that their organisation produced systematic reviews with NMAs.
View this table: [Table 3](http://ebm.bmj.com/content/28/1/58/T3) Table 3 Characteristics of Delphi round 1 respondents and expertise in NMAs #### Rating of concepts Of the 22 concepts, 7 did not reach consensus in round 2 (indicated in red in table 4). Table 4 lists all concepts in the left column that respondents rated from strongly disagree to strongly agree. The second column indicates whether the concept was included based on 70% agreement (agree and strongly agree combined). The next columns indicate the number of responses over the denominator (number of people who answered) for each rating, percentage responses and the group median. The list of concepts in table 4 is not intended to be used to assess biases in NMAs, but to inform the development of items to be included in our tool. View this table: [Table 4](http://ebm.bmj.com/content/28/1/58/T4) Table 4 Rating of concepts by Delphi respondents in rounds 1 and 2 #### Structure of the RoB NMA tool When asked about the structure of the RoB NMA tool, the majority of respondents agreed that a domain-based structure (25/28; 89.3%) with signalling questions (20/28; 71.4%) was preferred. The domain-based structure would be similar to that used in the Cochrane Risk of Bias tool and the ROBIS tool. Signalling questions flag aspects of study design related to the potential for bias and aim to help reviewers judge risk of bias. Respondents also agreed (19/28; 67.9%) that the steering committee should provide guidance on how to produce a risk of bias assessment for NMAs, outcomes within a network or authors’ conclusions of NMAs. When asked about their preference for a tool to assess the risk of bias in NMA results and/or the authors’ conclusions, the majority of respondents (15/28; 53.6%) preferred a tool to assess bias in both results and conclusions, one-third (10/28; 35.7%) preferred a tool to assess bias in the results only and a minority (3/28; 10.7%) preferred to assess only the NMA authors’ conclusions.
## Discussion A majority of knowledge users responded that they had high interest in the RoB NMA tool if they received adequate training on how to use it, and said they would use the tool to distinguish between NMAs at higher or lower risk of bias and to assess an NMA in an overview of reviews, HTA or guideline. Delphi respondents articulated a clear preference for a tool that is domain-based with signalling questions, which would be used to assess biases in the results and the authors’ conclusions. Seven out of 22 concepts did not reach consensus by the Delphi group, and these concepts and accompanying comments will be reviewed and considered by the steering committee for eligibility in the tool. The tool is still under development, and the list of concepts is not intended to be used to assess biases in NMAs. Respondents also indicated the need for guidance on how to use the tool to assess biases in the NMA. These results highlight the necessity for clear, easy-to-understand elaboration and explanation materials, plus training, and perhaps the development of more structured guidelines for reaching domain-based risk of bias judgements (eg, algorithms).25 Many knowledge users erroneously thought the RoB NMA tool’s final assessment would be used in an evaluation of the certainty of the evidence (eg, CINeMA (Confidence in Network Meta-Analysis)26 or Grading of Recommendations, Assessment, Development and Evaluations27), even though we clearly stated that our proposed tool is intended for the assessment of the potential biases in an NMA. Only the quantitative results of an NMA (ie, the analysis) are used in a certainty of the evidence evaluation. We aimed to engage knowledge users and NMA experts early in the tool development for multiple reasons.
Engagement would ensure that our tool will be relevant, usable and accepted.9 The Delphi expert responses provided us with feedback on which concepts would be most relevant, and the knowledge user responses emphasised the desire for future training10 and the desire to help us with dissemination and communication of findings.10 11 ### Implications of this study Knowledge user5 7 28–31 and Delphi12 32–35 surveys have been successfully used to inform the development of other types of tools, systematic reviews and guidelines. Online surveys have been used to evaluate the reliability and face validity of tools, and the use of the Cochrane risk of bias tool in practice.25 However, we are not aware of similar published surveys conducted prior to the development of a risk of bias or quality appraisal tool, targeted specifically at knowledge users. ### Strengths and limitations A strength of our research is that we developed, and followed, a protocol ([https://osf.io/da4uy/](https://osf.io/da4uy/)). Engaging knowledge users and NMA experts early in the tool development helped ensure that our tool will be relevant, usable and accepted,9 and their responses provided us with feedback on which concepts are most relevant, training needs10 and future dissemination of findings. We combined newsletters, email distribution lists and social media to reach a wide range of knowledge users from across the globe. We attempted to maximise the response rate by sending email reminders and repeating messages through social media. Response bias in our sample is a major limitation, as knowledge users working in high-income countries were over-represented, and respondents (ie, systematic reviewers (45.6%) or academics (40.9%)) may have been more likely to respond to a survey about a new tool to assess bias in NMAs.
A limitation is that we did not ask knowledge users to define what their role was and whether they considered themselves: (i) decision makers; (ii) purchasers of services/pharma products; (iii) professional service providers; (iv) evidence generators or (v) advocates of health promotion. Another limitation is that our targeted emails and social media advertisements may have missed important knowledge users that use NMAs (eg, members of the Canadian Institutes of Health Research, Drug Safety and Effectiveness Network, Methods and Applications Group for Indirect Comparisons Group). A strength of the Delphi process was that, by conducting the surveys online, experts from around the world were able to participate. The response rate of Delphi panellists participating in our study was high in both rounds. All comments were thoroughly read by one of the authors (CL), and a subset was read by one or more of the other coauthors. In addition to a full feedback report, a summary of the comments in round 1 was given to panellists in the next round. Delphi processes have many limitations, one of which is that they depend heavily on the purposive selection of ‘experts’ and their availability, raising the question of whether all relevant perspectives and scientific disciplines have been taken into consideration. Our Delphi panel was small (in the low double digits), which risks over-representing particular thought collectives and may limit reliability.36 ### Future research The results of the survey will inform a new tool to assess biases in NMAs. Our tool is not targeted at authors of NMAs, as it does not outline methods that should be followed to conduct an NMA. It is targeted at knowledge users, such as healthcare providers, policymakers and physiotherapists, who want to determine whether the results of an NMA can be trusted to be at low risk of bias.
The steering committee will use the results of the knowledge user survey to determine preferences around the tool’s structure, and the results of the Delphi process to choose and refine the concepts in the tool. Concepts will be reworded into signalling questions and categorised into domains. The new tool will then be pilot tested with different knowledge user groups: patients, healthcare providers and researchers. Further research will involve reliability and validity testing. ## Conclusions The surveys provided feedback from knowledge users and experts on their preferences for the structure, focus and concepts included in a proposed RoB NMA tool, which is under development. Both knowledge users and Delphi panellists preferred a tool to assess both the bias in individual NMA results and authors’ conclusions. The majority of knowledge users had high interest in the tool and reported they would use it if they received adequate training. ## Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information. The datasets used and/or analysed during the current study are freely available at [https://osf.io/da4uy/](https://osf.io/da4uy/). ## Ethics statements ### Patient consent for publication Not applicable. ### Ethics approval Approval from the University of British Columbia Ethics Board was obtained for both the survey and the Delphi process, and consent was implied when respondents completed the online survey. ## Acknowledgments We acknowledge the contribution of our collaborators: Sofia Dias, Penny Whiting, Adrienne Stevens, Bob Nakagawa, Gayatri Jayaraman, Georgia Salanti, Karla Soares-Weiser, Larry Mróz, Lucy Turner, Nicole Mittmann, Nicolette McGuire and Matthew Tunis.
We also acknowledge our Delphi panellists: Simon Turner, Dimitris Mavridis, Virginia Chiocchia, Ian Shrier, Adriani Nikolakopoulou, Dan Jackson, Richard Riley, Becky Turner, Bruno Roza da Costa, Petros Pechlivanoglou, Steve Kanters, Ferran Catala-Lopez, Tianjing Li, Matt Page, Chris Cameron, Anna Chaimani, Kerry Dwan, Audrey Beliveau, Kristian Thorlund, Jonathan Sterne, Cinzia del Giovane, Guido Schwarzer, Joanne McKenzie, Lehana Thabane, Theodoros Papakonstantinou, Isabelle Bourtron, Sharon Strauss and Jenn Watt. ## Footnotes * Twitter @carole_lunny, @JennAnnWatt, @vminogue2 * Contributors CL conceived of the study; ACT, AAV, BH, CL, IW, JPTH, and JMW contributed to the design of the study; CL drafted the survey; CL, LC and SST inputted the questions into Qualtrics; CL, SSi and SST analysed the data; CL wrote the draft manuscript; ACT, AAV, BH, CL, IW, JPTH, and JMW revised the manuscript; all authors edited the manuscript; and all authors read and approved the final manuscript. ACT is the guarantor. * Funding Funding was received from a 2020 CIHR Project Grant (2021-2024; ID 451190). ACT currently holds a Tier 2 Canada Research Chair in Knowledge Synthesis. IW was supported by the Medical Research Council (programme MC_UU_00004/06). BH has previously received honoraria from Eversana for the provision of methodological advice related to the conduct of systematic reviews and meta-analysis. JPTH is a National Institute for Health Research (NIHR) Senior Investigator (NF-SI-0617-10145), is supported by NIHR Bristol Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol, is supported by the NIHR Applied Research Collaboration West (ARC West) at University Hospitals Bristol and Weston NHS Foundation Trust, and is a member of the MRC Integrative Epidemiology Unit at the University of Bristol. 
* Competing interests AAV was an Associate Editor for the journal but was not involved with the decision or peer-review process.
* Patient and public involvement Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.
* Provenance and peer review Not commissioned; externally peer reviewed.
* Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any errors and/or omissions arising from translation and adaptation or otherwise.

[http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/) This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: [http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/).