Responses

Original research
Retrospective comparison of approaches to evaluating inter-observer variability in CT tumour measurements in an academic health centre

Other responses

  • Published on:
    Approaches for assessing agreement in continuous measurements in a multi-observer setup, comment on “Retrospective comparison of approaches to evaluating inter-observer variability in CT tumour measurements in an academic health centre”
    • Jens Borgbjerg, Consultant Radiologist, MD, Akershus University Hospital, Norway
    • Other Contributors:
      • Martin Bøgsted, Professor, Senior Biostatistician, PhD
      • Heidi Søgaard Christensen, Post Doc, PhD

    We read with interest the article by Woo and colleagues, evaluating the sensitivity of statistical methods for detecting different levels of interobserver variability in CT measurements of cancer lesions [1]. It is increasingly recognized that evaluating the efficacy of medical imaging requires multi-observer studies in which proper statistical analysis is a critical component [2]. Thus, the study by Woo et al. is a welcome addition to the literature, and the authors are to be commended for providing open access to their data.
    The authors supplemented an observed dataset of diameters of 10 CT lesions measured by 13 observers with two additional generated datasets of increased and decreased measurement variability, respectively.
    These three datasets were used to compare three statistical approaches: 1) the intraclass correlation coefficient (ICC); 2) what the authors refer to as an outlier count (score) from standard Bland-Altman plotting with limits of agreement; and 3) an outlier score from Bland-Altman plotting with fixed limits of agreement.
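    For readers wishing to try the first two approaches, a minimal sketch is given below. The data are simulated stand-ins for the study's design (10 lesions, 13 observers); the true diameters and the 2 mm measurement error are our assumptions, not the authors' published values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical diameters (mm): 10 lesions measured by 13 observers.
    n_lesions, n_obs = 10, 13
    true_diam = rng.uniform(10, 60, size=n_lesions)
    meas = true_diam[:, None] + rng.normal(0, 2.0, size=(n_lesions, n_obs))

    # Two-way random-effects ICC for absolute agreement, ICC(2,1),
    # from the classic ANOVA mean squares (Shrout & Fleiss).
    grand = meas.mean()
    row_means = meas.mean(axis=1)   # per-lesion means
    col_means = meas.mean(axis=0)   # per-observer means
    MSR = n_obs * np.sum((row_means - grand) ** 2) / (n_lesions - 1)
    MSC = n_lesions * np.sum((col_means - grand) ** 2) / (n_obs - 1)
    SSE = np.sum((meas - row_means[:, None] - col_means[None, :] + grand) ** 2)
    MSE = SSE / ((n_lesions - 1) * (n_obs - 1))
    icc = (MSR - MSE) / (MSR + (n_obs - 1) * MSE + n_obs * (MSC - MSE) / n_lesions)

    # Pairwise Bland-Altman 95% limits of agreement, observer 1 vs observer 2.
    diff = meas[:, 0] - meas[:, 1]
    loa = (diff.mean() - 1.96 * diff.std(ddof=1),
           diff.mean() + 1.96 * diff.std(ddof=1))

    print(f"ICC(2,1) = {icc:.3f}")
    print(f"Bland-Altman 95% LoA (obs 1 vs 2): {loa[0]:.1f} to {loa[1]:.1f} mm")
    ```

    Note that Bland-Altman limits are inherently pairwise, which is part of why extending them to 13 observers requires the outlier-scoring devices the authors describe.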

    We have a few comments.

    We ardently agree with the authors that although the widely used ICC accommodates a multi-observer setup, it is not an ideal method for evaluating interobserver variability; the ICC reveals little about the degree of discrepancy, nor does it supply information for investigating whether the variability may change with the magnitude of the measurements (e.g. to reveal that the diameter...
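    The point that the ICC reveals little about the absolute degree of discrepancy can be illustrated with a small simulation (hypothetical data, not from the study): the same 2 mm measurement error yields very different ICCs depending only on how widely the true lesion sizes are spread.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def icc_two_obs(x, y):
        """ICC(1) for two observers via one-way ANOVA mean squares."""
        data = np.column_stack([x, y])
        n, k = data.shape
        grand = data.mean()
        row_means = data.mean(axis=1)
        MSB = k * np.sum((row_means - grand) ** 2) / (n - 1)
        MSW = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
        return (MSB - MSW) / (MSB + (k - 1) * MSW)

    def err(n):
        # Identical 2 mm measurement error in both scenarios.
        return rng.normal(0, 2.0, n)

    narrow = rng.uniform(20, 25, 200)   # homogeneous lesion sizes
    wide = rng.uniform(5, 60, 200)      # heterogeneous lesion sizes

    icc_narrow = icc_two_obs(narrow + err(200), narrow + err(200))
    icc_wide = icc_two_obs(wide + err(200), wide + err(200))

    print(f"ICC, narrow case: {icc_narrow:.2f}")  # modest agreement reported
    print(f"ICC, wide case:   {icc_wide:.2f}")    # near-perfect agreement reported
    ```

    Both scenarios have exactly the same measurement error, yet the ICC differs greatly, because it scales disagreement against between-subject variance rather than reporting it in millimetres.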

    Conflict of Interest:
    None declared.