Validation of the Fresno test of competence in evidence based medicine
BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7384.319 (Published 08 February 2003) Cite this as: BMJ 2003;326:319
All rapid responses
Fresno's test[1] may well be objective and standardised, but does it measure the EBM skills medical educators should be teaching? I work as both a General Practitioner (GP) and a Public Health Consultant, but don't often get asked either/or questions of the type posed here.
As a GP, I was concerned that John was only presenting with enuresis at age 11. I wanted to involve the team (what did the health visitor or social services know about the family?), and could think of more options than drugs or alarms. Since John's inconvenience and embarrassment were the real presenting problem, I would want to involve him in the planning process. Addressing these issues requires effective gathering and assessment of evidence, but in quite different ways from those measured by Fresno.
In my Public Health role, I have recently led seminars developing
transparent and rigorous "rules" about how evidence could be used to
support local funding decisions. Primary Care Trust staff discussed
traditional hierarchies of evidence, but recognised that there were not many RCTs to support or disprove large amounts of NHS provision, let alone the (often social) interventions that tackle major health threats like inequalities and child poverty. I'm not sure how we'd score on Fresno, as we agreed that we would (in this order):
* Follow NICE, Cochrane, etc recommendations, if they existed;
* Look closely at local precedents;
* Look hard for evidence of harm (to strongly recommend against funding) or no harm (if there was some reason to consider funding an unproven intervention);
* Tailor our approach to evidence, depending on the type of intervention, whilst still considering design and quality of studies as well as triangulation between different sources;
* Always consider exceptional individual circumstances;
* Think explicitly about the values underpinning our decisions;
* Review the process, and the decisions.
Finally, and maybe most importantly, Fresno doesn't even start on my real learning problems with EBM: getting the time to do it, and knowing when it's "good enough". I had an immediate answer in my head to Lydia's contraceptive problem: look at the clear WHO medical eligibility criteria for contraceptives (on the web). I feel confident that this gives the "best available" answer for the patient and clinician to consider. It also frees me up to go on to another question. But is that good enough?
1. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of competence in evidence based medicine. BMJ 2003;326:319-21.
Competing interests: None declared
The Fresno test
The Fresno test of competence in evidence based medicine appears to be an impressive development.(1) Ramos et al. assert that it has content validity, high inter-rater and internal reliability, good discriminatory ability, and construct validity, and that it avoids floor and ceiling effects. We believe that the authors should be more circumspect in their claims.
Supporting data from only two raters were reported, and those raters were intimately involved in the development of the test. We cannot assume that other raters will achieve comparable levels of inter-rater reliability, internal reliability, discriminatory ability, or absence of floor and ceiling effects, or acquire the ability to distinguish experts from novices. Levels of inter-rater reliability, internal consistency, and discrimination are intimately dependent on the population that has taken the test and are all likely to be lower with a more homogeneous group of evidence based medicine learners. Furthermore, ‘the test presently has only one set of clinical vignettes and one set of numeric examples’.(1) It cannot be assumed that any of these key attributes will be maintained in subsequent versions of the test, which will need to be developed since the original tested version and its scoring rubrics have been published.(2) Finally, the authors state that the ‘best use of the Fresno test is to measure change in knowledge after instruction’, yet do not present any sensitivity data to support this statement.
In short, Ramos et al. cannot yet reassure educators and learners that the Fresno test is indeed a ‘simple, reliable and valid tool for assessing knowledge and skill in … evidence-based medicine’.(1)
References
1. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of
competence in evidence based medicine. BMJ 2003;326:319-21.
2. Ramos KD, Schafer S, Tracz SM. Test and validation process details. http://bmj.com/cgi/content/full/326/7384/319/DC1 (accessed 14 Feb 2003).
Competing interests: None declared