
Feature: BMJ Interview

“We know where to probe,” says Mike Richards, the new chief inspector of hospitals

BMJ 2013; 347 doi: https://doi.org/10.1136/bmj.f5557 (Published 18 September 2013) Cite this as: BMJ 2013;347:f5557
Nigel Hawkes, freelance journalist
nigel.hawkes1@btinternet.com

Mike Richards, England’s first chief inspector of hospitals, tells Nigel Hawkes why the new inspection regime introduced after the Mid-Staffordshire scandal is bigger and better

Like Windows software, healthcare inspection in England is a never-ending iteration of the same basic idea. Just as Windows 7 replaced the unlamented Windows Vista, so the new-look Care Quality Commission replaced its earlier (and equally unlamented) version, which itself replaced the Healthcare Commission, which had changed its name from the Commission for Healthcare Audit and Inspection, which replaced the Commission for Healthcare Improvement. All this in little more than a dozen years: so much energy, so much reinvention, so many unrealised expectations.

Cynics might wonder whether inspection works, or at least why its practitioners are so regularly kicked out in favour of others who seem to do no better. But Mike Richards, the first chief inspector of hospitals, is having none of it. “It can work, and it will work,” he insisted in an interview at the headquarters of the Care Quality Commission in London. “We have the process to make it work, on which we’ll build, and we have the people too.” He also believes, though some have doubted it, that he has all the financial resources he will need.

By December 2015, when he is committed to having completed inspections of all acute and mental health trusts in England, he promises a comprehensive view of what care has been like for patients in every hospital, plus volumes of data to back it up. “My task is to give a robust but fair assessment of hospitals on behalf of patients and the public,” he says. “The question they want answered is, ‘If my mother or my wife or my child had to be admitted to a particular hospital, would care there be good?’”

Return to ratings

The new approach to inspection has many of the same features as that of the Commission for Healthcare Improvement (CHI), which operated between 2001 and 2004. Richards acknowledges the similarities but says there are differences too. CHI used star ratings—three, two, one, or zero in descending order. The new ratings will also come in four flavours: outstanding, good, requires improvement, and inadequate. So far, so familiar. But star ratings fell out of fashion because they were felt to be too crude, incapable of capturing variation within a trust. The new system aims to get round that by giving a separate rating to each of five domains of a hospital’s performance—whether it is safe, effective, caring, responsive, and well led—and to each core service it provides: emergency departments, maternity, paediatrics, acute medical and surgical pathways, critical care, end of life care, and outpatients.

The decision to go back to ratings was taken by the Care Quality Commission before Richards was appointed, under pressure from health secretary Jeremy Hunt. The post of chief inspector of hospitals was a recommendation of the Francis report into the Mid Staffordshire care scandal, which the government accepted after the commission had decided to implement ratings.

What makes the new inspection regime different, Richards argues, is the process it will use. This was developed by Bruce Keogh, the NHS medical director, to review 14 English trusts identified as in need of inspection because of raised mortality rates.1 The first phase is data gathering. “There is now a considerable body of national data on hospital performance,” Richards says. “It comes from the trusts’ own data, from clinical commissioning groups, from the royal colleges, and the General Medical Council. We’ll go out to all these organisations, and we’ll go on inspections with a really significant amount of data. We’ll know where to probe.”

Ian Kennedy, when chair of the Healthcare Commission, made similar claims, but Richards says there are now “far more data” than Kennedy ever had. He cites as an example the junior doctors’ survey conducted by the GMC, which gets a 98% response rate for the simple reason that doctors cannot proceed to the next stage of training until they complete it. In 2012, for the first time, this survey included questions about patient safety, with one in 20 doctors saying that they had concerns about it in the hospital where they worked. “If we see that from the junior doctors’ survey and the patient survey there’s a problem with caring at a particular hospital, we’ll focus on that,” Richards says.

How will it work?

A document published by the Care Quality Commission lists 118 indicators it has identified for monitoring quality of care.2 They include avoidable hospital acquired infections; under-reporting of death and serious harm notifications; never events; deaths in low risk procedures; mortality alerts and outliers for a number of conditions; hospital standardised mortality ratios and summary hospital level mortality indicators; waiting times; cancelled operations; delays in discharges; staff and inpatient surveys; staff sickness rates; bed occupancy rates; and complaints made by patients and the public. These come from many sources, including Hospital Episode Statistics, the Health Protection Agency, the GMC, the National Reporting and Learning System, and the health analysis company Dr Foster.

For quantitative data, indicators that fall between 1.6 and 2 standard deviations above the mean are identified as a “risk,” and those more than 2 standard deviations above as an “elevated risk.” For qualitative data such as complaints, comments from the public, or the output of inspections, indicators for which the number of events is within two standard deviations of the mean are identified as “expected,” while those more than two standard deviations above it are identified as a “risk.” A risk score is calculated for each trust by adding together the number of risk and elevated risk indicators, with elevated risk indicators scoring double.
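To make the arithmetic concrete, here is a minimal sketch of that banding and scoring scheme in Python. The thresholds come from the commission’s document as described above; the function names, data layout, and example figures are illustrative assumptions, not the commission’s actual implementation.

```python
# Illustrative sketch (assumed names and data layout, not CQC code):
# band each indicator by how many standard deviations it sits above
# the mean, then total the flags with elevated risks counting double.

def classify_indicator(value, mean, sd):
    """Banding rule described in the text: 1.6-2 standard deviations
    above the mean is a "risk"; more than 2 is an "elevated risk"."""
    z = (value - mean) / sd
    if z > 2.0:
        return "elevated risk"
    if z >= 1.6:
        return "risk"
    return "expected"

def trust_risk_score(bands):
    """Sum the flagged indicators: "risk" scores 1, "elevated risk" 2."""
    scores = {"risk": 1, "elevated risk": 2}
    return sum(scores.get(band, 0) for band in bands)

# Example: two risk indicators and one elevated risk give a score of 4.
bands = [classify_indicator(v, m, s) for v, m, s in
         [(118, 100, 10), (97, 100, 10), (125, 100, 10), (117, 100, 10)]]
print(trust_risk_score(bands))  # -> 4
```

A total of this kind is what places a trust in the high or low risk bands used to choose the first wave of inspections, described below.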

Richards believes that this mass of data will provide his inspection teams with key lines of inquiry (known familiarly as KLOE, pronounced Chloe) to inform their work. He plans big inspection teams of 20-25 people, including doctors, nurses, managers, patients, and carers as well as commission inspectors. “Big teams will mean we’re only on site for a relatively short time,” he says. “Doctors and nurses will come from other trusts, and that will be possible because they’ll only be away from their normal work for three or four days at a time. We’ll need a large pool of people to draw from, and it’s vital for hospitals to release staff to do it.”

Most inspections will be planned, which means hospitals will be forewarned. This is necessary, Richards says, in order to include listening events, where members of the public are invited to offer opinions and relate their experience of the hospital, and to arrange focus groups with, say, junior doctors or nurses. He hopes that these will provide clues the inspection team can follow up. “Junior doctors might say that they have worried about the care of deteriorating patients or that they can’t get in touch with consultants at weekends,” he says. “Then we’ll pursue that. We’ll come back at the weekend and check. Each bit of the visit will dictate the next bit.”

For the inspection itself the team will break up into smaller groups of around five to visit different departments, before reconvening to share information. Richards does not believe that hospitals forewarned of the arrival of the inspection team will be able to cover up their defects. “I don’t think they can,” he says. “We may choose to go back for an unannounced visit to the same wards to check on that—it very much depends on what we find.” Such surprise inspections would not involve the whole team.

The final stage is the publication of a report, together with all the data gathered. It sounds like an awful lot of work? “That’s entirely true,” he says. “There are 160 acute trusts, so to inspect them all by December 2015 means we’ll have to do 80 a year, or roughly speaking two a week allowing for holidays. It’s pretty intensive.” He has previously said that he will need “a small army” of healthcare professionals for the teams. Has it proved difficult to recruit them? “Not at all, but it’s early days. In six to 12 months’ time the test will come. I hope the BMJ will encourage doctors both senior and junior to volunteer. For those people, it will be a very good learning experience from which they’ll gain a lot, and so will the hospitals they work for.”

To bring in the new system, Richards has been given £25m (€30m; $39m) in addition to what the commission was already spending on inspections. Is it enough? “I’m confident it is. We’ll have the money to do what we’ve planned to do.”

The NHS management blogger Roy Lilley has argued that inspecting hospitals is like driving a car by looking in the rear-view mirror. It either finds nothing wrong, in which case the money spent on the inspection has been wasted, or it discovers poor care too late to do anything about it.

Richards disagrees. “Our reports will be current, what’s happening on the wards today. We may also get an impression of the direction of travel. I’ve spoken to consultants who have said ‘three years ago I was on the point of leaving, but not now. It’s not right, but the direction of travel is right.’ There’s also some evidence that staff surveys may actually be predictors of what happens to patients much later—if so, it’s an early warning score.” He does not accept that inspection reports will be outdated before they are published and promises to re-inspect hospitals early if they have done poorly.

The whole process begins on 16 September with an inspection at Croydon University Hospital, one of six identified with high risk ratings. The others in this category in the first wave of the programme are Barking, Havering and Redbridge; Barts Health Trust; South London Healthcare; Royal Bournemouth and Christchurch; and Nottingham University Hospitals (whose chief executive, ironically, is Peter Homa, a former chief executive of CHI). Wave one also includes six hospitals with the lowest risk rating and six with a range of risk points falling in between. “By December 2015 we’ll have done every hospital,” Richards says. “Then we’ll start again.”

Career highlights

  • A cancer specialist, Richards was a fellow at Bart’s before becoming a consultant at Guy’s and then clinical director of Cancer Services at Guy’s and St Thomas’ (1991-99)

  • In 1999 he was appointed “cancer czar”—national cancer director at the Department of Health

  • He led the NHS Cancer Plan, which introduced cancer networks and set targets for speed of referral and access to treatments. The plan included peer review visits by expert teams to assess standards of care, a model now borrowed for the new CQC inspection process

  • In 2012 he set up a review of breast cancer screening that concluded that for each breast cancer death prevented, three women will be overdiagnosed and treated unnecessarily

  • Before joining CQC, he was director at NHS England responsible for reducing premature mortality across all conditions


Footnotes

  • Competing interests: I have read and understood the BMJ Group policy on declaration of interests and have no relevant interests to declare.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References