Background to Clinical Error
The United States was the first country to investigate clinical error systematically. Brennan et al. reviewed 30,121 admissions to New York hospitals in 1984 and found 1,200 (4%) adverse events overall (Table 1.1). Of these adverse events, 58% were deemed preventable, 3% led to permanent disability and 14% led to death. Extrapolating these figures and others (including the Utah and Colorado figures) to the whole of the USA suggests that between 44,000 and 98,000 Americans may die in hospital each year as a result of clinical error, at an approximate cost of $17–29 billion (Kohn et al. 1999). This exceeds the annual number of deaths from breast cancer or severe trauma in the USA.
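To see how such extrapolations work in practice, the back-of-envelope sketch below multiplies an assumed national admission total by the study rates. The figure of roughly 33.6 million annual US admissions and the unrounded New York rates (3.7% adverse events, 13.6% fatal, 58% preventable) are assumptions quoted here for illustration rather than figures taken from this chapter's Table 1.1, so the output should be read only as an indication of how the 98,000 upper bound can be reproduced.

```python
# Back-of-envelope sketch of the extrapolation behind the 44,000-98,000 range.
# The admission total and unrounded rates are assumptions for illustration.

us_admissions = 33.6e6        # assumed annual US hospital admissions
adverse_event_rate = 0.037    # adverse events per admission (New York study, unrounded)
fatal_fraction = 0.136        # proportion of adverse events leading to death
preventable_fraction = 0.58   # proportion of adverse events deemed preventable

estimated_deaths = (us_admissions * adverse_event_rate
                    * fatal_fraction * preventable_fraction)
print(f"Estimated preventable deaths per year: {estimated_deaths:,.0f}")
# ~98,000, in line with the upper bound quoted above; applying the lower
# Utah and Colorado rates in the same way gives the 44,000 lower bound.
```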
In Australia, Wilson et al. (1995) reviewed 14,179 admissions to New South Wales hospitals and found an adverse event in 17% of admissions; 51% of these were preventable, 14% caused permanent disability and 5% resulted in death. When extrapolated to all Australian admissions, this equates to 18,000 deaths per year at an approximate cost of 4.7 billion Australian dollars.
In 2001 in the UK, Vincent et al. retrospectively reviewed 1,014 hospital records and showed that adverse events had occurred in 11% of patients, 48% of which resulted from preventable clinical error; 6% of adverse events produced permanent disability and 8% led to death. This was extrapolated to calculate the cost of the extra bed days alone, which was approximately £1 billion. More recently, Sari et al. retrospectively reviewed case notes and found that 9% of admissions involved at least one adverse event, of which 31% were preventable; 15% of these clinical errors produced disability lasting more than 6 months and 10% contributed to death. They also noted an increased mean length of stay of 8 days associated with error. The report showed that there had been little improvement between 2001 and 2007 (Sari et al. 2007).
From the 1980s onwards there was an increase in the literature on clinical error, and the National Health Service (NHS) began to highlight quality of care, developing systems to improve patient care and safety. McIntyre and Popper (1983) published ‘The critical attitude in medicine: the need for a new ethics’, encouraging doctors to look for errors and to learn from mistakes. Anaesthetics was the first specialty to examine error, notably in systems involving equipment, led by Cooper in 1984. This work developed the concept of looking at both psychological and environmental causes of error, an approach furthered during the 1990s in anaesthetics and obstetrics in the USA (Cooper 1994). Gaba and his colleagues advanced this idea, developing crisis resource management in anaesthesiology, which provided structure and advocated the use of simulation to practise untoward events. He wrote: ‘No industry in which human life depends on the skilled performance of responsible operators has waited for unequivocal proof of the benefits of simulation before embracing it’ (Gaba et al. 1994).
Leape took this a step further and broadened the concept to include all aspects of medical and nursing care. He discussed the ‘blame culture’ that commonly exists in health services and argued that this had to change. He further maintained that change would only occur if the medical fraternity accepted that psychology and human factors play a major role in clinical error (Leape 1994). To this end, the UK, USA and other developed countries have made concerns regarding patient safety public. To Err is Human was published by the Institute of Medicine in the USA (Kohn et al. 1999), and An Organisation with a Memory: Learning from Adverse Events in the NHS (Department of Health 2000) was published in the UK. Both documents encouraged learning from other high-risk industries, such as the aviation, nuclear and fossil fuel industries, which have well-developed systems for training and for learning from error within their environments.
The House of Commons Health Committee Report on Patient Safety in 2009 highlighted 850,000 adverse events reported in the NHS in England, including the deaths of 3,500 patients. Concomitant with the increasing statistical evidence of clinical error, there has been growing awareness that clinical error involves not only errors in knowledge or skill (i.e. technical error) but also non-technical error, otherwise known as human factors.
The ‘Patient Safety First’ initiative describes the process of developing a positive safety culture by providing an open and just environment in which staff are comfortable discussing safety issues and are treated fairly if they are involved in an incident. In these circumstances, the development of a reporting culture does not imply blame. The initiative argues that there needs to be a culture open to learning and information sharing, enabling everyone to learn from errors and prevent further episodes.
Human Factors
The Elaine Bromiley case was highlighted in a ‘Patient Safety First’ document (Carthey et al. 2009). Elaine Bromiley was a fit young woman who went for a routine ENT operation during which the anaesthetist encountered a ‘can’t intubate, can’t ventilate’ situation. This developed into a catalogue of errors: attempts to intubate her failed despite emergency equipment being brought in by the nurses and, tragically, because of this chain of human errors, she never regained consciousness. Her husband, an airline pilot, could not believe, during the investigation of his wife’s death, that medical staff had no training in human factors. He subsequently founded the Clinical Human Factors Group and was involved in writing the ‘Patient Safety First’ document. The Department of Health film made with the help of Elaine’s husband, Martin, is available at http://www.institute.nhs.uk/safer_care/general/human_factors.html. The film is a valuable and powerful introduction to the subject.
What Are Human Factors?
Human factors are systems, behaviours or actions that modify human performance. They can be attributed to the individual, a team of individuals or the way these individuals interact with the working environment. Human factors operate on two levels.
Level 1: How Humans Work in a Specified System or Environment (Includes Ergonomics)
Human factors and ergonomics is a discipline in its own right that looks at how people interact with systems. It combines the study of human capabilities with the design of systems to maximise safety and performance and to allow the two to work together in harmony. The specialist field first developed during the Second World War, alongside aviation medicine and psychology. Since then, technology has advanced significantly and so has the field of human factors and ergonomics. It is now multidisciplinary, involving psychologists, engineers, IT specialists and physiologists, and is essential for any industry that requires man and machine to work together. This subject is not explored in this book, but anyone interested can look at Karwowski (2006) for more information.
Level 2: Non-Technical Skills, Which Are Cognitive, Social and Personal
These specific aspects will be developed in the chapters of this book and include:
- cognition and error
- situation awareness
- leadership and teamwork
- personality and behaviour
- communication and assertiveness
- decision making
- effects on human behaviour: tiredness and fatigue
On an individual level, you will work on the level 2 aspects, which will enable you to understand how they interact with the level 1 features (specific systems and the environment). The two levels are closely related when it comes to error: an adverse event is often a catalogue of sequential errors that line up, with potentially disastrous consequences. This is captured in James Reason’s Swiss cheese model of accident causation (Reason 2000) (Figure 1.1).
Figure 1.1 The Swiss cheese model of accident causation (from Reason 2000).
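As a purely illustrative aside, the ‘lining up’ of errors in the Swiss cheese model can be expressed numerically: harm reaches the patient only when a hole in every defensive layer coincides. The layer names and failure probabilities in the sketch below are hypothetical and are not drawn from Reason’s work; the point is simply that each additional intact defence multiplies down the chance of all the holes aligning.

```python
# Minimal numerical sketch of the Swiss cheese model: an error causes harm
# only if it passes through a "hole" in every defensive layer.
# Layer names and failure probabilities are hypothetical, for illustration only.
from math import prod

layers = {
    "prescribing check":  0.10,  # probability this defence fails (hypothetical)
    "pharmacy review":    0.05,
    "nurse double-check": 0.08,
    "patient monitoring": 0.20,
}

# Assuming independent failures, harm requires every layer to fail at once.
p_harm = prod(layers.values())
print(f"Probability that all defences fail together: {p_harm:.5f}")  # 0.00008

# Removing any single layer markedly increases the risk, which is why adverse
# events are typically described as a chain of errors that line up.
for name in layers:
    p_without = prod(p for layer, p in layers.items() if layer != name)
    print(f"Without the {name}: {p_without:.4f}")
```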