Friday, April 12, 2013

Why do we resist new thinking about safety and systems?

Something I have been thinking about for a while is the way that we look at safety and systems - the unstated assumptions and core beliefs. The paradigm and the related shared ideas about safety are little different now to what they were 20 or 30 years ago. New thinking struggles to take root. We continue to explain adverse events in complex systems as 'human error'. We continue to blame people for making 'errors', even when the person is balancing conflicting goals under production pressure (such as this case). We continue to try to understand safety by studying very small numbers of adverse events (tokens of unsafety), without trying to understand how we manage to succeed under varying conditions (safety). It is a bit like trying to understand happiness by focusing only on rare episodes of misery. It doesn't really make sense. We are left with what Erik Hollnagel calls 'Safety I' thinking as well as what Sidney Dekker calls 'old view' thinking. The paradigm has a firm hold on our mindsets - our self-reinforcing beliefs. The paradigm is our mindset.

Why are we so resistant to change? Over at Safety Differently, Sidney Dekker recently posted a blog called 'Can safety renew itself?', which resonates with my recent thinking. Dekker asks "Is the safety profession uniquely incapable of renewing itself?" He makes the case that the safety profession is inherently conservative and risk averse. But these qualities stifle innovation, which naturally requires questioning the assumptions that underlie our practices, and taking risks.

It is something that I can't help but notice. When it comes to safety and systems, we seem much more comfortable dismissing new thinking than even challenging old thinking. Our skepticism is reserved for the new, while the old is accepted as 'time-served'. So we are left with old ideas and old models, in a state of safety stagnancy. Our most widely accepted models of accident causation are still simple linear models. Non-linear safety models are dismissed as 'too complicated' or 'unproven'. There is not the same determination to question whether the linear cause-effect relationships we assume really exist in complex systems. The 'new view', which sees human error as a symptom, not a cause, is often dismissed as just making excuses. But we are less willing to think about whether human error is a viable 'cause'. We are even less willing to question whether 'human error' is even a useful concept in a complex, underspecified system where people have to make constant adjustments and trade-offs - and failures are emergent. The concept of 'performance variability' is seen by some as wishy-washy. But the good outcomes that arise from it are not considered further. Proposals to reconsider attempts to quantify human reliability in complex systems are dismissed. But there is not the same urge to critique the realism of the source data and the sense behind the formulae that underlie 'current' (i.e. 1980s) human reliability assessment (HRA) techniques. There are plenty of reviews of HRA, but they rarely seem to question the basic assumptions of the approach or its techniques.

Why is this? Dekker draws a parallel with the argument of Enlightenment thinker Immanuel Kant, regarding self-incurred tutelage as a mental self-defence against new thinking.
"Tutelage is the incapacity to use our own understanding without the guidance of someone else or some institution... Tutelage means relinquishing your own brainpower and conform so as to keep the peace, keep a job. But you also help keep bad ideas in place, keep dying strategies alive." 
Little Johnny came to regret asking awkward questions about Heinrich's pyramid.
The reason, according to Kant, is not a lack of intellect, but rather a lack of determination or courage.  This rings true. But I think we need to be a bit more specific. Why do we resist new thinking in safety? A few things spring to mind. They fall under the categories of personal barriers and system barriers.

Personal Barriers

I have almost certainly fallen prey to nearly all of these at some point, and so I speak from experience as well as observation. If new thinking strikes any of these nerves, I try to listen - hard as it may be.
  1. LACK OF KNOWLEDGE. This is the most basic personal barrier, and seems to be all too common. Lack of knowledge is not usually due to a lack of ability or intellect, but a lack of time or inclination. For whatever reason, many safety practitioners do not seem to read much about safety theory. The word 'theory' even seems to have a bad name, and yet it should be the basis for practice; otherwise our science and practice is populist, or even puerile, rather than pragmatic (this drift is evident also in psychology and other disciplines). Some reading and listening is needed to challenge one's own assumptions. Hearing something new can be challenging simply because it is new. For me personally, several system safety thinkers and systems thinkers have kept me challenging my own assumptions over the years.
  2. FEAR. Fear is the reason most closely related to Kant's, as cited by Dekker. Erik Hollnagel has cited the fear of uncertainty referred to by Nietzsche in 'Twilight of the Idols, or, How to Philosophize with a Hammer': "First principle: any explanation is better than none. Because it is fundamentally just our desire to be rid of an unpleasant uncertainty, we are not very particular about how we get rid of it: the first interpretation that explains the unknown in familiar terms feels so good that one "accepts it as true."" What old thinking in safety does is provide a quick explanation that fits our mental model. A look at the reporting of accidents in the media nearly always turns up a very simple explanation: human error. More specifically for safety practitioners, when you have invested decades in a profession, there can be little more threatening than to consider that your mindset or (at least some of) your fundamental assumptions or beliefs may be faulty. If you are an 'expert' in something that you think is particularly important (such as root cause analysis or behavioural safety), it is threatening to be demoted to an expert in something that may not be so valid after all. Rethinking one's assumptions can create cognitive dissonance, and may have financial consequences.
  3. PRIDE. If you are an 'expert', then there is not much room left to be a learner, or to innovate. Being a learner means not 'knowing', and instead being curious and challenging one's own assumptions. It means experimenting, taking risks and making mistakes. Notice how children learn? When left to their own devices, they do all of these things. Let's quit being experts (we never were anyway). Only by learning to be a learner can we ever hope to understand systems.
  4. HABIT. It seems to me that our mindsets about safety and systems are self-reinforced not only by beliefs, but by habits of thought, language and method. We habitually think in terms of bad apples or root causes, and linear cause-effect relationships. We habitually talk about "human error", "violation", "fault", "failure", etc - it is ingrained in our safety vocabulary. We habitually use the old methods that we know so well. It is a routine, and a kind of mental laziness. To make new roads, we need to step off these well-trodden mental paths.
  5. CONFORMITY. Most people naturally want to conform. We learn this from a young age, and it is imprinted via schooling. As Dekker mentions, you want to fit in with your colleagues, boss, and clients to keep the peace and keep a job. But fitting in and avoiding conflict is not how ideas evolve. Ideas evolve by standing out.
  6. OBEDIENCE. In some environments, you have to think a certain way to get by. It is not just fitting in; it is being told to, and having to - if you are to survive there. This is especially the case in command and control cultures and highly regulated industries where uniformity is enforced. If you work for a company which specialises in safety via the old paradigm, you have little choice but to obey or leave.

System Barriers

As pointed out by Donella Meadows in 'Thinking in Systems', "Paradigms are the sources of systems". Barriers to new thinking about safety are built into the very structures of the organisations and systems that we work within, with and for, and so are the most powerful. These barriers breed and interact with the personal barriers, setting up multiple interconnected feedback loops that reinforce the paradigm itself.
  1. GOALS. Goals are one of the most important parts of any system because they represent the purpose of the system and set its direction. They also reinforce the paradigm out of which the system arises. Safety goals are typically expressed in terms of unsafety, as is the quantification of these goals, relating to accidents and injuries, such as target levels of safety or safety target values. Some organisations even have targets regarding errors. Such goals stifle thinking and reinforce the existing mindset.
  2. DEMAND. Market forces and regulation can be powerful suppressors of new safety thinking. Not knowing any different, and forced by regulation, internal and external clients demand work that rests on old thinking. The paradigm, and often the approach, is specified in calls for tender. Old thinking is a steady cash cow.
  3. RULES & INCENTIVES. Rules regarding safety emerge from and reinforce the existing paradigm of safety. Rules limit and constrain safety thinking, and the products of thinking. Examples include regulations, standards and management systems with designed-in old thinking. Within the systems in which we work are various incentives - contracts, funding, prizes, bonuses, and publications - as well as punishments, that reinforce the paradigm.
  4. MEASURES. What is measured in safety has a great influence on the mindset about safety. A reading of Deming reveals that if you use a different measure, you get a different result. We typically use adverse events and other tokens of unsafety as the only measures of safety. What we need are measures of safety - of the system's ability to adapt, reorganise and succeed under varying conditions.
  5. METHODS. Changing the paradigm inevitably means changing some methods. Most existing methods (especially analytical techniques), as well as databases, are based on old paradigm thinking. A common challenge to new thinking is that there is a lack of techniques. There is a general expectation that the techniques should just be there. For corporations, changing methods is costly, especially for methods that are computerised and those used for comparisons over time.
  6. EDUCATION. Training and education at post-graduate level (which is typically where safety concepts are encountered) are influenced heavily by demand. And demand is still rooted in old paradigms and models. Reflecting new thinking in safety courses either means not meeting demand or creating a conflict of paradigms within the course. This is not fun for an educator.

So how to change the paradigm? This is the point at which I find myself caught up in a web of circular arguments. Since systems emerge from paradigms, the safety paradigm is probably the hardest thing about a system to change. But it seems that changing the system is necessary to change the paradigm, and yet you can't change the system without changing the paradigm. Maybe this is why "there is, perhaps, something indelibly conservative about the safety profession", as Sidney Dekker mentions.

Can the paradigm be changed directly? Donella Meadows seems to think it can.
"Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. You keep pointing at the anomalies and failures in the old paradigm. You keep speaking and acting, loudly and with assurance, from the new one. You insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather, you work with active change agents and with the vast middle ground of people who are open-minded." (p. 164).
This 'pointing out' is what protagonists such as Sidney Dekker, Erik Hollnagel and David Woods have been doing (along with those on the Systems Thinking side more generally, such as John Seddon). An awakening can occur in an instant or over a period of acceptance, but it takes time for this to spread to many people, especially when demand is lacking. The vast middle ground is important to gain a critical mass of new thinking, but perhaps more critical is the mindset of those who set system goals. These, in turn, trigger demand, rules and incentives, measures, methods and education.

Changing the safety paradigm is a slow process, but one that is possible, and has happened many times over in most disciplines. But it will take a bold step to pull the system levers, and this step - more than anything - needs courage.

Sunday, March 17, 2013

John Locke and the unintended consequences of targets

Given the huge body of evidence on the destructive effects of targets in complex systems such as healthcare, policing, and education, I wondered: 1) how recent is this problem, and 2) when did we first become aware of how top-down, arbitrary numerical targets distort and suboptimise systems, leading people to cheat, game, fiddle and manipulate the system in order to meet or get around the target? When I say "when did we first become aware", I am not implying that we are generally aware of their toxic effects now - targets still seem to be taken for granted, and even when their effects become clear, people argue either that they were the wrong targets, or that there were too many or too few, and stick with the target concept as they don't know what else to do.

While the UK's mass experiment with top-down, arbitrary targets in public services began in the 1990s, some bright spark worked out the pitfalls of this kind of thinking over 300 years earlier: English philosopher John Locke - one of the most influential Enlightenment thinkers (and, it seems, a Systems Thinker).
John Locke (1632-1704): Enlightened Systems Thinker
In 1668-1669, a House of Lords committee held hearings on a Bill to lower interest to an arbitrary fixed rate of 4%. The House heard testimony from members of the King's Council of Trade, and the English merchant and politician Josiah Child had a position on this Council. Child was a proponent of mercantilism, a protectionist economic doctrine involving heavy regulation and colonial expansion. The idea of the (sort of) target was to benchmark with cashed-up Holland (oh, you thought benchmarking started with Xerox?), but Child treated the maximum rate of interest as the cure for many economic and social ills. Another member of the Council of Trade, and also a member of the Lords' committee, was Lord Ashley. Ashley opposed the Bill and enlisted Locke's help via an as-yet-unpublished manuscript, Some of the Consequences that are like to follow upon Lessening of Interest to Four Percent (1668).
Josiah Child (1630-1699): Command and Control Thinker
Locke urged the defeat of the target, which was enshrined in the Bill. Being a canny Systems Thinker, he argued that the law would distort the economic system and people would find ways to circumvent it; the target would ultimately have unintended consequences and leave the economy and nation worse off. It seemed to work; the target was killed off in 1669. But in the early 1690s, Child still wanted to arbitrarily lower interest rates, and the London merchants supported him. When Bills were introduced in 1691, Locke revised his 1668 memorandum and published it as Some Considerations of the Consequences of the Lowering of Interest and the Raising the Value of Money (1691). Locke warned that,
“the Skilful, I say, will always so manage it, as to avoid the Prohibition of your Law, and keep out of its Penalty, do what you can. What then will be the unavoidable Consequences of such a Law?” 
Locke had a fair idea about these unintended consequences. He listed several, concerning the discouragement of lending and difficulty of borrowing, prejudice against widows and orphans with inheritance savings, increased advantage for specialist bankers, brokers and merchants, money hived offshore, and perjury:
"1. It will make the Difficulty of Borrowing and Lending much greater; whereby Trade (the Foundation of Riches) will be obstructed.
2. It will be a Prejudice to none but those who most need Assistance and Help, I mean Widows and Orphans, and others uninstructed in the Arts and Managements of more skilful Men; whose Estates lying in Money, they will be sure, especially Orphans, to have no more Profit of their Money, than what Interest the Law barely allows.
3. It will mightily encrease the Advantage of Bankers and Scriveners, and other such expert Brokers: Who skilled in the Arts of putting out Money according to the true and natural Value, which the present State of Trade, Money and Debts, shall always raise Interest to, they will infallibly get, what the true Value of Interest shall be, above the Legal. For Men finding the Convenience of Lodging their Money in Hands, where they can be sure of it at short Warning, the Ignorant and Lazy will be forwardest to put it into these Mens hands, who are known willingly to receive it, and where they can readily have the whole, or a part, upon any sudden Occasion, that may call for it.
4. I fear I may reckon it as one of the probable Consequences of such a Law, That it is likely to cause great Perjury in the Nation; a Crime, than which nothing is more carefully to be prevented by Lawmakers, not only by Penalties, that shall attend apparent and proved Perjury; but by avoiding and lessening, as much as may be, the Temptations to it. For where those are strong, (as they are where Men shall Swear for their own Advantage) there the fear of Penalties to follow will have little Restraint; especially if the Crime be hard to be proved. All which I suppose will happen in this Case, where ways will be found out to receive Money upon other Pretences than for Use, to evade the Rule and Rigour of the Law: And there will be secret Trusts and Collusions amongst Men, that though they may be suspected, can never be proved without their own Confession." 
Locke knew that top-down, arbitrary numerical targets distort and suboptimise systems, and lead people to cheat, game, fiddle and manipulate the system in order to meet or get around the target. Is this ringing any bells?

It is worth reading at least some of his 'letter', in its delightful Early Modern English, but note that Locke lived a few years before twitter. Unhindered by a 140 character restriction, Locke went for a 45,000 word argument. But it worked. The 4 percent target was killed again in the House of Lords.

Over 300 years later, we seem unable to grasp that arbitrary, top-down targets always have unintended consequences, which are often worse than the possible intended consequences.

Thursday, February 7, 2013

"So you have an under-reporting problem?" System barriers to incident reporting.

The reporting of safety occurrences and safety-relevant issues and conditions is an essential activity in a learning organisation. Unless people speak up - whether about unreliable equipment, unworkable procedures, or any other human performance issue - trouble will fester in the system. In my experience in safety investigation, safety culture and human factors across industries, one of the clearest signs of trouble in a safety-related organisation is a reluctance among staff to report safety issues.

Non-reporting can be hard to detect, especially when managers are disconnected from the work. There may be a built-in motivation not to be curious about a lack of reports: under-reporting gives the illusion that an organisation has few incidents or safety problems, and this may give a reassurance of safety (whereas a preoccupation with failure, or 'chronic unease', might characterise those who have worked in high reliability organisations). Where it is discovered that relevant events or issues are not being reported, too often this is seen as a sign of a person, or team, gone bad - ignorant, lazy or irresponsible. This might be the case, but only if ten or so other issues have been discounted. I have tried to distil these below, along with some relevant Safety Culture Discussion Cards.


"So you have an under-reporting problem?" Questions for the curious.

1. Is the purpose of reporting understood, and is it consistent with the purpose of the work and the organisation?
Yes / No / Don't Know
This is the first and most fundamental thing. As Donella Meadows noted in her book Thinking in Systems, "The least obvious part of the system, its function or purpose, is often the most crucial determinant of the system's behaviour". The purpose of reporting may be completely unknown, vague, ambiguous, or (most likely) seemingly inconsistent with the work or with the purpose of at least part of the organisation (e.g. a department or division), and its related goals. The purpose of reporting may relate primarily to monitoring and compliance with rules, regulations, standards or procedures. The purpose may relate to checking against organisational goals and numerical targets (e.g. relating to equipment failures, 'errors', safety outcomes, etc). In both cases, there is probably little perceived value to the reporter (and may be little value to the organisation), the purposes are incompatible, and hence there is a disincentive to report. More usefully, the purpose of reporting may be viewed in terms of learning and improving how the work works. Things like demands, goal conflicts, performance variability and capability, flow and conditions become the things of interest. In other words, the purpose of reporting should be compatible with the purpose of the work itself. See Card 1c, Card 1g and Card 4c.

2. Are people treated fairly (and not blamed or punished) when reporting?
Yes / No / Don't Know
An unjust culture is probably the most powerful reason not to report. If people are blamed or punished for their good-will performance, others will not want to report. The effect has a long shelf-life; these particular stories live on in the organisation, serving as disincentives long after the incident. Punishment may take several forms: inquisitorial investigation interview, 'explain yourself' meeting with the boss, formal warning, public admonishment and shaming, non-renewal of contract, loss of job, prosecution...even vigilante revenge attack. A culture of fear may be cultivated by designed organisational processes. The independent Rail Safety and Standards Board (RSSB) estimated that up to 600 RIDDOR incidents were not reported between 2005 and 2010 due to pressure from Network Rail. One key reason was that contractors were under pressure to meet accident targets - a clear disincentive to report, built into the system. The fears of staff were reasonable and took various forms similar to those listed above. Where punishment by external organisations is possible, a question mark arises over internal support from the organisation. In these cases, if the organisation is not supportive (legally and emotionally), there is motivation not to report. See Card 3h, Card 3d, Card 3g and Card 3f.

3. Are there no incentives to under-report?  
Yes / No / Don't Know
Messages from organisations that accidents, incidents, hazards, etc., 'must be reported' are cancelled out immediately by institutional perverse incentives not to report. They are often linked to targets of various kinds (either clearly related to safety or not), league tables which count safety occurrences, bonuses, prizes, etc. Many incentives combine reward and punishment and so are devastatingly effective in preventing reporting. The US OSHA notes in a recent whistleblower memo regarding safety incentive and disincentive policies and practices that, "some employers establish programs that unintentionally or intentionally provide employees an incentive to not report injuries". Perverse incentives were identified by RSSB in Network Rail's (at least as it operated then) Safety 365 Challenge, in which "staff and departments were rewarded for having an accident-free year with gifts of certificates and branded fleeces and mugs. But failure to get a certificate could lead to staff and departments being downgraded..." (as reported here). Anson Jack, the RSSB's director of risk, noted that the initiatives together with the culture at Network Rail led to unintended consequences of under-reporting. See Card 1f and Card 4h.

4. Do people know how to report, and do they have good access to a usable reporting method?
Yes / No / Don't Know
It is easy to assume that people have the relevant information and instruction on how to report, but especially with a complicated reporting system or form, it's worth asking whether people really understand how to report. Complicated forms and unusable systems are off-putting, as is asking for help from colleagues or a supervisor (especially in a stressed environment). Even when people know how to report, if the reporting system is hard to access or requires excessive or seemingly irrelevant input, then you have accessibility and usability barriers in the system. See Card 3j.

5. Are people given sufficient time to make the report?  
Yes / No / Don't Know
Reporting incidents and safety issues needs time to think and time to write. Ideally, the report allows the person to tell the story of what happened, not just tick some boxes, and allows them to reflect on the system influences. The time provided for the activity will send a message to the person about how important it is. If people have to report in their breaks from operational duty or after hours, then under-reporting has to be expected. See Card 2h.

6. Do people have appropriate privacy and confidentiality when reporting?  
Yes / No / Don't Know
Reporting safety-related issues and events can be sensitive in many ways. The issue may be serious, with possible further consequences, or may cause some embarrassment, awkwardness or ill-feeling. If people have to go to their manager's office or to a public PC in the café, expect them to be put off. Beyond privacy, confidentiality is crucial. Incident reporting programmes that have switched to confidential reporting have seen significant increases in reports, and not just because of a reduction in fear of retribution. As Dekker & Laursen (2007) reported, with non-confidential, punitive reporting systems, people may actually be very willing to report - but at a very superficial level, focusing on the first story ('human error') not the second story (systemic vulnerabilities). Even with confidential systems, the first question that comes to mind for some is not about what, how or why, but rather who. Even confidential reporting systems often require identifying details (such as a name, or else date, time, shift, location, etc), which might be used to try to identify the reporter. Confidentiality shouldn't be an issue in a culture which is fair, open and values learning, but it seems to remain key to encouraging reporting. See Card 3l and Card 3f.

7. Are the issues or occurrences investigated by independent, competent, respected individuals?  
Yes / No / Don't Know
If the investigator is not independent (and is instead subject to interference), if he or she lacks training and competency in investigation, or is simply not respected, then under-reporting will occur. The location of the investigation function within the organisation will be relevant; independence must be in reality, not just on an organisation chart and in a safety management manual. Ideally, investigators would be carefully selected and would have chosen the role, rather than being forced to do it. Once they are in the role, training in human factors and organisational factors is useful, but even more useful is a systems thinking and humanistic approach, with values including empathy, respect and genuineness. See Card 3n and Card 2a.

8. Are reporters actively involved and informed at every stage of the investigation?  
Yes / No / Don't Know
The best investigations, in my experience, involve reporters (and others involved) properly in the investigation. The reporter, despite sharing the same memory and cognitive biases as all of us (including investigators), is essential to tell their story, make sense of the issues, and think about possible fixes. How those involved understand the event (including different and seemingly incompatible versions of accounts, which will naturally arise from different perspectives) gives valuable information. Whether those involved are seen as co-investigators or subjects will affect the result and the likelihood of reporting. During and after the investigation, a lack of feedback is probably the most common system problem; often the result of flaws in the safety management system or an under-resourced investigation team. More generally, people need to see the results of investigations in order to trust them. Not providing access to reports may confirm fears about reporting. At the other end of the scale, overwhelming people with batches of reports and forcing them to read lengthy and sometimes irrelevant reports will not help. See Card 3k, Card 3m and Card 8a.

9. Does anything improve as a result of investigations, and are the changes communicated properly?  
Yes / No / Don't Know 
The vast majority of organisational troubles and opportunities for improvement are due to the design of the system (94% if you accept Deming's estimate, p. 315), not the individual performance of the workers. If occurrence reports lead to no systemic changes, it seems nearly pointless to report. Often, individuals have reported the same issue before, to no effect. This teaches them that reporting is pointless, going back to Question 1. Even if system changes are made following reports, not communicating them to the wider population, via communication channels that people use, means that people may not know that changes have happened, or that they resulted from reporting. See Card 3i, Card 3c, Card 6d and Card 6h.

10. Is there a local culture of reporting, where reporting is the norm and encouraged by colleagues and supervisors?  
Yes / No / Don't Know
People naturally want to fit in. If colleagues and supervisors discourage reporting, as is sometimes the case, then individuals will be uncomfortable, and will have to balance feelings of responsibility against a need to get along with colleagues. The answer to this last question will nearly always be dependent on the answers to the previous questions, though. See Card 3a and Card 3o.

If your organisation has a problem with under-reporting, the chances are there are a few No's and Don't Know's in the answers to the above. In nearly all cases, under-reporting is a system problem. If you're not sure, and want to find out, ask those who could report about what gets in the way of reporting (for other people, of course). The Safety Culture Discussion Cards might help.




Wednesday, January 30, 2013

Using the Safety Culture Discussion Cards: Tips for SWOT analysis from a user

David Thompson, a Human Factors Specialist from NATS, UK, has provided some feedback on the use of the Safety Culture Discussion Cards for a SWOT analysis. David organised a session involving six groups, each with a facilitator, and each tackling one element of safety culture. The facilitators distributed cards around the table and the groups discussed each of the topics on the cards (in terms of safety within NATS) to identify Strengths, Weaknesses, Opportunities and Threats. Facilitators then recorded the highlights on a flip chart. At the end of the 30-minute session, each facilitator was asked to give the away-day audience a brief summary of their topic.



David summarises as follows:
We used the Eurocontrol Safety Culture cards to facilitate a SWOT group discussion within the Directorate of Safety, NATS. The cards proved a very useful basis upon which to stimulate discussion into the varied themes covered. The cards themselves can be employed in a number of simple ways, with helpful examples provided. The whole purpose of which is to get people talking about safety! I see no reason why these cards cannot be used to explore different safety themes within any safety critical organisation. Whilst the cards provide a great platform to discuss safety, for the cards to have residual value, it is important to consider how any safety concerns raised could be managed beyond the horizon of the immediate discussion.

The exercise was organised so that each group had the cards for one element of safety culture.


One useful insight related to the fact that there are different numbers of cards in each element. For instance, the element 'Just Culture, Reporting & Learning' has many more cards than 'Responsibility'. One facilitator commented:
The various categories work well, although some topics are in my view more interesting to discuss than others. There are more cards in some areas than others, I don’t think this is a problem; it’s just how the material falls across the categories. But one must be mindful of this, as certain categories may result in longer discussions than others.

A facilitator suggested that it may be better sometimes to give each group a more random set of cards, and perform a SWOT analysis with these, so that the outputs could be compared between groups:
Because we were doing a SWOT analysis into the different category areas, each table’s outputs were different. In all honesty, perhaps the best strategy would have been to randomly assign the cards across the different groups, which would have provided a more homogenised output. When we went round the room one table at a time, this would have allowed a peer comparison as to if we touched on the same topics. Although having said that, there were several common themes identified particularly in the area of ‘Threats’.

The SWOT approach allows a balance between positive and negative safety issues and so helps avoid falling into the trap of seeing safety culture only in a negative light.

Monday, November 26, 2012

Using the Safety Culture Discussion Cards to help understand textual data

‘What we call our data are really our own constructions of other people’s constructions of what they and their compatriots are up to’ (Geertz, 1973)

Probably the most common approach to trying to understand safety culture is via safety climate questionnaires, usually comprising a set of items with a Likert-scale to indicate the level of agreement with each item. Unfortunately, such questionnaires alone do little, if anything, to help understand the meanings that people ascribe to their values, beliefs and behaviour, and so do not explain why we do things, why we do things in the way that we do them, or the conflicts between what we say and what we do. To gain a deeper understanding, a qualitative, interpretive approach is more fruitful, not necessarily to supplant questionnaires, but at least to supplement them. Prior to interactive methods such as focus groups and interviews, one source of data from the questionnaire itself can be a useful starting point to an interpretive approach - the free-text comments written by the respondents.


I recently used the Safety Culture Discussion Cards to help analyse several hundred typed/written unstructured comments from a safety culture questionnaire - a fairly large amount of textual data. Many of the comments were several paragraphs long and referred to a variety of issues, and were mostly very interesting, well thought out and well-written. Making sense of rich textual data is never easy. But a common approach to understanding is via 'content analysis' (Krippendorff, 2004), or textual analysis. This often involves reading the text and applying a set of codes or categories to try to understand the data.

In this case, I decided to try to use the Safety Culture Discussion Cards to help code the data. The aim was to get a detailed understanding of the issues that questionnaire respondents were motivated to comment on - the specific issues, the way the writers related issues to each other, and the number of times that each issue was mentioned. An assumption was that issues mentioned more often by respondents reflect concerns that are important to them.

The cards cover most relevant aspects of safety culture but are (deliberately) not mutually exclusive, so this had to be kept in mind during the analysis. Prior to and during the coding, it was necessary to remove or combine cards as appropriate in order to achieve some satisfactory level of mutual exclusivity.



I started the analysis by reading all of the comments very carefully, and coding pieces of text within each comment using the eight elements of safety culture covered by the cards (Management Commitment; Resourcing; Just Culture, Reporting & Learning; Risk Awareness & Management; Teamwork; Communication; Responsibility; Involvement). Because a person's comment could cover all sorts of issues, it is not possible to apply just one element code to each comment. Even a particular sentence within a comment could cover two or more issues, such as 'Management Commitment' and 'Resourcing'. So at this stage, a sentence or paragraph could be coded using one or more elements.

The next stage was to re-read the comments and now apply more specific codes to the various pieces of text. The specific codes relate to the codes on the safety culture discussion cards, from 1a to 8e, noting also where the text was positive/favourable or negative/unfavourable in nature, or sometimes both. Since some of the cards overlap, where a piece of text could be coded using more than one card (and the cards could not reasonably be mutually exclusive) the codes were combined.

The final stage involved rechecking the use of the codes for each comment to ensure consistency and calculating the usage of each code. (An even more rigorous application of this method would involve having independent coders repeat the exercise with all or some of the text, as Amy Chung and I did when analysing comments relating to HF/Ergonomics practitioners' views on barriers to research application; see Chung and Shorrock, 2011.) This allowed the relative frequency of each issue to be determined, and gave an impression of the perceived pertinence of the various issues.

The frequency of each element as well as the top 20 issues were calculated. The quantitative data, combined with discussion of the actual content of the comments, added substantially to the data received from the Likert-scale standard questionnaire items.
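For those who like to see the tallying stage made concrete, below is a minimal sketch in Python. It assumes a hypothetical list of coded text segments: the element names match the cards, but the specific card codes, valences and counts shown are illustrative only, not the actual data from this exercise.

```python
# A minimal sketch of the tallying stage, assuming hypothetical coded segments.
# The card codes and valences below are illustrative only - the real coding was
# done by hand against the Safety Culture Discussion Cards.
from collections import Counter

# Each coded segment: (element, card_code, valence)
coded_segments = [
    ("Management Commitment", "1c", "negative"),
    ("Resourcing", "2h", "negative"),
    ("Just Culture, Reporting & Learning", "3h", "negative"),
    ("Just Culture, Reporting & Learning", "3h", "positive"),
    ("Communication", "6d", "negative"),
    # ...several hundred more in the real data set
]

# Frequency of each of the eight elements across all coded segments
element_counts = Counter(element for element, _, _ in coded_segments)

# Frequency of each specific card code, ranked to give the 'top 20' issues
code_counts = Counter(code for _, code, _ in coded_segments)
top_issues = code_counts.most_common(20)

print("Element frequencies:", dict(element_counts))
print("Top issues:", top_issues)
```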

A final interesting output from this exercise is the ability to use the cards to visualise the narratives in the comments, by mapping the relationships between issues and the possible meanings emerging. This will be the subject of a different blog entry. The exercise also revealed a few issues that are not covered by the existing cards, as well as issues that are covered by the cards but were not mentioned at all by the commenters. This is useful feedback for the further development of the cards.

References

Geertz, C. (1973). The interpretation of cultures: Selected essays. Basic Books.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Thousand Oaks, CA: Sage.
Chung, A.Z.Q. and Shorrock, S.T. (2011). The research-practice relationship in ergonomics and human factors - surveying and bridging the gap. Ergonomics, 54(5), 413-429.

Wednesday, November 7, 2012

Five questions about boredom, fatigue and vigilance

Below are five questions posed by a safety colleague, and the brief responses.

1. How different are boredom and fatigue?
Both affect our ability to pay attention - to notice something that may need attention - but they are different in terms of their causes and can occur completely independently. A person can be bored during a period of low activity, but not fatigued. Prolonged boredom tends to result in fatigue, but so can high workload, lack of sleep or disruption to sleep patterns, or stress. Other than sleep or rest, there is little that you can do to manage fatigue effectively while on position, while more can be done to tackle boredom and stay in the loop. So preventing and managing fatigue is a key priority to ensure that people remain able to deal with unusual events.

2. Is low workload more dangerous than high workload?
Attention is stretched by both 'overload' and 'underload'. Both require hard work and can be stressful, particularly if there are safety consequences when something is missed. Which is more dangerous will depend on the situation and the person (for instance personality, experience and levels of stress and fatigue), but skilled professionals tend to cope better with higher workload up to the point of overload, when performance degrades more dramatically.

3. How to remain aware and vigilant for unusual situations?
Ask colleagues - people develop different visual and mental strategies that may not be obvious from the outside. But applied research using eye movement tracking gives some tips in terms of scanning. So-called "active scanning" can help to counteract degraded vigilance under low workload situations. With active scanning, people scan displays proactively in sequences or cycles depending on the traffic situation, linking specific information from different information sources. The scanning is more strategic, and helps to anticipate developing situations.

4. When are we most and least vigilant?
In a non-shiftwork environment we could highlight some times of day when we are least alert, especially during the very early morning hours, but shiftwork is a fact of life for many people working a 24-hour operation. What we can say is that we are most vigilant when well rested, engaged and interested in the activity, not distracted (e.g. TV, radio, visitors) or preoccupied with other thoughts, well hydrated, and well supported by colleagues and supervisors.

5. Is the theoretical human performance knowledge adding value?
Yes, but not nearly as much as it should. So much is known about human performance that it seems that policy and practice are decades behind. But much of what is published is irrelevant to complex systems and activities, or does not offer solutions, and technology and practices change fast, without waiting for research to catch up. Much theoretical knowledge in human factors comes from sterile experimental environments, normally focusing on one issue (e.g. vigilance) while 'controlling' (or ignoring) some of the most relevant real-life issues that interact to shape performance in the real world (e.g. motivation, risk, teamwork, supervision, background shift-fatigue). The hard part for practitioners is evaluating what aspects of the research are relevant, piecing them together and drawing out practical implications. With this in mind, the most directly useful human performance knowledge is gained by spending time with end users, listening to and observing them at work, and working with end users and other stakeholders to find solutions to human performance issues.

Wednesday, October 3, 2012

Should the Institute of Ergonomics and Human Factors be more of a campaigning organisation? Yes.

Published in 'The Ergonomist', Newsletter of the Institute of Ergonomics and Human Factors, October 2012, p. 4

In September's The Ergonomist, the President of the Institute of Ergonomics and Human Factors asked whether the IEHF should become more of a campaigning organisation. Assuming that we want to be a relevant organisation, then the answer must be 'yes'. While we have many interesting research findings and effective applications, we rarely seem to communicate our impact in the world.

It is a sad state of affairs that the 'impact' of publicly-funded research is judged primarily by the citation of journal articles by one's fellow researchers and oneself. It is equally sad that we have so few press releases, white papers, blogs or videos of our impactful theories, findings or applications. We seem to put most effort into forms of communication that are least visible to policy makers, decision makers and the public. Perhaps this is why we are still too anonymous to the wider world.

We cannot be content with only writing to each other via Old Media or speaking to each other in closed conferences if we want to make a visible difference. The research article or technical report should not be the end of the line for any of us. If we think that our discipline is important, then we need to be confident and decisive in our messages and campaigns, and clever in how we convey them.

It is great to see that our engagement with social media is growing (e.g. LinkedIn, twitter) and that we have had recent public exhibitions. But we need more involvement. We need to be prolific not in how much we write, but in the effectiveness of our communication with decision makers, those affected by our work, and the world at large.

We need to put more effort into the usability of our communication with the world. Think Wordpress, Blogger, twitter, pinterest, Google+, flickr, Picasa, Amazon, LinkedIn, Experience Project, e-petitions...as well as letters, magazines, and face-to-face, of course. None of us has 'time' for this, except the time that we prioritise for it. As Jon mentioned, raising awareness isn't just a job for the IEHF. It is for all of us to ensure that our research and practice remains relevant to the world and has broad impact.

Steve Shorrock