Lessons from BP

6 November 2010

David Mosey, the author of NEI’s Reactor Accidents, (still available on www.getthatmag.com) defines institutional failure as “the absence or malfunction of some corporate activity necessary for safety as the result of human failure in activities which may not be acknowledged as important to safety and which occur far from the man-machine interface.”

It is scary to recognise that no industry is exempt from this. But if we humbly accept that we are fallible, we open ourselves up to learning from others.

Take the Deepwater Horizon drilling platform. Its blow-out on 20 April led to a catastrophic Gulf of Mexico oil spill and killed 11 people. A BP report pieced together a picture of what happened.

Although nuclear power generation is very different from offshore exploratory drilling, I hear in the report the same ignorance, misjudgments and chaos that I remember from Reactor Accidents.

At the time of the accident the rig crew was carrying out final activities before temporarily abandoning the platform, so they might have been distracted.

The day before the accident, the crew tried to seal the wellhead with cement, and didn't realise that the light, foamy cement mixture hadn't created a proper seal. Better testing would have told supplier Halliburton that the cement wouldn't work, the report said, and better risk management would have made BP staff alert to that risk. In addition, a wellhead component probably failed (but this remains uncertain).

The rig crew misinterpreted data in a test to determine whether the well had been shut off. It wasn't sealed properly, but they thought it was. Two sources of information were used in the test: the drilling pipe itself and a control line leading to a blow-out preventer, which cuts off the reservoir in an emergency. In fact, the team received contradictory indications from the two sources. The report said that the company's guidelines didn't sufficiently spell out failure criteria for the test, so procedures relied largely on the competency of the rig leaders.

As the hydrocarbons rose in the pipe, an increase in drill pipe pressure was discernible about 40 minutes before the rig crew acted. But no one may have been watching during that time, when the crew would have been busy with other shut-down chores. Although a company handbook stated that the well should be monitored at all times, it did not specify how to do so during end-of-well activities.

Once they realised what was going on, the rig crew faced a crucial choice: whether to route the fluids coming up the pipe to an overboard diverter line, or to a system on the rig that separates drilling mud from gas. They chose the second option, a crucial mistake that brought the hydrocarbons on to the rig. Had they chosen to divert the stream overboard, the disaster might not have happened. The report concluded that the rig team's actions showed they were not prepared to manage an 'escalating well control situation'.

The rig's ventilation probably carried the hydrocarbons to the engine room, partly because it was designed not to shut off automatically in the presence of gas (it had a manual switch-off instead). The hydrocarbons caused an engine overspeed, which might have provided the spark that set off the blaze. Several explosions rocked the platform.

Under the water, emergency bore cutoffs did not seal the well because they were in poor condition. An emergency disconnect sequence was probably disabled by the explosions. A secondary method should have actuated automatically when platform communications were lost, but it was not working: the report said one solenoid was faulty, and a set of batteries was dead.

The report executive summary concludes, “The team did not identify any single action or inaction that caused this accident. Rather, a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow the initiation and escalation of the accident. Multiple companies, work teams and circumstances were involved over time.” Every aspect of that conclusion could apply to an accident in nuclear power.

Let’s make sure our batteries are charged.


Author Info:

Will Dalrymple is editor of NEI
