Heathrow have put out an announcement and have since been giving briefings, with the transport secretary doing similarly, all of which seems highly dubious and very much an attempt to go on the offensive in response to criticism of their exposure to what was not an unforeseeable event. I find myself questioning the veracity of their claims.
Black swan incidents do happen from time to time, fallback systems fail, the perfect storm of issues coincides, and perhaps this is such an event – but I am also very interested in whether the decision making here was shaped by misaligned incentives (per Willie Walsh), "enterprise risk frameworks" and everything else that tries to squeeze the life out of hard engineering decisions so as to make them just another line item around a boardroom table.
I know as much as the next person about the decisions that led to this event, but the fact of the matter is that a single substation failure seems to have had cascading effects on the ability of a piece of "critical national infrastructure" to operate at all – and they didn't have any means to recover quickly. I don't see any other way to skin it.
It is rather frustrating when this quickly becomes a PR game of cover, deflect and pacify rather than accept, apologise and commit to better. This seems so prevalent in our services and institutions today (it is rare to ever receive an apology) and is a good reminder of how centralisation and standardisation reduce a system's ability to handle anomalous events without complete collapse.
Quoting from the piece in the press, with my commentary:
“We have multiple sources of energy into Heathrow.”
It's trivially possible for this to be true without it actually meaning much – are those supplies intended to be resilient and interchangeable, or separate supplies for separate purposes? The rail network could still operate through the site with traction feeders from elsewhere – satisfying the definition of "multiple sources".
“But when a source is interrupted, we have back-up diesel generators and uninterruptable power supplies in place, and they all operated as expected.”
Great – so this sort of confirms the above in that it's not intended as a redundant system ("a source is interrupted" => run on diesel). Safety of flight wasn't impacted, which everyone keeps reminding us about as if that should be a surprise, but the airport couldn't function effectively for a day.
“Our back-up systems are safety systems which allow us to land aircraft and evacuate passengers safely, but they are not designed to allow us to run a full operation.”
Do they have multiple HV feeders, from separate sources, maybe connected in a ring, and N+1 or N+2 redundancy so they can lose one and still run the airport on full load? Apparently not, or at least they weren't working correctly.
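To make the redundancy question concrete, here is a toy N-1 check of the sort a design review might run. The feeder names and megawatt figures are invented purely for illustration and say nothing about Heathrow's actual supply arrangement or demand.

```python
# Toy N-1 check: can the remaining intakes carry the site's peak load if any
# single feeder trips? All names and figures below are hypothetical.

PEAK_LOAD_MW = 40.0  # assumed site peak demand, illustrative only

# Hypothetical intake feeders and their usable capacity in MW
FEEDERS = {
    "intake_A": 30.0,
    "intake_B": 30.0,
    "intake_C": 30.0,
}

def survives_single_loss(feeders: dict[str, float], load_mw: float) -> bool:
    """True if every single-feeder outage still leaves enough capacity for the load."""
    total = sum(feeders.values())
    return all(total - capacity >= load_mw for capacity in feeders.values())

if __name__ == "__main__":
    total = sum(FEEDERS.values())
    for name, capacity in FEEDERS.items():
        remaining = total - capacity
        status = "OK" if remaining >= PEAK_LOAD_MW else "SHORTFALL"
        print(f"lose {name}: {remaining:.0f} MW remains for {PEAK_LOAD_MW:.0f} MW of load -> {status}")
    print("N-1 compliant:", survives_single_loss(FEEDERS, PEAK_LOAD_MW))
```

The numbers are beside the point; the shape of the question is what matters – with honest figures plugged in, either the surviving capacity covers the full operation after any single loss, or it doesn't.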
“As the busiest airport in Europe, Heathrow uses as much energy as a small city, therefore it’s not possible to have back-up for all of the energy we need to run our operation safely.”
Meaningless drivel about consuming as much energy as a small city, which seems intended to keep criticism at bay with a handwaving appeal to "it's complicated". Plenty of other energy-demanding facilities (data centres, heck even actual urban areas) are typically capable of operating through the loss of a single feeder...
“We are implementing a process which will allow us to redirect power to the affected areas, but this is a safety critical process which takes time, and maintaining safety remains our priority, so we have taken the decision to close the airport for today.”
It rather surprises me that, had this sort of eventuality been foreseen, this wasn't already a tried, tested, practised and documented procedure that could have been executed in far less time.
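Purely as an illustration of the point about rehearsal, here is a trivial sketch of a scripted switchover runbook; every step and duration is invented and bears no relation to Heathrow's actual procedures. The value of such a thing is that a documented, drilled sequence has a known, bounded execution time.

```python
# Hypothetical switchover runbook: ordered steps with rehearsed durations in
# minutes. None of these steps or timings reflect any real airport procedure.

RUNBOOK = [
    ("confirm the faulted intake is isolated",          10),
    ("shed non-essential load",                         15),
    ("close bus-sections / interconnectors",            20),
    ("re-energise priority switchboards in order",      30),
    ("verify terminal systems and restart operations",  45),
]

def estimated_recovery_minutes(runbook) -> int:
    """Sum the rehearsed step durations to bound the expected recovery time."""
    return sum(minutes for _, minutes in runbook)

if __name__ == "__main__":
    for step, minutes in RUNBOOK:
        print(f"{minutes:>3} min  {step}")
    print(f"estimated recovery: {estimated_recovery_minutes(RUNBOOK)} min")
```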
I will stand corrected if there are indeed legitimate reasons that, despite the above, things broke in unforeseen ways – but I hold my suspicions that some people may have egg on their faces today as a result of botched resilience arrangements or a failure to plan adequately – with huge consequences for many travellers and employees.