
Single points of failure: are the UK's railways less resistant than they used to be?

Status
Not open for further replies.

HSTEd

Veteran Member
Joined
14 Jul 2011
Messages
18,544
True. Worth noting that Mr Putin would only need to target the major signalling centres to cripple the rail network. Electrification also means that getting the railway working again would take much longer.
Perhaps these issues should be examined by the Ministry of Defence!
The cheap way to deal with that would likely be to move to a National Rail Control Centre like the ones operated by US Class I operators, then slap enough concrete or soil on the roof to make it essentially invulnerable to anything short of a nuclear weapon.

Long-range, low-cost strike weapons have made the idea of dispersing signalling control to protect it against strikes impractical.

But even that would cost quite a bit.
 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
True. Worth noting that Mr Putin would only need to target the major signalling centres to cripple the rail network. Electrification also means that getting the railway working again would take much longer.
Perhaps these issues should be examined by the Ministry of Defence!
The chances of a direct attack on the UK rail network being militarily relevant to the Russians are as near as makes no difference zero unless they get troops actually onto the south coast. The UK also has the strategic advantage for air defence of being an island nation, so we have a chance to intercept airborne threats at a standoff distance. The UK also has some fairly formidable NATO allies around Russian territory, so we'd need Finland, Norway, the Baltics, Poland, Denmark and Germany to have all fallen before we became a serious target of Russian ordnance.

Honestly the biggest threat to rail operations being centralised is the greater disruption caused if the fire alarm goes off. I suspect in the long run some burnt toast is going to cause many more delays than a missile strike.
 

Backroom_boy

Member
Joined
28 Dec 2019
Messages
454
Location
London
The chances of a direct attack on the UK rail network being militarily relevant to the Russians are as near as makes no difference zero
Conventional attacks, yes, but irregular or cyber attacks are more likely.

The cyber risk appetite at my work has definitely become much more risk-averse since Ukraine started.
 

al78

Established Member
Joined
7 Jan 2013
Messages
2,539
Is this true? A lot of the old routes crossed other lines but often only connected via goods-only or very operationally painful spurs. The grouping had led to a lot of connecting lines being built, but even then BR had to build a lot of its own. And of course a lot of dead-end branches were closed by Beeching as well.
I was thinking about the closure of the line from East Grinstead to Lewes or from Uckfield to Lewes. If the former were still operational, it could provide an alternative London to Brighton service when the Brighton main line is closed. What happens now in that case I'm not sure, I guess passengers could go to Barnham then up via Horsham or there will be rail replacement buses.
 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
I was thinking about the closure of the line from East Grinstead to Lewes or from Uckfield to Lewes. If the former were still operational, it could provide an alternative London to Brighton service when the Brighton main line is closed. What happens now in that case I'm not sure, I guess passengers could go to Barnham then up via Horsham or there will be rail replacement buses.
East Grinstead to Lewes closed (eventually) in 1958, when Lewes-Uckfield was still open. Lewes-Uckfield closed in 1968 because the costs of keeping the line open were (allegedly) too high and Parliament refused funds to reroute the line via Hamsey. Nowadays passengers will get a bus or drive up to Three Bridges, with trains able to divert via Littlehampton or Lewes if the blockage is in the right place. Once on the 4-track it's much easier to manage 2-track blocks. However, even if Lewes-Uckfield were open, diversions would be of limited use as they'd miss Haywards Heath and Gatwick and drop the trains out on the slows at South Croydon.
 

GRALISTAIR

Established Member
Joined
11 Apr 2012
Messages
9,335
Location
Dalton GA USA & Preston Lancs
There will now be numerous examples of whataboutery where this isn't possible. But it should be. A few weeks ago LNER were cancelling dozens of services on a busy day because of the Colton Junction failure and the resulting delays. But if Stansted's runway is closed because of an at-risk flight coming in, we divert to Luton and we sort onward connections. If the M6 is closed, we signpost diversions.

If the railway is meant to be a serious alternative to either, the numerous examples of “all lines blocked, get off here, maybe come back tomorrow” really need to stop.
This is true. I agree.
 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
This is true. I agree.
The rail industry keeps a remarkable amount running given the circumstances of a fixed path and everything else. When the job is stopped, much like at an airport, the dominant reaction is to get trains into stations and then start bussing people to complete their onward journeys. But a plane landing at Luton carries c.150-200 people, and you can only land a plane every 2 minutes or so (less often if you're also taking off in between), and those passengers have to get through immigration and baggage reclaim, which gives airports time to get stuff running. Often the first thing the railway knows about a problem is when the signals revert to red. Suddenly 500 people empty out of an IC train, sometimes with multiple trains at once in different platforms, and head straight for the entrance to get on the one bus you've managed to resource.
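To put rough numbers on that comparison (back-of-envelope only, using just the figures above):

[code]
# Back-of-envelope comparison of the arrival profiles described above.
# Figures are the rough ones quoted: 150-200 passengers per plane, one
# landing every ~2 minutes, versus ~500 people off an IC train at once.

PAX_PER_PLANE = 175        # midpoint of the 150-200 quoted
PLANE_INTERVAL_MIN = 2     # one landing roughly every 2 minutes
PAX_PER_IC_TRAIN = 500

# Airport: a steady trickle, smoothed further by immigration and reclaim.
airport_pax_per_hour = PAX_PER_PLANE * (60 / PLANE_INTERVAL_MIN)

# Railway: the same hourly total can arrive as a handful of instant surges.
trainloads_for_same_total = airport_pax_per_hour / PAX_PER_IC_TRAIN

print(f"Airport: ~{airport_pax_per_hour:.0f} pax/hour, ~{PAX_PER_PLANE} at a time")
print(f"Railway: that's ~{trainloads_for_same_total:.1f} trainloads of "
      f"{PAX_PER_IC_TRAIN}, each emptying onto the concourse at once")
[/code]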

The number of times 'give up, go home, try tomorrow' advice is issued is vanishingly small, and the rail crack at Colton whilst the Carstairs blockade was on was just really bad luck, with incidents at precisely the two worst places to have them.
 

Annetts key

Established Member
Joined
13 Feb 2021
Messages
2,876
Location
West is best
The evidence at the landslip site at Hook seems to be that repairs are of a much better quality than the original, but it seems all they’ll ever be funded to do is repairs; no one is proposing pre-emptive reconstruction of embankments over long distances (yet).
Pre-emptive, before problems, no. But in response to problems that passengers will mostly not have noticed, yes, sometimes. The embankments on the South Wales main line in the area of Westerleigh Junction were found to have problems, and Network Rail spent millions (and more) enhancing the railway embankments in this area.

The “last couple of decades” includes anything from 2000 onwards, and in that timeframe there’s been almost no infrastructure removal on key routes - indeed it’s increased. Lots of evidence suggests that rationalising and simplifying pointwork around major terminals actually reduces delays.
Well, often it’s not the rationalising and simplifying of pointwork that makes the difference, but rather the ripping out of old and worn-out infrastructure and the provision of new. This also gives an opportunity to realign the track, taking advantage of space created where parallel sidings or similar have been removed.

However, if pointwork or a junction is simplified too much, it can remove flexibility, which hinders the ability to carry out maintenance and renewal work in the future while still running a (limited) train service, and may result in worse disruption if there is a failure (train or infrastructure) compared to a more flexible layout.

However, more points means more maintenance. Currently, reducing the overhead cost of maintenance is the priority.

Fundamentally the railway network is pushed harder than it has been before, in a stricter safety environment, and under pressure over spiralling costs.
Yes, but some of the problems are the result of Network Rail’s own choices, such as effectively prohibiting staff from working with warning protection (red zone working). It’s a bit difficult to maintain the infrastructure if most of your staff spend large amounts of time waiting for a line blockage or a T3 worksite to be established.

And some strange choices in their ideas on engineering (such as preferring RCPL and HPSA, at one point the two most unreliable types of point operating equipment, compared to conventional HW and 63-type point machines).

As the Carmont accident showed, the points at the Carmont signalbox crossover could be reversed by the signaller, but had no facing point locks (on a line used almost wholly by passenger trains - what else is going to need to be reversed there?), and there were no point clips in the signalbox, so an MOM had to drive the best part of two hours through the storm, delaying everything on the way, just to fit two clips.

Talk about not being joined up. What a waste of time and cost. And what a hazardous journey by road, but presumably if the MOM had a road accident in the flooding it would not be reportable to RAIB as it would happen outside railway premises. So that's alright then.
Under BR on the Western, some signallers were able to clip up points. On the Western at least (I don't know about elsewhere), major PSBs often had two “standby” staff (signallers) on duty who could attend point failures, level crossing failures, signal failures etc. They would also set up emergency (ticket) block working or act as pilot persons. MOMs of course did not exist at this time.

Now, I don’t think many signallers are permitted to work on the track.

Point clips would be stored in signal boxes and, where remote from the signal box, in lineside cupboards. BR even detailed who was responsible for maintaining the point clips.

BR did eventually decide to make FPLs standard on all power operated points (where practical). But I don’t think this was ever applied to mechanical points due to the cost of the additional equipment required.

3. There seem to be frequent delays (in my part of the SE anyway) caused by a fault with the signalling. Could this be primarily a result of copper wire theft and, if so, is it more of a problem now than in the past?
The amount of cable theft or attempted cable theft varies wildly. Copper prices do have an effect. But so does catching the people responsible. When they are caught.

The railway taking precautions and not making it easy also has a very significant effect. The railway knows that laying cables out on the surface in the cess is asking for trouble. But it costs money to provide new surface concrete troughing (SCT) routes (or plastic equivalents), or to empty existing SCT routes by removing redundant cables.

Indeed, a considerable number of damaged signalling cables are damaged by the railway's own activities…

Has the move towards highly centralised signalling systems made signalling less reliable? There seem to be more failures than I remember years ago, and a major issue at one of these signalling centres could presumably affect a wide area. Do these systems have built in redundancy to cope with failures?

The ROCs are built with reserve power supplies and other such things.

Whilst failures affect a larger area, the reality is that failures in signalling in key areas can knock out half the network anyway.

If you have failures at Clapham Junction it doesn't matter if all the rest of the signalling on the SWML or other lines through it works perfectly, the service will fall over.

Whilst some faults might be worse, you bank the savings from centralisation every hour of every day.

Yes, they have redundancy, although things like a track circuit failure can't really be mitigated against. ROCs also have data logging which helps speed up fault finding but also gives lots of good data for future designers to iron out issues in later installations (among other uses).

Why is it any different to the move to PSBs in the 60s? Lose one of those and it would have all gone down the pan.

Ahh, it depends. There is so much variation that definite answers are impossible.

The next section discusses BR Western Region practice. It may have been similar elsewhere.

Relay-based route interlocking systems, based on (say) Q series BR miniature relays to BR spec 930, are generally very reliable. But it’s not practical to have these as duplicated systems. In the design used on the Western in the late 1960s and early 1970s, the important parts of the power supply systems and equipment were generally duplicated. For example, two 650V to 110V transformers (one in use and one ‘standby’) were standard in PSBs and relay rooms.

Emergency 650V generator sets were also provided to take over in the event of the loss of mains power.

The electronic remote control communications systems (‘TDM’, time division multiplex) of the late 1960s and early 1970s were not considered to be reliable enough. Two sets of telecommunications links/cables were provided, diversely routed where possible, but switching between the two was entirely manual, done by S&T technicians in the systems I worked on.

So an alternative (but less flexible) emergency system was provided, known as ‘through-routes’ or ‘selective overrides’. These enabled the main lines and important junctions to be controlled and used even during a complete failure of a TDM system, but with reduced capacity at said junctions and no access to less important branch line junctions, loops, sidings etc.

In some cases, they even went to the trouble of providing local emergency panels so that a signaller could travel to the remote junction and take control. This provided near normal signalling once the emergency panel was in operation.

Note that later, 1980s and 1990s, TDM systems were either partially or fully duplicated, the idea being that either automated switching would automatically select a working system, or the change-over would occur when the signaller operated a control switch. Hence the failed system would be deselected and a working system would take over; the S&T could then repair the failed system. As with the earlier systems, two sets of telecommunications links/cables were provided, diversely routed where possible, but now the switching between them was automatic.
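For anyone who likes to see the idea in code, here is a minimal sketch of that duplicated change-over arrangement (all names and details are my own illustration, not taken from any real TDM equipment):

[code]
# Toy sketch of duplicated-TDM change-over as described above: two
# telemetry channels, one selected; if the selected one fails and the
# standby is healthy, switch over so the S&T can repair the failed one.

class TdmChannel:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def poll(self):
        # In real equipment: check message framing, timeouts, parity etc.
        return self.healthy

class DuplicatedTdm:
    def __init__(self):
        self.channels = [TdmChannel("A"), TdmChannel("B")]
        self.in_use = 0  # index of the currently selected channel

    def supervise(self):
        """Automatic change-over: deselect a failed channel if the
        standby is healthy, and flag the failure for repair."""
        if not self.channels[self.in_use].poll():
            standby = 1 - self.in_use
            if self.channels[standby].poll():
                self.in_use = standby
                print(f"Changed over to channel {self.channels[standby].name}; "
                      "failed channel flagged for repair")
            else:
                print("Both channels failed: fall back to through-routes")

tdm = DuplicatedTdm()
tdm.channels[0].healthy = False  # simulate a failure of the working link
tdm.supervise()                  # channel B is selected automatically
[/code]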

The biggest limitation with the signalling systems described above is/was the multicore lineside cables required to connect the interlocking relay rooms and lineside location cupboards to one another. Damage to certain main cables could completely wreck the timetabled service.

Loss of a 650V feed due to a blown fuse could also cause even worse problems. Even more so if it’s due to a damaged power cable.

ROCs and other modern signalling centres generally use computer-based interlocking systems and computer-based modules in lineside cupboards.

The interlocking typically follows similar principles to the BR SSI (solid state interlocking, one of the first computer-based signalling systems) and has three identical “processor interlocking modules” for each interlocking. Three are provided for redundancy: at least two are required for safety (if one develops a fault, the errant unit is automatically taken off-line to prevent any unsafe outcome), so with three, a failure of any one still leaves the interlocking system operational.

The data links that provide communication between the interlocking and all the Trackside Functional Modules (TFMs) in the lineside cupboards (the modules that actually operate the conventional lineside signals and points, and which have inputs to read the state of point detection, track circuits, axle counters etc.) are also duplicated. Each TFM has a connection to two data links, and the data links are supposed to be diversely routed, so that even if the cables for both data links are cut in the same place, all the signalling will continue operating as normal.

However, the TFMs (of which, with SSI, there are signal modules and point modules) only have duplicated, not triplicated, processor systems. Hence, if either processor suffers a problem, the whole TFM goes off-line.
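For the code-minded, a rough sketch of the contrast just described between the triplicated interlocking modules and the duplicated TFM processors (my own illustration, not SSI's actual logic):

[code]
# Illustrative contrast between the two redundancy schemes described
# above (a sketch, not real SSI code). The interlocking has three
# processor modules, so a single faulty one is outvoted and taken
# off-line. A TFM has only two processors: if they disagree it cannot
# tell which is wrong, so the safe action is to shut the module down,
# which is why one TFM fault takes its signals and points off-line.

def interlocking_2oo3(outputs):
    """Three modules vote; any answer backed by at least two wins."""
    assert len(outputs) == 3
    for candidate in set(outputs):
        if outputs.count(candidate) >= 2:
            return candidate
    return "INTERLOCKING OFF-LINE"  # no majority: shut down safely

def tfm_2oo2(a, b):
    """Two processors must agree exactly, else the module goes off-line."""
    return a if a == b else "MODULE OFF-LINE"

print(interlocking_2oo3(["proceed", "proceed", "stop"]))  # -> proceed
print(tfm_2oo2("proceed", "stop"))                        # -> MODULE OFF-LINE
[/code]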

One signal module can operate either one complex signal, two simple signals, or various other functions (up to eight individual outputs plus a similar number of individual inputs), sometimes using relays. Point modules can control one or two point ends.

For some reason, in some places signal modules fail relatively often. Not always in the same place. Often enough that depots keep running out of spares.

Also, although there are still 650V standby generators, in modern schemes the power supply transformers are not duplicated as in the RRI system I described. There is still some limited auxiliary provision, but the preference is to have many smaller transformers and power supply units rather than a smaller number of larger capacity transformers and power supply units (which is what the WR RRI schemes used).

In some of the BR WR PSB areas controlled by RRI, battery-powered telephones were provided at junctions and at ground frames (GFs), linked by lineside telecommunications cables. The GFs controlled either facing points (complete with FPLs) or trailing points (often not FPL fitted). A set of keys was available to authorised staff so that a GF could be released (the key enabled the GF to be unlocked locally). Hence it was possible to set up a form of emergency block working even if there was a total power failure.

Some of the GFs referred to above were only provided for use during emergencies and engineering works; hence the points operated by them were sometimes referred to as emergency crossovers.

Some emergency crossovers shared a GF with sidings (used by occasional or regular freight traffic).


Moving away from interlocking systems, the comparison between older and modern signalling technologies and equipment does not get any easier.

Take signals. Mechanical signals were a pain. Wire runs (as in mechanical pulley systems) were a pain to maintain, especially where they included mechanical point detection. The main signals and shunting semaphore signals were not too bad, unless they had electrical repeaters going back to the signal box. These required primary cells to be replaced before they ran out of juice, and the contacts on the signal were difficult to keep in good condition.

But they were unaffected by power blips…

Conventional multi-aspect signal heads using tungsten filament lamps, with a main and an auxiliary filament controlled automatically by the G(M)ECR (when the main filament blew, it automatically switched in the auxiliary), were, generally speaking, extremely reliable. Obviously, as the lamps had a rated life of 1,000 hours, lamp changing occurred often. Occasionally a G(M)ECR would fail; normally it was the contact that fed the indication back to the technicians’ depot, rather than anything that caused the signal to ‘go black’.
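A toy model of that filament change-over and current-proving idea (names and structure are mine, not real G(M)ECR circuitry):

[code]
# Sketch of lamp proving with main/auxiliary filaments, as described
# above: current through the filament proves the lamp alight; if the
# main filament blows, the auxiliary is switched in automatically and
# an indication is sent back so a technician can change the lamp.

MAIN, AUX = "main", "auxiliary"

class SignalLamp:
    def __init__(self):
        self.blown = set()    # filaments that have failed
        self.selected = MAIN

    def prove(self):
        """Return True if the signal is proved alight."""
        if self.selected not in self.blown:
            return True       # current flowing: lamp proved
        if self.selected == MAIN and AUX not in self.blown:
            self.selected = AUX  # automatic change-over
            print("Aux filament in use: send 'change the lamp' indication")
            return True
        return False          # dark signal: hold the signal in rear at danger

lamp = SignalLamp()
lamp.blown.add(MAIN)
print(lamp.prove())  # True: the auxiliary filament takes over
[/code]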

The ML type were designed and manufactured such that it was possible to change the lenses, the entire lamp assembly (including the lens holder and lamp holder), the transformer and the terminal block without too much fuss. They were built to last.

LED signals are indeed very low maintenance. But they use lots of electronics in the head, so they can fail in strange ways, such as still illuminating but not drawing enough current, so that the lamp proving (relay or signal module) in the location cupboard thinks the signal is not lit. Rather annoyingly, these faults are sometimes intermittent.

And some types die completely if they experience a lightning strike even if not directly hit.

And with some types, you can’t rock up and change them easily (some do have modules that can be changed, but that only helps if you have the correct spare).

With conventional track circuits, the signalling equipment itself at least is very, very reliable. The problem is the environment, particularly the condition of the track. With concrete sleepers, once the offending failed pad/biscuit insulator(s) have been located and renewed, it could be years before a “repeat” failure.

Similarly with failed IRJ (insulated rail joint) insulations.

Track circuits running through bullhead rail mounted on damp wooden sleepers in poor ballast, oh my, if I could trade back some

Then audio frequency track circuits came along. These are great in that IRJs are no longer required in CWR/LWR, so main lines can run for miles and miles and miles without any IRJs. But as the track circuit equipment is more complex, the variety of possible causes of failure increases. With these, a failure is (approximately) just as likely to be an item of track circuit equipment as a fault or problem in the physical track.

TI21 / EBI Track 200 are definitely more of a pain to fault find compared to the older ASTER U / SF15 type.

Now axle counters are favoured. But these are just as much of a pain, at least the type used in the area where I worked (AzLM), compared to the older type that I had worked on.

You have to carry out a download to obtain the logged data, so that you can see what the system reports. Unless the intelligent infrastructure technician can obtain a download and send it to you, you have to go to the REB (railway/relocatable equipment building). And it’s not always obvious which REB the axle counter evaluator (ACE) for a particular section is in; it’s not always the nearest or most logical one relative to the location of the axle counter head…

Then you have to interpret the data log. Is it a failure of the head, a failure of the electronic junction box (EAK) near the head, a communication problem between the EAK and the ACE, or a failure of one of the cards in the ACE?

If you are going to work on the cable, the EAK or the head, you have to pull the isolation links, then travel to the relevant cable ’dis’ boxes (if a communications/cable fault is suspected) or to the EAK or head if either of those is suspected. It may be that the EAK needs setting up due to component drift (drift warning), or the EAK or head may need replacing. None of this is quick…

The ACE uses two processor cards for safety. But if one has a problem, or if power is lost to an ACE, the signaller may experience as many as fifteen ‘track failures’. If these are across a junction or points, kiss goodbye to running a train service.
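A minimal sketch of that fail-safe behaviour (my own illustration, not AzLM's real logic):

[code]
# Sketch of the fail-safe axle counter behaviour described above. An
# evaluator (ACE) supervises many sections and its two processor cards
# must both be healthy; lose one card, or lose power, and every section
# it supervises reports failed (treated as occupied) - which is how a
# single box fault becomes fifteen 'track failures' at once.

def section_states(counts, card_a_ok, card_b_ok, power_ok):
    """counts: {section name: axles currently counted in}. A section is
    'clear' only when its count is zero AND the evaluator is healthy."""
    if not (card_a_ok and card_b_ok and power_ok):
        return {s: "FAILED (treat as occupied)" for s in counts}
    return {s: "clear" if n == 0 else "occupied" for s, n in counts.items()}

counts = {f"section {i}": 0 for i in range(1, 16)}  # 15 sections, all empty
for section, state in section_states(counts, True, False, True).items():
    print(section, state)   # all fifteen show FAILED despite no trains
[/code]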

So, some advantages, and some disadvantages. Definitely not clear cut either way…
 

fishwomp

Member
Joined
5 Jan 2020
Messages
890
Location
milton keynes
[..]
There have been important moves towards standardisation of parts so that spare stocks are more cost-effective to maintain, but as ever the railway is a living beast and so there will always be odd places or old installations that haven't been captured yet. Staffing levels are a kettle of fish I'm not quite willing to look into yet.
[..]
I was under the impression that there was a lot less standardization now: coupling compatibility is substantially worse than sprinter/pacer days, which in themselves were worse than hook and chain couplings of old.

Is there some sort of adaptor coupling? How does a loco - or an incompatible unit - rescue a failure these days? I've (fortunately) not seen it done..

Whilst a pacer would never have rescued a failed Mendips stone 2,000t load.. it should be possible for a stone train to push a pacer.
 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
I was under the impression that there was a lot less standardization now: coupling compatibility is substantially worse than sprinter/pacer days, which in themselves were worse than hook and chain couplings of old.

Is there some sort of adaptor coupling? How does a loco - or an incompatible unit - rescue a failure these days? I've (fortunately) not seen it done..

Whilst a pacer would never have rescued a failed Mendips stone 2,000t load.. it should be possible for a stone train to push a pacer.
Coupling standardisation is one of my bugbears, but yes there are adapters. Tbf the Dellner coupling is steadily becoming standard, although the issues of getting trains to work in multi with each other won't ever be solved.

However what I was meaning is more in terms of infrastructure. NR are abolishing complex custom-spec pieces of pointwork in favour of standard parts and coherent designs, with other elements like signal heads and some of the back-end parts getting steadily more standard.
 

43066

On Moderation
Joined
24 Nov 2019
Messages
11,532
Location
London
I was under the impression that there was a lot less standardization now: coupling compatibility is substantially worse than sprinter/pacer days, which in themselves were worse than hook and chain couplings of old.

The physical couplings are a lot more standardised (mostly Dellners, with some Tightlocks still knocking around, e.g. Networkers, 319s). The issue is the software compatibility, or lack thereof, on modern units. Hence often even same-family stock can’t rescue each other without a fitter being involved - 222s onto 220/1s for example.
 

MarkyT

Established Member
Joined
20 May 2012
Messages
6,899
Location
Torbay
Wire theft tends to follow the price of metals. At the moment copper prices are very high, hence the risk is worth it. The move towards fibre optics isn't making much of an impact yet, as thieves don't know which is which, so they cut all of them before finding out they're useless. Signalling delays can also occur for a lot of other reasons as well.
Fibre optics have yet to supersede metallic conductors for lineside signalling power distribution, so these cables remain a target. However, many modern systems are provided with redundant power distribution throughout, via diverse routes, plus local battery backups for selected equipment, so with good design a single feeder cable being removed needn't result in a wide-area isolation. Early systems often didn't have automatic or remote switchover, however, so restoration often had to wait for technicians to attend the site of the problem and switch over manually.
Yes, they have redundancy, although things like a track circuit failure can't really be mitigated against. ROCs also have data logging which helps speed up fault finding but also gives lots of good data for future designers to iron out issues in later installations (among other uses).
Much equipment on the lineside today is remotely controlled over duplicated data links, diversely routed from the control centre via the national FTN (fixed telecoms network), a dedicated railway fibre optic communication system covering most of Great Britain.
 

fishwomp

Member
Joined
5 Jan 2020
Messages
890
Location
milton keynes
The physical couplings are a lot more standardised (mostly Dellners, with some Tightlocks still knocking around, e.g. Networkers, 319s). The issue is the software compatibility, or lack thereof, on modern units. Hence often even same-family stock can’t rescue each other without a fitter being involved - 222s onto 220/1s for example.
That's shocking - particularly the 222 v 220/1 scenario. Aren't they able to run in some form of rescue/dead mode? E.g. the only services commanded over the coupler being brake on and brake release, all traction power disabled in the rescued unit, all doors to manual, etc.?
 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
Fibre optics have yet to supersede metallic conductors for lineside signalling power distribution, so these cables remain a target. However, many modern systems are provided with redundant power distribution throughout, via diverse routes, plus local battery backups for selected equipment, so with good design a single feeder cable being removed needn't result in a wide-area isolation. Early systems often didn't have automatic or remote switchover, however, so restoration often had to wait for technicians to attend the site of the problem and switch over manually.

Much equipment on the lineside today is remotely controlled over duplicated data links, diversely routed from the control centre via the national FTN (fixed telecoms network), a dedicated railway fibre optic communication system covering most of Great Britain.
Yep, lots of layers of redundancy, much of which I'm unaware of. Some interesting applications as well: I was present in a signalbox where an aspect failure was threatening to cause a serious problem for the evening peak, so they got a technician to take the failed green aspect out of use (apologies for the lack of technical terms) and force the signal to show double yellow at best. So instead of a blank signal causing drivers to brake and report it, they just saw green - double yellow - green (or yellow/red as appropriate), which caused minimal delay.
 

43066

On Moderation
Joined
24 Nov 2019
Messages
11,532
Location
London
That's shocking - particularly the 222 v 220/1 scenario. Aren't they able to run in some form of rescue/dead mode?

Theoretically they can, but it will take so long to get someone out there who knows how to do it (and it may not work anyway) that it’s easier to just proceed on the basis that they can’t.

That’s the railway for you! :lol:
 

Horizon22

Established Member
Associate Staff
Jobs & Careers
Joined
8 Sep 2019
Messages
9,317
Location
London
Yep, lots of layers of redundancy, much of which I'm unaware of. Some interesting applications as well: I was present in a signalbox where an aspect failure was threatening to cause a serious problem for the evening peak, so they got a technician to take the failed green aspect out of use (apologies for the lack of technical terms) and force the signal to show double yellow at best. So instead of a blank signal causing drivers to brake and report it, they just saw green - double yellow - green (or yellow/red as appropriate), which caused minimal delay.

Yes, that’s quite common: keeping the signal at double yellow gives you effectively a larger block section.
 

MarkyT

Established Member
Joined
20 May 2012
Messages
6,899
Location
Torbay
Yep, lots of layers of redundancy, much of which I'm unaware of. Some interesting applications as well: I was present in a signalbox where an aspect failure was threatening to cause a serious problem for the evening peak, so they got a technician to take the failed green aspect out of use (apologies for the lack of technical terms) and force the signal to show double yellow at best. So instead of a blank signal causing drivers to brake and report it, they just saw green - double yellow - green (or yellow/red as appropriate), which caused minimal delay.
The active lamp in a signal is proved alight by a current-proving relay in series with the switched supply to its filament, which, in the case of traditional incandescent lamps, is also duplicated, with an automated switchover and alarm circuit that prompts the technician to go out and 'change the bulb' before the second filament fails. In order to show a proceed aspect, main signals always prove that the next signal ahead is alight, by including a contact of its lamp-proving relay in the rear signal's control circuitry. Thus artificially limiting the aspects available at a signal with a lamp failure can avoid the next signal in rear being held at danger, allowing traffic to continue flowing. This is less of a problem with modern LED signals, as they have a vastly longer reliable lamp life than their filament predecessors.
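For illustration, here is a minimal sketch of those two mechanisms together, proving the signal ahead alight and capping a signal's best available aspect (my own illustrative code, not real interlocking data):

[code]
# Sketch of the aspect logic described above (illustrative only).
# A signal may only show a proceed aspect if the signal ahead is proved
# alight; and a technician can cap a signal with a dead green aspect at
# double yellow, so the signal in rear is not forced to show danger.

ASPECTS = ["R", "Y", "YY", "G"]  # four-aspect sequence, most permissive last

def displayed_aspect(wanted, signal_ahead_proved_alight, cap=None):
    """wanted: the aspect the signalling would otherwise display.
    cap: technician-applied restriction, e.g. 'YY' for a failed green."""
    if not signal_ahead_proved_alight:
        return "R"  # next signal dark: hold this one at danger
    if cap is not None and ASPECTS.index(wanted) > ASPECTS.index(cap):
        return cap  # best aspect artificially limited to double yellow
    return wanted

print(displayed_aspect("G", True, cap="YY"))  # -> YY (failed green capped)
print(displayed_aspect("G", False))           # -> R  (signal ahead dark)
[/code]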
 

Gostav

Member
Joined
14 May 2016
Messages
517
The chances of a direct attack on the UK rail network being militarily relevant to the Russians are as near as makes no difference zero unless they get troops actually onto the south coast. The UK also has the strategic advantage for air defence of being an island nation, so we have a chance to intercept airborne threats at a standoff distance. The UK also has some fairly formidable NATO allies around Russian territory, so we'd need Finland, Norway, the Baltics, Poland, Denmark and Germany to have all fallen before we became a serious target of Russian ordnance.

Honestly the biggest threat to rail operations being centralised is the greater disruption caused if the fire alarm goes off. I suspect in the long run some burnt toast is going to cause many more delays than a missile strike.
This recent video is a good example of how easily those graffiti groups trespass on the infrastructure.
 

Krokodil

Established Member
Joined
23 Jan 2023
Messages
4,378
Location
Wales
True. Worth noting that Mr Putin would only need to target the major signalling centres to cripple the rail network. Electrification also means that getting the railway working again would take much longer.
Perhaps these issues should be examined by the Ministry of Defence!
Which signalling centre was built to withstand a nuclear bomb?

I was under the impression that there was a lot less standardization now: coupling compatibility is substantially worse than sprinter/pacer days, which in themselves were worse than hook and chain couplings of old.

Is there some sort of adaptor coupling? How does a loco - or an incompatible unit - rescue a failure these days? I've (fortunately) not seen it done..
57/3s were converted for rescuing Pendolinos and Voyagers; some later had their coupling heads modified to drag suburban EMUs instead.

Whilst a pacer would never have rescued a failed Mendips stone 2,000t load.. it should be possible for a stone train to push a pacer.
Believe it or not, a First North Western Class 101 once rescued a steam special. Nothing too taxing: the steam loco just couldn't create vacuum, so the 101 was coupled on to release the brakes on the coaches.

Tbf the Dellner coupling is steadily becoming standard
Not necessarily at a standard height.

although the issues of getting trains to work in multi with each other won't ever be solved.
There should at least be a "rescue mode" so that anything can be used to clear the line.

The issue is the software compatibility, or lack thereof, on modern units. Hence often even same-family stock can’t rescue each other without a fitter being involved - 222s onto 220/1s for example.
Worse than that, GWR 800s won't talk to LNER 800s, even though they are of the same class and were part of the same procurement programme. I know that the two will never meet, but it's possible that XC will eventually end up with some kind of 80x, which would work alongside the 80x units at both companies (and others). If things had been done properly we could return to the days of XC hiring sets in to work specials to Newquay in peak season. We are so close to having a standard intercity unit, yet so far.

Contrast with the blue square DMMU fleets which came from a wide variety of manufacturers with different designs, yet could all work together. 15x too.
 

Taunton

Established Member
Joined
1 Aug 2013
Messages
11,102
Pre-emptive, before problems, no. But in response to problems that passengers will mostly not have noticed, yes, sometimes. The embankments on the South Wales main line in the area of Westerleigh Junction were found to have problems, and Network Rail spent millions (and more) enhancing the railway embankments in this area.
The direct line through Westerleigh, to Swindon, was one of those built by the GWR in the early 1900s. Supposedly built to the modern standards of the time, it has been nothing but trouble ever since. Flooding in Chipping Sodbury Tunnel has never been properly overcome, and the whole line was closed for a year in 1975, just before HSTs came in, for a complete refurbishment of the drainage and reinforcement of the earthworks. The troubles are paralleled by other difficulties on the Castle Cary line, recently covered in another thread, which was built at the same time. The point being made here is that these are not dilapidated early Victorian structures, but some of the more recently built lines, from a time when there was considerable knowledge of rail construction, built by a very well funded company which didn't do things on the cheap.

Worse than that, GWR 800s won't talk to LNER 800s, even though they are of the same class and were part of the same procurement programme. I know that the two will never meet, but it's possible that XC will eventually end up with some kind of 80x, which would work alongside the 80x units at both companies (and others). If things had been done properly we could return to the days of XC hiring sets in to work specials to Newquay in peak season. We are so close to having a standard intercity unit, yet so far.

Contrast with the blue square DMMU fleets which came from a wide variety of manufacturers with different designs, yet could all work together. 15x too.
I've made the point about Blue Square standardisation before, which I believe was specified by one technical officer at BR HQ. But also in the USA, with very many different railroads and multiple manufacturers, the AAR (Association of American Railroads) has long had a standard for multiple-unit working, which among other things means that any US loco can work with any other. The standard has been progressively enhanced with technology, while retaining backwards compatibility, which means among other things that a current, all-electronic loco can be coupled to an early-1950s one and the combination driven from either without issue.

I believe AAR control is installed in all the Class 66s etc. built in North America, all of which, across different batches and different operators, appear capable of working with one another.
 
Joined
9 Sep 2022
Messages
68
Location
MAN
[quotes Annetts key's post above in full]
Indeed a very impressive answer.

It seems a shame that we still need data cabling in 2023.

And that we have trouble determining whether or not an LED is actually lit.
 

Krokodil

Established Member
Joined
23 Jan 2023
Messages
4,378
Location
Wales
I believe AAR is installed in all the Class 66 etc built in North America, all of which, different batches, different operators, appear capable of working with one another.
59s, 67s, 69s, 70s and 73/9s too, plus the 68s modified for use with Chiltern.
 

Falcon1200

Established Member
Joined
14 Jun 2021
Messages
4,816
Location
Neilston, East Renfrewshire
That's shocking - particularly the 222 v 220/1 scenario. Aren't they able to run in some form of rescue/dead mode?

Making those two otherwise almost identical fleets incompatible does sound very strange, but OTOH how often does such a unit, with multiple engines and compressors, fail so completely that assistance from another train is required? One of the advantages of DMUs!
 

MarkyT

Established Member
Joined
20 May 2012
Messages
6,899
Location
Torbay
Making those two otherwise almost identical fleets incompatible does sound very strange, but OTOH how often does such a unit, with multiple engines and compressors, fail so completely that assistance from another train is required? One of the advantages of DMUs!
Isn't it to do with the train management and traction control systems being different on the two classes, precluding multiple operation but allowing one to drag or push the other dead in an emergency?
However, if pointwork or a junction is simplified too much, it can remove flexibility, which hinders the ability to carry out maintenance and renewal work in the future while still running a (limited) train service, and may result in worse disruption if there is a failure (train or infrastructure) compared to a more flexible layout.

However, more points means more maintenance. Currently, reducing the overhead cost of maintenance is the priority.
More points in a route also means more 'points of failure' (pun intended). It's a difficult balance. The Japanese approach is to keep layouts as simple as possible: they have very few emergency crossovers and similar 'just in case' facilities, for example on their Shinkansen lines. Better to concentrate on the quality and reliability of equipment at the important junctions that must be there.
The next section discusses BR Western Region practice. It may have been similar elsewhere.

Relay-based route interlocking systems, based on (say) Q series BR miniature relays to BR spec 930, are generally very reliable. But it’s not practical to have these as duplicated systems. In the design used on the Western in the late 1960s and early 1970s, the important parts of the power supply systems and equipment were generally duplicated. For example, two 650V to 110V transformers (one in use and one ‘standby’) were standard in PSBs and relay rooms.

Emergency 650V generator sets were also provided to take over in the event of the loss of mains power.

The electronic remote control communications systems (‘TDM’, time division multiplex) of the late 1960s and early 1970s were not considered to be reliable enough. Two sets of telecommunications links/cables were provided, diversely routed where possible, but switching between the two was entirely manual, done by S&T technicians in the systems I worked on.
Early electronic TDM systems were apparently considered too novel and expensive for full duplication in 1960s and 70s schemes. By the time I joined the Reading drawing office in the mid-1980s, duplicated TDMs were being provided as standard on new schemes and where earlier TDMs were being replaced, removing the need for selective overrides and through-routes, but an independent 'all signals at red' control was always still provided via separate means for safety.
So an alternative (but less flexible) emergency system was provided, known as ‘through-routes’ or ‘selective overrides’. These enabled the main lines and important junctions to be controlled and used even during a complete failure of a TDM system, but with reduced capacity at said junctions and no access to less important branch line junctions, loops, sidings etc.
These often used frequency division multiplex technology known as reed systems, available in more secure 'vital' and 'non-vital' variants. This was the preferred alternative to duplicated TDMs in the early days, but by the mid-1980s, duplication of the primary remote control telemetry was favoured.
In some cases, they even went to the trouble of providing local emergency panels so that a signaller could travel to the remote junction and take control. This provided near normal signalling once the emergency panel was in operation.
Local or emergency control panels were never common on the WR. I remember one at Swansea for local control if the link from Port Talbot PSB failed, and Newbury had one (normal control was from the old Reading PSB), but I can't recall any other examples. They were standard on many schemes elsewhere in the UK though: every remote relay interlocking in the SR's Feltham control area (an early 1970s scheme) has or had them, for example. They were not particularly easy to operate, as neither a train describer nor a phone concentrator was provided, and there was no view over the track from inside the windowless relay rooms. Essentially, the operator assigned to the task had to follow precise telephone instructions from the normal signaller, who remained at the PSB. This is not unlike historic London Underground practice, where technicians are trained to operate local mechanical interlocking machines under instruction if the primary remote control/sequence machine system fails.
 

Krokodil

Established Member
Joined
23 Jan 2023
Messages
4,378
Location
Wales
Local or emergency control panels were never common on the WR. I remember one at Swansea for local control if the link from Port Talbot PSB failed and Newbury had one (normal control was from the old Reading PSB), but I can't recall any other examples.
Weston-super-Mare was another. I did see a list once, I think that it was around half a dozen in total.

Newbury emergency panel is preserved, as part of the collection owned by the Swindon Panel Society and stored at Didcot.

 

zwk500

Veteran Member
Joined
20 Jan 2020
Messages
15,003
Location
Bristol
Weston-super-Mare was another. I did see a list once, I think that it was around half a dozen in total.

Newbury emergency panel is preserved, as part of the collection owned by the Swindon Panel Society and stored at Didcot.

I believe there are similar emergency panels on the West Coastway which are still theoretically available if needed. Not 100% sure if that's still the case though.
 

MarkyT

Established Member
Joined
20 May 2012
Messages
6,899
Location
Torbay
Weston-super-Mare was another. I did see a list once, I think that it was around half a dozen in total.
I expect that one's still there then as Bristol PSB remains in normal control of that area.
I believe there are similar emergency panels on the West Coastway which are still theoretically available if needed. Not 100% sure if that's still the case though.
It was standard to provide them at remote relay interlockings on the Southern for a long time, so if the older interlockings survive then the panels should still be there too. They also often function as useful local diagnostic panels for technicians when fault finding, able to show track circuit occupancy, route locking, point and signal states etc. even when the local control mode isn't switched in. During the construction and commissioning of new relay interlockings, the local panels were also invaluable for testing and debugging the functionality of the installations 'offline', before they were connected up to the control centre via TDM telemetry.
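A rough sketch of that 'indications always live, controls only when switched in' behaviour (an invented structure, nothing like real panel wiring):

```python
# Toy model of a local/emergency panel: the read-only indications work at
# all times (useful for fault finding), but controls are only accepted
# once local control is switched in. Names are invented for illustration.

class Interlocking:
    def __init__(self):
        self.track_circuits = {"AA": "clear", "AB": "occupied"}
        self.points = {"101": "normal"}
        self.signals = {"S1": "red"}

    def request_route(self, route):
        print(f"route {route} set (still subject to interlocking checks)")

class LocalPanel:
    def __init__(self, interlocking):
        self.ilk = interlocking
        self.switched_in = False  # the PSB normally retains control

    def indications(self):
        # Read-only view is available whether or not local control is in
        return {"track circuits": self.ilk.track_circuits,
                "points": self.ilk.points,
                "signals": self.ilk.signals}

    def set_route(self, route):
        if not self.switched_in:
            raise PermissionError("local control not switched in; PSB has control")
        self.ilk.request_route(route)

panel = LocalPanel(Interlocking())
print(panel.indications())                # diagnostic use, any time
panel.switched_in = True                  # emergency: take local control
panel.set_route("UP MAIN to PLATFORM 1")
```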
 

Annetts key

Established Member
Joined
13 Feb 2021
Messages
2,876
Location
West is best
Honestly the biggest threat to rail operations being centralised is the greater disruption caused if the fire alarm goes off. I suspect in the long run some burnt toast is going to cause many more delays than a missile strike.
I've already seen this happen: TVSC at Didcot had to be evacuated when the fire alarm went off. This was during the period when Bristol Panel (PSB) still controlled Westerleigh Junction, but not Stoke Gifford/Bristol Parkway. Hence a service for London got trapped in the Westerleigh area. The Bristol signaller couldn't send it forward, as the line ahead was controlled by TVSC. And they couldn't reverse it, because Stoke Gifford/Bristol Parkway was now controlled by TVSC…
I don’t know however if the cause was burnt toast :lol:

Conventional attacks yes but irregular or cyber attacks are more likely.

The cyber risk appetite at my work has definitely become much more risk averse since Ukraine started.
The railway, for safety critical signalling and important telecommunications, uses its own private cables/network/communications links. Hence, unless someone physically gets access to these systems, they can't do anything (* there are some staff protection systems that do use commercial systems, but these will be using suitably robust security).

Successful cyber attacks are mainly due to poor system security, people not using strong enough passwords, or a website's servers being kept so busy with traffic generated by the attack that there is little or no capacity left to serve genuine users.

Yep, lots of layers of redundancy, much of which I'm unaware of. Some interesting applications as well - I was present in a signal box where an aspect failure was threatening to cause a serious problem for the evening peak, so they got a technician to take the failed green aspect out of use (apologies for the lack of technical terms) and force the signal to show double yellow at best. So instead of a blank signal causing drivers to brake and report it, they just saw Green - Double Yellow - Green (or Yellow/Red as appropriate), which caused minimal delay.
Aspect restrictions have been available via a technician's terminal on SSI systems since BR days. With some types of relay interlocking it could be done as well, although the S&T technicians may have needed to travel to site, depending on where the controlling interlocking was located. This was once relatively common practice if there was a cable problem: green and double yellow aspects would be taken out of use, and the affected signals would then only show red or a single yellow, while maintaining the correct aspect sequence in the signals leading up to the affected signal(s).

I know of one location where a 48-core cable was in such a bad state that only about eight or ten cores were good enough to continue in use. All the signals in the affected area were reduced to just red and yellow aspects. The signaller lost all indications in the affected area. But we kept trains moving.
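If it helps, a much-simplified sketch of the aspect restriction idea on 4-aspect signalling (illustration only; not how SSI data or relay circuits actually express it):

```python
# Simplified 4-aspect sequencing with a technician's aspect restriction:
# cap the best aspect a signal may show, and let the normal sequencing
# produce the correct approach aspects behind it. Illustration only.

ASPECTS = ["red", "yellow", "double yellow", "green"]  # least to most permissive

def shown_aspect(aspect_ahead: str, cap: str = "green") -> str:
    """Aspect displayed given the signal ahead, never better than `cap`."""
    stepped = ASPECTS[min(ASPECTS.index(aspect_ahead) + 1, len(ASPECTS) - 1)]
    return min(stepped, cap, key=ASPECTS.index)  # apply the restriction

# Failed green lamp capped at double yellow (as in the quoted anecdote):
print(shown_aspect("green", cap="double yellow"))  # -> double yellow
# Cable fault case: signals capped at single yellow show red/yellow only:
print(shown_aspect("green", cap="yellow"))         # -> yellow
# The signal in rear of a capped (yellow) signal still steps up correctly:
print(shown_aspect("yellow"))                      # -> double yellow
```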

This recent video is a good example of how easily those graffiti groups trespass on the infrastructure.
There is one GWR IET that now sports graffiti on the side of one of the driving cars!

The direct line through Westerleigh to Swindon was one of those built by the GWR in the early 1900s. Supposedly built to the modern standards of the time, it has been nothing but trouble ever since. Flooding in Chipping Sodbury Tunnel has never been properly overcome. The whole line was closed for a year in 1975, just before HSTs came in, for a complete refurbishment of drainage and reinforcement of earthworks. The troubles are paralleled by other difficulties on the Castle Cary line, recently covered in another thread, which was built at the same time. The point being made here is that these are not dilapidated early Victorian structures, but some of the more recently built lines, dating from when there was considerable knowledge of railway construction, built by a very well funded company that didn't do things on the cheap.
To be fair, the problem with the cutting and the tunnel at Chipping Sodbury is the geology of the rock that they cut through, coupled with the restriction on how much water the railway can release into the stream that feeds the River Frome. Strangely enough, the locals (that are near to this stream) are not too keen on their property being flooded…

If the railway stumped up enough money to construct a new drainage system to take the water from the railway directly to the River Frome, and likely improved said river as well, that would significantly reduce the number of occasions on which the line is closed due to flooding. But I would not hold my breath, as I don't see the required investment ever being made available.

Local or emergency control panels were never common on the WR. I remember one at Swansea for local control if the link from Port Talbot PSB failed and Newbury had one (normal control was from the old Reading PSB), but I can't recall any other examples. They were standard on many schemes elsewhere in UK though. Every remote relay interlocking in the SR's Feltham control area has or had them for example (an early 1970s scheme). They were not particularly easy to operate as neither train describer nor phone concentrator were provided, and there was no view over the track from inside the windowless relay rooms. Essentially, the operator assigned to the task had to follow precise telephone instructions from the normal signaller who remained at the PSB. This is not unlike historic London Underground practice where technicians are trained to operate local mechanical interlocking machines under instruction if the primary remote control/sequence machine system fails.

Weston-super-Mare was another. I did see a list once, I think that it was around half a dozen in total.

Newbury emergency panel is preserved, as part of the collection owned by the Swindon Panel Society and stored at Didcot.
Weston-super-Mare emergency panel was provided with both train approaching audible warnings (bells/dying pigs) and a full concentrator (telephone/SPT) position. It did, however, lack a train describer, and the route setting system did not prevent the signaller from making silly errors (although the normal interlocking kept trains safe).

When the original TDM system was replaced, the new system was not compatible with the WR practice of having the control circuitry in the negative side of the control circuits. Adapting the relay room circuitry for the emergency panel would have added to the substantial costs already needed for this work, on top of the money needed to maintain the panel itself (spares for the buttons and switches were no longer available). Operations did not see the emergency panel as very useful (often, by the time a signaller had been sent to operate it and arrived on site, the failure had already been fixed), and the new duplicated TDM was deemed sufficiently reliable that the panel was very unlikely to be needed. And (yes, there is more!) there was also the cost of arranging for signallers to practise using it at frequent enough intervals to remain competent. Hence the decision was made to decommission it.

The really interesting thing about the Weston-super-Mare emergency panel was that it was located in a room on the station. The signaller could see out of the door and see the trains in the platforms.

When a brand new WR free-wired interlocking was provided at Filton Abbey Wood, it included an emergency panel. This was different from the one at Weston-super-Mare: it used modern plastic domino tiles. No windows were provided, but if the door was open, at least the signaller could see part of the junction and part of the new station. This emergency panel did not have any train approaching audible warnings or a concentrator position, so the signaller did have to work very closely with the signaller at Bristol.

This emergency panel was taken out of use at the request of, I think, operations. But it remained in place until the interlocking was superseded by TVSC taking control of the area.

The interesting thing about this installation is that the TDM provided was a modern duplicated system. I'm told, however, that part of the reason an emergency panel was provided was for ease of testing prior to full commissioning. If it had not been provided, a temporary testing panel would have been needed anyway…

I know that Weston-super-Mare emergency panel saw real life operation. In fact it took over from the existing mechanical signal boxes before Bristol Panel (PSB) was ready to take control of the area!

Plus it was used for its intended purpose on at least two occasions during extended TDM failures.

I don’t remember Filton Abbey Wood emergency panel ever having needed to be used during a TDM failure. It definitely did get used so that the signallers could practice on a regular basis (well, until it was taken out of use). A nice rest day worked according to one Bristol signaller.

Before the provision of the new Filton Abbey Wood interlocking, the junction had been controlled by Filton Junction interlocking. Of all the selective overrides on Bristol Panel (PSB), this was the most comprehensive and complex system; so much so that there was no loss of line capacity when it was in use compared to normal working. Unlike the nearby Stoke Gifford Junction or Patchway Junction systems!

I’ve seen emergency panels elsewhere, I think one may have been in an area controlled by Westbury Panel (PSB), but it’s too long ago for me to remember much about these.

Anyway, with the modern computer-controlled systems, there is absolutely no provision for anything similar to emergency panels on the Western that I know about, or indeed for any emergency secondary systems. The interlocking being at Didcot, far from the railway it controls, makes such systems impractical and expensive.
 

Bald Rick

Veteran Member
Joined
28 Sep 2010
Messages
32,016
Ahh, it depends. There is so much variation that definite answers are impossible.

The next sections discuss BR Western Region practice. It may have been similar elsewhere….

That's a great treatise on the issues; thanks for taking the time to write it.


Worse than that, GWR 800s won't talk to LNER 800s.

West Country Accent vs Yorkshire Accent. A bit like Jethro talking to Boycott.


To be fair, the problem with the cutting and the tunnel at Chipping Sodbury is the geology of the rock that they cut through, coupled with the restriction on how much water the railway can release into the stream that feeds the River Frome. Strangely enough, the locals (that are near to this stream) are not too keen on their property being flooded…

It is now commonly referred to as “Sodding Chipbury”
 

HSTEd

Veteran Member
Joined
14 Jul 2011
Messages
18,544
TDM (Time Division Multiplexing) implies transmitting data in sequence.

What sort of time are we talking about to step through every data point once?
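For a sense of the arithmetic I have in mind (numbers entirely invented):

```python
# Illustrative only: if a TDM link polls `addresses` data words of `bits`
# each over a serial channel of `baud` bits per second, one complete scan
# of every data point takes roughly this long. All figures are invented.

def scan_time_s(addresses: int, bits: int, baud: int) -> float:
    return addresses * bits / baud

# e.g. 200 addresses of 16 bits over a 600 baud line:
print(scan_time_s(200, 16, 600))  # ~5.3 seconds per complete cycle
```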

Does this meaningfully slow the response of the signalling system to things like track circuits being actuated?

Is there a TDM specification somewhere that I could look at? If so, what would the document number be for it?
 
Status
Not open for further replies.
