The evidence at the landslip site at Hook seems to be that the repairs are of a much better quality than the original, but it seems all they’ll ever be funded to do is repairs; no one is proposing pre-emptive reconstruction of embankments over long distances (yet).
Pre-emptive before problems, no. But in response to problems that passengers will mostly not have noticed, yes, sometimes. The embankments on the South Wales main line in the area of Westerleigh Junction were found to have problems, and Network Rail spent millions (and more) enhancing the railway embankments in that area.
The “last couple of decades” includes anything from 2000 onwards, and in that timeframe there’s been almost no infrastructure removal on key routes - indeed it’s increased. Lots of evidence suggests that rationalising and simplifying pointwork around major terminals actually reduces delays.
Well, often it’s not the rationalising and simplifying of pointwork that makes the difference, but rather the ripping out of old and worn out infrastructure and the provision of new. This also gives an opportunity to realign, taking advantage of the space created where parallel sidings or similar have been removed.
However, if pointwork or a junction is simplified too much, it can remove flexibility. Which hinders the ability to carry out maintenance and renewal work in the future while still being able to run a (limited) train service. And may result in worse disruption if there is a failure (train, infrastructure) compared to a more flexible layout.
However, more points means more maintenance. Currently, reducing the overhead cost of maintenance is the priority.
Fundamentally the railway network is pushed harder than it has been before, in a stricter safety environment, and under pressure over spiraling costs.
Yes, but some of the problems are the result of Network Rail’s own choices, such as effectively prohibiting staff from working with warning protection (red zone working). It’s a bit difficult to maintain the infrastructure if most of your staff spend large amounts of time waiting for a line blockage or a T3 worksite to be established.
And some strange choices in their ideas on engineering (such as preferring RCPL and HPSA, at one point the two most unreliable types of point operating equipment compared to conventional HW and 63 type point machines).
As the Carmont accident showed, the points at the Carmont signalbox crossover could be reversed by the signaller, but had no facing point locks (on a line used almost wholly by passenger trains - what else is going to need to be reversed there?), and there were no point clips in the signalbox. So an MOM had to drive through the storm for the best part of two hours, delaying everything on the way, just to fit two clips.
Talk about not being joined up. What a waste of time and cost. And what a hazardous journey by road, but presumably if the MOM had a road accident in the flooding it would not be reportable to RAIB as it would happen outside railway premises. So that's alright then.
Under BR on Western, some signallers were able to clip up points. On Western at least (don’t know about elsewhere), major PSBs often had two “standby” staff (signallers) on duty that could attend point failures, level crossing failures, signal failures etc. They would also set up emergency (ticket) block working or act as pilot persons. MOMs of course did not exist at this time.
Now, I don’t think many signallers are permitted to work on the track.
Point clips would be stored in signal boxes and, where remote from the signal box, in lineside cupboards. BR even detailed who was responsible for maintaining the point clips.
BR did eventually decide to make FPLs standard on all power operated points (where practical). But I don’t think this was ever applied to mechanical points due to the cost of the additional equipment required.
3. There seem to be frequent delays (in my part of the SE anyway) caused by a fault with the signalling. Could this be primarily a result of copper wire theft and, if so, is it more of a problem now than in the past?
The amount of cable theft or attempted cable theft varies wildly. Copper prices do have an effect. But so does catching the people responsible. When they are caught.
The railway taking precautions and not making it easy also has a very significant effect. The railway knows that laying cables out on the surface in the cess is asking for trouble. But it costs money to provide new surface concrete troughing (SCT) routes (or plastic equivalents), or to empty existing SCT routes by removing redundant cables.
Indeed, a considerable number of signalling cables are damaged by the railway’s own activities…
Has the move towards highly centralised signalling systems made signalling less reliable? There seem to be more failures than I remember years ago, and a major issue at one of these signalling centres could presumably affect a wide area. Do these systems have built in redundancy to cope with failures?
The ROCs are built with reserve power supplies and other such things.
Whilst failures affect a larger area, the reality is that failures in signalling in key areas can knock out half the network anyway.
If you have failures at Clapham Junction it doesn't matter if all the rest of the signalling on the SWML or other lines through it works perfectly, the service will fall over.
Whilst some faults might be worse, you bank the savings from centralisation every hour of every day.
Yes, they have redundancy, although things like a track circuit failure can't really be mitigated against. ROCs also have data logging which helps speed up fault finding but also gives lots of good data for future designers to iron out issues in later installations (among other uses).
Why is it any different to the move to PSBs in the 60s? Lose one of those and it would have all gone down the pan.
Ahh, it depends. There is so much variation that definite answers are impossible.
The next sections discuss BR Western Region practice. It may have been similar elsewhere.
Relay based route interlocking (RRI) systems, using say Q series BR miniature relays to the BR 930 series specification, are generally very reliable. But it’s not practical to have these as duplicated systems. In the design used on the Western in the late 1960s and early 1970s, the important parts of the power supply systems and equipment were generally duplicated. For example, two 650V to 110V transformers (one in use and one ‘standby’) were standard in PSBs and relay rooms.
Emergency 650V generator sets were also provided to take over in the event of the loss of mains power.
The electronic remote control communications systems (‘TDM’ - time division multiplex) of the late 1960s and early 1970s were not considered to be reliable enough. Two sets of telecommunications links/cables were provided and diversely routed where possible, but switching between the two was entirely manual, carried out by S&T technicians, in the systems I worked on.
So an alternative (but less flexible) emergency system was provided, known as ‘through-routes’ or ‘selective overrides’. These enabled the main lines and important junctions to be controlled and used even during a complete failure of a TDM system, but with reduced capacity at said junctions and no access to less important branch line junctions, loops, sidings etc.
In some cases, they even went to the trouble of providing local emergency panels so that a signaller could travel to the remote junction and take control. This provided near normal signalling once the emergency panel was in operation.
Note that later (1980s and 1990s) TDM systems were either partially duplicated or fully duplicated, the idea being that either automated switching would select a working system, or the change-over would occur when the signaller operated a control switch. Hence the failed system would be deselected and a working system would take over. Then the S&T could repair the failed system. Note that, as with the earlier systems, two sets of telecommunications links/cables were provided and diversely routed where possible, but now the switching between them was automatic.
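Purely as an illustration of the change-over principle (a made-up sketch, not how any real TDM equipment is implemented, and all the names here are invented):

```python
# Hypothetical sketch of duplicated-TDM change-over logic.
# Names and behaviour are illustrative only, not any real product.

class DuplicatedTdm:
    def __init__(self):
        self.healthy = {"A": True, "B": True}   # True = system believed healthy
        self.selected = "A"                     # system currently carrying controls/indications

    def report_fault(self, system: str) -> None:
        """Mark a system as failed; change over if it was the one in use."""
        self.healthy[system] = False
        if system == self.selected:
            standby = "B" if system == "A" else "A"
            if self.healthy[standby]:
                # 1980s/90s style: automatic (or signaller-initiated) change-over
                self.selected = standby
            else:
                # Both systems failed: fall back to through-routes / selective
                # overrides, or an emergency panel at the remote junction
                self.selected = None

    def repaired(self, system: str) -> None:
        """S&T repair the failed system; it becomes the standby again."""
        self.healthy[system] = True
        if self.selected is None:
            self.selected = system


tdm = DuplicatedTdm()
tdm.report_fault("A")
print(tdm.selected)   # "B" - control continues on the other system
```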
The biggest limitation with the signalling system described above is/was the multicore lineside cables required to connect the interlocking relay rooms and lineside location cupboards to one another. Damage to certain main cables could completely wreck the timetabled service.
Loss of a 650V feed due to a blown fuse could also cause even worse problems. Even more so if it’s due to a damaged power cable.
ROCs and other modern signalling centres generally use computer based interlocking systems and computer based modules in lineside cupboards.
The interlocking typically follows similar principles to the BR SSI (solid state interlocking, one of the first computer based signalling systems) and has three identical “processor interlocking modules” for each interlocking. Three are provided for redundancy. At least two are required for safety (if one develops a fault, the errant unit will automatically be taken off line to prevent any unsafe outcome). So by having three, a failure of any one will still leave the interlocking system operational.
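As a rough illustration of why three are enough (a toy sketch of the general 2-out-of-3 idea, not of SSI’s actual internal design):

```python
from collections import Counter

def vote_2oo3(outputs):
    """2-out-of-3 majority vote over the three processor modules' outputs.

    Returns (agreed_output, indices_of_disagreeing_modules). If no two
    modules agree, nothing can be trusted and the interlocking must fail safe.
    """
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        return None, list(range(len(outputs)))   # fail safe: no majority
    return value, [i for i, out in enumerate(outputs) if out != value]

# One module disagrees: it is taken off line, the other two keep the
# interlocking running.
print(vote_2oo3(["proceed", "proceed", "danger"]))   # ('proceed', [2])
```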
The data links that provide communication between the interlocking and all the Trackside Functional Modules (TFMs) in the lineside cupboards are also duplicated. The TFMs actually operate the conventional lineside signals and points, and have inputs to read the state of point detection, track circuits, axle counters etc. Each TFM has a connection to two data links, and the data links are supposed to be diversely routed, so that a cable cut in any one place should only sever one of the two links and all the signalling will continue operating as normal.
However, the TFMs (of which, with SSI, there are signal modules and point modules) only have duplicated, not triplicated, processor systems. Hence, if either processor suffers a problem, the whole TFM goes off-line.
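Very loosely, the availability rules above amount to the following (again an illustrative sketch with invented names, not SSI’s real protocol):

```python
# Illustrative only: duplicated data links versus a 2-out-of-2 TFM.

def tfm_reachable(link_a_ok: bool, link_b_ok: bool) -> bool:
    """Each TFM is connected to two (diversely routed) data links,
    so it stays reachable while either link survives."""
    return link_a_ok or link_b_ok

def tfm_available(proc_1_ok: bool, proc_2_ok: bool) -> bool:
    """The TFM itself only has two processors (2-out-of-2 for safety),
    so a fault in either one takes the whole module off-line."""
    return proc_1_ok and proc_2_ok

# One data link cut: the signalling carries on as normal.
print(tfm_reachable(link_a_ok=False, link_b_ok=True))   # True
# One TFM processor faulty: that module's signals/points stop working.
print(tfm_available(proc_1_ok=True, proc_2_ok=False))   # False
```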
One signal module can operate either one complex signal, two simple signals, or various other functions (up to eight individual outputs plus a similar number of individual inputs), sometimes using relays. Point modules can control one or two point ends.
For some reason, in some places signal modules fail relatively often. Not always in the same place. Often enough that depots keep running out of spares.
Also, although there are still 650V standby generators, in modern schemes the power supply transformers are not duplicated like in the RRI system I described. There is still some limited auxiliary provision, but the preference is to have many smaller transformers and power supply units rather than a smaller number of larger capacity transformers and power supply units (which was what the WR RRI schemes used).
In some of the BR WR PSB areas controlled by RRI, battery powered telephones were provided at junctions and at ground frames (GF), linked by lineside telecommunications cables. The GF either had facing points (complete with FPLs) or trailing points (often not FPL fitted). A set of keys was available to authorised staff so that the GF could be released (the key enabled the GF to be unlocked locally). Hence it was possible to set up a form of emergency block working even if there was a total power failure.
Some of the GF referred to above were only provided for use during emergencies and engineering works, hence the points operated by these were sometimes referred to as emergency crossovers.
Some emergency crossovers were operated from GF that were provided for sidings (used by occasional or regular freight traffic).
Moving away from interlocking systems, the comparison between older and modern signalling technologies and equipment does not get any easier.
Take signals. Mechanical signals were a pain. Wire runs (as in mechanical pulley systems) were a pain to maintain, especially where they included mechanical point detection. The main signals and shunting semaphore signals were not too bad, unless they had electrical repeaters going back to the signal box. These required primary cells to be replaced before they ran out of juice, and the contacts on the signal were difficult to keep in good condition.
But they were unaffected by power blips…
Conventional multi-aspect signal heads using tungsten filament lamps, with a main and an auxiliary filament controlled automatically by the G(M)ECR (when the main blew, it automatically switched in the auxiliary filament), were, generally speaking, extremely reliable. Obviously, as the lamps had a rated life of 1000 hours, lamp changing occurred often. Occasionally a G(M)ECR would fail; normally it was the contact that fed the indication back to the technicians’ depot rather than anything causing the signal to ‘go black’.
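Roughly what that changeover amounts to, as a toy sketch (the real G(M)ECR is an electromechanical relay proving lamp current, not software, and the threshold figure is invented):

```python
# Illustrative sketch of main/auxiliary filament changeover and proving.
# The current threshold is a made-up number.

PROVE_THRESHOLD_A = 0.2   # hypothetical minimum current for a 'proved' filament

def ecr_state(main_current_a: float, aux_current_a: float):
    """Return (filament_in_use, indication_back_to_the_depot)."""
    if main_current_a >= PROVE_THRESHOLD_A:
        return "main", "normal"
    if aux_current_a >= PROVE_THRESHOLD_A:
        # Main filament blown: the auxiliary is switched in automatically, the
        # signal stays lit and the technicians get a first-filament-failure indication.
        return "auxiliary", "first filament failure"
    # Both filaments gone (or an ECR contact fault): the signal is dark.
    return None, "lamp failure"

print(ecr_state(0.0, 0.9))   # ('auxiliary', 'first filament failure')
```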
The ML type were designed and manufactured such that it was possible to change the lenses, the entire lamp assembly (including the lens holder and lamp holder), the transformer and the terminal block without too much fuss. They were built to last.
LED signals are indeed very low maintenance. But they use lots of electronics in the head, so they can fail in strange ways. Such as still illuminating, but not drawing enough current, so the lamp proving (relay or signal module) in the location cupboard thinks the signal is not lit. Rather annoyingly, these faults are sometimes intermittent.
And some types die completely if they experience a lightning strike even if not directly hit.
And with some types, you can’t rock up and change them easily (some do have modules that can be changed, but that only helps if you have the correct spare).
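The ‘lit but not proved’ failure mode above comes down to a current threshold; a toy illustration with made-up numbers:

```python
# Toy numbers only: why an LED head can be visibly lit yet reported unlit.

PROVING_THRESHOLD_A = 0.2   # hypothetical minimum current the lamp prover accepts

def proved_lit(lamp_current_a: float) -> bool:
    """The proving relay / signal module only 'sees' the signal as lit
    if the head draws at least the proving current."""
    return lamp_current_a >= PROVING_THRESHOLD_A

print(proved_lit(0.9))    # True  - filament lamp drawing normal current
print(proved_lit(0.05))   # False - faulty LED head: still illuminating, but
                          #         drawing too little current, so it is treated
                          #         as a lamp failure
```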
With conventional track circuits, the signalling equipment itself, at least, is very, very reliable. The problem is the environment, particularly the condition of the track. With concrete sleepers, once the offending failed pad/biscuit insulator(s) have been located and renewed, it could be years before a “repeat” failure.
Similarly with failed IRJ (insulated rail joint) insulations.
Track circuits running through bullhead rail mounted on damp wooden sleepers in poor ballast, oh my, if I could trade back some
Then audio frequency track circuits came along. These are great in that IRJs are no longer required in CWR/LWR, so main lines can run for miles and miles and miles without any IRJs. But as the track circuit equipment is more complex, the variety of possible causes of failure increases. With these, a failure is just as likely (approximately) to be an item of track circuit equipment as a fault or problem in the physical track.
TI21 / EBI Track 200 are definitely more of a pain to fault find compared to the older ASTER U / SF15 type.
Now axle counters are favoured. But these are just as much of a pain. At least the type used in the area where I worked (AzLM), compared to the older type that I have worked on.
You have to carry out a download to obtain the logged data, so that you can see what the system reports. Unless the intelligent infrastructure technician can obtain a download and send it to you, you have to go to the REB (railway/relocatable equipment building). But it’s not always obvious which REB the axle counter evaluator (ACE) for a particular section is in; it’s not always the nearest or most logical one compared to the location of the axle counter head…
Then you have to interpret the data log. Is it a failure of the head, a failure of the electronic junction box (EAK) near the head, a communication problem between the EAK and the ACE, or a failure of one of the cards in the ACE?
If you are going to work on the cable, the EAK or the head, you have to pull the isolation links. Then travel to the relevant cable ’dis’ boxes (if a communications/cable fault is suspected) or to the EAK / head if either of these is suspected. It may be that the EAK needs setting up due to component drift (drift warning). Or the EAK or head may need replacing. None of this is quick…
The ACE uses two processor cards for safety. But if one has a problem, or if power is lost to an ACE, the signaller may experience as many as fifteen ‘track failures’. If these are across a junction or points, kiss goodbye to running a train service.
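A much simplified sketch of the counting principle, and of why losing power to an evaluator drops every section it supervises (invented names and behaviour, nothing like the real AzLM data or reset procedure):

```python
# Much-simplified sketch of an axle counter evaluator (ACE).
# Names and behaviour are illustrative, not the real AzLM.

class AxleCounterSection:
    def __init__(self, name: str):
        self.name = name
        self.count_in = 0      # axles counted in by the entry head
        self.count_out = 0     # axles counted out by the exit head
        self.disturbed = False

    def status(self) -> str:
        if self.disturbed:
            return "FAILED"    # shown to the signaller as a 'track failure'
        return "CLEAR" if self.count_in == self.count_out else "OCCUPIED"

class Evaluator:
    """One ACE typically supervises many sections."""
    def __init__(self, sections):
        self.sections = sections

    def power_lost(self) -> None:
        # The counts can no longer be trusted: every section this ACE
        # supervises shows failed until power returns and each section
        # is reset / proved clear again.
        for s in self.sections:
            s.disturbed = True

sections = [AxleCounterSection(f"T{i}") for i in range(15)]
ace = Evaluator(sections)
sections[0].count_in = 4                 # a train enters the first section
print(sections[0].status())              # OCCUPIED
ace.power_lost()
print({s.status() for s in sections})    # {'FAILED'} - fifteen 'track failures' at once
```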
So, some advantages, and some disadvantages. Definitely not clear cut either way…