System X is the digital switching system installed in almost all telephone exchanges throughout the United Kingdom from 1980 onwards.

History


Development


System X was developed by Post Office Telecommunications (later to become British Telecom), GEC, Plessey, and Standard Telephones and Cables (STC), and was first shown in public in 1979 at the Telecom 79 exhibition in Geneva, Switzerland.[1] STC withdrew from the project in 1982. In 1988, the telecommunications divisions of GEC and Plessey merged to form GPT, with Plessey subsequently being bought out by GEC and Siemens. In the late 1990s, GEC acquired Siemens' 40% stake in GPT. GEC renamed itself Marconi in 1999.

When Marconi was sold to Ericsson in January 2006, Telent plc retained System X and continues to support and develop it as part of its UK services business.

Implementation


The first System X unit to enter public service, in September 1980, was installed in Baynard House, London, and was a 'tandem junction unit' which switched telephone calls amongst some 40 local exchanges. The first local digital exchange started operation in 1981 in Woodbridge, Suffolk (near BT's Research HQ at Martlesham Heath). BT's last electromechanical trunk exchange (in Thurso, Scotland) was closed in July 1990, completing the UK's trunk network transition to purely digital operation; the UK was the first national telephone system to achieve this. The last electromechanical local exchanges, Crawford, Crawfordjohn and Elvanfoot, all in Scotland, were changed over to digital on 23 June 1995, and the last electronic analogue exchanges, Selby, Yorkshire and Leigh-on-Sea, Essex, were changed to digital on 11 March 1998.

In addition to the UK, System X was installed in the Channel Islands, and several systems were installed in other countries, although it never achieved significant export sales.

Small exchanges: UXD5


Separately from System X, BT developed the UXD5 ("unit exchange digital"), a small digital exchange which was cost-effective for small and remote communities. Developed by BT at Martlesham Heath and based on the Monarch PABX, the first example was put into service at Glenkindie, Scotland, in 1979, the year before the first System X.[2] Several hundred of these exchanges were manufactured by Plessey[3] and installed in rural areas, largely in Scotland and Wales. The UXD5 was included as part of the portfolio when System X was marketed to other countries.

System X units


System X covers three main types of telephone switching equipment. Concentrators are usually kept in local telephone exchanges but can be housed remotely in less populated areas. Digital local exchanges (DLEs) and digital main switching units (DMSUs) operate in major towns and cities and provide call routing functions. BT's network architecture designated exchanges as DLEs, DMSUs, DJSUs and so on, but other operators configured their exchanges differently to suit their own network architectures.

With the focus of the design on reliability, the general architectural principle of System X hardware is that all core functionality is duplicated across two 'sides' (side 0 and side 1). Either side of a functional resource can be the 'worker', with the other being an in-service 'standby'. Resources continually monitor themselves, and should a fault be detected the associated resource will mark itself as 'faulty' and the other side will take the load instantaneously. This resilient configuration allows hardware changes to fix faults or perform upgrades without interruption to service. Some critical hardware, such as switchplanes and waveform generators, is triplicated and works on an 'any 2 out of 3' basis. The CPUs in an R2PU processing cluster are quadruplicated to retain 75% performance capability with one out of service, instead of 50% if they were simply duplicated. Line cards providing customer line ports or the 2 Mbit/s E1 terminations on the switch have no 'second side' redundancy, although a customer can have multiple lines, or an interconnect multiple E1s, to provide resilience.
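
As a rough illustration of these redundancy principles – not based on any actual System X software, with all names invented for the example – a worker/standby changeover and an 'any 2 out of 3' vote might be sketched in Python as follows:

    # Illustrative sketch only: models the duplicated 'side 0 / side 1' worker-standby
    # scheme and the 'any 2 out of 3' voting used for triplicated hardware.
    # Names are invented for the example and do not reflect real System X software.

    class DuplicatedResource:
        def __init__(self, name):
            self.name = name
            self.faulty = {0: False, 1: False}   # self-monitoring marks a side faulty
            self.worker = 0                       # side currently carrying the load

        def report_fault(self, side):
            """A side that detects a fault marks itself faulty; the standby takes over."""
            self.faulty[side] = True
            if self.worker == side and not self.faulty[1 - side]:
                self.worker = 1 - side            # instantaneous changeover to the standby

        def in_service(self):
            return not all(self.faulty.values())

    def majority_vote(outputs):
        """'Any 2 out of 3' decision for triplicated units such as switchplanes."""
        assert len(outputs) == 3
        return max(set(outputs), key=outputs.count)

    plane = DuplicatedResource("concentrator switch")
    plane.report_fault(0)                         # side 0 self-detects a fault
    print(plane.worker)                           # -> 1: side 1 now carries the traffic
    print(majority_vote(["A", "A", "B"]))         # -> "A": the odd one out is ignored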

Concentrator unit


The concentrator unit has four main sub-systems: line modules, digital concentrator switch, digital line termination (DLT) units and control unit. Its purpose is to convert speech from analogue signals to digital format, and concentrate the traffic for onward transmission to the digital local exchange (DLE). It also receives dialled information from the subscriber and passes this to the exchange processors so that the call can be routed to its destination. In normal circumstances, it does not switch signals between subscriber lines but has limited capacity to do this if the connection to the parent switch is lost.

Each analogue line module unit converts analogue signals from a maximum of 64 subscriber lines in the access network to the 64 kbit/s digital binary signals used in the core network. This is done by sampling the incoming signal at a rate of 8 kS/s and coding each sample into an 8-bit word using pulse-code modulation (PCM) techniques. The line module also strips out any signalling information from the subscriber line, e.g. dialled digits, and passes this to the control unit. Up to 32 line modules are connected to a digital concentrator switch unit using 2 Mbit/s paths, giving each concentrator a capacity of up to 2048 subscriber lines. The digital concentrator switch multiplexes the signals from the line modules using time-division multiplexing and concentrates them onto up to 480 traffic time slots on E1 links to the exchange switch via the digital line termination units. The remaining two time slots on each E1, timeslots 0 and 16, are used for synchronisation and signalling respectively.
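
The figures quoted above fit together arithmetically. The following illustrative Python sketch (constant names invented for the example) shows how the 64 kbit/s channel rate, the 2048-line capacity and the 480 traffic time slots relate, assuming 16 E1 links towards the exchange, consistent with 30 traffic slots per E1:

    # Illustrative arithmetic for the concentrator figures quoted above.

    SAMPLE_RATE_HZ = 8_000        # 8 kS/s sampling of the analogue speech signal
    BITS_PER_SAMPLE = 8           # each sample PCM-coded into an 8-bit word
    print(SAMPLE_RATE_HZ * BITS_PER_SAMPLE)               # 64_000 bit/s core-network channel

    LINES_PER_MODULE = 64
    MODULES_PER_CONCENTRATOR = 32
    print(LINES_PER_MODULE * MODULES_PER_CONCENTRATOR)    # 2048 subscriber lines

    # Each E1 carries 32 timeslots of 64 kbit/s (2048 kbit/s in total); timeslot 0 is
    # used for synchronisation and timeslot 16 for signalling, leaving 30 for traffic.
    E1_TIMESLOTS = 32
    TRAFFIC_SLOTS_PER_E1 = E1_TIMESLOTS - 2
    TRAFFIC_SLOTS_TO_EXCHANGE = 480
    print(TRAFFIC_SLOTS_TO_EXCHANGE // TRAFFIC_SLOTS_PER_E1)   # 16 E1 links to the DLE
    print(2048 / TRAFFIC_SLOTS_TO_EXCHANGE)                    # ~4.3:1 concentration ratio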

Depending on the hardware used, concentrators support the following line types: analogue lines (either single or multiple line groups), ISDN2 (basic rate ISDN) and ISDN30 (primary rate ISDN). ISDN can run either UK-specific DASS2 or ETSI (European) protocols. Subject to certain restrictions, a concentrator can run any mix of line types, which allows operators to balance business ISDN users with residential users, giving better service to both and greater efficiency for the operator.

Concentrator units can either stand alone as remote concentrators or be co-located with the exchange core (switch and processors).

Digital local exchange


The Digital Local Exchange (DLE) hosts a number of concentrators and routes calls to different DLEs or DMSUs depending on the destination of the call. The heart of the DLE is the Digital Switching Subsystem (DSS), which consists of Time Switches and a Space Switch. Incoming traffic on the 30-channel PCM highways from the Concentrator Units is connected to Time Switches, whose purpose is to take any incoming individual Time Slot and connect it to an outgoing Time Slot, and so perform a switching and routing function. To allow access to a large range of outgoing routes, individual Time Switches are connected to each other by a Space Switch. The Time Slot inter-connections are held in Switch Maps, which are updated by software running on the Processor Utility Subsystem (PUS). The nature of the Time Switch–Space Switch architecture is such that the system is very unlikely to be affected by a faulty time or space switch unless many faults are present. The switch is 'non-blocking'.
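
The time-space-time principle can be sketched as follows. This is an illustrative model only – the map structures and names are invented for the example, not System X data structures – showing how a time switch interchanges time slots and the space switch cross-connects time switches during a given slot:

    # Minimal sketch of the time-space-time idea: a time switch interchanges timeslots
    # within a PCM highway, and the space switch connects different time switches
    # together during a given timeslot.  Data structures are invented for illustration.

    # Time switch map: incoming timeslot -> outgoing (internal) timeslot, per time switch
    time_switch_map = {
        "TS-A": {7: 19},     # timeslot 7 arriving at time switch A leaves in slot 19
        "TS-B": {19: 3},     # time switch B delivers internal slot 19 onward in slot 3
    }

    # Space switch map: for each internal timeslot, which time switch connects to which
    space_switch_map = {
        19: {"TS-A": "TS-B"},   # during slot 19, A's output is cross-connected to B
    }

    def route(in_switch, in_slot):
        """Follow a call through time switch -> space switch -> time switch."""
        internal_slot = time_switch_map[in_switch][in_slot]
        out_switch = space_switch_map[internal_slot][in_switch]
        out_slot = time_switch_map[out_switch][internal_slot]
        return out_switch, out_slot

    print(route("TS-A", 7))    # -> ('TS-B', 3)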

Digital main switching unit


The Digital Main Switching Unit (DMSU) deals with calls that have been routed by DLEs or another DMSU and is a 'trunk / transit switch', i.e. it does not host any concentrators. As with DLEs, DMSUs are made up of a Digital Switching Subsystem and a Processor Utility Subsystem, amongst other things. In the British PSTN network, each DMSU is connected to every other DMSU in the country, enabling almost congestion-proof connectivity for calls through the network. In inner London, specialised versions of the DMSU known as DJSUs carry intra-London traffic only. The DMSU network in London has been gradually phased out and moved onto more modern "NGS" switches over the years as the demand for PSTN phone lines has decreased and BT has sought to reclaim some of its floorspace. The NGS switch referred to is a version of Ericsson's AXE10 product line, phased in between the late 1990s and early 2000s.

It is common to find multiple exchanges (switches) within the same exchange building in large UK cities: DLEs for the directly connected customers and a DMSU to provide the links to the rest of the UK.

Combined Trunk & Local Exchange


The Combined Trunk & Local Exchange (CTLE) is an exchange that performs the duties of both a DLE and a DMSU – it has its own directly connected subscribers and also acts as a transit switch. These can be used by smaller network operators who have a small number of exchanges.

Processor utility subsystem


The Processor Utility Subsystem (PUS) controls the switching operations and is the brain of the DLE or DMSU. It hosts the Call Processing, Billing, Switching and Maintenance application software, amongst other software subsystems. The PUS is divided into up to eight 'clusters', depending on the amount of telephony traffic dealt with by the exchange. Each of the first four clusters contains four central processing units (CPUs), the main memory stores (STRs), and two types of backing store: primary (RAM) and secondary (hard disk). The PUS was coded in a version of the CORAL66 programming language known as PO CORAL (Post Office CORAL), later known as BTCORAL.

The original processor that went into service at Baynard House, London, was known as the MK2 BL processor. It was replaced in 1980 by the POPUS1 (Post Office Processor Utility Subsystem). POPUS1 processors were later installed in Lancaster House in Liverpool and also in Cambridge. Later, these too were replaced with a much smaller system known as the R2PU, or Release 2 Processor Utility. This was the four-CPU-per-cluster, up-to-eight-cluster system described above. Over time, as the system was developed, additional "CCP / Performance 3" clusters were added (clusters 5, 6, 7 and 8) using more modern hardware, akin to late-1990s computer technology, while the original processing clusters 0 to 3 were upgraded with, for example, larger stores (more RAM). The advanced features of this fault-tolerant system – self fault detection and recovery, battery-backed RAM, mirrored disk storage, automatic replacement of a failed memory unit, and the ability to trial new software and roll back to the previous version if necessary – help explain why these processors are still in use today. Later, the hard disks on the CCP clusters were replaced with solid-state drives to improve reliability.

In modern times, all System X switches show a maximum of 12 processing clusters; 0–3 are the four-CPU System X-based clusters and the remaining eight positions can be filled with CCP clusters, which deal with all traffic handling. Whilst the norm for a large System X switch is to have four main and four CCP clusters, there are one or two switches that have four main and six CCP clusters. The CCP clusters are limited to call handling only; there was potential for the exchange software to be re-written so that the CCP clusters could take on more than call handling, but this was scrapped as too costly a solution for replacing a system that was already working well. Should a CCP cluster fail, System X will automatically re-allocate its share of the call handling to another CCP cluster; if no CCP clusters are available, the exchange's main clusters will begin to take over the work of call handling as well as running the exchange.
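
The re-allocation behaviour described above might be sketched as follows. This is purely illustrative: the cluster names and the selection policy are invented for the example.

    # Illustrative sketch of the call-handling re-allocation described above.
    # Cluster names and the re-allocation policy are invented for the example.

    def reallocate_call_handling(failed, ccp_clusters, main_clusters):
        """Return the cluster that takes over a failed CCP cluster's call handling."""
        survivors = [c for c in ccp_clusters if c != failed]
        if survivors:
            return survivors[0]      # another CCP cluster absorbs the share
        return main_clusters[0]      # otherwise the main clusters take over call
                                     # handling as well as running the exchange

    ccp = ["CCP-4", "CCP-5", "CCP-6", "CCP-7"]
    main = ["CLUSTER-0", "CLUSTER-1", "CLUSTER-2", "CLUSTER-3"]
    print(reallocate_call_handling("CCP-5", ccp, main))        # -> 'CCP-4'
    print(reallocate_call_handling("CCP-4", ["CCP-4"], main))  # -> 'CLUSTER-0'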

In terms of structure, the System X processor is a "one master, many slaves" configuration – cluster 0 is referred to as the base cluster and all other clusters are effectively dependent on it. If a slave cluster is lost, then call handling for any routes or concentrators dependent on it is also lost; however, if the base cluster is lost, the entire exchange ceases to function. This is a very rare occurrence, as the design of System X allows it to isolate problematic hardware and raise a fault report. During normal operation, the highest level of disruption is likely to be a base cluster restart: all exchange functions are lost for 2–5 minutes while the base cluster and its slaves come back online, but afterwards the exchange will continue to function with the defective hardware isolated. The exchange can and will restart ('rollback') individual processes if it detects problems with them; if that does not work, a cluster restart can be performed. Should the base cluster or switch be irrecoverable via restarts, the latest archive configuration can be manually reloaded using the restoration procedure. This can take hours to bring everything fully back into service, as the switch has to reload all its semi-permanent paths and the concentrators have to download their configurations. Post-2020, exchange software is being modified to reduce the restoration time significantly.
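
The dependency rule can be illustrated with a short sketch (names and data structures invented for the example):

    # Sketch of the 'one master, many slaves' dependency rule (illustrative only).

    cluster_of = {                 # which cluster each concentrator or route depends on
        "concentrator-A": 1,
        "concentrator-B": 2,
        "route-to-DMSU": 3,
    }
    BASE_CLUSTER = 0

    def affected_by_loss(lost_cluster):
        """What stops working when a given cluster is lost."""
        if lost_cluster == BASE_CLUSTER:
            return "entire exchange"             # base cluster loss stops everything
        return [k for k, c in cluster_of.items() if c == lost_cluster]

    print(affected_by_loss(2))    # -> ['concentrator-B']
    print(affected_by_loss(0))    # -> 'entire exchange'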

During normal operation, the exchange's processing clusters will sit between 5% and 15% usage, with the exception of the base cluster, which will usually sit between 15% and 25% usage, spiking as high as 45%; this is because the base cluster handles far more operations and processes than any other cluster on the switch.

Editions of System X


System X has gone through two major editions, Mark 1 and Mark 2, referring to the switch matrix used.

The Mark 1 Digital Subscriber Switch (DSS) was the first to be introduced. It is a time-space-time switch with a theoretical maximum matrix of 96x96 Time Switches; in practice, the maximum size of switch is a 64x64 Time Switch matrix. Each time switch is duplicated into two security planes, 0 and 1. This allows for error checking between the planes and multiple routing options if faults are found. Every timeswitch on a single plane can be out of service and full function of the switch can still be maintained; however, if one timeswitch on plane 0 is out and another on plane 1 is out, then links between the two are lost. Similarly, if a timeswitch has both plane 0 and plane 1 out, then the timeswitch is isolated. Each plane of the timeswitch occupies one shelf in a three-shelf group – the lower shelf is plane 0, the upper shelf is plane 1 and the middle shelf is occupied by up to 32 DLTs (Digital Line Terminations). The DLT is a 2048 kbit/s 32-channel PCM link in and out of the exchange. The space switch is a more complicated entity, but is given a name ranging from AA to CC (or BB within general use), a plane of 0 or 1 and, due to the way it is laid out, an even or odd segment, designated by another 0 or 1. The name of a space switch in software can therefore look like this: SSW H'BA-0-1. The space switch is the entity that provides the logical cross connection of traffic across the switch, and the time switches are dependent on it. When working on a space switch it is imperative to make sure the rest of the switch is healthy, as, due to its layout, powering off either the odd or even segment of a space switch will "kill" all of its dependent time switches for that plane. Mark 1 DSS is controlled by a triplicated set of Connection Control Units (CCUs), which run in a 2/3 majority for error checking, and is monitored constantly by a duplicated Alarm Monitoring Unit (AMU), which reports faults back to the DSS Handler process for appropriate action to be taken. The CCU and AMU also play a part in diagnostic testing of Mark 1 DSS.
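
The plane-redundancy rules described above can be illustrated as follows. This is a toy model with invented timeswitch names, not System X software: a link between two timeswitches needs a plane that both have in service, and a timeswitch with both planes out is isolated.

    # Sketch of the Mark 1 DSS plane-redundancy rules (illustrative only).
    # Each timeswitch records which of security planes 0 and 1 are in service.

    planes_in_service = {
        "TSW-3": {0, 1},      # both planes healthy
        "TSW-7": {1},         # plane 0 out of service
        "TSW-9": {0},         # plane 1 out of service
        "TSW-12": set(),      # both planes out: timeswitch is isolated
    }

    def link_available(a, b):
        """Two timeswitches can only carry traffic on a plane they both have in service."""
        return bool(planes_in_service[a] & planes_in_service[b])

    def isolated(ts):
        return not planes_in_service[ts]

    print(link_available("TSW-3", "TSW-7"))   # True: both have plane 1
    print(link_available("TSW-7", "TSW-9"))   # False: opposite planes out, link lost
    print(isolated("TSW-12"))                 # True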

A Mark 1 System X unit is built in suites, each eight racks in length, and there can be 15 or more suites. Considerations of space, power demand and cooling demand led to the development of the Mark 2.

Mark 2 DSS ("DSS2") is the later revision, which continues to use the same processor system as Mark 1 but made substantial and much-needed revisions to both the physical size of the switch and the way the switch functions. It is an optical fibre-based time-space-time-space-time switching matrix connecting a maximum of 2048 2 Mbit/s PCM systems, much like Mark 1; however, the hardware is much more compact.

The four-rack group of the Mk1 CCU and AMU is gone, replaced by a single connection control rack comprising the Outer Switch Modules (OSMs), Central Switch Modules (CSMs) and the relevant switch/processor interface hardware. The Timeswitch shelves are replaced with Digital Line Terminator Group (DLTG) shelves, each containing two DLTGs, which comprise 16 Double Digital Line Termination boards (DDLTs) and two Line Communication Multiplexors (LCMs), one for each security plane. The LCMs are connected to the OSMs by optical fibre over a 40 Mbit/s link. In total, there are 64 DLTGs in a fully sized Mk2 DSS unit, analogous to the 64 Time Switches of the Mk1 DSS unit. The Mk2 DSS unit is a lot smaller than the Mk1, and as such consumes less power and generates less heat. It is also possible to interface directly with SDH transmission over fibre at 40 Mbit/s, thus reducing the amount of 2 Mbit/s DDF and SDH tributary usage; theoretically, a transit switch (DMSU) could interface purely with SDH over fibre, with no DDF at all. Further, due to the completely revised switch design and layout, the Mk2 switch manages to be somewhat faster than the Mk1 (although the actual difference is negligible in practice). It is also far more reliable: having far fewer discrete components in each of its sections means there is much less to go wrong, and when something does go wrong it is usually a matter of replacing the card tied to the software entity that has failed, rather than needing to run diagnostics to determine possible locations for the point of failure, as is the case with Mk1 DSS.

In the early 2020s, BT commenced rationalisation of its System X estate to save power, cost and floorspace and to improve reliability – the ageing Mk1 switches were becoming a maintenance headache. Reduced traffic volumes and reduced numbers of subscribers mean the System X estate has significant opportunity for downsizing. This rationalisation process entails re-purposing DMSUs/DJSUs equipped with Mk2 switches into CTLEs, re-parenting concentrators onto them from other exchanges and shutting down those exchanges. This results in 'super CTLEs' hosting large numbers (60 or more) of concentrators. The large number of concentrators results in a long restoration time in the event of a major fault on the exchange, so Telent have re-written the exchange software to improve restoration times. This major software revision is expected to see the system through until network operators retire their System X estates; retirement plans have invariably taken longer than planned due to the inability of IP-based networks to handle legacy services, especially machine-to-machine communications.

Message Transmission Subsystem


A System X exchange's processors communicate with its concentrators and other exchanges using its Message Transmission Subsystem (MTS). MTS links are 'nailed up' between nodes by re-purposing individual 64 kbit/s digital speech channels across the switch into permanent paths for the signalling messages to route over. Messaging to and from concentrators uses proprietary messaging, while messaging between exchanges uses C7 / SS7 signalling; UK-specific and ETSI variant protocols are supported. It was also possible to use channel-associated signalling, but as the UK's and Europe's exchanges went digital in the same era this was hardly used.
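
As an illustration of the 'nailing up' idea, a semi-permanent signalling path might be modelled as below. The identifiers are invented, and the choice of timeslot 16 simply follows the signalling-timeslot convention mentioned earlier; this is not System X software.

    # Sketch of 'nailing up' a 64 kbit/s channel as a permanent signalling path
    # (illustrative only; identifiers are invented).

    # Semi-permanent paths held by the exchange: (link, timeslot) -> signalling protocol
    nailed_up_paths = {}

    def nail_up(link, timeslot, protocol):
        """Reserve a 64 kbit/s speech channel as a permanent MTS signalling path."""
        nailed_up_paths[(link, timeslot)] = protocol

    # Proprietary messaging towards a concentrator, C7/SS7 towards another exchange.
    nail_up("E1-to-concentrator-12", 16, "proprietary MTS")
    nail_up("E1-to-DMSU-3", 16, "C7/SS7")
    print(nailed_up_paths)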

Replacement system


Many of the System X exchanges installed during the 1980s continue in service into the 2020s.

System X was scheduled for replacement with Next Generation softswitch equipment as part of BT's 21st Century Network (21CN) programme. Some other users of System X – in particular Jersey Telecom and Kingston Communications – replaced their circuit-switched System X equipment with Marconi XCD5000 softswitches (which were intended as the NGN replacement for System X) and Access Hub multiservice access nodes. However, the omission of Marconi from BT's 21CN supplier list and the shift in focus away from telephony towards broadband led to much of the System X estate being retained and maintained.


References

  1. ^ "Exhibits: System X". The Communications Museum Trust. Retrieved 27 May 2021.
  2. ^ Ames, John (9 December 2015). "Memories of the Glenkindie telephone exchange". National Museums Scotland. Retrieved 27 May 2021.
  3. ^ "History of Plessey". www.britishtelephones.com. Retrieved 27 May 2021.