For several years, I’ve said cable operators can provide reliable high-speed data, voice, and digital video services if the entire cable network – headend, distribution plant, and subscriber drops – meets or exceeds certain minimum technical performance parameters. Those parameters include: (1) the technical requirements in Part 76 of the Federal Communications Commission’s rules (or equivalent cable regulations in countries other than the United States); (2) the assumed downstream and upstream RF channel transmission characteristics in the DOCSIS Radio Frequency Interface Specification; and (3) ensuring the HFC plant’s unavailability contribution does not exceed 0.01 percent as described in the PacketCable Availability Reference Architecture. The latter is equivalent to 99.99 percent availability, or what we call "four nines." Striving to meet or exceed the PacketCable availability guidelines is a good idea even if you don’t use PacketCable in your network.
I penned a two-part column about deploying voice over Internet protocol (VoIP) telephony in the October and November 2004 issues of Communications Technology. The columns focused on the three categories of technical parameters highlighted in the previous paragraph, with a fair amount of discussion about factors that affect network availability.

Availability defined

This might be a good place to stop for a moment and define availability, a term that’s often confused with reliability. Availability is the ratio of time that a device, system, network or service is available for use to total time, often expressed as a percentage. We know that a 365-day year comprises 8,760 hours. A common network availability spec – which comes from the old Bellcore spec – is the previously mentioned 99.99 percent. That works out to 8,759.124 hours of uptime, or no more than 52.56 minutes of outage per year. If you think that’s tough, try five nines (99.999 percent), which works out to 8,759.91 hours of uptime, or no more than about 5.26 minutes of outage per year. Yikes!
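The arithmetic behind the nines is straightforward enough to capture in a few lines. This is a minimal sketch (the function name is mine, not from any standard):

```python
def downtime_per_year(availability_pct, hours_per_year=8760):
    """Return (uptime hours, allowed outage minutes) for a given availability percentage."""
    uptime_hours = hours_per_year * availability_pct / 100
    outage_minutes = (hours_per_year - uptime_hours) * 60
    return uptime_hours, outage_minutes

# Four nines: 8,759.124 hours of uptime, 52.56 minutes of outage per year
print(downtime_per_year(99.99))

# Five nines: about 5.26 minutes of outage per year
print(downtime_per_year(99.999))
```

Note how unforgiving the progression is: each added nine divides the annual outage budget by ten.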
Reliability is related to availability, but it’s not the same thing. Reliability is the probability that a system or device will not fail during some specified period. Thus, it’s incorrect to say "99.99 percent reliability" or "four nines reliability."

Meeting four nines

Back to network availability: Can an HFC network meet four nines? The bottom line is generally yes, but there are some "ifs" that come into play. Let’s look at a few of the major factors that impact network availability. First, a modern HFC architecture significantly reduces cascaded devices and components compared to old tree-and-branch architectures, which helps a bunch. Obviously, the shorter the active device cascades after the node, the better. A maximum of two or three amps in cascade is ideal, as far as helping to achieve the holy grail of four nines availability.
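To see why short cascades matter, remember that devices in series must all be working for the path to be up, so their individual availabilities multiply. A quick sketch, using a hypothetical per-device availability of 99.999 percent (not a measured figure):

```python
from functools import reduce

def series_availability(availabilities):
    """End-to-end availability of devices in series: all must be up, so multiply."""
    return reduce(lambda a, b: a * b, availabilities, 1.0)

# Hypothetical per-device availability of 0.99999 (five nines)
short_cascade = [0.99999] * 4    # node plus 3 amplifiers
long_cascade = [0.99999] * 21    # node plus 20 amplifiers, old tree-and-branch style

print(series_availability(short_cascade))  # ~0.99996, still above four nines
print(series_availability(long_cascade))   # ~0.99979, already below four nines
```

Even before counting the headend, hub, and drop, a long amplifier cascade can eat the entire four nines budget by itself.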
Next is suitable use of backup power – in the headend, hub, outside plant, and embedded multimedia terminal adapter (EMTA). That means a backup generator and uninterruptible power supply (UPS) in the headend and hub, standby power supplies in the outside plant, and battery backup in the EMTAs.
Status monitoring of critical headend and hub equipment, nodes, and standby power supplies is important to achieving high end-to-end availability.
Appropriate redundancy where it makes sense helps, too, as does the use of hardened – think more reliable – devices everywhere. To round things out, I like to toss in proactive system maintenance practices, high-quality subscriber drop installations, a quality control program, and practices that enable quick service restoration after an outage has happened.

Service availability

That largely takes care of network availability, but what about service availability? In other words, what things can affect service availability but not network availability?
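Redundancy pays off because parallel paths fail only when every path fails, so it’s the unavailabilities that multiply rather than the availabilities. Another sketch with hypothetical numbers:

```python
def parallel_availability(availabilities):
    """Availability of redundant (parallel) paths: down only if every path is down."""
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# Two redundant paths, each only 99.9 percent available (hypothetical figures)
print(parallel_availability([0.999, 0.999]))  # 0.999999 -- six nines from two three-nines paths
```

That leverage is why selective redundancy on critical elements can be cheaper than hardening every device in the cascade.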
To understand where I’m going with this, consider the following hypothetical example. Assume that one of your subscribers is watching a movie on HBO, and another subscriber is surfing the Web using a cable modem. During the movie, HBO has a problem of some sort at its main studio or uplink that causes its signal to disappear. Did an outage occur? Think carefully about your answer.
Let’s look at this more closely. From the perspective of the subscriber surfing the Web, did an outage occur? What about from the perspective of the subscriber watching the movie on HBO? Obviously the person surfing the Web did not experience an outage, but to the sub watching HBO there was an outage.
Did a cable network outage occur? No.
Did a service outage occur? Yes.
So, did an outage occur? You be the judge. My vote is yes.
The point of this example is to illustrate that there can be service outages but not network outages. The cable network – specifically the outside plant – keeps on working, but one or more services are affected for whatever reason. This suggests that we need to pay attention to both network availability and service availability. That said, what are some factors that can affect service availability?

Factors

Ingress and impulse noise are biggies. When they’re present, the cable network is usually still operational. Indeed, high-speed data subs might not notice anything wrong because if their data packets don’t get through the first time, they can be retransmitted. But VoIP telephony subs might experience voice quality problems or perhaps even dropped calls. Remember, voice packets cannot be retransmitted. They have only one chance to get through. If they don’t make it the first time, they’re toast.
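The retransmission point can be made quantitative: if each transmission attempt is lost with probability p, a data packet allowed n attempts gets through with probability 1 - p^n, while a voice packet gets exactly one try. A sketch assuming a hypothetical 5 percent loss rate from ingress or impulse noise:

```python
def delivery_probability(loss_prob, attempts):
    """Probability a packet eventually arrives, given independent loss per attempt."""
    return 1.0 - loss_prob ** attempts

p = 0.05  # hypothetical 5 percent per-attempt packet loss

print(delivery_probability(p, attempts=1))  # voice: one shot, 95 percent
print(delivery_probability(p, attempts=4))  # data with 3 retransmissions: ~99.9994 percent
```

The same impairment that a cable modem shrugs off through retransmission translates directly into dropped syllables, or dropped calls, for VoIP.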
Other gremlins that can affect service availability but not network availability include upstream or downstream laser clipping, sweep transmitter interference, and intermittent connections. These likely will have an impact on VoIP telephony service similar to that of ingress or impulse noise, while cable modem service appears to be more or less humming along just fine. Digital video might get hammered with intermittent tiling, yet analog TV channels appear fine with perhaps nothing more than an occasional glitch in the picture.
Distortions – composite triple beat (CTB), composite second order (CSO), common path distortion (CPD), and so on – don’t physically take the network down, but they sure can impair signals if they’re severe enough. Group delay? Micro-reflections? Crummy frequency response? Low carrier-to-noise ratio (CNR)? The plant is still up and running, but some or all of the digital services may be affected, perhaps to the point of no longer working. Analog TV channels may or may not be visibly affected, but they’re probably still watchable.

Tracking

Many cable operators track network availability, but how many track service availability? Granted, the latter would be a lot more difficult because it would entail keeping tabs on each channel or service carried on the network. Indeed, it probably would be very difficult to obtain accurate service availability numbers for all services. But put yourself in the shoes of your customers.
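One way to picture per-service tracking is to log outage minutes per service and compute an availability figure for each, the same way we compute it for the plant. A minimal sketch with made-up outage logs (the services and minutes below are hypothetical):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a 365-day year

def service_availability(outage_minutes, total_minutes=MINUTES_PER_YEAR):
    """Availability percentage of one service, from its logged outage minutes."""
    return 100.0 * (total_minutes - outage_minutes) / total_minutes

# Hypothetical yearly outage minutes per service; anything over 52.56
# minutes means that service missed four nines, even if the plant didn't.
outages = {"HBO": 120.0, "cable modem": 40.0, "VoIP": 55.0}

for service, minutes in outages.items():
    print(f"{service}: {service_availability(minutes):.4f} percent")
```

The bookkeeping is trivial; the hard part, as noted above, is accurately capturing the outage minutes for every channel and service in the first place.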
If we go back to the example of the subscriber watching a movie on HBO when the signal disappeared, in that subscriber’s mind an outage occurred. He or she doesn’t know or care if the service outage was caused by loss of power, a cut cable, a defective piece of headend equipment, or the hypothetical problem back at HBO’s main studio or uplink. We might know that the cable network itself didn’t experience an outage, so its availability was unaffected. But there was a service outage, one that impacted all subscribers watching HBO at the time.
Ron Hranac is a technical leader, Broadband Network Engineering, for Cisco Systems and senior technology editor for Communications Technology. Reach him at email@example.com.