A recent discussion on the SCTE-List about dealing with signal leakage in cable networks that have migrated to all-digital operation sparked a lengthy and very interesting thread of messages. As you know, Part 76 of the FCC Rules is very clear about signal leakage limits, leakage monitoring and measurement, and other criteria with which U.S. cable operators have had to comply for many years. The bottom line is that leakage monitoring, measurement and repair don’t go away in an all-digital network. Why is it even a topic of discussion, then?
Nature of the beast
One major challenge is measuring leakage when the signal leaking out of the plant is a noise-like 64- or 256-QAM (quadrature amplitude modulation) "haystack." Consider a 20 microvolt per meter (µV/m) leak on Ch. 16. A field strength of 20 µV/m on that channel works out to about -43 dBmV at the terminals of a half-wave dipole antenna. If the signal being measured is a continuous wave (CW, or unmodulated) carrier or an analog TV channel’s visual carrier, no problem. The power being measured is largely confined to the carrier. But a QAM signal’s amplitude, or digital channel power, is the average power across the entire channel! We have to account for the bandwidth difference between the QAM signal being measured and the intermediate frequency (IF) bandwidth of the leakage detector because the detector will measure only a tiny portion of the 6 MHz wide noise-like signal.
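The field strength-to-dipole conversion above can be sketched as follows. This assumes a half-wave dipole whose effective length is λ/π and a matched load (so the terminal voltage is half the open-circuit voltage), with the measurement frequency taken as Ch. 16's visual-carrier slot of about 133.2625 MHz; the function name is illustrative, not from any standard tool.

```python
import math

def field_strength_to_dbmv(e_uv_per_m: float, freq_mhz: float) -> float:
    """Convert field strength (uV/m) to the matched-load terminal level
    (dBmV) of a half-wave dipole. Effective length is lambda/pi; the
    terminal voltage into a matched load is half the open-circuit
    voltage, i.e. E * lambda / (2 * pi)."""
    wavelength_m = 300.0 / freq_mhz                    # free-space wavelength
    v_uv = e_uv_per_m * wavelength_m / (2 * math.pi)   # loaded terminal voltage, uV
    return 20 * math.log10(v_uv / 1000.0)              # uV -> dBmV

# 20 uV/m leak near Ch. 16 (~133.2625 MHz)
print(round(field_strength_to_dbmv(20, 133.2625), 1))  # prints -42.9
```

That -42.9 dBmV result matches the approximately -43 dBmV figure cited above.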
Assuming the leakage detector’s IF bandwidth is 15 kHz, the indicated leak amplitude will be approximately 10log(6,000,000/15,000) = 26 dB lower than it really is, or about 1 µV/m (-69 dBmV) for what is really a 20 µV/m leak. Making matters worse is trying to figure out whether an indicated 1 µV/m noise-like signal is Ch. 16’s QAM signal leaking from the cable plant, or just over-the-air background noise. A moderately high level leak of, say, 100 µV/m (-29 dBmV dipole level on Ch. 16) would register about 5 µV/m on the leakage detector, or approximately -55 dBmV equivalent dipole level.
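The bandwidth correction works out as a simple ratio in dB. Here is a minimal sketch of both examples above (the 20 µV/m and 100 µV/m leaks), assuming a 6 MHz channel and a 15 kHz detector IF; amplitude in µV/m scales as 20log of the voltage ratio.

```python
import math

def bandwidth_correction_db(channel_bw_hz: float, detector_if_bw_hz: float) -> float:
    """How far an IF-bandwidth-limited detector under-reads a noise-like
    signal whose power is spread evenly across the channel."""
    return 10 * math.log10(channel_bw_hz / detector_if_bw_hz)

corr = bandwidth_correction_db(6e6, 15e3)    # 6 MHz QAM channel, 15 kHz IF
print(round(corr, 1))                        # prints 26.0 (dB)

# The indicated field strength is the true value reduced by corr dB:
indicated_uv_per_m = 20 / 10 ** (corr / 20)
print(round(indicated_uv_per_m, 1))          # prints 1.0 (uV/m) for a 20 uV/m leak
print(round(100 / 10 ** (corr / 20), 1))     # prints 5.0 (uV/m) for a 100 uV/m leak
```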
A couple of posts in the SCTE-List thread suggested that the FCC ought to change the signal leakage rules for all-digital networks, perhaps even eliminating the requirement to monitor and measure leakage. The reasoning is that a noise-like leak from a QAM signal would be so low that it wouldn’t interfere with over-the-air users. Let’s look at this idea and see if it makes sense.
Amateur (ham) radio operators have a VHF allocation in the 144-148 MHz range, known as the 2-meter ham band. Assume that a 20 µV/m leak on Ch. 18 (144-150 MHz) exists. According to research done by lab personnel at the American Radio Relay League, a 20 µV/m leak registers somewhat greater than S9 on a typical 2-meter ham transceiver’s signal strength meter if the antenna is 3 meters from the leak and the interfering signal is a CW carrier or analog TV channel’s visual carrier. A received signal strength of S9 is high enough to be considered harmful interference likely to disrupt communications. But what happens if Ch. 18 is a QAM signal?
We have to account for the difference between the 6 MHz wide noise-like QAM signal and the transceiver’s IF bandwidth, the latter about 25 kHz. Here the noise power bandwidth difference is 10log(6,000,000/25,000) = 23.8 dB. In other words, the ham transceiver’s signal strength meter will register about S5, based on approximately 6 dB per S unit. Rather than a discrete carrier, the received S5 signal will be noise-like and cover most, if not all, of the 2-meter ham band. Is an S5 signal enough to cause harmful interference? Maybe; maybe not. It depends on several factors, including the received strength of desired signals and the ham transceiver’s mode of operation – for instance, FM vs. single sideband (SSB). SSB is considered weak signal communications, and S5 background noise is not particularly good when dealing with weak signals that might be only S1 or S2.
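The S-meter arithmetic above can be sketched in the same way, converting the noise power bandwidth difference into S units at the conventional 6 dB per S unit (the function name is illustrative):

```python
import math

S_UNIT_DB = 6.0  # conventional 6 dB per S unit

def qam_s_meter_reading(cw_s_units: float, channel_bw_hz: float,
                        rx_if_bw_hz: float) -> float:
    """Estimate the S-meter reading for a noise-like QAM leak, given the
    reading the same field strength would produce for a CW carrier."""
    correction_db = 10 * math.log10(channel_bw_hz / rx_if_bw_hz)
    return cw_s_units - correction_db / S_UNIT_DB

# CW reading of roughly S9; 6 MHz QAM channel, 25 kHz transceiver IF
print(round(qam_s_meter_reading(9, 6e6, 25e3), 1))  # prints 5.0 (about S5)
```

The 23.8 dB correction is just under four S units, which is how an S9-class CW reading becomes roughly S5 for the same leak carrying QAM.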
If the Ch. 18 leak is 100 µV/m, that will result in an indicated S9+14 dB or thereabouts for a CW carrier or an analog TV channel’s visual carrier, but a QAM signal will yield an indicated received signal strength between S7 and S8. The latter will be noise, but it’s arguably strong enough to be considered harmful interference. Weak signal operation in the 2-meter ham band would be effectively wiped out by S7 to S8 noise. The ham can’t tune away from the interference because it’s broadband in nature, and the cable operator can’t offset the QAM channel’s center frequency. Doing so would have no effect on reducing interference, again because of the wide, noise-like nature of QAM signals.
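The 100 µV/m case works out the same way. A quick self-contained check, again assuming a 6 MHz QAM channel, a 25 kHz transceiver IF, and the 6 dB per S unit convention, with S9+14 dB expressed as about S11.3 for the arithmetic:

```python
import math

# Bandwidth correction for a 6 MHz QAM channel in a 25 kHz IF, in S units
correction_s_units = 10 * math.log10(6e6 / 25e3) / 6  # just under 4 S units

# A 100 uV/m CW leak reads roughly S9+14 dB, i.e. about S11.3 on a 6 dB/S scale
cw_reading_s = 9 + 14 / 6
qam_reading_s = cw_reading_s - correction_s_units
print(round(qam_reading_s, 1))  # prints 7.4 -- between S7 and S8
```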
So I don’t think we want the FCC to change Part 76 to eliminate signal leakage rules for all-digital operation. Besides, if the FCC does change the rules, that likely would take time – perhaps several years – and would require technical studies and a complicated rulemaking process. And we might not get what we want. Can you say "politics"?
What about modifying leakage detector design to work with broadband noise-like QAM signals? That might be possible, but I have a feeling it will be difficult for an economical detector to differentiate background noise from a noise-like QAM signal.
There is no question that we need to continue to monitor and measure leakage in all-digital networks. Keeping leakage down to a dull roar helps to ensure good network performance and signal quality, as well as to keep ingress more manageable. QAM signals are far from ideal for this purpose, which means we have to dedicate a CW or similar carrier to signal leakage tasks, preferably operating at old analog levels. Maybe the carrier can be one of the automatic level control/automatic gain control (ALC/AGC) pilots, or maybe a standalone carrier for leakage and possibly also a signal level reference for field personnel.
Where should the carrier be located? That’s a tough call. One possibility is to simply keep a CW carrier (or analog TV channel) on the current frequency used for leakage monitoring and measurement. But what if a cable operator is reluctant to give up a channel slot in an otherwise all-digital lineup?
Placing a CW carrier between two QAM signals apparently doesn’t work well – those who’ve tried it tell me it affects both digital channels, causing degraded performance.
Somewhere in the FM band might work, but if the system is carrying QAM signals on Ch. 95-97, that leaves just 88-90 MHz open. If high-powered FM stations are transmitting in the cable network’s service area, leakage monitoring and measurement could be affected. Tagging the leakage carrier may help, but the detector’s front end may still be overloaded by a 100 kilowatt FM station’s signal on a nearby frequency, requiring that the detector be modified or used with external filtering. Another option is the 4 MHz slot between Ch. 4 and 5 (72-76 MHz), but that has issues, too.
What’s the answer? This is definitely an "it depends" situation. Current leakage rules in Part 76 are unlikely to change any time soon, so signal leakage monitoring and measurement must continue to be done by operators of networks that have migrated to all-digital. Trying to measure leakage on a QAM signal is difficult or impossible, which pretty much says that a dedicated CW carrier or similar will have to be used.
Ron Hranac is a technical leader, broadband network engineering, for Cisco Systems and senior technology editor for Communications Technology. Reach him at firstname.lastname@example.org.