Ron Hranac’s February CT column "Signal Leakage in an All-Digital Network," sparked by a discussion on the SCTE-List, examines the familiar topic of signal leakage limits (Part 76 of the FCC Rules) within the less familiar scenario of an all-digital world. The topic has attracted the attention of some industry heavyweights, some of whose input has been collected here. – CT
Director, Engineering Technology
Cox has a number of signal leakage detection devices in service today, so we would want to continue their use if possible. These break down into two types: about 1,200 maintenance tech versions priced at about $2,500 each and about 3,500 installer tech versions priced at about $400 each. Across the company, these units are valued at about $4.5 million.
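The fleet valuation works out as a quick back-of-the-envelope check; the unit counts and prices are the approximate figures cited above:

```python
# Rough replacement value of Cox's leakage-detection fleet, per the figures above.
maintenance_units, maintenance_price = 1_200, 2_500  # ~$2,500 each
installer_units, installer_price = 3_500, 400        # ~$400 each

total = maintenance_units * maintenance_price + installer_units * installer_price
print(f"${total:,}")  # $4,400,000 -- roughly the $4.5 million cited
```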
Current detection methods are based on two techniques. One is the detection of an analog video carrier located within the aeronautical band of frequencies, and the other is based on the detection of a pilot or reference signal specific to the receiver. Only the first method would be disadvantaged in an "all digital" cable network. The question is whether or not "all" carriers will be digital – or rather, must they all be digital. The nice thing about using a pilot carrier or an analog video carrier in the aeronautical band is the resulting readily identifiable tone or video sync buzz, which assures the technicians that what they are detecting is actually leakage from the cable network.
Now, presuming that all our cable network signals are QAM (quadrature amplitude modulation) carriers, I find it difficult to imagine that anyone with an aeronautical or navigational radio would notice the difference between thermal noise and cable system leakage, given that higher-order QAM itself appears as noise. That doesn’t mean interference can’t occur, though. And what of calls from radio operators who detect a break in their squelch and suspect cable leakage?
I think the greater issue we face is interference to our QAM carriers from outside sources through signal ingress and our inability to detect these interference levels before they become customer-affecting. With the move to digital test equipment and envelope detection, it is virtually impossible to detect low level ingress carriers beneath a QAM carrier. If the industry becomes so pressed for bandwidth that we must use known interference channels for important digital content, then we must have a method of detecting ingress interference to QAM carriers before they become customer-affecting. Long ago, expensive analog spectrum analyzers were used to accomplish this on analog TV channels; maybe some variation of these will come back.
So finally, I would recommend systems that use a specific pilot carrier that can be strategically placed in the cable spectrum with minimal impact to capacity. These systems will allow the cable operator to continue to be effective in detecting, measuring, isolating and repairing signal egress.
Jonathan L. Kramer
BTS, BDS, BPS
Kramer Telecom Law Firm
Ron Hranac’s comprehensive treatment of the digital leakage issue in the February edition of CT, "Signal Leakage in an All-Digital Network," is extremely valuable in framing and quantifying the issue and explaining why it deserves our industry’s careful attention.
As an attorney, I think that if we, as an industry, don’t explore, understand, and adequately deal with the manifestations of digital plant leakage, we could easily find ourselves on the wrong end of an ugly Notice of Proposed Rulemaking. It would only take the FCC (guided by the FAA, amateur radio operators, and other Commission licensees) to determine that a materially raised pseudo-noise floor constitutes "harmful interference."
Harmful interference, as legally and functionally defined by the Commission, is a rather low bar to clear. The result of that designation could force cable systems around the country to vacate tens of megahertz of presently digitized cable spectrum. Even with increasing compression ratios, the loss of spectrum could be devastating to program carriage, especially for diverse and narrowly focused programming services that might be dropped in favor of more heavily viewed programming.
I suspect Ron’s treatment of this issue will be viewed historically in the same light as the early analog leakage identification and control work by Ted ("Dr. Strange Leak") Hartson and his colleagues some three decades ago. Let’s remember the lessons and successes of our past.
In general, I agree with Ron’s remarks and am a fellow amateur radio operator with a self-interest in avoiding interference to that service. I am also a believer in frequent leakage monitoring using an analog or continuous wave (CW) carrier as a good operational practice.
Let us look, however, at the aeronautical aspect of the rules: §76.605(a)(12) limits leakage to 20 µV/m at 3 meters, but this was written before QAM, and the measurement procedure (§76.609(h)) does not specify a measurement bandwidth. However, other parts of the rules, such as §76.610, consistently refer to the energy falling within a 25 kHz bandwidth, so we may assume that is the susceptibility bandwidth of aeronautical receivers. That being the case, a 256-QAM signal whose total leakage energy is about 300 µV/m (spread uniformly across 5.4 MHz) would inject the same amount of energy into a receiver as an analog video signal whose visual carrier fell in the aeronautical channel at a level of 20 µV/m; that is, 23.5 dB higher. Given that the QAM carrier is likely carried on the cable system 6 dB below the analog signal’s carrier, the all-digital system could be almost 30 dB less well-shielded for the same level of interference!
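The arithmetic above can be checked with a short calculation. This is a sketch, assuming a 25 kHz receiver susceptibility bandwidth and a 5.4 MHz occupied bandwidth for the 256-QAM signal, as in the figures cited:

```python
import math

def db(power_ratio: float) -> float:
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

rx_bw_hz = 25e3          # assumed aeronautical receiver susceptibility bandwidth
qam_bw_hz = 5.4e6        # occupied bandwidth of a 256-QAM signal
analog_limit_uv_m = 20   # Part 76 leakage limit, uV/m at 3 meters

# Only the fraction of the QAM signal's power that falls inside the 25 kHz
# window reaches the receiver, so the total field can be this much higher
# for the same in-band energy as a 20 uV/m analog visual carrier.
dilution_db = db(qam_bw_hz / rx_bw_hz)

# Field strength scales as 20*log10, so convert the dB figure back to uV/m.
equiv_total_uv_m = analog_limit_uv_m * 10 ** (dilution_db / 20)

print(f"dilution: {dilution_db:.1f} dB")                 # ~23.3 dB
print(f"equivalent total field: {equiv_total_uv_m:.0f} uV/m")  # ~294 uV/m
```

The exact result, about 23.3 dB and 294 µV/m, is consistent with the rounded 23.5 dB and 300 µV/m figures above.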
Would I suggest changing leakage standards on that basis? Absolutely not. Nevertheless, the danger to aircraft communications will be drastically reduced by the shift to digital.
Director of Marketing/Business Development
In the ongoing discussion concerning the need for signal leakage detection in an all-digital network, Ron Hranac’s article brought out many salient points as to why the cable industry must continue the practice of leakage detection, and it offered a reasonable solution to the problem. Much of his argument centered on the continued need to comply with Part 76 of the FCC rules. While I would not downplay that point, I would shift the emphasis to another point that was not brought out so strongly: the need for good network performance, cable integrity and signal quality.
It is true that more and more of the analog channels are moving to the digital domain to make room for more content, particularly high definition (HD). However, there is another important trend that has a significant dependence on the cable integrity of the system. The demand for streaming video and digital content in upstream and downstream directions is rapidly increasing, thus placing even more bandwidth demands on the cable infrastructure. Some operators have already started to transition to DOCSIS 3.0 in response to this demand and to stay ahead of the competition. As the technologies evolve to squeeze more and more bandwidth out of the cable plant, it will also become more susceptible to noise. An excessive amount of cable ingress (noise) could severely impact the ability of a cable system to deliver high performance services.
If noise can get in a cable, it can also get out, which means that leakage points in the network cause both ingress and egress to occur. Egress equals signal leakage. Thus, it is not only important that the cable industry continue to monitor and fix signal leakage for FCC compliance, but it also becomes imperative to remain competitive as the need for bandwidth continues to increase.
I also support the idea of using a common frequency in the cable spectrum for an analog signal to be transmitted for the purpose of leakage detection. This solution is the most economical and has the lowest impact on the industry’s current practices and equipment. I would recommend that the appropriate standards or industry committee adopt the cause and work out a solution before the problem really becomes a problem.
Cable Leakage Technologies
The $64,000 question: What frequency should an operator use for signal leakage in an all-digital network? Most cable operators will be reluctant to give up a channel slot to keep the current frequency for a dedicated signal leakage channel. However, if this luxury exists, it certainly would be the best way to go: no hassle and expense of retuning all the field leakage meters to a new frequency. But if a switch to a new frequency is necessary, the most obvious choice is 108.625 MHz. This frequency has been used by several operators for years. It sits just above the FM band and just below the Ch. A-2 visual carrier (109.275 MHz). You could use a CW carrier alone, but the best approach is to place a channel tag on the carrier. A 20 Hz tag occupies a bandwidth of only 40 Hz and does not interfere with the surrounding channels. I predict that 108.625 MHz will replace 133.2625 MHz as the most commonly used signal leakage frequency in the United States.
[Editor’s note: If Ch. 98 (A-2, or 108-114 MHz) were occupied by a 6 MHz-wide QAM signal, then use of 108.625 MHz for a dedicated leakage carrier likely would cause interference to the QAM signal. – RH]
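The editor’s caution can be expressed as a one-line overlap check. This is a sketch; the helper name is illustrative, and the channel edges assume the standard lineup in which Ch. 98 (A-2) occupies 108-114 MHz:

```python
def inside_channel(freq_mhz: float, ch_low_mhz: float, ch_high_mhz: float) -> bool:
    """True if a candidate leakage carrier lands inside an occupied channel."""
    return ch_low_mhz <= freq_mhz <= ch_high_mhz

# Ch. 98 (A-2) spans 108-114 MHz in the standard channel plan. If that slot
# carries a 6 MHz-wide QAM signal, a 108.625 MHz leakage carrier overlaps it.
print(inside_channel(108.625, 108.0, 114.0))  # True: would interfere with a QAM on A-2
```

The same check could be run against an entire channel lineup before committing to a dedicated leakage frequency.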
Steven C. Johnson
I read Ron’s article, and I wholeheartedly agree. There has been a lot of recent discussion in the SCTE NCTA Recommended Practices working group, and I personally have been discussing how to control signal leakage in a digital world for three or four years. The detectors use an analog signal, and we haven’t come up with a viable alternative detector for a digital signal. Cable operators must continue to maintain plant integrity for two main reasons: outside signals getting in and causing interference to cable signals on the plant, and cable signals getting out and causing interference to over-the-air users. As Ron says in his article, when cable signals leak out of the cable, over-the-air signals are usually leaking in as well.
The easiest solution from a technical standpoint seems to be to maintain an analog carrier for signal leakage detection, but, as Ron mentions, this involves giving up a 6 MHz channel. No one wants to waste an entire channel that could carry a number of digital programs just to put a single unmodulated carrier on the plant for signal leakage and other measurements.
Work is being done in the SCTE working group. Hopefully, we can come to a conclusion that will satisfy all or most issues.
Bresnan Communications recently rolled out an all-digital network in one of our systems in northern Wyoming. There are a number of positive benefits that come from transitioning to all QAM carriers, but there are a few challenges as well. Having a quality signal leakage program in place where technicians are actively patrolling, monitoring and fixing signal leakage will remain part of our skill set for the foreseeable future. While the FCC is concerned about signal leakage – i.e., "egress" in Part 76 – as operators, we need to be equally concerned about "ingress."
Random electrical noise and over-the-air digital carriers can cause customer interruptions in digital video, high-speed data and voice over Internet protocol (VoIP) services. Our customers expect our services to be reliable and of the highest quality. Debating whether a QAM carrier leaking from a cable system is at a high enough level to qualify under Part 76 rules is akin to the captain of the Titanic wondering whether the ice on the deck after hitting the iceberg should be cause for concern. Sooner or later, it will cause problems and should be addressed.
I still remember a lesson a very wise engineer taught me years ago. When he was questioned about the correct process for triangulating and measuring leaks and how to be sure you’re at the proper distance before making the measurement, he simply replied, "Just fix the DARN leak!" His advice still applies today.