Over the past few years, the cable industry has been adding digitally modulated signals to the downstream spectrum to accommodate high-speed data, digital video, video-on-demand (VOD) and voice. A few operators have done something called digital simulcast, in which most or all of the analog TV channels are duplicated in the digital lineup, allowing subscribers to receive all services with a single digital set-top box. As we continue to add even more digitally modulated signals to our networks, the trend is clear: It won't be all that long before we make the migration to all-digital.

For the purists in the crowd, those 64- and 256-QAM (quadrature amplitude modulation) digitally modulated signals are in fact analog RF signals, so technically we don't transmit digital data on our outside plant. But I digress ...

Drivers

What are the drivers pushing us toward all-digital, anyway? More efficient use of the RF spectrum is certainly one. After all, each analog TV channel takes up a 6 MHz chunk of downstream bandwidth (7 or 8 MHz in many countries outside of North America). Digital video compression technology supports eight to 10 or more channels in the same 6 to 8 MHz. In a nutshell, digital allows one to reclaim the inefficiently used analog TV channel spectrum.

Business opportunities are another driver. Examples include even more digital video services, high-definition TV (HDTV), network or in-home personal video recorder (PVR) functionality, videoconferencing, and commercial services. Here in the United States, over-the-air TV broadcasters are supposed to end analog transmissions in 2009, which may be a good time for cable operators to make a similar move. In some countries, government regulations require cable operators to switch to digital when over-the-air analog broadcasts are turned off. Regardless of what the drivers are, I think a move to all-digital is inevitable.
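The "eight to 10 or more channels" figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes, for illustration, an ITU-T J.83 Annex B 256-QAM channel carrying roughly 38.8 Mbps of usable payload and a standard-definition MPEG-2 program averaging about 3.75 Mbps; neither number comes from this column.

```python
import math

# Assumed figures (illustrative, not from the column):
qam256_payload_mbps = 38.8   # usable payload of a 6 MHz 256-QAM channel
sd_program_mbps = 3.75       # average rate of one SD MPEG-2 program

# How many SD programs fit where one analog channel used to sit?
programs_per_channel = math.floor(qam256_payload_mbps / sd_program_mbps)
print(programs_per_channel)  # 10
```

With statistical multiplexing or lower per-program rates, the count climbs past 10, which is where the "or more" comes from.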
That said, here's my take on what will be necessary in a modern HFC network's outside plant to accommodate all-digital operation.

Technical performance

The entire network, from headend to distribution plant to subscriber drops, must meet or exceed the technical regulations in Part 76 of the Federal Communications Commission's rules, or the relevant government regulations for systems outside of the United States. This should be a given, since most cable systems are required by law to meet these rules. But I'd argue that it's even more important for the outside plant to be in top technical condition for all-digital operation. One spec that should be much tighter than the FCC's is signal leakage: Rather than 20 microvolts per meter (µV/m), figure on no more than 5 to 10 µV/m. After all, where there's leakage, there's bound to be ingress, even in the downstream.

Equally important from a performance perspective, the entire network should meet or exceed the downstream and upstream assumed RF channel transmission characteristics in the DOCSIS Radio Frequency Interface Specification. These parameters pertain specifically to digitally modulated signals and will help to ensure even more reliable operation. Before making the migration to all-digital, I recommend that the forward and reverse be sweep-aligned. It's not a bad idea to record end-of-line operating parameters after the sweep, such as carrier-to-noise ratio (CNR), distortions and frequency response, so you have some documented baseline performance information before making the transition.

Signal levels

Digitally modulated signals are measured in terms of their average power, also known as digital channel power. I can't overemphasize the need to make this measurement accurately. Use test equipment that supports more-or-less one-button digital channel power measurement, and don't mess with manual measurements that require bandwidth and detector corrections. Manual measurements leave too much room for error.
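To see why manual measurement invites mistakes, consider just one of the corrections it requires: scaling a spectrum analyzer's noise-marker reading from the analyzer's resolution bandwidth up to the full channel bandwidth. The snippet below is an illustration with an assumed 300 kHz resolution bandwidth; detector and log-amplifier corrections would still be needed on top of this.

```python
import math

channel_bw_hz = 6e6       # full width of a North American downstream channel
resolution_bw_hz = 300e3  # assumed analyzer RBW setting for this example

# Correction to add to the noise-marker reading to estimate
# total power across the whole channel.
bw_correction_db = 10 * math.log10(channel_bw_hz / resolution_bw_hz)
print(round(bw_correction_db, 1))  # 13.0
```

Get the RBW wrong, forget a correction, or apply one twice, and the "measured" digital channel power is off by double digits, which is exactly the kind of error a one-button measurement avoids.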
Speaking of signal levels, a spectrum full of digitally modulated signals likely will be carried a few dB lower than the old analog TV channel signal levels. If everything is 256-QAM, you'll probably set each signal's digital channel power 5 or 6 dB below what an analog TV channel would be on the same frequency; 64-QAM should be fine 10 dB down.

Fiber link, amp alignment

From what I've been able to determine, getting levels correct throughout the system will be one of the most important parts of a successful migration to all-digital. Contact your optoelectronics manufacturer and find out what is recommended level-wise for all-digital operation at downstream laser inputs and node outputs. Lasers generally have less dynamic range than amplifiers, so levels are going to be a bit more critical here. If that information is unavailable or unknown, you may end up doing some pioneering work in this area.

Start by setting all digital levels 6 to 10 dB down at the laser's input. Using a QAM analyzer, check pre- and post-forward error correction (FEC) bit error rate (BER), modulation error ratio (MER), constellation, in-channel flatness, and the adaptive equalizer graph for each QAM signal. Ideally, there should be no impairments. BER problems at the laser input are often caused by sweep transmitter interference, so temporarily turn off the sweep transmitter. If the errors go away, you've found the problem. (You'll have to reprogram the sweep transmitter to leave suitable guard bands around each digital signal.)

Repeat the QAM analyzer measurements at the node. If you find bit errors at the node's downstream output but not at the laser transmitter input, laser clipping is the likely culprit. You may have to reduce the total composite power (the combined level of all signals) at the laser input a decibel or two to eliminate the bit errors. Amplifier alignment will be fairly straightforward.
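The level arithmetic above can be sketched in a few lines. The analog reference level and channel count below are made-up example values, not recommendations; the useful part is the relationship between per-channel power and the total composite power that the laser actually sees.

```python
import math

analog_ref_dbmv = 15.0    # hypothetical analog visual carrier level at this point
qam256_offset_db = -6.0   # 256-QAM carried 6 dB below the analog reference
num_qam_channels = 75     # assumed all-digital channel loading

per_channel_dbmv = analog_ref_dbmv + qam256_offset_db

# Total composite power of N equal-level channels: per-channel
# power plus 10*log10(N).
total_power_dbmv = per_channel_dbmv + 10 * math.log10(num_qam_channels)
print(per_channel_dbmv, round(total_power_dbmv, 1))  # 9.0 27.8
```

Note that backing every channel off by 1 dB lowers the total composite power by the same 1 dB, which is why a small across-the-board level reduction can be enough to pull a laser out of clipping.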
As mentioned previously, carrying the digitally modulated signals a few decibels lower than the old analog levels ought to work just fine. One potential issue is amplifier automatic gain control (AGC) operation. Some AGC circuits won't perform well with QAM signals as pilots, so it may be necessary to use continuous wave (CW) carriers on the AGC pilot frequencies. Naturally, the CW carriers will operate at the old analog levels and can serve as signal level references for routine plant alignment and troubleshooting in the future. At least one amplifier manufacturer recently announced the availability of amplifier AGC intended for use with QAM pilot carriers, so check with the manufacturer of your gear.

Signal leakage

The noise-like characteristic of QAM signals will make it very difficult to measure signal leakage. Complicating this is the fact that leakage detection equipment has a relatively narrow intermediate frequency (IF) bandwidth compared to the RF bandwidth of a QAM signal, further reducing the measured apparent amplitude of a leak. Here's why: Assume your leakage detection equipment's IF bandwidth is 15 kHz, and that a 256-QAM signal on Ch. 14 is producing a 20 µV/m leak. This works out to about -42 dBmV at a half-wave dipole antenna's terminals. But that level is digital channel power, based on the average power in the full channel bandwidth. The leakage detection equipment will indicate a field strength around 26 dB lower than 20 µV/m because of the bandwidth difference between 15 kHz and 6 MHz (I'm leaving detector correction factor out of this). The indicated field strength will be around 1 µV/m, and noise-like to boot, even though the real leakage amplitude is 20 µV/m. My suggestion is to use a CW carrier operating at old analog levels for leakage detection, perhaps one of the AGC pilots if it's in an aeronautical band.
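The numbers in that example can be reproduced as follows. The half-wave dipole relationship E(µV/m) ≈ 0.021 × f(MHz) × V(µV) is the standard cable leakage formula; the Ch. 14 frequency of about 121 MHz is an assumption for the example.

```python
import math

# Part 1: 20 uV/m field strength to dipole terminal voltage.
e_field_uv_per_m = 20.0
freq_mhz = 121.2625    # assumed Ch. 14 center frequency
dipole_uv = e_field_uv_per_m / (0.021 * freq_mhz)
dipole_dbmv = 20 * math.log10(dipole_uv) - 60   # dBuV -> dBmV
print(round(dipole_dbmv))   # -42

# Part 2: bandwidth penalty. A 15 kHz IF captures only a sliver of
# the power spread across the 6 MHz QAM signal.
penalty_db = 10 * math.log10(6e6 / 15e3)
indicated_uv_per_m = e_field_uv_per_m / 10 ** (penalty_db / 20)
print(round(penalty_db), round(indicated_uv_per_m))   # 26 1
```

A 20 µV/m leak that would trip any analog leakage program thus reads as roughly 1 µV/m of noise, well below typical monitoring thresholds, which is the practical argument for keeping a CW carrier around for leakage work.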
Next month I'll take a look at additional issues surrounding a move to all-digital, including the need for digital set-top boxes for most or all subs, staff training, test equipment requirements and plant maintenance.

Ron Hranac is a technical leader, Broadband Network Engineering, for Cisco Systems and senior technology editor for Communications Technology. Reach him at email@example.com.