Let’s say we want to ensure the BER – bit error rate or bit error ratio – at a cable modem’s input is less than 1 x 10^-8, also expressed as 1.0E-08. Recall from last month’s column that transmitting three times the reciprocal of the specified BER without an error gives a 95 percent confidence level that the device or network meets the BER spec. If one wants a 99 percent confidence level, the multiplier is 4.61 rather than 3. The reciprocal of 1 x 10^-8 is 1/(1 x 10^-8) = 1/0.00000001 = 100,000,000, and 3 x 100,000,000 = 300,000,000. So if we transmit 300 million bits and receive all 300 million bits without an error, the confidence level is 95 percent that we meet 1 x 10^-8 BER.

Duration

Another important factor in BER estimation is the duration of the measurement. Bit errors can be caused by all sorts of things – random noise, bursty or impulsive interference (laser clipping, sweep transmitter interference), etc. If one makes a brief BER measurement during an interval in which a particular intermittent interferer happens not to be active, the BER estimate won’t accurately reflect what’s really happening. One common question is, "How long is long enough when performing a BER measurement?" The answer is, "It depends." Clearly, the longer the measurement duration, the better.
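The multipliers 3 and 4.61 come from the statistics of error-free testing: the required multiplier is -ln(1 - CL), where CL is the desired confidence level. A minimal sketch of the required-bits arithmetic (Python, illustrative only):

```python
import math

def required_bits(ber, confidence=0.95):
    """Error-free bits needed to claim the BER spec is met
    at the given confidence level: -ln(1 - CL) / BER."""
    multiplier = -math.log(1.0 - confidence)  # ~3.0 for 95%, ~4.61 for 99%
    return multiplier / ber

print(round(required_bits(1e-8, 0.95)))  # about 300 million bits
print(round(required_bits(1e-8, 0.99)))  # about 461 million bits
```

Note that -ln(0.05) is actually 2.996; the column's round figure of 3 is a convenient approximation.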
As mentioned previously, a 95 percent confidence level for 1 x 10^-8 BER requires that we transmit at least 300,000,000 bits without an error. How long does it take to transmit 300 million bits? Assuming 256-QAM (quadrature amplitude modulation) at 42.88 Mbps, the minimum test time is 3 x 10^8 bits/42.88 x 10^6 bits per second = 6.996 seconds. For 64-QAM at 30.34 Mbps, the minimum test time is 3 x 10^8 bits/30.34 x 10^6 bits per second = 9.89 seconds. Of course, if your BER target is something more aggressive, such as 1 x 10^-10 at a 99 percent confidence level, the required number of bits that must be transmitted is [1/(1 x 10^-10)] x 4.61 = 46,100,000,000. Here the minimum test time for 256-QAM is 4.61 x 10^10 bits/42.88 x 10^6 bits per second = 1,075 seconds, or 17.9 minutes. Keep in mind that these minimum test times assume all of the transmitted bits are received error-free.

Method

The way the BER estimation is performed is important. A true BER measurement involves a data source and an error detector. The data source transmits a bit pattern through the device or network being tested, and the error detector must either reproduce the original pattern locally or receive it directly from the data source. The error detector then compares, bit by bit, the original pattern with the one received through the device or network under test. Because this method is usually an out-of-service test, it isn’t used very often – at least not on cable networks. Most QAM analyzers don’t perform BER measurements this way. Instead, they use an internal algorithm to derive a BER estimate based on what the forward error correction (FEC) is doing.
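The bit-by-bit comparison a true BER tester performs can be sketched as follows. This is a toy illustration of the principle only – real instruments do this in hardware at line rate:

```python
def count_bit_errors(sent, received):
    """Compare two equal-length bit sequences bit by bit,
    as a BER error detector does; return (errors, measured BER)."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors, errors / len(sent)

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 0, 1, 1]  # two bits flipped in transit
errors, ber = count_bit_errors(sent, received)
print(errors, ber)  # 2 errors, BER = 0.25
```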
The type of data sent during the BER measurement can affect the outcome, too. A long string of the same bits – say, all zeroes – will generally yield different BER numbers than the same measurement made with a pseudo-random bit stream. Why? That long string of identical bits may result in something called deterministic jitter (along with other distortions), which could affect the integrity of the BER measurement. For these reasons, most true BER measurements use a pseudo-random bit stream.
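Pseudo-random bit streams are typically produced by a linear-feedback shift register (LFSR). The sketch below implements a PRBS-7 style generator with taps at stages 7 and 6 (polynomial x^7 + x^6 + 1) – an illustrative choice of polynomial, not necessarily what any particular test set uses:

```python
def prbs7(nbits, seed=0x7F):
    """PRBS-7 generator (7-stage LFSR, taps at stages 7 and 6).
    Produces a maximal-length sequence that repeats every 127 bits."""
    state = seed  # any nonzero 7-bit value works as a seed
    out = []
    for _ in range(nbits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1  # XOR of stages 7 and 6
        state = ((state << 1) | newbit) & 0x7F      # shift, feed back into stage 1
        out.append(newbit)
    return out

bits = prbs7(254)
assert bits[:127] == bits[127:254]  # sequence period is 127 bits
```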
This is unlikely to be a problem when measuring BER on a DOCSIS digitally modulated signal. A randomizer or scrambler is used in the data transmitter to make sure that this sort of thing does not occur. The scrambler mixes the actual data stream with a pseudo-random sequence, so a long string of zeroes or ones will become randomized in the transmitted digitally modulated signal. At the receiver, the same pseudo-random sequence is used to descramble the data, giving back the original data.
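Scrambling and descrambling are, at bottom, an XOR against the same pseudo-random sequence at each end; XORing twice with an identical sequence returns the original data. A toy sketch of the principle – Python's seeded `random` generator stands in for the real scrambler sequence, and the actual DOCSIS (ITU-T J.83) scrambler polynomial is not modeled here:

```python
import random

def scramble(data_bits, seed):
    """XOR the data with a pseudo-random sequence. Applying the
    same operation again with the same seed descrambles it."""
    rng = random.Random(seed)  # stand-in for the transmitter/receiver PRBS
    return [b ^ rng.getrandbits(1) for b in data_bits]

payload = [0] * 16                 # a long run of zeroes...
tx = scramble(payload, seed=42)    # ...is randomized on the wire
rx = scramble(tx, seed=42)         # receiver XORs with the same sequence
assert rx == payload               # original data recovered
```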
When using a QAM analyzer to measure BER, it’s important to avoid what is sometimes called false BER – circumstances under which the QAM analyzer incorrectly indicates the presence of bit errors. How is this possible? Two typical scenarios are too much or too little input signal level. The former overdrives the QAM analyzer, causing bit errors to be generated inside the analyzer itself. Too little signal means a poor carrier-to-noise ratio (CNR) at the QAM analyzer input, which also causes bit errors to be displayed on the instrument. The solution? Make sure the QAM analyzer’s input signal levels are within the range specified by the manufacturer.

Common sources

Let’s look at a few of the more common sources of bit errors in a cable network, starting with the signal sources: a cable modem termination system (CMTS) for high-speed data and a QAM modulator for digital video. It’s important that all of the following BER measurements be done using the same QAM analyzer and the same QAM signal. Check the QAM signal at the CMTS or QAM modulator output, preferably at a test point fairly close to the device’s downstream connector. This ensures that no other signals are present except the one being measured. There should be no bit errors here.
If an external upconverter is used with the CMTS or QAM modulator, check the upconverter’s input and output. If you find bit errors at the upconverter output but not at the input, the likely culprits include improper upconverter setup, incorrect input or output levels, or a defective upconverter.
Assuming the CMTS, QAM modulator, and upconverter check out OK, move to the downstream laser input. If you see bit errors here but not at the signal source, look for sweep transmitter interference, combiner problems (for instance, a faulty combining amp or loose or intermittent connections), in-channel ingress, or junk underneath the haystack coming from another channel. My experience suggests that sweep transmitter interference is the most likely cause of bit errors at this location. Temporarily turn off the sweep transmitter; if the errors go away, you’ve identified the problem. Dig out the sweep gear’s instruction manual and then program suitable guard bands around the affected QAM signal.
If the BER is good at the downstream laser input, move to the node’s downstream output. Bit errors at the node’s downstream output but not at the downstream laser input usually indicate downstream laser clipping. If this proves to be the case, all of the headend’s downstream levels must be carefully measured and readjusted as needed. If laser clipping is still happening after doing this, it may be necessary to reduce the laser input’s composite power – that is, reduce the level of all signals at that laser’s input by a decibel or two. Don’t rule out the possibility of an improperly set up or defective node. Make sure that the node’s optical input power and RF output levels are correct. If tweaking levels doesn’t do the job, you may have a defective node on your hands. Defective downstream lasers can cause bit errors, too.

Hardline and drop

Here’s a partial list of things that can cause bit errors in the hardline coax plant and subscriber drop: in-channel ingress (including impulse or burst noise); improperly aligned or defective amps (poor CNR, excessive distortions, levels out of whack); improperly installed, loose, damaged, or intermittent connections; nasty impedance mismatches (think micro-reflections, amplitude ripple/tilt, group delay); and so on. Use the divide-and-conquer technique to track down the location and cause of the problem.
At the customer premises, too much or too little signal can cause bit errors in the modem or set-top box. In addition to measuring digital channel power on the QAM signal of interest, make sure that total power – the combined power of all downstream signals – is less than +30 dBmV.
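Total power is the power sum of the individual signals, not a simple addition of their dBmV figures: convert each level to linear units, add, and convert back. A quick sketch (the channel count and per-channel level below are made-up example values):

```python
import math

def total_power_dbmv(levels_dbmv):
    """Combined power of multiple signals, each level in dBmV:
    10 * log10(sum of 10^(level/10))."""
    return 10 * math.log10(sum(10 ** (lv / 10) for lv in levels_dbmv))

# e.g., 100 downstream channels at +5 dBmV each (hypothetical values)
levels = [5.0] * 100
print(round(total_power_dbmv(levels), 1))  # 25.0 dBmV, under the +30 dBmV limit
```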
Ron Hranac is a technical leader, Broadband Network Engineering, for Cisco Systems and senior technology editor for Communications Technology. Reach him at email@example.com.