A recent discussion on the SCTE-List is the basis for this month’s column. The discussion had to do with the difference between carrier-to-noise ratio (C/N ratio or CNR) and signal-to-noise ratio (S/N ratio or SNR), and whether or not the Data Over Cable Service Interface Specification (DOCSIS) includes an upstream SNR spec in addition to the recommended 25 dB CNR spec. Let’s look at the last question first.
Table 2-2, "Assumed Upstream RF Channel Transmission Characteristics," from the DOCSIS 1.0 Radio Frequency Interface Specification summarizes recommended upstream performance for reliable data transmission on a cable network. Among the parameters in that table are carrier-to-noise, carrier-to-ingress power and carrier-to-interference ratios, all of which are supposed to be "not less than 25 dB." SNR is not mentioned.
In the world of telecommunications, SNR and CNR often are used interchangeably. I dug out Roger L. Freeman’s Telecommunications Transmission Handbook, 2nd Ed. (John Wiley & Sons, Inc., ©1981, ISBN 0-471-08029-2). Freeman says, "The signal-to-noise ratio expresses in decibels the amount by which a signal level exceeds its corresponding noise." Another reference,
Measuring Noise in Video Systems (Tektronix Application Note 25W-11148-0, ©1997), says, "In the most general case, SNR is expressed as the ratio of rms (root mean square) signal level, Srms, to the rms noise, Nrms, (SNR = Srms/Nrms)."
In the world of cable, we tend to use CNR and SNR to represent quite different measurement parameters, one in the RF domain and the other in the baseband domain. Let’s look at CNR first.
Carrier-to-noise ratio is generally accepted to be a pre-detection measurement—that is, one made at RF. When we carried only analog TV channels on our networks, CNR was understood to be the difference, in decibels, between the amplitude of a TV channel’s visual carrier and the rms amplitude of system noise in some specified bandwidth. According to the Federal Communications Commission’s (FCC’s) cable rules in §76.609 (e), system noise is the "total noise power present over a 4 MHz band centered within the cable television channel." This latter definition, of course, is applicable only to National Television System Committee (NTSC) television channel CNR measurements. Phase alternating line (PAL) channels use a slightly greater noise power bandwidth.
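In practice, noise is rarely measured directly in a 4 MHz bandwidth; it's measured in a spectrum analyzer's resolution bandwidth and then referred to 4 MHz with a 10log correction. A quick back-of-the-envelope sketch in Python (the function name is mine, for illustration):

```python
import math

def noise_bandwidth_correction_db(target_bw_hz, measured_bw_hz):
    """Correction in dB to refer a noise power measurement made in one
    noise power bandwidth to a different noise power bandwidth."""
    return 10 * math.log10(target_bw_hz / measured_bw_hz)

# Example: noise measured in a 100 kHz bandwidth, referred to the
# FCC's 4 MHz noise power bandwidth -- add about 16.02 dB.
correction = noise_bandwidth_correction_db(4e6, 100e3)
print(f"Add {correction:.2f} dB to the measured noise level")
```

(This ignores analyzer-specific corrections such as detector and log-amp effects, which real measurements also need.)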
The FCC doesn’t actually use the term CNR in the rules. §76.605 (a)(7) states, "The ratio of RF visual signal level to system noise shall be…" That’s more or less in line with the generic definition of SNR, although we take it to mean CNR.
But what about SNR? SNR has, for as long as I can remember, been a post-detection measurement, one made on a baseband signal such as video or audio. Indeed, the previously mentioned Tektronix application note says, "In video applications, however, it is the effective power of the noise relative to the nominal luminance level that is the greater concern." It goes on to define video SNR in dB as 20log(Lnominal/Nrms), where Lnominal has a value of 714 millivolts peak-to-peak (100 IRE) for NTSC or 700 mV p-p for PAL. These luminance values exclude sync.
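The Tektronix definition above is easy to sanity-check numerically. A minimal sketch in Python, using the NTSC luminance value from the text and an assumed 1 mV rms noise level purely for illustration:

```python
import math

def video_snr_db(l_nominal_v, n_rms_v):
    """Baseband video SNR in dB: 20*log10(Lnominal / Nrms)."""
    return 20 * math.log10(l_nominal_v / n_rms_v)

# NTSC: 714 mV (100 IRE) nominal luminance, sync excluded.
# Assume 1 mV rms measured noise -- SNR is about 57.1 dB.
print(f"SNR = {video_snr_db(0.714, 0.001):.1f} dB")
```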
So what did I just say? Baseband video SNR is the ratio of the peak-to-peak video signal, excluding sync, to the noise within that video signal. The noise is measured in some specified bandwidth, usually defined by a combination of low-pass, high-pass and weighting filters. These filters limit the measured noise to a bandwidth that is roughly the same as the video signal and may be used to remove certain low-frequency noise from the measurement. Weighting filters are used to simulate the eye’s response to noise in the TV picture. Various standards such as RS-170A, RS-250B and NTC-7 define the specific filters to be used in video SNR measurements.

OK, so CNR is an RF measurement, and SNR is a baseband measurement—at least in cable vernacular.
But what happens when we bring digitally modulated carriers into the mix? Well, DOCSIS does specify minimum CNR for downstream (35 dB) and upstream operation (the previously mentioned 25 dB). Carrier amplitude is the digitally modulated carrier’s average power level. The noise power bandwidth here is not 4 MHz, though. I prefer to use a noise power bandwidth equivalent to the digitally modulated carrier’s symbol rate, although some use the full RF bandwidth. In the case of a 6 MHz downstream 64-QAM (quadrature amplitude modulation) digitally modulated carrier, the symbol rate is 5.056941 Msym/sec, so the noise power bandwidth is 5.057 MHz. If you prefer to use 6 MHz as the noise power bandwidth, the CNR difference between that and the equivalent symbol rate bandwidth is only 0.7 dB. An upstream 1.6 MHz bandwidth digitally modulated carrier has a symbol rate of 1,280 ksym/sec, so the equivalent noise power bandwidth is 1.28 MHz. A 3.2 MHz digitally modulated carrier’s symbol rate is 2,560 ksym/sec, so the noise power bandwidth is 2.56 MHz. For upstream CNR measurements, the results using full-channel bandwidth versus symbol-rate bandwidth for the noise power bandwidth will differ by 0.97 dB.
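The bandwidth deltas quoted above fall out of the same 10log ratio. A short Python check (function name is illustrative), using the DOCSIS symbol rates from the text:

```python
import math

def bw_delta_db(channel_bw_hz, symbol_rate_hz):
    """CNR difference between using the full channel bandwidth and the
    symbol-rate-equivalent bandwidth as the noise power bandwidth."""
    return 10 * math.log10(channel_bw_hz / symbol_rate_hz)

# Downstream 64-QAM: 6 MHz channel, 5.056941 Msym/sec -- about 0.74 dB.
print(f"Downstream delta: {bw_delta_db(6e6, 5.056941e6):.2f} dB")

# Upstream: 3.2 MHz channel, 2.56 Msym/sec -- about 0.97 dB.
print(f"Upstream delta:   {bw_delta_db(3.2e6, 2.56e6):.2f} dB")
```

The same 0.97 dB applies to the 1.6 MHz/1.28 Msym/sec case, since the channel-to-symbol-rate ratio is identical.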
MER beats SNR
OK, CNR is no problem for digitally modulated carriers. What about SNR? Well, if we assume that SNR is a baseband measurement, there really is no practical way to measure baseband data SNR. A better parameter is modulation error ratio (MER), which is analogous to baseband SNR. MER takes into account the combined effects of CNR; transmitter, upconverter or cable modem termination system (CMTS) phase noise; and impairments such as second and third order distortions. MER is, by definition, 10log(average symbol power/average error power).
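The MER definition can be sketched directly from received constellation data. A minimal, hypothetical example in Python, comparing received I/Q symbols against their ideal decision points (the QPSK values and error magnitudes here are made up for illustration):

```python
import math

def mer_db(ideal, received):
    """MER = 10*log10(average symbol power / average error power),
    computed over paired lists of ideal and received (I, Q) symbols."""
    n = len(ideal)
    sym_power = sum(i**2 + q**2 for i, q in ideal) / n
    err_power = sum((ri - ii)**2 + (rq - iq)**2
                    for (ii, iq), (ri, rq) in zip(ideal, received)) / n
    return 10 * math.log10(sym_power / err_power)

# Hypothetical QPSK capture: four received symbols, each slightly
# displaced from its ideal constellation point.
ideal = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
received = [(1.02, 0.97), (0.99, -1.03), (-1.01, 1.02), (-0.98, -0.99)]
print(f"MER = {mer_db(ideal, received):.1f} dB")
```

A real QAM analyzer does the same computation over thousands of symbols, which is why MER folds in every impairment that displaces symbols from their ideal locations.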
Just as baseband video SNR is a better way than CNR to quantify a TV channel’s picture quality, MER (along with bit error rate) is a better way than CNR to gauge the health of a digitally modulated carrier.
Ron Hranac is a technical leader, Broadband Network Engineering for Cisco Systems, based in Englewood, Colo., and senior technology editor for Communications Technology. Reach him at firstname.lastname@example.org.