It’s official, the competition is on: Cable operators and telcos are battling for the same subscribers, and at the end of the day, victory belongs to whoever offers the best variety and quality of service (QoS) at the most competitive price. To make additional services available to customers, more throughput must be offered at a lower cost per bit.
Regardless of how data is transmitted, it needs to travel from a super headend to the headend in the metro rings, then to the access optical nodes, and finally (often via a coax cable) to the subscriber. However, for high data rates to be offered to subscribers, the metro and core pipes need to be capable of handling this bundled capacity – and this is where problems, such as dispersion, can occur.
In order to address the question of how cable operators can avoid dispersion, the typical cable fiber infrastructure needs to be examined. (See Figure 1.)
The core network is often in a ring topology where each fiber segment typically runs up to 160 km, carrying amplified dense wavelength division multiplexing (DWDM) over synchronous optical network (SONET). These rings are fairly new (the oldest dating back to 1998), and most are leased fiber pairs, a relatively recent investment for cable operators, since only a few years ago cable ops had no fiber between their market clusters.
On the other hand, the metro rings have non-amplified segments of up to 100 km, typically built as coarse WDM (CWDM) rings and DWDM multiservice provisioning platform (MSPP, SONET-based) networks. This fiber infrastructure is fairly old, as most of the fibers were installed some 10 to 15 years ago. Note that cable operators are not particularly fiber-rich in this part of the network, hence the need to optimize capacity.
In both cases, characterizing the fiber is key to knowing how much throughput each pipe can support. Dispersion, however, is one of the biggest challenges that will need to be faced, and a number of lessons can be learned from the struggles telcos have gone through when upgrading their backbones to high speed.
There are two types of dispersion that cause data rate capacity limitations in single-mode fibers: chromatic dispersion (CD) and polarization mode dispersion (PMD). CD is a well-known and -understood phenomenon that can be dealt with once it has been properly characterized. PMD poses a far greater challenge.
CD originates from the fact that each wavelength travels at a slightly different speed in glass. CD would not be a problem if an optical channel were composed of only one wavelength. In reality, however, each laser or source producing an optical channel has a certain spectral width and is therefore composed of numerous wavelengths; moreover, modulating a laser widens its optical output (an effect called "chirp"). Since a single optical channel is actually a multitude of individual wavelengths, each propagating at a slightly different speed, the light pulse eventually broadens, resulting in dispersion. If the transmission rate is high enough, the time between pulses is short, and the dispersion of an individual pulse overlaps the time frame of the following pulse, making signal detection difficult and potentially causing bit errors. (See Figure 2.)
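As a rough sketch of the arithmetic (the dispersion coefficient and spectral width below are assumed illustrative values, not figures from this article), the pulse spreading can be estimated as the dispersion coefficient times the link length times the source's spectral width:

```python
# Hedged sketch: chromatic-dispersion pulse broadening.
# Assumed illustrative values:
#   D  = dispersion coefficient in ps/(nm*km), ~17 for G.652 fiber at 1,550 nm
#   L  = link length in km
#   dl = source spectral width in nm
def cd_broadening_ps(D, L, dl):
    """Return pulse spreading in ps: delta_t = D * L * delta_lambda."""
    return D * L * dl

# A 0.1 nm-wide source over a 60 km G.652 span:
spread = cd_broadening_ps(17.0, 60.0, 0.1)
print(round(spread, 1), "ps")  # 102.0 ps
```

At roughly 100 ps, this spreading is on the order of one OC-192 bit period, which is consistent with problems appearing around that distance.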
The approximate CD values per fiber type are defined by ITU standards documents, so it can be estimated when dispersion is likely to occur, provided that the fiber type is known. (Each fiber type has a typical CD value at 1,550 nm and a typical slope throughout the C-band.) Typically on standard G.652 fibers, problems start for OC-192 at a distance of 60 km; however, this distance varies greatly depending on the transmission speed. Approximate compensation can be done, but as links become longer and/or need to support faster transmission speeds, such as 40 Gbps (40G), more accuracy is required in the compensation scheme, making CD field testing of prime importance.
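Because the tolerable spreading shrinks with the bit period, the dispersion-limited distance falls off roughly with the square of the bit rate. A minimal sketch, anchored to the 60 km figure for OC-192 on G.652 fiber cited above (the 1/B² scaling is a common rule of thumb, not a statement from this article):

```python
# Hedged sketch: dispersion-limited reach scales roughly as 1/B^2.
# Anchored to ~60 km of uncompensated reach at ~10 Gbps (OC-192) on G.652.
def cd_limit_km(bitrate_gbps, ref_km=60.0, ref_gbps=10.0):
    """Estimated uncompensated reach, scaled from a reference point."""
    return ref_km * (ref_gbps / bitrate_gbps) ** 2

print(round(cd_limit_km(10.0)))  # 60  -> km at OC-192
print(round(cd_limit_km(40.0)))  # 4   -> km at 40G (16x less reach)
```

The 16-fold drop in reach going from 10 Gbps to 40 Gbps is why compensation accuracy, and hence field testing, becomes so critical at 40G.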
CD depends on the fiber type, not on its age, which means that both old and new builds experience CD issues. Since CD increases with distance, at similar transmission speeds, longer routes are more likely to experience issues than shorter ones.
There are two main methods used to test CD. The first method used is the end-to-end (dual-ended) test, which allows testing straight through the amplifiers to assess the CD of the entire route, including all the network elements, such as amplifiers and possibly dispersion-compensating devices. This is the preferred method for the core since the dynamic range and reach are virtually unlimited. The second method used is the single-ended test, which is intended for shorter routes and non-amplified metro rings or mesh. This method is especially useful in a mesh topology, from a single headend or central office, where several routes can be tested from a single location by one technician, greatly reducing the time of test. (See Figure 3.) Historically, this method has had limited accuracy, but recent test gear can double the number of testing points and render the method 40G-ready.
Similar to CD, PMD is a form of dispersion that spreads the pulse; if the transmission rate is high enough, the interval between bits shortens and each bit risks overlapping another bit’s time frame, making signal recovery and detection difficult and possibly leading to bit errors. Unlike CD, however, PMD does not depend on the index of refraction or on the wavelength. PMD is a stochastic phenomenon (that is, its behavior is non-deterministic; a given state does not define the following one) caused by local imperfections, asymmetries and stresses in the fiber, resulting in a random spreading of the pulse that may limit the fiber’s ability to carry high data rates (for example, 10 Gbps and more).
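One consequence of this statistical behavior: unlike CD, which accumulates linearly with distance, the expected differential group delay (DGD) from PMD grows only with the square root of length. A minimal sketch, using assumed illustrative PMD coefficients (not values from this article):

```python
import math

# Hedged sketch: mean DGD grows with the square root of link length.
# pmd_coeff is the fiber's PMD coefficient in ps/sqrt(km) (assumed values).
def mean_dgd_ps(pmd_coeff, length_km):
    """Expected DGD in ps for a link of the given length."""
    return pmd_coeff * math.sqrt(length_km)

# An older fiber at 0.5 ps/sqrt(km) vs. a modern one at 0.05 ps/sqrt(km),
# both over a 100 km metro segment:
print(mean_dgd_ps(0.5, 100.0))   # 5.0 ps
print(mean_dgd_ps(0.05, 100.0))  # 0.5 ps
```

The tenfold difference in coefficient illustrates why older fiber, made before PMD was controlled in manufacturing, is the usual suspect, even though only field measurement can confirm it.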
Because PMD is a random phenomenon, two fibers of the same type from the same manufacturer, built the same year, and even coming from the same production lot can have entirely different values once deployed in the field. Neither the fiber’s year of manufacture, nor the PMD value of a neighboring fiber in the same cable, nor a previously measured value allows any assumption about PMD if the installed conditions are different. While PMD can occur in relatively new fiber, it is more likely in older fiber manufactured before PMD became a recognized issue.
The main reason PMD has been considered the No. 1 network threat by the telecom industry is its random and ever-changing nature, which makes it virtually impossible to compensate for. While CD is tested to ensure proper compensation, PMD is tested to determine the transmission limit of the fiber. Typically, high-PMD fibers are kept for low-speed services, while low-PMD fibers are reserved for high-speed ones.
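How a measured PMD value becomes a transmission limit can be sketched with a common engineering rule of thumb (an assumption here, not a figure from this article): the link's DGD should stay below roughly 10 percent of the bit period.

```python
# Hedged sketch: classify a link's PMD tolerance per bit rate, using the
# common ~10%-of-bit-period rule of thumb (assumed, not from the article).
def pmd_tolerable(dgd_ps, bitrate_gbps, fraction=0.10):
    """True if DGD is within `fraction` of the bit period (1000/B ps)."""
    bit_period_ps = 1000.0 / bitrate_gbps
    return dgd_ps <= fraction * bit_period_ps

# The same 5 ps of DGD on the same fiber, at three service speeds:
print(pmd_tolerable(5.0, 2.5))   # True  -> fine at OC-48 (400 ps bits)
print(pmd_tolerable(5.0, 10.0))  # True  -> borderline at 10 Gbps (100 ps bits)
print(pmd_tolerable(5.0, 40.0))  # False -> unusable at 40 Gbps (25 ps bits)
```

The same measured DGD that is harmless at OC-48 rules a fiber out at 40G, which is exactly why high-PMD strands get relegated to low-speed services.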
The same two methods used for testing CD, end-to-end and single-ended, are also used for PMD testing. Testing PMD, however, is more complex because several factors need to be taken into account. PMD is less present in newer fibers than in older ones; yet when greater throughput is required, every available fiber needs to contribute, and that is truer today than ever, with the arrival of reconfigurable optical add/drop multiplexers (ROADMs) as one example. Good low-PMD fiber has, in most cases, already been upgraded to high speed, and the cable operator’s No. 1 priority is now margin, so making costly upgrades is all but out of the question.
There are several ways to deal with PMD issues, some good and others less so.
1. Try to find another fiber. If the fiber you intend to use has PMD that is too high, the easiest workaround is to test another fiber, and to keep testing until a suitable one is found. However, this strategy is based on nothing more than hope: if this is not the first high-speed upgrade being performed, most of the low-PMD fibers may already have been used, and the search often ends in failure.
Another frequent scenario is that services (often low-speed) were deployed on fibers simply in numerical order, regardless of PMD values, since PMD is not significant at low speeds. Only a few fibers may then be left available for high speed, and if their PMD is too high, testing other fibers may prove problematic since they are already in service. Continuing on this path may require rerouting, decommissioning and other actions that cost time and revenue and demand intense planning. After all these efforts, a good fiber may turn up (though there is no guarantee), while the high-PMD fiber stays. This simply pushes the problem back, which is not a long-term solution.
2. Install new fiber. While digging trenches and laying out new fiber can be a good long-term investment, it remains extremely expensive, more so in metro areas than in rural ones. Costs range from $50 to $125 per meter, which works out, for a 60 km span, to between $3 million and $7.5 million. Add to this the fact that such an endeavor is time-consuming and requires extensive planning, delaying time to revenue, the last thing service providers want.
3. Pinpoint the problem and upgrade the network. As is often the case, investing a little time and effort to find the cause of the problem can pay off and offer a significant return on investment. Because of the statistical nature of PMD, its distribution along a high-PMD fiber is very seldom uniform; most of the time, only one or two faulty sections contribute most of the total PMD. Accurately pinpointing these sections enables local fiber upgrades or bypasses, optimizing your most valued asset: the network itself.
Distributed PMD analyzers are now available on the fiber-optic test market, and they offer a way to quantify the PMD contribution of every section within a link, helping to pinpoint faulty sections. Instead of upgrading an entire fiber run, only a few kilometers need to be replaced to eliminate the PMD trouble spot, which can save millions of dollars. (See Figure 4.)
Some distributed PMD analyzers give the contribution of each section in relation to the entire PMD value, and also allow simulating the replacement of a faulty section by a good one.
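The reason one bad section can dominate a link is that section PMD values combine as a root-sum-of-squares. A minimal sketch with invented section values (not measured data), including the kind of replacement simulation described above:

```python
import math

# Hedged sketch: per-section PMD contributions combine as a
# root-sum-of-squares, so one faulty section dominates the link total.
# All section values below are invented for illustration.
def link_pmd_ps(sections_ps):
    """Total link PMD (ps) from per-section PMD contributions (RSS)."""
    return math.sqrt(sum(p * p for p in sections_ps))

sections = [0.3, 0.2, 4.8, 0.4, 0.3]    # one faulty section at 4.8 ps
print(round(link_pmd_ps(sections), 2))  # 4.84 -> the bad section dominates

# Simulate replacing the faulty section with a good (0.2 ps) one:
fixed = [0.3, 0.2, 0.2, 0.4, 0.3]
print(round(link_pmd_ps(fixed), 2))     # 0.65 -> link total collapses
```

Replacing a single few-kilometer section drops the link total from about 4.8 ps to well under 1 ps, which is the economic argument for pinpointing rather than re-trenching.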
When facing dispersion issues, the needs of fiber characterization at high speeds go beyond standard optical time domain reflectometer (OTDR) and power-meter tests, and the methods used must avoid costly downtime. The good news is that distributed PMD analyzers offer a means of characterizing an entire network, whether through amplifiers in the core or from a single end in the metro.
Francis Audet is engineering senior product manager for EXFO. Reach him at firstname.lastname@example.org.