The third quarter of 2008 brought with it a raft of news and reports indicating that 40 Gbps (40G) optical networking had arrived and that 100G was on deck.
For anyone in the midst of deploying 10G transport with plenty of 2.5G or less on hand, or for anyone who simply doesn’t track these categories, this flurry of activity may seem puzzling. To help clarify, here’s a brief Q&A:
• Is this all about transport or routing? Actually, both, which is largely a function of converged technologies.
• Why this quick jump from 40G to 100G? Network traffic, of course. Also, a standards body decided to work on 40G and 100G at the same time, aligning the timetables for both.
• Is this about cable or telcos or both? Both – and more. Comcast and AT&T are big into 40G, and Verizon is kicking tires on 100G solutions. Plus, other players are interested.
• What’s the hype factor? Higher for 100G than 40G.
The 40G and 100G buzz over the past several months has come from service providers and vendors alike. What follows is an attempt to connect some dots and extract a few themes.
What follows is perhaps needless to say, but it bears repeating.
Well before the telcos, if not before other Internet service providers (ISPs), the cable community embraced IP, Gigabit Ethernet (GigE) and dense wavelength division multiplexing (DWDM) transport mechanisms.
That point was central to the "Backbone Transport" article in these pages last month, but you can go back three years, or more, and find it stated no less plainly.
For several years, these technologies have interfaced more and more via common, multisource agreement (MSA) form factors. Ongoing work at the International Telecommunication Union (ITU) and the Institute of Electrical and Electronics Engineers (IEEE) is further advancing work on network interfaces.
As with 10GigE, switch and router interfaces for 40G and 100G are likely to come first, before the transport equipment that multiplexes signals onto a single metro or long-haul network. (The 40G interface on routers today is actually Packet over SONET, or POS.) One reason, as noted above, that 40G and 100G appear to have collapsed into each other is that last summer the IEEE formally approved a task force (P802.3ba) to work on both high-speed Ethernet standards.
Still, the market advances, standards or no standards. In the case of router interfaces, the path ran through baseband (Cisco, ECI Telecom and Juniper), broadband (Cisco and ECI Telecom) and broadband tunable (Cisco). And silos persist, sometimes in curious ways.
According to Ovum Research Director Ron Kline, author of a recent report on 40G/100G, the market leaders in terms of shipped 40G optical linecards would be Nokia Siemens and/or Cisco – that is, if he were tracking Cisco.
"The Cisco 40G is on the router," said Kline. "We don’t count that piece in the ON (optical networks), that’s counted in the router numbers."
Capacity vs. services
Another way of analyzing this topic, especially at 100G, is to distinguish between network capacity and services, the latter being less here-and-now than the former.
"100 Gig service will not be necessary until there is a switch or router with 100 Gig service interface to it that would then plug into the transport gear," said Dave Welch, chief marketing and strategy officer of Infinera.
As per the IEEE roadmap, however, that interface won’t be available until late 2009 or early 2010.
On the other hand, "the benefit of having a 100 Gig wavelength (is) to increase the capacity of the network," Welch said. "There are places in a network where people can benefit from that now."
How to fill that 100G wavelength is a live question. Separating optical transport from service levels, Infinera works on the basis of "bandwidth virtualization." In demos, it has used a "tributary adapter module" to aggregate ten 10G streams into 100G.
The two general approaches among contending vendors are to muxpond data in increments of 10G or 40G, or to fill a wavelength entirely with a single 100G stream.
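The trade-off between those two approaches can be sketched with simple arithmetic. The snippet below (a hypothetical illustration, not any vendor's tool) enumerates the ways 10G and 40G tributaries can fill a 100G wavelength:

```python
# Hypothetical sketch: enumerate ways to fill a 100G wavelength
# from 40G and 10G tributaries, per the muxponding approach above.
def fill_options(target_gbps=100, rates=(40, 10)):
    """Return (n40, n10) pairs whose combined capacity equals the target."""
    options = []
    for n40 in range(target_gbps // rates[0] + 1):
        remainder = target_gbps - n40 * rates[0]
        if remainder % rates[1] == 0:
            options.append((n40, remainder // rates[1]))
    return options

# (0, 10) is the ten-by-10G case used in the Infinera demo;
# the single 100G stream is the alternative outside this list.
print(fill_options())
```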
Starting with 10G makes sense because it is so widespread. Indeed, falling costs on 10G links make return on investment (ROI) calculations, especially for 40G, something of a moving target.
Thus far, the largest 40G application has been router-to-router interconnects, with the largest commercial deployments having been AT&T and Comcast, according to Ovum. But there’s more to come.
"Global revenue for 40G line cards in 2007 was $178 million, and we expect the market to grow 48 percent annually through 2013," said Ovum’s Kline. (See Figure 1.)
As far as services go, demand looks strong. Economic downturn notwithstanding, Infonetics sees 40GigE services growing at a compound annual rate of 59 percent from 2007 to 2011.
Another proof point for the arrival of 40G is the attention it has received from the test and measurement (T&M) community. The hook is that sensitivity to optical noise increases dramatically from 10G to 40G.
Exactly which modulation scheme best handles the heightened dispersion requirements for which applications and distances and speeds is another matter (discussed briefly below).
Finally, while 40G has arrived, it is still early – and expensive. According to Scott Wilkinson, VP product management and system engineering at Hitachi Telecom, 40G is far from the ideal point of being only 2.5 times as expensive as 10G.
"Now it’s more like four times over 10G," he said. "It’s not the right answer, but for places where you need 40G because 10G has run out of steam, it’s available."
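The cost-per-bit math behind Wilkinson's point works out as follows. The baseline price is an arbitrary, hypothetical unit; the 2.5x and 4x ratios are the ones quoted above:

```python
# Back-of-envelope cost-per-Gbps comparison using the ratios quoted above.
# Assume a 10G line card costs 1.0 unit (an arbitrary, hypothetical baseline).
cost_10g = 1.0

cost_per_gbps_10g = cost_10g / 10                  # 0.1 per Gbps
cost_per_gbps_40g_today = (4.0 * cost_10g) / 40    # ~4x premium: 0.1, no per-bit savings
cost_per_gbps_40g_ideal = (2.5 * cost_10g) / 40    # 2.5x ideal: 0.0625, ~37% cheaper per bit

print(cost_per_gbps_10g, cost_per_gbps_40g_today, cost_per_gbps_40g_ideal)
```

At a 4x premium, 40G delivers no per-bit savings over four 10G links; only near the 2.5x ideal does the economic case become clear-cut, which is why 40G today is bought for capacity relief rather than cost reduction.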
Verizon: It’s the networks
That said, the premiums that Comcast and AT&T evidently paid made sense. "You wouldn’t do it if you couldn’t prove it in," said Kline.
So what’s with Verizon? Why no 40G?
Kline noted that whereas AT&T has a single IP-MPLS network, Verizon has three. "That’s one of the reasons that Verizon hasn’t had to upgrade to 40G as quickly as AT&T," he said.
One of those three networks, the old long-haul heavy MCI network that became Verizon Business, hosted the initial 100G trial with Alcatel-Lucent in late 2007. Given the recent spate of additional trials and various public comments, it’s no secret that at least one of Verizon’s networks wants 100G.
The question remains, how long can it wait?
"If 40G is too expensive, 100 is way too expensive," said Hitachi’s Wilkinson.
"Some providers are hoping to skip the 40G phase altogether," said Michael Howard, principal analyst at Infonetics, in a statement announcing the firm’s recent 10G/40G/100G forecast. "But we don’t see that being a viable option, as growing traffic demands are outstripping current capabilities, and 100G won’t reach reasonable price points until about 2012 or 2013."
Not just carriers
But it’s not just telcos and MSOs who are itching for these really fast network links. Governments, non-profit research organizations and corporations are in this game, too.
Two years ago, Infinera announced that Internet2, the non-profit networking technology consortium led by more than 200 U.S. universities, had selected it for enhancements to the Level 3 100G backbone serving Internet2 members.
In November 2008, those three players plus Juniper Networks and the U.S. Department of Energy’s ESnet announced an agreement to develop and test emerging 100GigE technologies.
In that announcement, as in the Ciena demo with Caltech, one point of emphasis is making an operational 100GigE network available to certain end users.
Caltech, for instance, was at pains to test the ability of transmission control protocol (TCP) to handle a 100GigE link. Appropriately tweaked, it could.
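One reason TCP needs tweaking at these speeds is the bandwidth-delay product: the sender's window must cover all the bytes in flight. A rough sketch, assuming a hypothetical 50 ms cross-country round-trip time (not a figure from the Caltech test):

```python
# Why stock TCP needs tuning at 100GigE: the send window must cover
# the bandwidth-delay product (BDP). The 50 ms RTT is a hypothetical
# cross-country value, not one reported in the Caltech demo.
link_bps = 100e9
rtt_s = 0.05

bdp_bytes = link_bps / 8 * rtt_s                  # bytes in flight to fill the pipe
classic_window = 64 * 1024                        # 64 KB ceiling without window scaling
throughput_untuned = classic_window / rtt_s * 8   # what a 64 KB window can sustain

print(bdp_bytes, throughput_untuned)
```

The required window is on the order of hundreds of megabytes, while an unscaled TCP window tops out at 64 KB, sustaining only about 10 Mbps at that RTT; hence tweaks such as window scaling are mandatory at 100G.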
"From a cloud computing perspective, if you had this type of connectivity, you suddenly open up the massive computing power that’s idle," said Dimple Amin, VP products and technologies, special projects, for Ciena. "With the simple application like TCP that is used today, it could keep up with the capacities that could be available in the near future."
These kinds of users are poised to drive the market, according to a report released in early January by Freesky Research.
"Over 70 percent of all 40/100 Gigabit data revenue through 2013 will come from corporations, governments and research labs, not telcos," said David Gross, author of the report.
Last summer, the Optical Internetworking Forum (OIF) decided to focus on dual-polarization quadrature phase-shift keying (DP-QPSK) as a modulation framework for 100G long-haul transmission.
This decision appeared to be a step forward. The technology, pioneered by Nortel and commercialized in the 40G space, performs well over long distances and effectively slows the symbol rate to allow for inexpensive processing.
"It makes the 40G look like a 10G," said Helen Xenos, Nortel optical solutions marketing manager.
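The arithmetic behind Xenos' remark is straightforward: DP-QPSK carries 2 bits per symbol on each of 2 polarizations, so the symbol rate is one-quarter of the line rate. A minimal sketch (ignoring FEC overhead, which raises the real-world rate somewhat):

```python
# The math behind "makes the 40G look like a 10G": DP-QPSK carries
# 2 bits per symbol on each of 2 polarizations, i.e. 4 bits per symbol slot.
# FEC overhead is ignored here for simplicity.
def dp_qpsk_baud(line_rate_gbps, polarizations=2, bits_per_symbol=2):
    """Symbol rate (Gbaud) for a given line rate under DP-QPSK."""
    return line_rate_gbps / (polarizations * bits_per_symbol)

print(dp_qpsk_baud(40))   # 10.0 Gbaud -- electronics run at 10G-like speeds
print(dp_qpsk_baud(100))  # 25.0 Gbaud
```

The same trick scales to 100G: a 25 Gbaud symbol rate keeps the electronics within reach of existing component technology.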
In addition to modulation, Nortel advanced the cause of "coherence" in the receiver. "In a typical optical system in the past, you’d simply get intensity modulated detection," Xenos said. "With a coherent receiver, it’s able to detect a lot more than just the amplitude of the signal. It also gets phase information. And you can do processing on the signal that you’re receiving."
One question is whether – or rather when – digital signal processing (DSP) technology can advance from 40G to 100G. An interest group rather than a standards group, the OIF is aiming to channel vendors in one direction.
Not all are on board. Hitachi is putting a lot of research into amplitude and phase shift keying (APSK). "We believe that it can be done with a single wavelength without a coherent receiver, and that’s going to get the cost down to the point that makes sense," said Wilkinson.
Yet DP-QPSK has its fans. By December, Nortel had racked up 42 40G customers. These weren’t the biggest ticket customers, but enough to win third place (behind Huawei) in the 40G line card space, according to Ovum. Corporate woes notwithstanding, that indicates some momentum.
"A ton of momentum," said Ovum’s Kline. "Their 40G technology is tops in the industry."
How the other relevant industry bodies parse out actual standards is another matter. Comcast’s Leddy said last November that Comcast was following modulation and interface efforts of the IEEE, OIF and the ITU.
Comcast is not the lone MSO. These bodies have for several years seen participation from Time Warner Cable, Cox and Bright House Networks.
What may surprise some observers from the cable space is how optical technology itself has taken a turn into familiar territory. "Optics in the past was always ones and zeroes, on and off," said Wilkinson.
"Now you’re talking four or five different levels and you have to worry about what phase they’re running at the same time. These were things they solved on the cable plant a long time ago."
Jonathan Tombes is editor of Communications Technology. Reach him at firstname.lastname@example.org.