We all like to network in the cable industry, but I want to put this word back where it belongs, as a noun. The network is the single most important part of a cable system – it’s the glue that connects everything together, and it’s what makes cable sticky, too. And the network has come a long way. It seems like only yesterday that cable systems were a one-way video delivery mechanism; now the fact that they are decidedly not is a key differentiator against satellite.

Background

Back in the ’90s, I worked for Time Warner Cable’s Advanced Engineering group on a project to define the architecture for a digital cable system. We called it Pegasus, and you can still find the RFP on the Web if you search for it. One of our biggest "Aha" moments from the Full Service Network was that a real-time, two-way signaling mechanism was a fundamental requirement for a digital cable system.
Back then, of course, cable systems were two-way, but not as we’d recognize them today. The advanced analog set-tops had a return transmitter, but they could only speak when spoken to. Polling analog boxes was OK because the only thing you really needed to get from them was any pay-per-view buys stored in the secure microprocessor.
But what we wanted to do was allow set-tops to transmit on the network whenever they wanted – sounds like a recipe for disaster, doesn’t it? Well, Ethernet had been allowing PCs to do just that since the early ’80s, so we decided it was time for cable to catch up. In fact, most developers were using Ethernet to connect up their development set-tops in the lab, and so it wasn’t a big leap to say, "Why don’t we do that in the field?"
Well, it wasn’t a big intellectual leap, but when we started to look at actually creating a real-time, two-way network in the cable system, the economics got really difficult. Back then, cable systems scaled as a step function whenever you added new services. It was a relatively small step, and it wasn’t dependent on the number of subscribers, just the service. So if you added a new channel, no problem – install a receiver, a multiplexer, a modulator and hey-presto – you have a new channel. A thousand subscribers or a million; it was the same step function.
But a real-time, two-way network meant adding network termination equipment and routers to every distribution hub, transport networks between them and the headend router. And, more importantly, this additional equipment wasn’t a step function – it was more like a lot of step functions (one per hub initially) that all added up.
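The contrast between those two cost curves can be sketched with a toy model. All of the dollar figures and function names below are invented purely for illustration; they are not the actual Pegasus budget numbers.

```python
# Toy cost model contrasting broadcast-style scaling (one step per new
# service) with two-way network scaling (one step per distribution hub).
# All dollar figures are made up for illustration only.

def broadcast_cost(new_channels: int) -> int:
    """One fixed step per added service (receiver + multiplexer +
    modulator), independent of subscriber count."""
    COST_PER_CHANNEL = 20_000  # hypothetical
    return new_channels * COST_PER_CHANNEL

def two_way_cost(hubs: int) -> int:
    """One step per distribution hub (network termination + router),
    plus transport back to the headend router."""
    COST_PER_HUB = 50_000      # hypothetical
    HEADEND_ROUTER = 100_000   # hypothetical
    return hubs * COST_PER_HUB + HEADEND_ROUTER

# Adding one channel costs the same for 1,000 or 1,000,000 subscribers...
print(broadcast_cost(1))   # 20000
# ...but a two-way build repeats its step at every hub.
print(two_way_cost(40))    # 2100000
```

The point the model makes is that the two-way build is a sum of many small step functions, one per hub, which is why the economics looked so daunting at the time.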
However, we were convinced that real-time, two-way was the way to go, and thanks to visionary leadership, the network was funded. And once it got rolling, the network really started to grow. Initially, we built the signaling (or out-of-band) network to about 10 Mbps per hub. Not a great deal of bandwidth, but it kept the costs under control. We used long-reach optics on asynchronous transfer mode (ATM) switches to provide inexpensive transport and the smallest routers we could buy for the hub sites. By sharpening our pencils, we were able to build a signaling network into the budget.
Of course, at the same time there was a new entrant just starting trials called high-speed data. It had the potential to generate significant cash growth and was being funded as a start-up, so at the time we kept the networks separate.

Fundamentals

Now, let’s get back to some fundamental network principles that will provide a foundation for the discussion going forward. When some very clever engineers came up with the seven-layer open systems interconnection (OSI) model, they tried to provide some strong boundaries between the layers. Prior to that, many of the layers had been mixed up, and so you had to re-invent the wheel every time someone devised a new physical (layer 1) technology.
The most important layer of the OSI model is the network layer (layer 3). Why? Because it is like a fulcrum that connects an expanding set of underlying layer 2/layer 1 choices with an almost unlimited set of higher layer alternatives. And of course, the de facto standard for the network layer is now Internet protocol (IP).
The OSI model can be viewed as a "martini glass." The layers spread out, in terms of alternative protocols, above and below the network layer, which forms the stem. And as the stem of the martini glass, the network layer looks a bit like the pipe that it truly is.
The layer directly below the network layer is the data link layer (layer 2). So, we call an important property of IP "link layer abstraction." Essentially, IP doesn’t care what the link layer is, as long as it can provide an acceptable service by exchanging data link messages between two endpoints.
We’ll get into some of the other layers another time, but for now hang onto that concept of "link layer abstraction" because it is one of the most important ideas in networks.
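Link layer abstraction is easy to see in code: the network layer only needs some way to exchange frames between two endpoints, and it doesn’t care how those frames actually travel. Here is a minimal sketch; the class and method names are hypothetical, not drawn from any real protocol stack.

```python
# A minimal sketch of "link layer abstraction": the network layer is
# written against an abstract link interface, so Ethernet, ATM, SONET,
# or anything else can be swapped in underneath without changing it.
from abc import ABC, abstractmethod

class LinkLayer(ABC):
    """Abstract layer 2: moves frames between two endpoints."""
    @abstractmethod
    def send_frame(self, payload: bytes) -> None: ...
    @abstractmethod
    def recv_frame(self) -> bytes: ...

class LoopbackLink(LinkLayer):
    """Stand-in for any concrete link technology."""
    def __init__(self) -> None:
        self._queue: list[bytes] = []
    def send_frame(self, payload: bytes) -> None:
        self._queue.append(payload)
    def recv_frame(self) -> bytes:
        return self._queue.pop(0)

def ip_send(link: LinkLayer, packet: bytes) -> None:
    # The "network layer" simply hands its packet to whatever link
    # it was given; it never inspects the link technology.
    link.send_frame(packet)

link = LoopbackLink()  # swap in any LinkLayer implementation here
ip_send(link, b"IP packet")
print(link.recv_frame())
```

Replacing `LoopbackLink` with an ATM, SONET, or Ethernet implementation leaves `ip_send` untouched, which is exactly the property that let Pegasus change transport technologies underneath IP.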
Going back to our digital cable signaling network, Pegasus was originally deployed over ATM links, which, when they became unfashionable, were replaced with packet over synchronous optical network (SONET) or optical Ethernet (the latest being 10 or 40 Gigabit Ethernet carried over wave division multiplexing [WDM] transport systems). The point is, to the network layer, it really didn’t matter. The only differences are capital and operational cost.
Traditional video transmission systems do not follow the seven-layer OSI model, and there is a very good reason for this – they can be made cheaper by being cost-optimized for one service, video. But fast-forward to today, and video is now being fed over all kinds of networks.
In cable, everything fundamentally changed with the development of on-demand video delivery. Early on-demand servers generated video in a pretty conventional format, mainly because that’s what the set-top demanded – it was designed to consume linear (e.g., broadcast) programming. But pretty soon, people designing video servers wanted to leverage the tremendous strides in Ethernet development, especially when Gigabit Ethernet (GigE) came along. It turned out that Ethernet had become so ubiquitously deployed that it generated economies of scale that far outdistanced alternative link-layer technologies like DVB asynchronous serial interface (ASI) and SONET.

What’s next?

If the link layer can be so easily abstracted away, what does that predict for how networks for voice, video and data will be built in the future?
The price-per-bit equation will continue to be critically important for residential networks. The lower the price per bit delivered, the more cost-effective the network, and the more services that can be delivered while still making a profit. Cable has developed some unique technologies, based on the analog transmission of RF signals over fiber, that drive cost per bit well below where digital fiber is today. However, some of those savings are tied to the ability to deliver analog video cost-effectively over the same network as digital services. As analog services go away, where does that leave cable?
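To make the price-per-bit idea concrete, here is a back-of-the-envelope calculation. The function and its input figures are invented for illustration; real network costs depend on utilization, opex, and many other factors this sketch ignores.

```python
# Back-of-the-envelope price-per-bit: amortize a one-time build cost
# over the total traffic the network could carry in its lifetime.
# All figures below are hypothetical.

def price_per_gigabit(capex_dollars: float, lifetime_years: float,
                      bits_per_second: float) -> float:
    """Amortized capital cost per gigabit delivered, assuming the
    link runs at full rate for its whole lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_gigabits = bits_per_second * seconds / 1e9
    return capex_dollars / total_gigabits

# A hypothetical $1M build delivering 10 Gbps over a 5-year life:
print(price_per_gigabit(1_000_000, 5, 10e9))  # fractions of a cent per gigabit
```

Doubling the delivered bit rate on the same capital base halves the price per bit, which is why ubiquitous, high-volume technologies like Ethernet keep winning this equation.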
Answering that question calls for further discussion of cost-per-bit and convergence of signaling and delivery networks. Stay tuned.
Michael Adams is VP, system architecture, for Tandberg Television. Reach him at MAdams2@tandbergtv.com.