The question of how video on demand (VOD) is—or is not—working covers a lot of territory. On the one hand, MSOs differ among themselves in terms of strategy, experience and outlook. Sometimes they differ within themselves, too. On the vendor side, the VOD "ecosystem" entails a range of functional roles. Add competition to that mix, and you get an even greater variety of views.

Not to overplay these differences: There is a broad and high-level consensus that this technology works. "VOD systems are mature and reliable," said Paul Allen, Charter Communications chairman of the board, at the Society of Cable Telecommunications Engineers Conference on Emerging Technologies in January. He's not alone. "What's really working is the whole on-demand infrastructure and the presentation of content on demand to customers," says Mike LaJoie, Time Warner Cable CTO.

On the business side, some have their doubts. CableOne, for instance, has yet to see a model that works for its predominantly small and rural systems and has chosen to fight satellite on other grounds. But LaJoie himself is direct: "The economics have really exceeded our expectations. On demand is a good source of incremental revenue, and the time to payback was very quick."

The on-demand infrastructure itself comprises numerous interlocking components: servers, transport links, architectures, management systems, user interfaces, technical personnel—and content. To the extent that these components work individually and together, VOD works. But as content has increased from hundreds to thousands of hours, they have not always clicked. The ongoing acceleration of content may call for new technologies to minimize future growing pains.

Servers

At the heart of this infrastructure are the devices that store and then serve up available content. Originally deployed with a limited number of movie titles, VOD servers have grown as the business evolved, in some cases with a measure of discomfort.
A year ago, a system-level engineer forwarded these notes: "We have for all intents and purposes performed three forklift upgrades and wish to get off that crazy merry-go-round. Therefore, it is necessary to have VOD server architectures that play nice with expansion." This engineer was hopeful about "some of the newer platforms," but warily noted that "easier" was a "relative term" with regard to the addition of storage and/or streaming capacity. In other words, he thought jumping off the carousel could be a bumpy move in its own right.

"Newer platforms" could refer to new products and vendors or to revisions of the original. At BrightHouse Networks in Tampa (formerly Time Warner Cable), Vice President of Engineering Gene White oversaw an upgrade of its Concurrent-based software/server platform that provided a more than 10-fold increase in the number of available hours. "What we had initially was prototype gear," White says. "We started out with 200 hours, stretched it to 800, and we were killing ourselves."

That death was coming by way of a precipitous drop in the system's "confidence rate," which corresponds to the ability of viewers to make a buy with one hit of the button; it dropped from 98 percent to 78 percent. The new platform, which reduced the number of server farms from 60 to 11 yet ramped up the hours to 2,400, enabled the confidence rate to rebound to nearly 98 percent.

The total number of sessions also increased on the new platform. As Figure 1 indicates, sessions regularly broke the 100,000 mark after the new platform was installed last October. "I remember when we were very excited about 20,000 sessions," White says.

Gigabit Ethernet

High streaming capacity is one of the strongest claims backing the latest servers to enter the market.
But this revolution in servers is driven by the transition away from point-to-point Digital Video Broadcasting-asynchronous serial interface (DVB-ASI) transport that accompanied the kind of reduction in server numbers seen in the case of Tampa. "A tremendous amount of investment has occurred in the operators' network to facilitate VOD transport, primarily GigE," says Greg Grigaitis, customer vice president at Broadbus Technologies. "Historically, the combination of both the operators' networks and the conventional VOD server architecture simply precluded the ability to scale to allow tens or hundreds of thousands of streams and 5 to 10,000 hours of content in a single site."

Concurrent has had GigE interfaces on its servers for several years. Combined with Fujitsu dense wavelength division multiplexing (DWDM) gear, those interfaces enabled the VOD transport in Tampa to go "IP at the core," White says.

Companies that have entered this space more recently have integrated—and leveraged—GigE technology from the get-go. A case in point is Arroyo Networks' recently announced support of 10 GigE. "MSOs are using Ethernet and 10 GigE around the metro area," says David Yates, Arroyo's vice president, business development. "But up 'til now, they have had to feed that transport with multiple one GigE links." Arroyo's breakthrough, Yates says, is the ability to scale server capacity far enough to fill two 10 GigE links per 3RU server. In doing so, it rides the plummeting cost curve for standard 10 GigE interfaces.

A related feature of Arroyo's design derives from its strategy of adopting the latest and greatest from Intel, with the Lindenhurst/Nocona chips that started shipping last summer currently fitting that bill, Yates says. The upshot is another point along a trend line that's hard to buck. "Cable did a great job of embracing industry standards on the cable modem side," Yates says. "Now we're seeing the same thing happen on VOD."
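A little back-of-envelope arithmetic shows why link density matters so much here. The stream bitrates below are typical MPEG-2 figures assumed for illustration; the article does not specify them:

```python
# Rough stream capacity of GigE-based VOD transport links.
# Bitrates are assumed, typical mid-2000s MPEG-2 values.
SD_MBPS = 3.75   # standard-definition MPEG-2 stream
HD_MBPS = 15.0   # high-definition MPEG-2 stream

def streams_per_link(link_gbps, stream_mbps, utilization=0.9):
    """Constant-bitrate streams that fit on one link, leaving 10% headroom."""
    return int(link_gbps * 1000 * utilization // stream_mbps)

# A single 1 GigE feed vs. the two 10 GigE links per server cited above:
print(streams_per_link(1, SD_MBPS))        # 240 SD streams per GigE link
print(2 * streams_per_link(10, SD_MBPS))   # 4800 SD streams per server
```

Under these assumptions, moving from bonded 1 GigE feeds to dual 10 GigE ports multiplies per-server stream counts by a factor of 20, which is the scaling story vendors are telling.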
And architectures

It's nonetheless possible to overstate the acceleration toward standards such as GigE. Cox Director of Interactive Television Technologies Michael Pasquinilli says that in adding eight quadrature amplitude modulation (QAM) modulators per service group in Cox's hybrid (centralized/decentralized) architecture in San Diego, his team added four GigE QAM modulators but left the existing four ASI QAM modulators in place. "So now I can balance streams, GigE streams, over a broader number of service groups," he says. "There's a big statistical benefit, but we wouldn't have seen that much more of an improvement if we'd gone all GigE QAMs."

As currently engineered, the Cox architecture provides a 6,000-hour capacity. With current content only at the 300-400 hour mark, Cox's marketing and content acquisition team has its work cut out for it. But the attention that Pasquinilli's team continues to give to the idea of pushing popular content to the edge bears watching. "It may be that the term hybrid disappears and new language starts, where they may say 'this is a QAM with cache, or this is a switch with cache,'" Pasquinilli says. "Same principle, but a new vocabulary around it."

Which kind of on-demand architecture prevails is a matter of some debate. The centralized model has plenty of wind in its sails (see example above: the reduced number of server sites in Tampa). But Arroyo positions itself to go either way, and Kasenna is emphatically in the decentralized camp. As for industry momentum, much depends on the shape of Comcast's Next Generation On Demand (NGOD) specifications.

Speaking at a high level, Comcast CTO David Fellows described the NGOD initiative as an attempt to hit something between the 100 percent on-demand model of a content delivery network (CDN) and the 7-10 percent peak utilization of current on-demand systems. A blast from the dot-com past, CDNs are famously intensive in local caching, which suggests a decentralized component.
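The service-group arithmetic behind that stream balancing can be sketched quickly. The 256-QAM payload and SD stream rate below are common industry rules of thumb assumed for illustration, not figures from the article:

```python
# How many concurrent streams a service group's QAM channels can carry.
QAM256_MBPS = 38.8  # approximate usable payload of one 256-QAM channel (assumed)
SD_MBPS = 3.75      # assumed MPEG-2 standard-definition stream rate

def streams_per_service_group(qam_count, stream_mbps=SD_MBPS):
    """Total streams across a service group, whole streams per QAM only."""
    per_qam = int(QAM256_MBPS // stream_mbps)
    return qam_count * per_qam

# Going from four to eight QAM modulators per service group:
print(streams_per_service_group(4))  # 40 streams
print(streams_per_service_group(8))  # 80 streams
```

Doubling the QAM count doubles raw stream capacity, but as Pasquinilli notes, the larger win is statistical: spreading demand over more service groups lowers the odds that any one group's streams are exhausted at peak.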
All the same, Fellows has framed Comcast's decision to build a 19,000-mile fiber backbone in terms of shuttling not only digital video assets (expected to grow to 10,000 hours by the end of 2006), but also pages to Comcast's popular Web portal, The Fan. That leaves us at no one point in particular, which may actually be the point of the matter (and help account for why "open interfaces" is one of the buzzwords surrounding NGOD). "That whole endeavor is really about moving content around," explains one participant.

On-demand management

In any case, the proliferation of content is going to entail ever more software management tools. This is a point that SeaChange International has been driving home for more than three years, and SeaChange Director of Broadband Systems Joe Ambeault remains on message. One of his points is that whether an operator is adding premium or local and broadcast content to the on-demand menu, there is going to be an increasingly urgent need to ingest, encode, transcode and condition content of all types—HD included—into VOD-friendly formats. It is, after all, the popularity of shows such as "The Sopranos," "NFL Highlights" and the like that fuels this business. SeaChange's out-of-the-box, time-shifted applications are one way to make the business case tighter. The core technology behind Time Warner's Mystro project, which enables the rapid storing, encoding and distribution of content, may be another.

Monetizing these products through next-gen entitlement servers and solutions is a second component, one that may entail wresting some control from the incumbent billing providers, says IMAKE Chairman and CEO Mark Schaszberger. Better use of content metadata is a third ingredient—and a candidate for improvement—in the on-demand management arena. "Because this is brand new, everybody has different ways of doing it," Time Warner's LaJoie says.
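One concrete face of the metadata problem is validation at ingest: catching assets that arrive with fields missing before they break the on-demand menu. The field names below are hypothetical, loosely modeled on the kind of attributes VOD asset metadata carries, and are not taken from the article:

```python
# Hypothetical sketch of a metadata completeness check an ingest
# pipeline might run. Field names are illustrative only.
REQUIRED = {"title", "provider_id", "asset_id",
            "licensing_window_start", "licensing_window_end", "rating"}

def missing_fields(asset: dict) -> set:
    """Return the required fields absent from an asset's metadata record."""
    return REQUIRED - asset.keys()

asset = {"title": "Example Movie", "provider_id": "studio.example",
         "asset_id": "ABCD0000000000001", "rating": "TV-MA"}
print(missing_fields(asset))  # the two licensing-window fields
```

With "everybody doing it differently," as LaJoie puts it, even a simple gate like this has to be rewritten per content provider, which is exactly the interoperability pain the quote describes.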
Pending business applications include creatively bundled service packages and user interfaces (UIs) that are driven by the metadata file itself, LaJoie says. Meanwhile, Comcast's development and deployment of its own advanced UI, the i-guide, proceeds amidst enthusiasm, bug complaints and impatience, as witness discussion threads in Broadband

On-demand talent

Like it or not, on-demand technology is not entirely automated. It calls for live personnel. Opinions diverge on the challenge of finding the right technical people who can act as what nCUBE (now C-COR) VOD Product Architect Cliff Aaby calls "the technical backstops to these advanced services."

The multi-disciplinary engineers who are managing the burgeoning on-demand infrastructure largely come from within MSOs. "They're growing these guys internally," says Aaby. LaJoie notes that it takes both personnel who are growing from within the field and newcomers to cable who are more system-aware. And he downplays the difficulty of acquiring the latter category of skills: "The reality is that if you could figure out how to make a cable television system work, operating and managing an IP network with its associated systems is relatively straightforward," he says.

Whereas a lot of cable system work is "like black magic" (an expression often enough used to describe RF engineering), LaJoie says that the IT world has manuals and international standards. At the same time, he notes that since the deployment of digital set-top boxes in 1996, cable technicians have been dealing with (albeit private) IP addresses. "This isn't something that's new to the planet for cable operators," he says.

The learning curve yet exists. "With the move toward GigE and switching and in some ways very complicated video streaming, personnel have to know a ton of stuff," says Nishith Sinha, Cox ITV systems engineer.
That poses what is becoming a classic challenge, neatly summarized by Sinha: "Having to deal with folks in the field who don't really understand the data side or folks on the data side who don't really understand the RF side."

Tear down those walls

"Cable companies have been able to run their businesses as silos," Cox's Pasquinilli adds. "You had RF engineers, MTC (master telecommunications center) engineers or headend engineers; you had data engineers; MIS (management information systems) engineers; and each one of those built with their own structure."

"And all of a sudden I come in with VOD, and VOD touches everybody's network. So now there's no single go-to guy, and it pushes them into a very uncomfortable area," Pasquinilli continues. "They were happy in their silos." One upside, he adds, is that VOD is forcing cable now to break down walls that advancing technology will eventually undermine in any case.

Growing talent

As for growing talent, it's hard to beat a good coach and a stable team. BrightHouse's White says half of his people come from the RF side, the other half from outside. Then he cross-trains. White's core talent at the network operations center (NOC) in Tampa grew from the original group that built LineRunner, a precursor to Road Runner, in 1997. "If you think about an ISP (Internet service provider), you've got content, you've got a mail server," White says.

And as per LaJoie's comment on cable's private IP networks, the cross-pollination between cable's video and IT worlds is already well under way. The people who run a system's Scientific-Atlanta digital network control system (DNCS), for instance, are an obvious talent pool. "They're high-end computer people; they know how to run large servers and databases," White says.

It works

To recap, the on-demand platform has matured considerably. "We're running over 99 percent success rate," LaJoie says.
"Of the half a percent that fails the first try, a very huge proportion of them get in on the second try." The reason for the successful follow-up is that traffic on a cable network is dynamic, changing second-by-second. Achieving that kind of reliability, however, entails many fundamentals: a solid two-way plant, efficient management of nonresponding set-top boxes, resolution of the trouble tickets that any on-demand infrastructure kicks up (see sidebar, p. 28), and over-provisioning of streams to ensure minimal first-time failures.

Those advocating a data-model approach to VOD see that kind of over-provisioning as wasteful. The potential advantages of simply drilling into a broadcast stream for on-demand content are intriguing, but will have to be a subject for another day.

Jonathan Tombes is the editor of Communications Technology. Reach him at

What's in those buckets?

Cliff Aaby, VOD product architect for nCUBE, analyzed more than 200 trouble tickets. He classified them into the following five buckets, with the first three buckets comprising some 80 percent of the total:

  • Configuration issues, related to products, the rotation and churn of content, and field upgrades
  • Third-party integration, both of headend components and software applications
  • Metadata issues, such as missing content or characters, often correlated with rising volumes of content
  • Product defects and/or enhancements
  • Operator error
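The over-provisioning of streams cited among the reliability fundamentals can be quantified with the Erlang-B blocking formula, a standard teletraffic tool. The formula and the numbers below are illustrative; the article gives neither:

```python
def erlang_b(offered_load, streams):
    """Probability a new session is blocked when `streams` concurrent
    streams serve `offered_load` Erlangs of demand (iterative recurrence)."""
    b = 1.0  # blocking probability with zero streams
    for k in range(1, streams + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def streams_needed(offered_load, max_blocking):
    """Smallest stream count that keeps blocking at or below the target."""
    n = 1
    while erlang_b(offered_load, n) > max_blocking:
        n += 1
    return n

# Hypothetical service area averaging 100 simultaneous session requests:
# streams required to match the 99.5 percent first-try success rate above.
print(streams_needed(100, 0.005))
```

The result lands well above 100 streams: the gap between average demand and provisioned capacity is exactly the "waste" that data-model advocates point to, and the price of first-try reliability.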