While cable operators have aggressively launched a variety of new digital, on-demand and data services, their methods for managing the network capacity consumed by these services have remained fairly static. For hybrid fiber/coax (HFC) networks to continue their evolution, a new cadre of master system resource managers (SRMs) is needed that can aid in delivering any input to any output.

What does bandwidth management mean?

The answer lies in the flexibility of our networks. Consider, for example, the networks of our past. We've built fairly passive networks: a channel comes in via satellite, passes to a modulator and upconverter, and gets combined with other channels, as depicted in Figure 1. Each traditional channel follows a similar flow, so bandwidth management has largely meant managing which program is wired, or assigned, to which channel.

(Figure 1)
Now let's look at backbone management. To get the channel depicted in Figure 1 to various parts of a metropolitan area, we've chosen to narrowcast, or build separate channel lineups (different combiners) for different groups of subscribers. Such narrowcasting has allowed us to offer different channels, do local ad insertion and offer advanced services such as high-speed data and on-demand. Combining, or narrowcasting, can be done at a central location (headend) or at hubs. In addition, we've found it very cost-effective to send all of our data and channels digitally to the hub and to convert to the desired "channel" in the hub itself. However, imagine the thousands of gigabytes that have to be transferred from the central headend to each of the hubs and their narrowcast/combiner groups. Bandwidth management has focused heavily on this area in the past few years.
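To put a rough number on that transport load, here is a simple back-of-the-envelope calculation. The hub, group and channel counts are purely illustrative assumptions; only the roughly 38.8-Mbps payload of a 256-QAM, 6-MHz channel reflects common practice.

```python
# Rough sizing of headend-to-hub narrowcast transport.
# Hub, group and channel counts below are illustrative assumptions only.

MBPS_PER_256QAM = 38.8     # approximate payload of one 256-QAM, 6-MHz channel

hubs = 10                  # assumed hubs fed from the central headend
groups_per_hub = 8         # assumed narrowcast/combiner groups per hub
qams_per_group = 12        # assumed digital, VOD and data channels per group

total_mbps = hubs * groups_per_hub * qams_per_group * MBPS_PER_256QAM
gige_links = -(-total_mbps // 1000)   # ceiling division: whole GigE links' worth of traffic
print(f"{total_mbps / 1000:.1f} Gbps of narrowcast traffic (~{int(gige_links)} GigE links)")
```

Even with modest assumptions, the backbone quickly reaches tens of gigabits per second, which is why transport cost and flexibility matter so much here.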
What's next?

But where are we going with bandwidth management? Our next step is moving away from a passive network and into an active bandwidth lineup. Let's first look at a typical spectrum, as depicted in Figure 2.

(Figure 2)

Although we have a variety of channels and services, our spectrum is still split into 6-MHz channels. We squeeze digital, data and on-demand services into those 6-MHz channels, or, with 256-QAM (quadrature amplitude modulation), into 38-Mbps increments. An optimal environment would assign bandwidth intelligently based on various parameters such as time of day, authorization, usage, usage allowances, day of week, priority weighting factors and more. Imagine having bandwidth available as depicted in Figure 3: a truly active channel lineup based on your business, not channel limitations.

(Figure 3)
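As an illustration of what "assigning bandwidth intelligently" could mean in practice, here is a minimal sketch of a policy-based admission check. The parameters, weights and prime-time window are illustrative assumptions, not a published algorithm.

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal sketch of parameter-driven bandwidth assignment.
# Parameter names, weights and thresholds are illustrative assumptions.

@dataclass
class BandwidthRequest:
    service: str          # e.g. "vod", "broadcast", "hsd"
    mbps: float           # bandwidth being requested
    priority_weight: int  # operator-assigned weighting factor
    authorized: bool      # result of the subscriber/service authorization check

def admit(request: BandwidthRequest, free_mbps: float, now: datetime) -> bool:
    """Admit a request only if it is authorized, fits the free capacity,
    and survives a simple time-of-day/priority policy."""
    if not request.authorized or request.mbps > free_mbps:
        return False
    prime_time = 19 <= now.hour <= 23                 # assumed evening busy period
    if prime_time and request.priority_weight < 5:
        return False                                  # defer low-priority traffic in prime time
    return True

# Example: a 3.75-Mbps VOD session against 20 Mbps of headroom at 9 p.m.
req = BandwidthRequest("vod", 3.75, priority_weight=8, authorized=True)
print(admit(req, free_mbps=20.0, now=datetime(2004, 1, 1, 21, 0)))   # True
```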
Seeking flexible options

Bandwidth management clearly is changing. We are seeing the need to manage bandwidth according to our own changing parameters, and living in a fixed-bandwidth environment is no longer adequate. Although some may argue that we have plenty of capacity in our 860-MHz plants, I tend to disagree. There is never enough bandwidth; there is never enough content; there is never a perfect mix. The key to success is flexibility: being able to offer what customers want whenever they want it, and being able to support that with a flexible network and flexible bandwidth management that accommodates different usage patterns at different times. This is where bandwidth management is going.

What allows flexible bandwidth management?

As we know from Figure 1, our networks primarily have been fixed. How can we move from such a fixed-channel environment to an active network? Let's look at Figure 4, which shows a typical network with multiple services, in this case analog broadcast, digital broadcast, on-demand and high-speed data. Each service's channel bandwidth management is depicted.

(Figure 4)
One key advancement in broadband has been the addition of Gigabit Ethernet (GigE). The traditional output of satellite receivers, data servers and on-demand servers has mostly been proprietary, digital video broadcast/asynchronous serial interface (DVB/ASI) or asynchronous transfer mode (ATM). Proprietary interfaces have not allowed the industry to build neutral devices for network manipulation. DVB/ASI, which supports approximately 150 to 210 Mbps, has been expensive to transport and manipulate. GigE, however, has the benefit of not only being an extremely cost-effective means of transport (about one quarter of the cost of DVB/ASI transport), but also of allowing services to be switched and shared easily.

Switching and sharing

The combining of multiple services prior to their RF modulation is key to the sharing of bandwidth. In most networks today, services are merged in an RF combiner, which takes all RF channels and joins them into one spectrum that is then broadcast from a headend or hub. Once combined, channels remain static. Therefore, each service has its relatively fixed bandwidth, and each system resource manager (SRM) determines its own appropriate use of bandwidth. A broadcast SRM, for example, assigns channels to programs in a fixed-path manner, whereas a VOD SRM assigns channels per on-demand request. Each of these SRMs can make use of narrowcast groups, but unfortunately they cannot share bandwidth with other services. With GigE, broadband services can make use of standard switches to avoid fixed-wired networks, thereby gaining further and more flexible access into our networks. Such switching is depicted in Figure 5.

(Figure 5)
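To see why separate, per-service SRMs strand capacity, consider this minimal sketch of two services that each hold a fixed pool of QAM channels. The channel counts, bit rates and service names are illustrative assumptions.

```python
# Minimal sketch of today's per-service resource managers, each holding its own
# fixed pool of QAM channels. Channel counts and service names are illustrative.

class ServiceSRM:
    """Each service's SRM allocates only from its own fixed set of QAM channels."""
    def __init__(self, name: str, qam_channels: int, mbps_per_channel: float = 38.8):
        self.name = name
        self.capacity_mbps = qam_channels * mbps_per_channel
        self.used_mbps = 0.0

    def allocate(self, mbps: float) -> bool:
        if self.used_mbps + mbps > self.capacity_mbps:
            return False      # blocked, even if another service has idle capacity
        self.used_mbps += mbps
        return True

broadcast = ServiceSRM("digital broadcast", qam_channels=8)
vod = ServiceSRM("on-demand", qam_channels=4)

# A prime-time surge of 3.75-Mbps VOD streams exhausts VOD's pool while broadcast
# capacity may sit idle -- the stranded bandwidth described above.
admitted = sum(vod.allocate(3.75) for _ in range(60))
print(f"VOD sessions admitted: {admitted} of 60")
```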
Any input to any output

Not only does service switching enable one connection to be assigned a network path dynamically, it also allows services to switch to each other's modulators. In addition, the connection between switches may go across a dense wavelength division multiplexing (DWDM) backbone (currently achieving about 40 gigabit links on a single fiber), as seen in Figure 6, truly allowing a dynamic network: every input to any QAM modulator.

(Figure 6)
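A resource manager that exploits this flexibility needs a map of that connectivity. The sketch below models a tiny, assumed topology of sources, switches, a DWDM ring and edge QAMs, and searches it to answer "can this input reach that QAM modulator, and over which hops?"

```python
from collections import deque

# Minimal connectivity-map sketch. The topology below is an illustrative
# assumption, not a real plant design.

topology = {
    "vod_server":     ["headend_switch"],
    "sat_receiver":   ["headend_switch"],
    "headend_switch": ["dwdm_ring"],
    "dwdm_ring":      ["hub1_switch", "hub2_switch"],
    "hub1_switch":    ["qam_1", "qam_2"],
    "hub2_switch":    ["qam_3"],
}

def find_path(src, dst):
    """Breadth-first search: which switch/transport hops carry src to dst?"""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("vod_server", "qam_3"))
# ['vod_server', 'headend_switch', 'dwdm_ring', 'hub2_switch', 'qam_3']
```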
As our cable modem termination system (CMTS) strategy and development evolves, it also will become possible to share modulation between traditional Moving Picture Experts Group (MPEG)-2 streams and data, adding yet another shared component at the edge. Narrowcasting with switches remains easy, because content can be mixed at the switch level, in the headend, at the hub, and at the combiner/RF level.

How will it work?

Switched, shared, dynamic, routable, flexible bandwidth sounds good, but in reality it's not that easy. Many SRMs are built around point-to-point networks; in other words, they have a fixed connection scheme in mind, where one input is mapped through the network to one modulator. How can each SRM understand all of the paths that one stream or program can take based on the switch connectivity? Also, remember that each service (broadcast, VOD, etc.) has its own SRM. How can we share bandwidth and QAM modulators between services without having a single, master SRM? Such a master SRM would require each of the application bandwidth managers we saw in Figure 4 to adopt a new method of resource allocation. If we had a master SRM, we easily could switch among broadcast, on-demand, real-time record (RTR) channels and even high-speed data (although quality of service (QoS) and Data Over Cable Service Interface Specification (DOCSIS)-based bandwidth management schemes would need to exist). One scheme for such a master SRM is depicted in Figure 7.

(Figure 7)
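Conceptually, the master SRM replaces the per-service pools shown earlier with one shared pool of edge QAM capacity that any service can draw from. The following is a minimal sketch under assumed capacities and service names, not a specification of any product.

```python
# Minimal sketch of a master SRM arbitrating one shared pool of edge QAM capacity.
# Capacities and service names are illustrative assumptions.

class MasterSRM:
    def __init__(self, qam_channels: int, mbps_per_channel: float = 38.8):
        self.capacity_mbps = qam_channels * mbps_per_channel
        self.allocations = {}                 # service name -> Mbps currently granted

    def request(self, service: str, mbps: float) -> bool:
        """Grant bandwidth from the shared pool, regardless of which service asks."""
        if sum(self.allocations.values()) + mbps > self.capacity_mbps:
            return False
        self.allocations[service] = self.allocations.get(service, 0.0) + mbps
        return True

    def release(self, service: str, mbps: float) -> None:
        self.allocations[service] = max(0.0, self.allocations.get(service, 0.0) - mbps)

srm = MasterSRM(qam_channels=12)
srm.request("digital broadcast", 6 * 38.8)    # relatively static broadcast multiplexes
srm.request("on-demand", 3.75)                # one VOD session
srm.request("high-speed data", 38.8)          # a shared downstream (QoS/DOCSIS rules still apply)
print(srm.allocations)
```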
Standards support

To achieve a master SRM as we see in Figure 7, we have to ensure that the proper standards exist to enable multiple applications to share bandwidth. Some of these potential standards are labeled in Figure 7.

(1) Each client, or set-top, will need to request bandwidth. In some cases such a client is a headend application, because bandwidth assignment may be implemented for broadcast or relatively static bandwidth. Some standards for such session management already exist, including real time streaming protocol (RTSP) and digital storage media-command and control (DSM-CC), along with extended DSM-CC standards.

(2) As each client requests bandwidth, such requests will need to be negotiated between the application and the master SRM. For example, a VOD session may be playable only from certain server outputs, because that is where the title resides; therefore, the VOD application needs not only to negotiate switch and bandwidth path assignments but also to be able to limit what those assignments can be. The master SRM would manage the dynamic network topology and official bandwidth allocations, but it must also take individual application requests into account. (A rough sketch of such a negotiation follows this list.) Management of the network topology can happen in several ways. The likely short-term implementation will be whatever works with the legacy equipment already deployed. Much of that equipment lacks the processing power to host an edge resource manager, so it will require an external resource manager, either distributed or central. Both approaches are feasible if they take scalability and distributed architectures into account.

(3) and (5) As we consider bandwidth management all the way to the edge QAM modulators, we also will need to include the management and control of encryption. In the case of session-based encryption, the source of the encryption key generator and manager will need to register with the master SRM (5), and the QAM modulators will require a common interface for enabling such encryption (3).

(4) This interface does not necessarily have to become a standard, but certainly a good master SRM will want to manage the applications' bandwidth rules. This would include assigning bandwidth limits to applications, time-of-day usage restrictions, tracking requirements, bandwidth charging and more.

(5) In addition to encryption management, the master SRM certainly will need to be able to control each edge device. For a QAM modulator, this is mostly bandwidth management and modulation definition, but for switches it is a bit more complex. The master SRM will need to manage all of the bandwidth being used systemwide while interfacing with all of the switches and other active components to build the entire network connectivity map.

(6) and (7) Applications and network equipment will need to register with the master SRM. Having a standard for registering the application, its ID, its privileges and more makes the development of headend applications easier. Interface (6) is not likely to be one of the highest-priority standards, but interface (7) is a must to enable network equipment to be launched easily. The preference for a scalable and automated interface (7) would be an auto-discovery mechanism for network equipment (with proper security) to make dynamic network changes easy.

(8) and (9) These two are not likely to be standards, but they are key needs for a master SRM.
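As the rough sketch of interface (2) promised above, here is how an application (in this case VOD) might forward a client's session request to the master SRM while constraining which server outputs may serve it. The message fields and method names are illustrative assumptions and do not correspond to a published RTSP or DSM-CC profile.

```python
from dataclasses import dataclass, field

# Rough sketch of interface (2): application-to-master-SRM session negotiation.
# Field and method names are illustrative assumptions, not a standard.

@dataclass
class SessionRequest:
    client_id: str                  # requesting set-top
    service: str                    # "vod", "broadcast", ...
    mbps: float                     # bandwidth needed for the stream
    allowed_sources: list = field(default_factory=list)   # server outputs holding the title

class SessionBroker:
    """The master SRM's negotiation endpoint, holding the connectivity map and QAM headroom."""
    def __init__(self, topology: dict, qam_free_mbps: dict):
        self.topology = topology              # source -> reachable edge QAMs (built via interfaces 5/7)
        self.qam_free_mbps = qam_free_mbps    # remaining capacity per edge QAM

    def negotiate(self, req: SessionRequest):
        """Pick a source/QAM pair the application allows and the network can carry."""
        for source in req.allowed_sources:
            for qam in self.topology.get(source, []):
                if self.qam_free_mbps.get(qam, 0.0) >= req.mbps:
                    self.qam_free_mbps[qam] -= req.mbps
                    return {"source": source, "qam": qam, "mbps": req.mbps}
        return None                           # the application must queue, redirect or deny

broker = SessionBroker(topology={"vod_out_1": ["qam_1", "qam_2"]},
                       qam_free_mbps={"qam_1": 2.0, "qam_2": 38.8})
print(broker.negotiate(SessionRequest("stb-001", "vod", 3.75, ["vod_out_1"])))
# {'source': 'vod_out_1', 'qam': 'qam_2', 'mbps': 3.75}
```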
When will we see master SRMs?

Several companies are looking into launching or building master SRMs. It is a clear need for our network evolution, starting with switched broadcast and leading into a truly digital, integrated and routed set of service offerings. Master SRMs likely will launch with on-demand and broadcast bandwidth management, with other services such as high-speed data following later. Analog networks likely will remain relatively static and probably will not fall under a master SRM's control. Cable has evolved tremendously in the past 10 years, and digital broadcast and on-demand services are just the start. Building anything-to-anywhere networks will enable a whole new network infrastructure and service capability.

Yvette Kanouff is corporate vice president of SeaChange International. She also is an at-large director on the SCTE Board of Directors. Email her at [email protected].
