SCTE member since 2001
Title: Principal engineer, Time Warner Cable
Broadband Background: Chuck is a principal engineer in the Time Warner Cable Advanced Technology Group located in Broomfield, CO. He is responsible for VOD and SRM architectures, in addition to design and architecture work in encryption, advanced video/audio encoding and next generation video delivery platforms. Prior to Time Warner Cable, he worked for the Mystro TV Group and Adelphia Communications. He holds a BS in Computer Engineering from the University of Pittsburgh.
Your recent CT article on edge resource management discusses ways to reclaim stranded resources. How big a problem is this? Is it hurting people’s bottom lines, or is it more a matter of maximizing efficiency?
To be clear, we really have three types of edge resources in our network: allocated, unallocated and orphaned. To maximize efficiency, we want most of the resources allocated between SDV and VOD. If something goes wrong or isn’t working correctly, that’s when we end up with orphaned resources in the network. It is critical that the network have mechanisms in place to recover orphaned sessions, since they are unavailable for use until recovered. Stranded (unallocated) resources are not hurting the bottom line; they just aren’t providing the most efficient use of network resources.
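As a rough illustration of the orphan-recovery idea, the audit amounts to comparing the sessions a manager believes are active against the resources actually held at the edge; anything held with no matching session is reclaimed into the unallocated pool. This is a hypothetical sketch, not an actual SRM implementation; all names and data structures are assumptions:

```python
# Hypothetical sketch of an orphan-session audit; the names and data
# structures are illustrative, not a real session/resource manager API.

def audit_edge_resources(active_sessions, edge_allocations):
    """Classify each edge allocation as allocated or orphaned.

    active_sessions: set of session IDs the session manager knows about
    edge_allocations: dict mapping session ID -> QAM resource held
    """
    allocated, orphaned = {}, {}
    for session_id, resource in edge_allocations.items():
        if session_id in active_sessions:
            allocated[session_id] = resource
        else:
            # The edge device holds a resource for a session the manager
            # no longer tracks -- unusable until recovered.
            orphaned[session_id] = resource
    return allocated, orphaned

def reclaim(orphaned, free_pool):
    """Return orphaned resources to the unallocated (stranded) pool."""
    for resource in orphaned.values():
        free_pool.add(resource)
    orphaned.clear()
    return free_pool
```

Run periodically, an audit like this keeps orphaned resources from accumulating: once reclaimed, they simply rejoin the unallocated pool and are available to either SDV or VOD.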
If a system allocates QAM channels based on need, does that mean no service has dedicated channels? Could that potentially be a problem?
Services (channels) can be statically assigned to dedicated QAM channels and programs if the channel is highly watched (such as CNN). In addition, some channels will still be offered to non-switching devices and clients, which means those services must be broadcast at all times and not switched. The system is more agile and efficient when switching services that are niche programs, HD channels or lightly viewed services. In the end, no service will be statically assigned to a single QAM channel; it could move around as needed to optimize the stacking of services in a given QAM carrier.
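To make the idea of “stacking” concrete, here is a minimal first-fit sketch that packs services into QAM carriers by bandwidth rather than pinning each service to a fixed carrier. It is purely illustrative; real SDV resource managers apply far richer policy, and the capacity and bitrate figures are example values only:

```python
# Illustrative first-fit stacking of services onto QAM carriers.
# 38.8 Mbps approximates a 256-QAM carrier payload; example value only.

QAM_CAPACITY_MBPS = 38.8

def stack_services(services, num_carriers):
    """Assign (name, bitrate_mbps) services to carriers by first fit.

    Returns the carriers (each with its used bandwidth and service list)
    plus any services that did not fit anywhere.
    """
    carriers = [{"used": 0.0, "services": []} for _ in range(num_carriers)]
    unplaced = []
    # Placing larger services first reduces fragmentation.
    for name, rate in sorted(services, key=lambda s: -s[1]):
        for carrier in carriers:
            if carrier["used"] + rate <= QAM_CAPACITY_MBPS:
                carrier["used"] += rate
                carrier["services"].append(name)
                break
        else:
            unplaced.append(name)
    return carriers, unplaced
```

With, say, 15-Mbps HD streams and a 3.75-Mbps SD stream, two HD services and the SD service stack onto one carrier while the third HD service lands on the next, and nothing is left over; a service’s carrier assignment falls out of the packing rather than being fixed in advance.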
There’s been talk of a "bandwidth crunch" for cable, both within the industry and outside. What’s your take on that?
I think the cable industry is looking at how to more efficiently use and assign edge resources to its services. SDV systems and edge resource managers provide a more dynamic system for allocating these resources. I think the "crunch" people refer to is going to be solved by more intelligent resource management of our network. New services and products will always push the bandwidth limits of the network (cable, satellite, etc.), but that gives us the opportunity to innovate in managing our resources. Cable is well-positioned to continue to provide more and more bandwidth to our customers. We have come to a point where static configuration of our network resources will be replaced by highly dynamic resource allocation systems. The marriage of dynamic resource allocation with sophisticated policy, business rules and algorithms will allow us to open up more bandwidth in the coming years.
It seems that service "silos" have been quite resistant to efforts to tear them down. Why is that, and what’s the solution?
QAM sharing, or non-siloed resource management, presents new challenges that require new resource management systems in our network. Adding a centralized resource manager (such as a GSRM or ERM) provides the system-level view that helps minimize the silos. I think the harmonization of industry interfaces for edge QAM modulators and other components will help ease the move away from the siloed approach. In the past, most QAM interfaces were proprietary or table-based, which made it difficult to manage edge resources. CableLabs has just started an effort to standardize a number of video QAM interfaces. That standardization will allow all MSOs to use the same QAMs but architect their systems (in terms of resource and session managers) into a configuration that suits their needs and requirements.
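The non-siloed model can be sketched as one manager arbitrating a single shared pool across services, rather than each service owning its own QAMs. This is a hypothetical sketch of the concept, not the GSRM/ERM protocol itself; the class and method names are assumptions:

```python
# Hypothetical sketch of a centralized edge resource manager serving
# both VOD and SDV from one shared pool (not an actual ERM interface).

class EdgeResourceManager:
    def __init__(self, qam_programs):
        self.free = list(qam_programs)   # one shared, non-siloed pool
        self.in_use = {}                 # session_id -> (service, program)

    def allocate(self, session_id, service):
        """Grant any free QAM program, regardless of service type."""
        if not self.free:
            return None                  # pool exhausted; policy decides what yields
        program = self.free.pop()
        self.in_use[session_id] = (service, program)
        return program

    def release(self, session_id):
        """Return the program to the shared pool for any service to reuse."""
        _, program = self.in_use.pop(session_id)
        self.free.append(program)

erm = EdgeResourceManager(["qam1/p1", "qam1/p2"])
p = erm.allocate("vod-42", "VOD")       # a VOD session takes a program...
erm.release("vod-42")
q = erm.allocate("sdv-7", "SDV")        # ...and an SDV session reuses the same one
```

The point of the sketch is the last two lines: because the pool is shared, capacity freed by one service is immediately available to another, which is exactly what per-service silos prevent.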
What causes silos in the first place? Is there a way to prevent them from forming from the outset?
In the past, resource management was a function of the headend controller, the VOD system or the SDV system, so resource silos were created as a byproduct of how those products were launched and deployed. Since the products were launched stand-alone or over time, silos were the quickest way to get a product into the field with minimal interdependency with other components. We have now become more sophisticated and have evolved the architectures (such as ISA and NGOD) to support a single resource manager in the network. We may not completely remove silos, but we can minimize them to a couple of QAM carriers for known usage patterns and then have a large shared space. Over time, we will look at how to optimize the resources assigned to each service and product based on usage and policy.