How can an intelligent asset management system designed for hybrid on-demand delivery architectures prepare you for XOD?
When preparing an infrastructure to support tomorrow’s everything-on-demand (XOD) world, it’s essential that cable operators consider the requirements for an intelligent asset management system (IAMS) that can operate within a hybrid on-demand delivery architecture.
You can categorize the video-on-demand/interactive TV (VOD/ITV) architectures deployed thus far into three basic types: distributed, centralized and hybrid. The "best" architectural choice for a cable operator is a function of many variables, including the number and location of hubs; the number of homes passed per hub and expected utilization; available power, cooling and rack space within the hubs and headend(s); available fiber in the ground between the headend and hub sites; and existing transport solutions. Many MSOs have chosen the distributed approach. Because of its limited scalability, they will need to migrate to hybrid architectures to support increased content volumes in the near future.

Near-term issues

Despite the seemingly modest total storage requirements shown in the accompanying table, this amount easily eclipses the storage capacity of many video servers (e.g., 800 hours) in the embedded base installed one or more years ago within a distributed architecture. Augmenting capacity across all servers is generally a very costly endeavor. Increasing overall storage in a single server could require replacing all of its disks because of potential interoperability problems among heterogeneous disk types (i.e., disks with differing capacities). Moreover, this would only be a short-term fix. Content volumes will inevitably increase, and the amount of content requiring propagation will grow beyond what the delivery network or server propagation output can support. Migrating from an incumbent distributed architecture to a hybrid architecture will be necessary. At least in the near term, the incremental revenue gained from other services per unit of bandwidth relinquished from on-demand streaming capacity is likely greater than what the on-demand services themselves would have garnered.
There will be some level of bandwidth at which it will make business sense to invest in storage and streaming capacity at one or more hub sites.

Basic IAMS terms

Asset: Within the context of this article, an asset is anything that consumes system resources and can be requested by and delivered to a customer in real time. Essentially, an asset can be thought of as any form of stored digital media.

System resource: A system resource is any consumable resource related to the content distribution infrastructure on a two-way cable plant, including in-band/out-of-band bandwidth, transport bandwidth, available edge bandwidth and storage capacity.

An IAMS must be able to:

* Intelligently propagate assets to ensure timely availability at required locations within the cable plant (e.g., hubs), while simultaneously minimizing the consumption of available system resources such as transport bandwidth or hub storage.
* Respond to stochastic usage behavior by dynamically balancing the content load across the headend and subtending hubs, based upon an MSO-defined cost function, to maintain a defined level of system resource use (e.g., transport bandwidth, hub storage, etc.).
* Employ historical usage patterns to refine its efficacy in asset propagation and dynamic load balancing.
* Provide MSO-defined reports indicating use of system resources, efficiency and effectiveness in propagating and load balancing content, etc.

Headend storage capacity: This represents the aggregate storage capacity at the headend to support all content, both library and popular, and is essentially the same across all of the architectural approaches. It depends on the number of services offered, the active hours of content for each service, the refresh rate for each service, the average lead time before the start of the availability window for each service, and the storage architecture supported by the video servers. Some server vendors may require additional storage capacity to support the required minimum storage across all headend streams.

Hub storage: The hub storage capacity represents the local content storage that must be available to a given hub to house "popular" content. We define that as the number of hours/titles that account for ~80 percent of the total usage volume at a given hub during a given time period. (The time period is an MSO-defined parameter determined by statistically similar request volumes and content type selection. Based upon samples of empirical data we have seen, this is generally a four-hour block.) This parameter will vary based upon the chosen implementation architecture. If the hybrid architecture is migrated to from a distributed architecture, hundreds of hours of content storage already exist that can easily be exploited.

Headend streaming capacity: This refers to the aggregate peak bandwidth required to support the streaming of content from the headend to all hub sites. This parameter is highly dependent upon the implementation architecture of the IAMS.
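To illustrate the "hub storage" definition above, the sketch below estimates how many top-ranked titles account for ~80 percent of request volume, assuming (purely as an illustration; the article does not specify a distribution) that title popularity follows a Zipf-like curve. The catalog size and exponent are hypothetical.

```python
# Sketch: how many top titles cover a target share of requests,
# under an ASSUMED Zipf-like popularity distribution.

def titles_for_coverage(catalog_size, target=0.80, zipf_s=1.0):
    """Return the number of top-ranked titles whose combined
    request share first reaches `target`."""
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, catalog_size + 1)]
    total = sum(weights)
    cumulative = 0.0
    for n, w in enumerate(weights, start=1):
        cumulative += w / total
        if cumulative >= target:
            return n
    return catalog_size

# With a hypothetical 1,000-title library, a minority of titles
# accounts for 80 percent of requests under this assumption.
print(titles_for_coverage(1000))
```

The steeper the popularity curve, the fewer hours of local hub storage are needed to serve the ~80 percent target locally.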
Propagation bandwidth: The propagation bandwidth refers to the aggregate bandwidth required from the headend to all hubs to support both the initial propagation of content and its dynamic movement in response to usage patterns. This parameter is very difficult to determine without simulation. (At the time of this writing, we are initiating the simulation of dynamic propagation algorithms for the architectural approaches mentioned earlier.) As expected, the propagation bandwidth is highly dependent upon the architectural approach.

Predictive propagation

This implementation propagates content throughout the network using content metadata and content usage patterns to determine exactly how and where the content gets propagated. The functional requirements for a predictive propagation architecture consist of:

* Prioritized "load" of content to the VOD server: content is loaded directly to the VOD server disk array (third-party or resident drives) from content aggregation devices (e.g., content catchers).
* Prioritized loading based upon business rules (e.g., minimum time to availability window, specific content type [SHBO always given load priority], etc.).
* Prediction of popularity via statistical algorithms, based on the existence of meaningful metadata.
* Dynamic distribution of assets across the network based on predictions and usage.
* Determination of content placement via a statistical algorithm that is continuously updated to reflect actual usage history.
* Scheduling based on available hub storage and bandwidth.
* Garbage collection at hubs.

The algorithms must take into account the day of week, time of day, content bandwidth requirements, expected take rates and the decay patterns of content types. The ability to predict popularity and placement of assets is highly dependent upon the existence of accurate and consistent metadata. Popular content is distributed, as determined by the algorithm, onto local XOD server storage systems.
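The prioritized-loading requirement above can be sketched as a simple priority queue. The rule weights and asset fields here are illustrative assumptions, not a vendor interface: premium content always loads first, and ties break on the least time remaining before the availability window.

```python
import heapq
from dataclasses import dataclass, field

# Sketch of business-rule-driven load prioritization. The specific
# rules (premium-first, then soonest availability window) are
# illustrative assumptions.

@dataclass(order=True)
class LoadJob:
    priority: tuple
    title: str = field(compare=False)

def make_job(title, hours_to_window, premium=False):
    # Premium content always wins; within a tier, sooner
    # availability windows load first.
    return LoadJob(priority=(0 if premium else 1, hours_to_window), title=title)

queue = []
heapq.heappush(queue, make_job("LibraryDoc", hours_to_window=72))
heapq.heappush(queue, make_job("NewRelease", hours_to_window=6))
heapq.heappush(queue, make_job("PremiumMovie", hours_to_window=48, premium=True))

order = [heapq.heappop(queue).title for _ in range(len(queue))]
print(order)  # ['PremiumMovie', 'NewRelease', 'LibraryDoc']
```

A real IAMS would fold in more rules (content type, bandwidth cost, hub storage headroom), but the queue structure is the same.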
The XOD server software distributes the "handed-off" content across the disk arrays as deemed appropriate based upon expected usage criteria. Content propagation is scheduled based upon network resource information (available link bandwidth, storage capacity at the hub, business rules, etc.) to ensure that storage at the hubs is minimized (i.e., lead time before the availability window) while streams generated from hub storage are maximized (i.e., local VOD servers handle ~80 percent of total demand at a given hub site). A statistical threshold algorithm determines when, and from where (say, the headend or a hub), to propagate content to a specific hub site, and when content is to be deleted because of diminished popularity.

Usage patterns indicate that approximately 20 percent of the total request volume for non-menu-related assets can be attributed to library-type content. So, if the system is designed for approximately 10 percent simultaneous peak usage, then approximately 20 percent of that amount (or 2 percent simultaneous peak usage) should be allocated for headend-to-hub streaming. Propagation bandwidth depends heavily upon the load balancing algorithm, the popularity prediction algorithm, the number of hours of content that exceeds the thresholds for load balancing and popularity, and the worst case for the amount of content exceeding those thresholds.

Figure 1 illustrates the functional components within the predictive propagation architecture. Content enters the system at the headend (left side of Figure 1) and starts its distribution through the FastE/GigE switch. It is distributed through the headend system by the IAMS. The initial propagation of content is to the library VOD server. The IAMS determines the popularity of the content and predicts where in the system the content should be propagated. Initially, all new movie releases will be propagated to all hubs. The timing of this propagation will depend upon network bandwidth consumption and rates.
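The headend-to-hub streaming allocation described above is simple arithmetic: library content draws ~20 percent of requests, so at a 10 percent simultaneous-peak design point, roughly 2 percent of subscribers stream from the headend at peak. The hub size and per-stream rate below are illustrative assumptions (the article gives only the percentages).

```python
# Sizing sketch for headend-to-hub streaming under the 10% peak /
# 20% library-share figures from the text. Homes passed and the
# per-stream bit rate are ASSUMED values for illustration.

homes_passed = 50_000   # hypothetical hub size
peak_usage = 0.10       # design point: 10% simultaneous peak
library_share = 0.20    # ~20% of requests hit library content
stream_mbps = 3.75      # assumed MPEG-2 SD stream rate

headend_streams = homes_passed * peak_usage * library_share
transport_gbps = headend_streams * stream_mbps / 1000

print(headend_streams)  # ~1000 simultaneous headend streams
print(transport_gbps)   # ~3.75 Gbps of transport at peak
```

The same two percentages scale linearly with homes passed, which is why the 2 percent figure is a useful rule of thumb for transport dimensioning.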
All content is propagated using the IAMS algorithms, which are based upon meaningful content metadata, including release dates, title type, genre and even descriptions. The IAMS algorithms utilize all available metadata to determine the likelihood of a specific demographic requesting the title, whether that title is a movie, music, game or interactive application. When hub propagation occurs, data enters the hub and is directed to the storage array through the content manager at the hub. The IAMS determines whether this data should supplant other data at the hub. Supplanted content remains available from the library server if needed. The predictive algorithm is the brains behind the architecture; without it, the architecture is ineffective.

Benefits and challenges

The predictive propagation architecture exploits the existing distributed infrastructure, minimizes the use of headend-to-hub transport bandwidth, frees bandwidth for various other services, and places content based upon usage statistics. The challenges include designing the dynamic load balancing and popularity prediction algorithms and dimensioning the propagation bandwidth. It also requires a standardized content-management interface to VOD vendors, and support of third-party disk arrays by VOD vendors.

Simultaneous propagation and streaming

On-demand simultaneous propagation and streaming is a different approach to propagating content that does not depend on usage patterns or content metadata. It is essentially a store-and-forward method of edge server caching that stores content to disk and caches it for streaming at the same time, then removes that edge content based on storage availability. The basic functional requirements for the on-demand method (the caching method) are much less demanding than those for predictive propagation. Content is loaded directly to the library disk array, the same as in predictive propagation.
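The metadata-driven likelihood estimate described above can be sketched as a weighted score. Everything here is invented for illustration; a production IAMS would fit these weights from actual usage history rather than hand-pick them.

```python
# Toy sketch of metadata-driven popularity scoring. The fields,
# weights and decay window are ASSUMPTIONS for illustration only.

GENRE_AFFINITY = {"action": 0.9, "documentary": 0.4, "kids": 0.7}

def popularity_score(days_since_release, genre, is_new_release):
    # Recency decays linearly over a hypothetical 90-day window.
    recency = max(0.0, 1.0 - days_since_release / 90.0)
    affinity = GENRE_AFFINITY.get(genre, 0.5)  # default for unknown genres
    boost = 0.2 if is_new_release else 0.0     # new-release bump
    return recency * affinity + boost

# A week-old action new release should outrank a two-month-old
# documentary under these assumed weights.
print(popularity_score(7, "action", True) > popularity_score(60, "documentary", False))
```

The point is structural: with consistent metadata, the score can be computed for every asset at every hub, which is what makes automated placement decisions possible.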
Requests for content come from subscribers, causing the content to be streamed from the headend (for the initial request) and cached by the hub servers. Note that this requires the hub XOD server to support a store-and-forward technique. Content is continuously streamed and propagated at stream rates throughout the network. Only cached versions of content are streamed from the hubs; when content is already cached at the hub, the headend server does not need to stream it. The least recently used (LRU) copy of content is bumped from the hub whenever there is contention for hub resources. (See Figure 2.)

On-demand content enters the system and flows through a Fast Ethernet or Gigabit Ethernet switch into the various network elements. The XOD server manager component manages the distribution of content through the storage-switching array. This software management piece most likely will be a part of a video server. The algorithms that decide which content gets placed directly onto the network, versus that which is only stored centrally, reside in the IAMS server component. Additional hardware or software is required only if the content load capacities of the video servers are inversely proportional to the stream load. This component minimizes headend-to-hub bandwidth via prediction. Content propagation terminates at the edge, where on-demand, store-and-forward propagated content ultimately gets delivered via XOD servers to consumers.

With the caching method, there are no complex asset management algorithms because content is distributed as it is used. No separate propagation network or reservation of bandwidth is required, and there is no dependence on content metadata. The method also has the potential to reduce streaming bandwidth and the required stream capacity at the headend. There are also challenges. The "trick mode" generation for fast-forward can be complicated for the first copy of a given title.
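The hub-side caching behavior described above (cache on first request, evict the least recently used title under contention) can be sketched minimally as follows. Measuring capacity in whole titles is a simplification of per-title storage hours.

```python
from collections import OrderedDict

# Minimal sketch of hub-side LRU caching for the "caching method".
# Capacity in titles (rather than storage hours) is a simplification.

class HubCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._titles = OrderedDict()  # insertion order tracks recency

    def request(self, title):
        """True if served from hub storage; False if the request
        must stream from the headend (and be cached on the way)."""
        if title in self._titles:
            self._titles.move_to_end(title)  # mark as most recently used
            return True
        if len(self._titles) >= self.capacity:
            self._titles.popitem(last=False)  # bump the LRU title
        self._titles[title] = True
        return False

hub = HubCache(capacity=2)
print(hub.request("A"))  # False: first request streams from the headend
print(hub.request("A"))  # True: now served locally
hub.request("B")
hub.request("C")         # cache full, so "A" is bumped
print(hub.request("A"))  # False: "A" must stream from the headend again
```

Note there is no metadata or prediction anywhere in this sketch, which is exactly the simplicity (and the repeated-first-request cost) of the caching method.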
Because of simultaneous streaming, the ability to fast-forward will need definition, and servers are required to support the simultaneous store-and-stream capability. The simultaneous peak number of unique requests for content across all hubs drives the required bandwidth; if this peak is large, subscribers could be denied service. Properly sizing the streaming bandwidth depends upon statistical analysis and simulation of the sum of the peak number of unique requests in a given hub.

Partial predictive propagation

On-demand simultaneous propagation and streaming with partial predictive propagation combines the predictive method with the caching method to allow for intelligent initial propagation to the edge, smart garbage collection at the edge, and bandwidth optimization throughout the network, avoiding the constant-streaming issues associated with the caching method. The functional requirements for this architecture are essentially those of the caching method, with the addition of intelligent propagation from the predictive method.

Longer-term architecture

Figure 3 shows the functional components within a longer-term on-demand architecture. The IAMS decides what content is propagated and streamed, and when, for the various VOD servers. The VOD vendor is required to expose its capabilities (storage used per content item, bandwidth remaining, etc.) so that the IAMS can properly distribute content across multiple vendor systems. The VOD system management software can decide where and how the content is stored in the storage array. The IAMS works with a global system resource manager (i.e., a bandwidth broker) to define the bandwidth availability parameters in both the headend and hubs for delivery of assets, and provides session-based QoS (i.e., discrimination by session type served) based upon business rules. It is impossible to state definitively which architectural scenarios will be the best fit for the various services.
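The "statistical analysis and simulation" of peak unique requests mentioned above can be sketched as a small Monte Carlo experiment: draw per-hub peak-interval requests from an assumed popularity distribution and count the unique titles, since each unique title needs one headend stream on its first request. All parameters (catalog size, request volume, Zipf exponent) are illustrative assumptions.

```python
import random

# Monte Carlo sketch for dimensioning headend streaming under the
# caching method. Every parameter here is an ASSUMED value.

random.seed(0)  # reproducible illustration

def peak_unique_titles(requests_per_hub, catalog, zipf_s=1.2):
    """Unique titles requested in one peak interval at one hub,
    drawn from an assumed Zipf-like popularity distribution."""
    weights = [1.0 / (r ** zipf_s) for r in range(1, catalog + 1)]
    picks = random.choices(range(catalog), weights=weights, k=requests_per_hub)
    return len(set(picks))

# Average over many simulated peak intervals for one hub.
trials = [peak_unique_titles(500, catalog=2000) for _ in range(200)]
avg_unique = sum(trials) / len(trials)
print(round(avg_unique))  # average unique titles (headend streams) per peak
```

Summing this figure across all hubs, with a safety margin for the worst observed interval, gives a first-order estimate of the headend streaming bandwidth the caching method demands.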
However, we believe that the basic functional building blocks for predictive propagation and on-demand simultaneous propagation will continue to be pertinent. We encourage developers and MSOs to contact us to discuss the various implementations of the algorithms. We are constantly seeking new and effective ways to increase our bottom line.

Nishith Sinha and Steve Calzone are ITV systems engineers at Cox Communications. Email them at firstname.lastname@example.org and email@example.com.

Content Is King: Provided You Know Where It Is

When preparing your system for tomorrow's everything-on-demand (XOD) world, be sure to examine an intelligent asset management system (IAMS) architecture and algorithms. An IAMS can support your current embedded base of users and exploit the natural usage patterns evident in each network's class of subscribers. Furthermore, by utilizing usage data and ensuring consistent metadata, the marketing and promotional capabilities inherent in a cable network become much more powerful. Cable operators can tailor the system to provide feedback for marketing and promotional materials across the entire network in an automated fashion. Simply by knowing what content is already propagated to the edge and popular, your marketers can better promote it to grow sales.