Switched digital video (SDV) is a hot topic these days, with many North American cable operators either deploying the technology or getting ready to do so in the next few months.
Less extensively explored is the question of how to gain overall visibility into the many moving components of an implemented SDV delivery platform. Understanding these parts and the network-related demarcation points is the first step toward devising the right troubleshooting methodology.

Switched review

SDV maximizes the HFC platform while changing the way operators deliver digital video to customers. By decoupling fixed assignments of bandwidth from (typically long-tail) content, SDV allows operators to transmit only the content subscribers are actually requesting, in real time.
The efficiencies of migrating to this architecture are well-known. SDV enables service providers to reclaim spectrum, using 50 to 75 percent less bandwidth compared with traditional broadcast models, and it provides an efficient mechanism for migration to MPEG-4 and advanced set-top boxes.
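As a quick back-of-envelope illustration of that savings range, the reclaimed spectrum can be computed directly. All figures below are hypothetical, not from any particular deployment:

```python
# Hypothetical arithmetic: QAM channels freed when a switched lineup
# needs only 25-50 percent of the spectrum a full broadcast lineup
# would occupy (i.e., the 50-75 percent savings cited above).

def reclaimed_qams(broadcast_qams: int, savings_fraction: float) -> int:
    """QAM channels freed at a given fractional bandwidth savings."""
    return int(broadcast_qams * savings_fraction)

# Assume 40 QAM channels of switched-eligible programming:
print(reclaimed_qams(40, 0.50))  # 20 channels freed at 50% savings
print(reclaimed_qams(40, 0.75))  # 30 channels freed at 75% savings
```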
Another benefit is that SDV facilitates the creation of new services. It boosts the total number of channels available, enabling market-specific niche (including high-definition) programming; increases the number of concurrent video on demand (VOD) sessions that can be served to various nodes; provides methods for advanced targeted ad-insertion solutions; and frees up bandwidth for data delivery.

Implementation

Operators deploying SDV face RF access network and long-haul transport issues, as well as the intricacy of the required signaling between many different components that must operate in unison.
Among the most common SDV implementation requirements are the following:
• Converting video streams from multi-program transport streams (MPTSs) to single program transport streams (SPTSs)
• Converting from variable bit rate (VBR) to constant bit rate (CBR)
• Dramatically increasing the number of multicast flows
• Increasing the required network computing resources of routers, switches, edge quadrature amplitude modulation (QAM) modulators and other components
• Reducing the relative throughput of those devices while increasing the chances of queuing overflows or underflows
• Increasing network communication overhead for both the video delivery (data plane) and control plane networks (control and signaling)
• Associating and switching video flows to the home to enable seamless channel changes
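The growth in multicast flows can be sketched with simple arithmetic. A broadcast MPTS packs many programs into one flow, while the SPTS model gives every program its own multicast flow; the program and mux counts below are invented for illustration:

```python
# Illustrative only: how moving from MPTS to per-program SPTS
# multiplies the number of multicast flows the network must carry.

def mpts_flow_count(programs: int, programs_per_mux: int) -> int:
    """Broadcast model: one flow per multi-program mux."""
    return -(-programs // programs_per_mux)   # ceiling division

def spts_flow_count(programs: int) -> int:
    """SDV model: one multicast flow per program."""
    return programs

programs = 400                         # hypothetical switched lineup
print(mpts_flow_count(programs, 10))   # 40 flows in the broadcast model
print(spts_flow_count(programs))       # 400 flows in the SPTS model
```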
Here is an example (see Figure 1), based on one type of implementation, of the complexity of the signaling process required for a seamless channel change. The preferred window is only 350 ms, with total channel-change time needing to stay within 500 ms (since the set-top box injects a certain delay before it can start decoding the received video). Other factors also come into play that directly influence troubleshooting. Architectures, for instance, may vary from vendor to vendor and operator to operator. Maturity levels of technology, products and integration can also differ.
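That timing constraint can be made concrete with a small budget check. The step names and per-step latencies below are invented placeholders for whatever a given implementation actually measures; only the 350 ms window comes from the discussion above:

```python
# Hypothetical channel-change latency budget check against the 350 ms
# network-side window (the total must stay within 500 ms once set-top
# decode delay is added). Step latencies are invented, not measured.

BUDGET_MS = 350

steps_ms = {
    "set_top_request_to_sdv_server": 40,
    "bandwidth_grant_via_erm_srm": 60,
    "session_setup_on_edge_qam": 90,
    "igmp_join_to_first_video_packet": 80,
    "tune_and_table_acquisition": 60,
}

total = sum(steps_ms.values())
for name, ms in steps_ms.items():
    print(f"{name:32s} {ms:4d} ms")
print(f"total {total} ms -> {'OK' if total <= BUDGET_MS else 'OVER BUDGET'}")
```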
Other questions surround which programs to switch: how many QAM channels to allocate per service group; how much bandwidth will be saved; and how to handle blocking factors, actual monitoring and troubleshooting tools and the customer who calls complaining about channel changing.
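The blocking-factor question lends itself to classical traffic engineering. Treating SDV program slots on a service group's QAM channels as trunks, the Erlang B formula estimates the probability that a channel-change request finds no slot free; the slot and load figures below are assumptions for illustration only:

```python
# Erlang B blocking probability, applied (illustratively) to SDV QAM
# over-subscription: "servers" are program slots across a service
# group's SDV QAM channels, offered load is in Erlangs.

def erlang_b(servers: int, offered_erlangs: float) -> float:
    """Iterative Erlang B recursion (avoids large factorials)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# Hypothetical: 4 SDV QAMs x 10 SD programs each = 40 slots.
for load in (25.0, 30.0, 35.0):
    print(f"{load:5.1f} Erlangs -> blocking {erlang_b(40, load):.4f}")
```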
Getting a handle on SDV can be tricky. One effective approach is to segment the different logical and physical areas of the network, analyze applications, scrutinize the service delivery model, and then define various demarcation points. These points give access and visibility into the health of SDV service delivery.

Demarcation points

There are many physical and logical network segments that must be scrutinized for reliable SDV service delivery. Since different groups and units within a service provider’s organization often handle issues based on where they originate or where they are physically located, each of these segments must be fully understood.
Problem determination represents the first stage of service assurance. An operator must first determine whether a problem actually exists before investing time and resources in tracking it down and resolving it. Because video quality is subjective in nature, this determination is all the more important: "good" video for one person might not be good for another.
For example, a customer may complain that channel changing is not working when, in fact, he or she is not subscribed to the requested program. To make an adequate determination, the operator must have visibility into the various network areas and be able to measure performance across a wide variety of applications and services.
Headend: This is typically the interconnection between the integrated receiver/decoder (IRD), groomers, encoders and the network ingestion point. SDV implementations can impose specific headend requirements for "clamping" and bulk encryption, and these devices can become a point of failure and contention (shown as point 1 in Figure 2). Operators are free to centralize or distribute the SDV servers and the edge resource manager/session resource manager (ERM/SRM) servers. Figure 3 shows one model for logical connectivity.

Core transport: This demarcation point represents the Internet protocol (IP) network transport of video from the headends to the regional area nodes. Such transport can be performed across a variety of technologies, including 10 Gbps links, synchronous optical network (SONET) rings, dense wavelength division multiplexing (DWDM), etc. (shown as point 2 in Figure 3).

Regions and nodes: The third demarcation point represents the point at which video delivery transfers to the RF world. As a rule of thumb, most optical nodes serve about 500 households (or 500 tuners), although this can vary anywhere from 250 to 1,500. Just like VOD, SDV puts collections of users in a service group. A regional area would also encompass the technology required to perform regional and localized ad insertion, grooming, and, if necessary, simulcast services.
RF access network: The fourth demarcation point covers the RF portion of the SDV video delivery platform. Many factors can affect quality of service (QoS) and quality of experience (QoE):
• Mismatched TSID-to-service-group configuration
• RF interference on specific frequency/QAM channels
• Reed-Solomon error correction efficiency
• Over-subscription/blocking of SDV QAM channels
• Mini-carousel distribution and configuration
• Grooming/muxing issues at the edge QAM device
Control plane signaling: The fifth demarcation point reflects the now-complex nature of the SDV delivery platform’s control plane. Many required components, from potentially many different vendors, must all operate in unison:
SDV server: The quarterback of the SDV network, it tracks all of the channel-change requests and viewing patterns, helps broker bandwidth assignments to requested channels, and generates the SDV mini data-carousel.
ERM/SRM server: Controls and arbitrates capacity allocation among various applications (such as VOD and SDV channels) and the edge QAM devices. The ERM/SRM keeps a stateful view of the spectrum allocation.
Edge QAM devices: Special session-based QAM modulators that receive ingress video traffic over IP (using IGMPv2/v3, etc.) and then route and switch it onto the RF/QAM network by interfacing with the SRM/ERM.
SDV client: This is special software residing on the set-top box that interfaces with the SDV server and the SDV network.
While each of these signaling components might provide excellent management tools, interfaces, alarms and logs, what happens when they provide conflicting views? Or worse, when they report no problems, yet customers are complaining? At that point, operators can entertain finger-pointing or benefit instead from "on the wire" visibility.

Quality of experience

QoE is a subjective measurement of the perceived value of the overall service and customer experience. It is closely tied to QoS, whose role is to objectively measure the service delivery by the provider itself, but differs in that QoE measures from the subscriber’s point of view. QoE measurements are often expressed as a mean opinion score (MOS), a numerical value typically ranging from 1 (worst) to 5 (best).
In the broadcast IP video world, MOS-type measurements are yet to be fully understood and agreed upon, as opposed to the voice over IP (VoIP) domain, where the ITU has produced recommendations for its use.
The computing requirements to perform real-time MOS measurements for hundreds of simultaneous broadcast video streams (for both SD and HD) are nothing short of tremendous, and the costs involved are still prohibitive. This isn’t even taking into consideration the possible digital rights management (DRM) complications and requirements.
To understand how QoS video delivery relates to QoE, see Figure 3.

Headend

Beyond ensuring the integrity of video coming in from satellites, IRDs, encoders, etc., we now have to inspect the same video stream at multiple points to better isolate video issues, monitoring and verifying content quality before it enters the IP network. Sample quality monitoring points should include the following:
• Prior to IP encapsulation and clamping/grooming/conversion to SPTS (MPEG)
• After clamping and IP encapsulation and prior to bulk encryption (MPEG and IP)
• After bulk encryption and before ingress to backbone transport (IP)
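One fundamental MPEG-layer check a probe can run at each of these points is per-PID continuity-counter tracking. The sketch below is simplified (duplicate packets and the discontinuity indicator are not handled) and operates on raw 188-byte transport stream packets:

```python
# Minimal sketch of per-PID MPEG continuity-counter checking on raw
# 188-byte transport stream packets. Simplified: duplicate packets
# and the discontinuity indicator are ignored.

from collections import defaultdict

def count_cc_errors(ts_bytes: bytes) -> dict:
    """Return {pid: discontinuity_count} for a buffer of TS packets."""
    last_cc = {}
    errors = defaultdict(int)
    for i in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[i:i + 188]
        if pkt[0] != 0x47:                     # not a valid sync byte
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if (pkt[3] >> 4) & 0x3 in (0, 2):      # no payload: CC frozen
            continue
        cc = pkt[3] & 0x0F                     # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors[pid] += 1
        last_cc[pid] = cc
    return dict(errors)
```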
A good approach is also to mark, or embed, valuable content meta-data in the MPEG stream so that the content can be quickly identified and validated as it is distributed and replicated across many regions. Simply associating an IP multicast address with a channel/program name no longer suffices, because there are many points in the network where this association can be manipulated (PIM-SSM, multicast translation tables, edge QAM modulators, etc.).
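Once content is marked, the validation itself can be as simple as a lookup against provisioning data. In this sketch, the channel names, the multicast addresses, and the mechanism by which the service name is recovered from the stream are all hypothetical:

```python
# Hypothetical probe-side check: does the service name recovered from
# the stream's embedded metadata match the channel provisioned for
# this multicast address? All names and addresses are invented.

PROVISIONED = {
    "239.16.1.10": "Disney Channel",
    "239.16.1.11": "HBO",
}

def content_matches(mcast_addr: str, recovered_name: str) -> bool:
    """True if the flow carries the channel provisioning expects."""
    return PROVISIONED.get(mcast_addr) == recovered_name

print(content_matches("239.16.1.10", "Disney Channel"))  # True
print(content_matches("239.16.1.10", "HBO"))             # False: misrouted
```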
Different methods exist for this, but one fairly common approach is the MPEG service description table (SDT) tag. IP video probes can quickly validate the content by analyzing this value and comparing it to the expected channel association, thereby preventing little Johnny from accidentally receiving HBO instead of the Disney Channel. This marking would normally happen in the headend, inserted by the IP video encoders.

Core transport

Typically, less than 5 percent of customer issues can be attributed to problems in this part of the network. While this is a fairly small percentage, it also represents the biggest single threat: a major problem here will generate hundreds to several thousands of calls to the service provider’s tier 1 support help desk in a very short time span.
The important thing to remember is that only two things can affect QoE in this part of the network: packet loss and severe jitter (because of its potential impact on the edge QAM devices). Loss can be tricky to monitor for user datagram protocol (UDP) traffic (unless it is RTP-encapsulated), so you must rely on analyzing the MPEG continuity counters, on a per-PID level, for every single IP video flow. Your backbone network equipment must be able to handle hundreds if not thousands of SPTS flows simultaneously, and your IP video probes must perform line-rate IP and MPEG analysis, not just sampling. All of this data must be analyzed and correlated, from an end-to-end perspective, using a single unified video management platform.

Regions and nodes

When verifying delivery of video to a customer’s specific area, many things can affect video quality, including issues at the ingress of the national backbone, bad grooming/muxing of channels, ad-insertion problems, aggregation of multicast video and edge QAM modulator faults. While typical data/Internet traffic might be resilient to intermittent issues and packet loss, sensitive traffic such as video and VoIP is not. Metrics and measurements can include the following:
• IP and MPEG packet loss defined as media loss rate (MLR)
• Ad-insertion problems (content quality, splicing, signaling, etc.)
• IGMP response time (from edge QAM modulator to the IP network)
• IP router/switch performance issues (buffer overflows, jitter)
• High levels of jitter in video traffic (MDI-DF measurements)

RF access network

A good portion of customer-reported issues originates in this fourth demarcation point. It is important for the operator to analyze the subscriber’s problem description thoughtfully to determine whether an issue really exists or whether the customer needs to be educated. Most of the issues found at this demarcation point apply both to traditional broadcast and to SDV, which can help in isolating issues to the RF plant vs. the SDV control plane. Measurements at this demarcation point can include:
• RF interference—carrier-to-noise ratio (CNR), modulation error ratio/bit error rate (MER/BER)
• Stat-muxing efficiency (bit starving caused by over-aggressive muxing of channels)
• TSID to service group mapping (for VOD and SDV)
• Efficiency of Reed-Solomon error correction
• The availability and accuracy of the SDV mini-carousel, whether muxed in-band or provided out-of-band through a quadrature phase shift keying (QPSK) channel

Control plane signaling

Demarcation point five is defined as the orchestration of all the following moving parts:
• IGMP v2/v3 (edge QAM modulator to network)
• RPC/RTSP (ERM/SRM to/from edge QAM modulator)
• Set-top channel change protocol (based on DSM-CC CCP)
• RTSP/SSP-SIS (SDV server to/from ERM/SRM)
• Generation and distribution of the mini-carousel by the SDV server
• Set-top SDV client issues

Conclusion

As the technology matures and operators make selections on vendors and implementations, there will be a need to continuously adapt monitoring and troubleshooting solutions and methodology.
Every group within a cable company, from operations to engineering to top-tier and customer support to the technician in the home, needs access to the same data points and the same service delivery views. The notion of multiple technicians, engineers and service reps all troubleshooting the same headend bulk-encryption issue in 50 different service areas without total visibility needs to become a thing of the past. All of this technology needs to be monitored, analyzed and correlated on a per-SDV-service-group basis.
The possible combinations of implementation (vendors, protocols, architecture, etc.) are endless, with many SDV vendors trying to differentiate themselves with unique value-adds. A service assurance solution must be able to quickly adapt and cover all aspects of video delivery.
It’s no longer a matter of simply supporting TR 101 290, the European Telecommunications Standards Institute (ETSI) measurement guidelines for digital video broadcast (DVB) systems. It’s about having end-to-end visibility, and about taking responsibility for those customers who call because they can’t see their favorite shows.
Gino Dion is vice president, business development, for IneoQuest Technologies.