The "VOD: Care and Feeding" article in CT's December 2008 issue (available online at http://www.cable360.net/ct/operations/bestpractices/32737.html) generated a lot of interest and follow-up questions. Several of the most frequent will be answered here.
But first, let’s review a couple of concepts around operational improvement efforts and the analytical reporting that is necessary to correctly focus them.
First, anything you are doing, whether reporting, analysis, process improvement, performance management, or another operational improvement tactic, must be results focused, not activity focused. Too often, we get lost in producing the report du jour or documenting a process that either no one will follow or does not drive the expected improvements. Encourage your team to constantly question "why" they are doing an activity, and validate its efficacy in driving continuous improvement.
Second, remember these "four A’s": performance data must be available, accurate, analyzed and actionable.
The philosophy and tactics described in the December article and in the Q&A that follows reflect these premises. By definition, they are generic; your specific systems, network, and analytical reports will be different, and weightings and/or measures will need to be calibrated accordingly. Many of the questions pertain to the example VOD performance scorecard (Figure 2 in the December article), so you may want to have that handy while reading.
So, on to the questions.
Can you walk me through an availability calculation for an example system and what components are measured – ingest (catchers), backoffice, pumps, HFC network, etc.?
In the telecom industry, availability of a discrete product is typically defined in terms of only those core subsystems necessary to ensure the product is "working" at the customer site. In the case of VOD, that includes all the components necessary for a customer to order and receive a VOD stream, but not components where a temporary problem would be invisible to the customer (such as a catcher problem). It also excludes portions of the network used for aggregate distribution, such as the HFC network. Think of it this way: If an outage would have affected customers independent of VOD (such as a blown fuse in a line extender), it would not count against VOD network availability.
One approach to measuring the total impact of the various sub-networks' availability on the customer experience is to add their impact on top of the necessary core systems. For example, if your HFC availability is ~99.95 percent, the average customer experiences a loss of all services for about 20 minutes a month. If you have an additional loss of the cable modem termination system (CMTS)/high-speed Internet platform of 4 minutes per customer per month, your average high-speed Internet customer experiences 24 minutes of downtime. If, on top of that, your telephone environment upstream of the CMTS experiences 2 minutes of downtime per customer per month, then the average telephone customer would experience 26 minutes of downtime (20 minutes HFC + 4 minutes CMTS/HSI + 2 minutes telephone complex).
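The layering described above is simple addition of per-subsystem downtime. As a minimal sketch (the function name is mine, and the figures are the illustrative numbers from this answer), converting between an availability percentage and monthly downtime minutes looks like this:

```python
def monthly_downtime_minutes(availability_pct, days=30):
    """Convert an availability percentage to downtime minutes per month."""
    return (1 - availability_pct / 100) * days * 24 * 60

# ~99.95 percent HFC availability works out to about 21.6 minutes/month
# (rounded to 20 in the example above).
hfc_minutes = monthly_downtime_minutes(99.95)

# Stacking the layers for the average telephone customer:
telephone_total = 20 + 4 + 2   # HFC + CMTS/HSI + telephone complex = 26 minutes
```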
Now to the calculation.
Let's say you have 10,000 VOD-enabled customers on a site. Total possible available customer minutes in a 30-day month would be 43,200 minutes (30 days x 24 hours x 60 minutes) x 10,000 customers = 432,000,000 VOD customer minutes of possible uptime (which is what you would have with no outages). Now let's assume you had two outages: one impacted the entire system for 20 minutes, affecting all customers; the other impacted a single service group of 1,000 customers for 100 minutes.
Outage 1: 10,000 customers x 20 minutes of downtime = 200,000 customer minutes of downtime
Outage 2: 1,000 customers x 100 minutes of downtime = 100,000 customer minutes of downtime
So total downtime is 300,000 customer minutes. Dividing this by the customer count (10,000) yields 30 minutes of average downtime per VOD-enabled customer. In the sample VEI in the December article, the minutes of downtime were weighted to bring them in line with the other two metrics by dividing the average by 10, so our weighted metric would be 3.0, which is slightly worse than goal, in the "Guarded" threshold of Figure 2 in the December article.
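The downtime and availability arithmetic above can be sketched as a short calculation (illustrative only; the helper name and the outage-list format are my own, not from the December article):

```python
# Worked version of the availability example: two outages against
# 10,000 VOD-enabled customers over a 30-day month.

def downtime_per_customer(enabled_customers, outages):
    """Average downtime minutes per enabled customer.
    outages: list of (customers_affected, minutes_down) pairs."""
    customer_minutes_down = sum(c * m for c, m in outages)
    return customer_minutes_down / enabled_customers

ENABLED = 10_000
OUTAGES = [(10_000, 20), (1_000, 100)]               # outage 1, outage 2

avg_down = downtime_per_customer(ENABLED, OUTAGES)   # 30.0 minutes
weighted = avg_down / 10                             # 3.0 on the scorecard

# Availability over the month: 432,000,000 possible customer minutes.
possible = 30 * 24 * 60 * ENABLED
availability_pct = 100 * (1 - sum(c * m for c, m in OUTAGES) / possible)
```

The same customer-minutes approach scales to any number of outages, since each outage simply contributes its own customers-times-minutes product to the total.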
What is a "VOD-enabled customer"?
This is a video customer who: (1) has a digital set-top capable of VOD (soon to include privately owned tru2way devices); (2) is attached to a segment of the network where VOD has been launched; and (3) is enabled in the billing system to receive VOD. So, for example, a digital customer served by a headend without VOD connectivity would not be considered VOD-enabled.
How does one measure and calculate VOD service group congestion?
The first component is measuring the loading of the service group over time. To keep the math simple for example purposes, consider a three-QAM (quadrature amplitude modulation) service group that can support up to 30 standard definition (SD) VOD streams or up to 6 high definition (HD) VOD streams (or some combination not exceeding the bandwidth per QAM). We will threshold congestion "hits" at any time interval in which 50 percent, 70 percent or 80 percent of the bandwidth is used (typically measured in 10- or 15-minute increments). If the service group is simultaneously streaming 15 SD channels, a "50 percent hit" is registered. If two HD streams are added, congestion rises to about 77 percent (the same as 23 SD channels, since each HD stream is about four times the bandwidth of an SD stream), so we get a "70 percent hit." One more HD stream takes utilization to 90 percent, an "80 percent hit"; at that point only three SD equivalents of capacity remain, so the next HD stream request will get a "try again later" error.
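The hit counting above can be sketched as follows. This is a minimal illustration: the 30-SD-stream capacity, the four-SD-equivalents-per-HD weighting and the threshold values are the example figures from this answer, and the function names are mine.

```python
# Congestion "hit" counting for one sample interval of a VOD service group.
SD_CAPACITY = 30           # SD-stream equivalents the service group can carry
HD_WEIGHT = 4              # one HD stream ~= four SD streams (example figure)
THRESHOLDS = (50, 70, 80)  # percent-utilization hit levels

def utilization_pct(sd_streams, hd_streams):
    """Bandwidth utilization as a percentage of service-group capacity."""
    return 100 * (sd_streams + HD_WEIGHT * hd_streams) / SD_CAPACITY

def hits(sd_streams, hd_streams):
    """Return the thresholds crossed in this sample interval."""
    u = utilization_pct(sd_streams, hd_streams)
    return [t for t in THRESHOLDS if u >= t]
```

Walking through the example: 15 SD streams registers only the 50 percent hit, adding two HD streams (about 77 percent) registers the 70 percent hit, and a third HD stream (90 percent) registers the 80 percent hit. In practice each 10- or 15-minute sample would be run through this check and the hits accumulated per service group.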
Table 1 reports the congestion levels of 10 VOD service groups feeding 10,000 VOD-enabled customers. Ten percent of the customers exceed the 85 percent congestion target noted in the original scorecard.
Could you give an example of calculating stream success ratio?
Sure, but first let's define stream success ratio. Despite the name, the reported number tracks failures: the total number of streams either attempted and not delivered or abnormally terminated, divided by the total attempted streams.
Let's assume 35,000 attempted streams over a month, of which 1,250 errored and were unsuccessful. 1,250/35,000 = 3.57 percent, which is weighted by taking the number (3.57) and dropping the percent sign, putting this metric slightly worse than goal, in the Guarded range.
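As a sketch of the arithmetic above (the function name is mine; "failed" covers streams attempted but not delivered plus abnormal terminations):

```python
def stream_failure_pct(attempted, failed):
    """Percentage of attempted streams that were not delivered
    or terminated abnormally."""
    return 100 * failed / attempted

pct = stream_failure_pct(35_000, 1_250)
weighted = round(pct, 2)   # 3.57 -> slightly worse than goal, Guarded range
```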
Our VOD product is the most compelling differentiator we have from satellite competition, and it truly gives our customers the convenience, choice, and control that they want over their video content. It is up to us to make sure that when they want a stream, we are ready to deliver!
Keith Hayes is VP of network operations and engineering services for Charter Communications.