The building blocks of a hybrid fiber/coax (HFC) network are nodes; they can be added or split as the design requires. Capacity is the driving factor for this kind of node work: cable operators are always adding new services and new users, which consumes capacity and leads back to more node work. Over the past few years, PacketCable has been one of those added services, providing high quality phone calls over the HFC network.

At the headend, nodes are connected to a cable modem termination system (CMTS). Just as capacity drives adding nodes, it can also drive adding another CMTS. When a CMTS is added, existing nodes are typically moved to the new CMTS to help balance future growth. When nodes are moved, a vital PacketCable association can break.
   
When first provisioned, a multimedia terminal adapter (MTA) is associated with the CMTS to which it is connected. This association is made in the call management server (CMS). During HFC network growth, when a node is moved to a different CMTS, that association needs to be updated. The provisioning system or the CMTS can tell you exactly which MTAs are moving, but updating the association presents a problem. Most importantly, the period during which an MTA cannot correctly make or receive a call should be minimized. An MTA can come online and go into service in just a few seconds, so if a node containing 100 MTAs is moved, you would need to make 100 CMS updates in those same few seconds. Even harder, you would need to synchronize the CMS updates with the node move itself.
   
On a PacketCable network, when a CMTS is first installed, it is set up to peer with the CMS; likewise, the CMS is set up to peer with the CMTS. This peering session is specifically for bringing up PacketCable dynamic quality of service (DQoS), and all traffic regarding MTA QoS runs over this peering link.
   
PacketCable uses DOCSIS 1.1, which provides service flows; these flows allow for QoS across the HFC network. Typical DOCSIS traffic is "best effort," which cannot be relied upon for delivering high quality calls. When PacketCable DQoS is set up, it is tailored to the details of that particular call.
   
Bringing up a call involves multiple PacketCable network elements; DQoS setup is actually a subset of the call negotiation. To bring up PacketCable DQoS, the CMS, CMTS and MTA all work together (see Figure 1). Initially, the CMS and MTA negotiate that a call is beginning. The CMS contacts the CMTS, requesting the required bandwidth and QoS for the call. The CMTS responds with an identifier (ID) associated with the QoS. The CMS sends the ID to the MTA, and the MTA uses the ID to request QoS from the CMTS. The CMTS then reports to the CMS that the QoS is active. Finally, the call is up, and voice data begins flowing.
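
To make the Figure 1 exchange concrete, here is a minimal sketch of that message flow in Python. The class and method names (Cms, Cmts.gate_set, Mta.request_qos) and the 128 kbps figure are illustrative assumptions only; in a real network the exchange is carried by PacketCable signaling between the elements, not in-process calls.

```python
import itertools


class Cmts:
    """Grants QoS on request and later reports it active (sketch only)."""

    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self.active = {}  # gate ID -> (mta_ip, bandwidth_kbps)

    def gate_set(self, mta_ip: str, bandwidth_kbps: int) -> int:
        # CMS requests bandwidth/QoS; CMTS answers with an ID for it.
        return next(self._ids)

    def commit(self, gate_id: int, mta_ip: str, bandwidth_kbps: int) -> None:
        # MTA presents the ID to request QoS over DOCSIS; CMTS activates it.
        self.active[gate_id] = (mta_ip, bandwidth_kbps)


class Mta:
    def __init__(self, ip: str) -> None:
        self.ip = ip

    def request_qos(self, cmts: Cmts, gate_id: int, bandwidth_kbps: int) -> None:
        cmts.commit(gate_id, self.ip, bandwidth_kbps)


class Cms:
    """Controlling agent: authorizes QoS only for a legitimate call."""

    def setup_call(self, cmts: Cmts, mta: Mta, bandwidth_kbps: int = 128) -> int:
        gate_id = cmts.gate_set(mta.ip, bandwidth_kbps)  # request bandwidth/QoS, get ID
        mta.request_qos(cmts, gate_id, bandwidth_kbps)   # MTA uses the ID to request QoS
        assert gate_id in cmts.active                    # CMTS reports QoS active
        return gate_id


if __name__ == "__main__":
    cmts, mta = Cmts(), Mta("10.10.40.7")
    print("call up, ID", Cms().setup_call(cmts, mta))
```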

PacketCable calls without QoS are subject to chronic and intermittent voice quality issues because of dropped packets, network delay and jitter. There may even be audio dropout in one or both directions. Sometimes, if conditions are right, a call without QoS may seem perfect to the customer.
PacketCable DQoS ensures that the voice data is given high priority DOCSIS QoS. This high priority helps minimize packet loss and guards against the jitter and delay that arise in a variety of situations. The CMS acts as the controlling agent, authorizing QoS only for legitimate voice sessions, and the CMTS's peering with the CMS prevents any unauthorized QoS requests. When correctly configured, PacketCable DQoS is transparent and a key enabler of high quality voice service.

Change

We can change that provisioning step, the MTA's association in the CMS, to address the problem of updating the CMS once a node split has occurred. The concept of DQoS routing can be used to help maintain the CMTS and MTA association. DQoS routing requires adding a network element called a policy server. With a policy server added, the peering changes a little: the CMS peers with the policy server, and each CMTS in turn peers directly with the policy server. Now, when the MTA is provisioned on the CMS, it is associated with the policy server. From the perspective of the CMS, it does not matter which CMTS the MTA is connected to, since the CMS never communicates with the CMTS, only with the policy server.
   
The policy server maintains a database of which subnets are on which CMTS. When the CMS sends the request for the required bandwidth and QoS for a call, that message already contains the Internet protocol (IP) address of the MTA. The policy server keys off that IP address, consults the subnet database and forwards the QoS message to the proper CMTS. It also forwards any responses coming back from the CMTS to the CMS.
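
As a sketch of that lookup, assuming a simple in-memory table (a production policy server would back this with a real database), standard subnet matching maps an MTA's IP address to its CMTS. The subnet blocks and CMTS hostnames are hypothetical.

```python
import ipaddress

# Hypothetical subnet database: CIDR block -> CMTS that serves it.
SUBNET_DB = {
    ipaddress.ip_network("10.10.0.0/19"): "cmts-east.example.net",
    ipaddress.ip_network("10.10.32.0/19"): "cmts-west.example.net",
}


def cmts_for_mta(mta_ip: str) -> str:
    """Return the CMTS serving this MTA; longest matching prefix wins."""
    addr = ipaddress.ip_address(mta_ip)
    matches = [net for net in SUBNET_DB if addr in net]
    if not matches:
        # An address missing from the database would break call setup,
        # so surface it loudly rather than guessing.
        raise LookupError(f"no CMTS found for MTA {mta_ip}")
    return SUBNET_DB[max(matches, key=lambda net: net.prefixlen)]


print(cmts_for_mta("10.10.40.7"))   # -> cmts-west.example.net
```

The LookupError branch anticipates the error handling discussed later: a request whose address is not in the database should be flagged immediately rather than silently dropped.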
   
Figure 2 shows DQoS setup with the policy server inserted. Again, the CMS and MTA first negotiate that a call is beginning. The CMS sends a message to the policy server requesting the required bandwidth and QoS for the call. The policy server consults the subnet database and forwards the request to the proper CMTS. The CMTS responds with an ID associated with the QoS, and the policy server forwards the response back to the CMS. Just as before, the CMS sends the ID to the MTA, and the MTA uses the ID to request QoS from the CMTS. The CMTS sends a report of the QoS being active, which the policy server forwards to the CMS. Finally, the call is up, and voice data begins flowing.

A number of different messages flow between the CMS and CMTS transparently through the policy server, all sent unaltered: requests, acknowledgments, opens and closes. For each call, there are only a couple of messages at the beginning and a couple at the end, so a policy server should easily keep up with traffic even during peak call times.
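
The relay behavior itself is simple; here is a sketch under the same assumptions, where the lookup and transport callables stand in for the subnet database and the real peering sessions (they are not actual PacketCable APIs).

```python
from typing import Any, Callable

Message = dict[str, Any]   # stand-in for one DQoS message


def relay_from_cms(message: Message,
                   lookup_cmts: Callable[[str], str],
                   send_to_cmts: Callable[[str, Message], Message],
                   send_to_cms: Callable[[Message], None]) -> None:
    """Forward one CMS request to the proper CMTS and relay the reply back."""
    cmts = lookup_cmts(message["mta_ip"])  # subnet database picks the CMTS
    reply = send_to_cmts(cmts, message)    # request/open/close forwarded unaltered
    send_to_cms(reply)                     # acknowledgment relayed back unaltered


# Tiny demonstration with in-process stand-ins for the peering links.
relay_from_cms({"type": "gate-set", "mta_ip": "10.10.40.7"},
               lookup_cmts=lambda ip: "cmts-west.example.net",
               send_to_cmts=lambda cmts, msg: {**msg, "type": "gate-set-ack"},
               send_to_cms=print)
```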

Updates

The policy server subnet database is now crucial to the messages between the CMS and CMTS. It must be kept up to date, and it could be maintained in a number of ways. It could be maintained manually, but that will not scale well with a large number of CMTSs and many node splits; it would be better to automate the database updates.
There are several options. One solution would be to have the policy server listen to a routing protocol such as open shortest path first (OSPF). Another would be to use simple network management protocol (SNMP) to query the subnets from each CMTS. The list of CMTSs in the policy server could be maintained manually or possibly automatically with auto-discovery over a small subnet.
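
As one way to automate the SNMP option, here is a sketch that walks IP-MIB's ipAdEntNetMask table on each CMTS with net-snmp's snmpwalk and rebuilds the subnet database from the results. The hostnames and community string are placeholders, the output parsing may need adjusting for a given net-snmp version, and a real implementation would filter to cable-side interfaces.

```python
import ipaddress
import subprocess

CMTS_LIST = ["cmts-east.example.net", "cmts-west.example.net"]  # placeholders
COMMUNITY = "public"                                            # placeholder
NETMASK_OID = "1.3.6.1.2.1.4.20.1.3"                            # IP-MIB::ipAdEntNetMask


def subnets_on(cmts: str) -> list[ipaddress.IPv4Network]:
    """Walk ipAdEntNetMask; the OID index is the interface address."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-Oqn", cmts, NETMASK_OID],
        capture_output=True, text=True, check=True).stdout
    nets = []
    for line in out.splitlines():
        oid, mask = line.split()
        addr = oid.lstrip(".").removeprefix(NETMASK_OID + ".")
        nets.append(ipaddress.ip_network(f"{addr}/{mask}", strict=False))
    return nets


def rebuild_subnet_db() -> dict[ipaddress.IPv4Network, str]:
    """Refresh the CIDR -> CMTS mapping used by the policy server."""
    return {net: cmts for cmts in CMTS_LIST for net in subnets_on(cmts)}
```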
   
However it is maintained, the database should be simple to update. There should definitely be error handling to catch a request with an IP address that cannot be found in the database; any error or missing information in the database will most likely result in call problems.
   
The policy server is a critical element of the PacketCable network; a policy server failure would result in widespread service outages. Therefore, the policy server should be deployed in a high availability configuration providing low latency fail-over.

Adding a policy server

Adding a policy server to a PacketCable network takes just a few steps. First, install the policy server and set up its peering session just as you would for a CMTS. Then direct the peering of the CMTS to the policy server; usually there is a list of peers in the CMTS, so just add the policy server to that list. Be sure the policy server database is set up with the subnets on the CMTS. Last, in the CMS, change the MTA's association from the CMTS to the policy server. Now, when a call is placed, the DQoS messaging will be sent to the policy server.
   
At www.packetcable.com, under Multimedia, there is a description of a policy server similar to the one described here. PacketCable Multimedia (PCMM) is associated with gaming and perhaps session initiation protocol (SIP) voice. Cable operators are currently marketing bundled services that are very likely to include traditional PacketCable voice, so a DQoS routing solution must fit simply into a traditional PacketCable network.

Testing and service assurance

Cable-specific test equipment is available that can do voice testing. Instruments like this are ideal for troubleshooting or for comparing call quality on different parts of the HFC plant.
   
This type of test equipment is often required to work across a large territory, possibly spanning many CMTSs. When these instruments move from CMTS to CMTS, they suffer from the same MTA and CMTS association problem. Without DQoS working correctly, the instrument may show a false positive: a voice problem that disappears once DQoS is established. With DQoS routing, the instrument's test calls would have DQoS established no matter which CMTS it is connected to.

A policy server also provides a point of PacketCable service assurance. The DQoS history could be logged and stored in a database, which could then be used for troubleshooting past customer complaints or checking whether a particular call had DQoS. The data could also feed historical graphing and capacity planning. Since it includes the start and end of each call, it would be possible to generate Erlang graphs for each port. Another use would be pairing this data with call quality data to generate reports demonstrating the quality of calls, which could provide useful input for dispute resolution, marketing and budget processes.
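
The Erlang graphs mentioned above fall out of such records directly: the traffic a port carries in an hour, in Erlangs, is simply the call-seconds in that hour divided by 3,600. Here is a sketch, assuming the log yields simple (port, start, end) records; the sample data is made up.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative call records pulled from the DQoS history: (port, start, end).
calls = [
    ("port-1", datetime(2007, 3, 1, 9, 5), datetime(2007, 3, 1, 9, 17)),
    ("port-1", datetime(2007, 3, 1, 9, 50), datetime(2007, 3, 1, 10, 20)),
]

HOUR = timedelta(hours=1)


def hourly_erlangs(records):
    """Erlangs per (port, hour) = call-seconds carried in that hour / 3600."""
    load = defaultdict(float)
    for port, start, end in records:
        hour = start.replace(minute=0, second=0, microsecond=0)
        while hour < end:
            overlap = min(end, hour + HOUR) - max(start, hour)
            load[(port, hour)] += overlap.total_seconds() / 3600.0
            hour += HOUR
    return dict(load)


for (port, hour), erlangs in sorted(hourly_erlangs(calls).items()):
    print(port, hour.strftime("%Y-%m-%d %H:00"), round(erlangs, 2))
```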

Conclusion

With DQoS routing, there is a way to eliminate the MTA and CMTS association in the CMS. The association becomes an MTA and policy server association, and the CMS sees only one policy server network element. With DQoS routing, we can minimize the time an MTA cannot make calls because of ongoing node work; that time shrinks to the physical node move itself plus the MTA coming online and going into service. DQoS routing can recapture this potentially lost time.
Craig Lien is a network engineer for Midcontinent Communications. Reach him at [email protected].
