A couple of days ago, I was reminded via a tour that building a headend isn’t what it used to be. Bright House’s VP of Engineering and Society of Cable Telecommunications Engineers board member Gene White took me through his Brandon, Fla., facility, which supports voice, data and, of course, video. What goes into the building, even before the equipment, is worth a column. Telephony is a big part of the reason for changes.

Grounding

Begin with grounding. Bright House was planning to offer telephony before voice over Internet protocol (VoIP) reached market readiness, so the Brandon headend was designed to house a 5ESS telephony circuit switch, including being wired according to the 5ESS grounding specification. That means special consideration for individual equipment frame grounds, the method of interconnecting ground buses, and bonding to building ground. Though Bright House telephony is now packet-based rather than circuit-switched, Gene’s engineers have chosen to continue the stringent 5ESS grounding. As it turns out, this additional protection is another step toward increased reliability for all digital applications.

Building ground is another consideration. Because Florida has one of the highest lightning-strike densities in the United States, the Brandon headend is grounded at multiple points. This provides additional insurance against surge damage from differences in potential at different points in the building. Further protection against surges is provided by isolation transformers at the point of power entry.

Powering

In the event of a power outage, backup is provided by an uninterruptible power supply (UPS) good for 1.5 hours and a system of diesel generators. Gene pointed out that the UPS, in addition to providing backup, acts as a filter for commercial power. "Commercial power is inherently dirty, in terms of transients, which can adversely affect computer equipment," he noted.
"As the power goes through the UPS, the transients are removed." Another function of the UPS is to protect against the surge that would occur if power were switched directly from commercial to generator. Backup power and transient protection during a switchover have always been needed, but as the total power a headend consumes grows with the equipment added to support digital applications, these problems become more severe. In fact, the scenario is similar to switching back to the commercial electrical grid after a power outage in a metro area: unprotected, an abrupt switchover could damage the entire headend.

As for the generators, the headend is migrating toward full A-B redundancy for critical communications equipment: each of two generators would normally power half of the equipment, but either generator could carry the full load should the other fail. The sizing and maintenance of the generators present some interesting challenges. The servers required for a packet network are power hogs compared to video receivers, and for a large headend, total power requirements are in the megawatt range. Above a couple of megawatts, diesel generators become the equivalent of jet engines, with the associated noise levels. Although backup by definition means the generators run only when commercial power fails, most headend neighbors would not appreciate that noise even for short periods. Electrical capacity planning that keeps total requirements below the point where turbine-powered generators become necessary is therefore advisable.

Air conditioning

Power generation requirements in the megawatt range should also raise the question of heat dissipation. New equipment needed for telephony and data drives a major change of operation for headend air conditioning.
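The A-B generator arrangement described for the Brandon headend lends itself to a quick sizing check: each generator normally carries half the critical load, but must be rated for all of it. Here is a minimal sketch in Python; the load figure and headroom factor are illustrative assumptions, not Bright House numbers.

```python
# Sketch of A-B generator sizing. In an A-B pair, each generator normally
# carries half the critical load, but each must be rated to carry the FULL
# load (plus headroom) if its partner fails.
# All figures below are hypothetical, for illustration only.

def generator_rating_kw(critical_load_kw: float, headroom: float = 0.25) -> float:
    """Minimum rating for each generator in an A-B pair: the full
    critical load plus a safety headroom fraction."""
    return critical_load_kw * (1 + headroom)

critical_load_kw = 800.0                 # assumed total critical equipment load
rating = generator_rating_kw(critical_load_kw)
normal_share = critical_load_kw / 2      # each generator's load in normal A-B operation

print(f"Each generator rated for {rating:.0f} kW")
print(f"Normal per-generator load: {normal_share:.0f} kW "
      f"({normal_share / rating:.0%} of rating)")
```

The design choice to run each generator at well under half its rating in normal operation is what makes the failover transparent: no load shedding is needed when one unit drops out.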
As Gene indicated on my tour, "Heat dissipation in video receiver racks typically follows a vertical airflow pattern, but servers tend to cause a chimney effect." The usual cooling configuration for a headend is a "hot aisle/cold aisle" approach, in which the aisles on the front side of the equipment racks are cooled from the floor and the back side provides the hot-air return path. This works for aisles with racks full of video receivers, but when the servers used for telephony and other applications are added to the picture, their higher power dissipation tends to keep the upper ends of the racks warmer than desired. To keep things in balance, cooling requirements can be tremendous: 8,000 W or more per equipment rack.

Steve Duchene, a Bright House senior division engineer, pointed out that airflow rate, as well as cooling capacity, is important. For an energy-efficient solution, Bright House worked with vendor Emerson/Liebert to implement an overhead system that supplements cooling and airflow to guarantee that even the highest units on an equipment rack remain within temperature tolerances.

By this point, you should have realized that in a triple play headend, the relationship of cooling to power is a sort of "chicken and egg" paradox. Because of the danger of thermal runaway if air conditioning fails, air conditioners may even be powered from the same backup as the communications equipment. This is not the case for the Brandon headend, however, where equipment density is controlled to provide extra tolerance.

Cable management

Another change from "the old days" is cable management. A triple play headend can’t tolerate "rat’s nest" connections on the backs of frames. Because of the complexity of equipment interconnection and the sheer volume of signals handled by a single wire or fiber, every connection in the Brandon headend is labeled and documented.
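The 8,000 W per-rack figure cited above gives a feel for the airflow such racks demand. A rough estimate follows from the standard sensible-heat formula, CFM = BTU/hr ÷ (1.08 × ΔT°F); this sketch applies it to the article's per-rack number, with the 20°F air temperature rise an assumed value.

```python
# Rough airflow estimate for a high-dissipation equipment rack, using the
# standard sensible-heat relation: CFM = BTU/hr / (1.08 * delta_T_F).
# The 8,000 W figure is from the article; the 20 F rise is an assumption.

def rack_airflow_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow (cubic feet per minute) needed to remove a
    given heat load at a given air temperature rise in degrees F."""
    btu_per_hr = watts * 3.412            # convert watts to BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

cfm = rack_airflow_cfm(8000)
print(f"An 8,000 W rack needs roughly {cfm:.0f} CFM at a 20 F rise")
```

Note how quickly the numbers grow: halving the allowable temperature rise doubles the required airflow, which is why supplemental overhead airflow, not just raw cooling tonnage, matters for the tops of dense racks.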
Cable runs are neatly bundled, and Gene’s division has taken cable management one step beyond documentation by standardizing colors for upstream and downstream cabling. This makes it easier not only to manage equipment changes, but also to troubleshoot when required.

We’ve come a long way from the headend in a trailer. New applications drive not only new hardware, but also new support systems. We’ll save discussion of the validity of "five 9s" for another time, but meeting carrier-grade standards includes redundancies and backups unthought-of only five years ago. Those who ignore the impact of change on headend physical design jeopardize not only equipment, but also market share.

An unrelated addendum: As I have mentioned before, cable’s architecture for wireless telephony will most likely include a home network. Cisco Press (www.ciscopress.com) has published two softcover texts that effectively explain the basics of home networks. "Home Networking, a Visual Do-It-Yourself Guide" by Brian Underdahl is an overview of the technologies, with step-by-step directions covering planning, device interconnection and security. For those who want more detail on the technology without getting lost in text, I recommend "Home Networking Simplified" by Jim Doherty and Neil Anderson. This book covers the same topics but gets into how things work. Both books have excellent screen photos, equipment pictures and tables that provide a solid foundation for understanding some of the issues addressed by CableHome and the most recent iterations of PacketCable.

Justin J. Junkus is president of KnowledgeLink and telephony editor for Communications Technology. Reach him at jjunkus@knowledgelinkinc.com.
