For apps such as video streaming, what matters are not only low-cost bytes but also efficient bits-per-second.

In the 10 years since MSOs rolled out video on demand (VOD), hundreds of hours of content have grown to thousands. There is a new reality to "television" and the expectation that content will be accessible on demand, regardless of the screen we use to watch it.

Until recently, legacy servers, with their spinning disk drives, managed to keep up. But "three-screen" strategies, high-definition video, new remote storage (RS)-DVR ambitions and expanded libraries require powerful ingest capabilities and dynamic content distribution. Architectures are evolving, as are yesterday’s storage solutions.

Storage choices

Mechanical hard drives have made impressive improvements in storage density. However, with spinning platters and read/write heads, they have maintained fairly flat bandwidth — or disk input/output (I/O) — performance. Simply put, the physics of spindle speeds and seek times have not kept pace with the improvements in silicon technologies. For bandwidth-intensive applications, hard drives have become the anchor holding back performance.

With service providers advancing toward infinite libraries and ubiquitous HD content, speed is now a storage imperative. Silicon-based flash technology is now beginning to supplant traditional hard drives for video delivery applications.

Over the past few years, the popularity of iPods, camera phones and other devices has fostered significant advancements in flash memory technology. Volume markets have driven rapid increases in flash storage capacity with steep reductions in pricing. Laptops and enterprise IT applications are now leveraging these benefits. (See Figure 1.)

High-speed applications, such as video streaming, benefit from the use of solid-state storage. Unlike traditional storage-heavy database applications, video streaming and caching benefit when the cost-per-bit-delivered is optimized, not necessarily when the storage size grows. In other words, streaming and caching applications are more concerned with the cost of bits-per-second than bytes of storage. Flash memory technology delivers on this requirement.

In today’s IT market, flash memory has gained a foothold within data centers through solid-state drives (SSDs). SSDs are built from flash memory chips or, alternatively, dynamic random access memory (DRAM), and are packaged behind interfaces suitable for replacing hard drives, such as serial advanced technology attachment (SATA) and small computer system interface (SCSI).

Still more expensive than hard drives, solid-state drives have significant benefits, including access speed, reliability and lower energy costs. For many applications, these benefits outweigh the cost of reduced storage capacity. Vendors such as EMC now employ SSD storage tiers in the data center to improve I/O performance and reduce the need for dozens of inefficient 15K rpm hard drives.

Flash primer

Flash memory is non-volatile. Unlike other solid-state memory technologies such as DRAM, it does not lose data when powered off, one of its greatest benefits. This has led to its success in the consumer electronics market, where storing MP3 files and videos is the prime objective.

Although flash chips are not particularly high-speed devices, this has not been a requirement for the consumer market. While there are several variants of flash memory technology, a majority of these storage applications use NAND (logical NOT-AND) flash memory. It’s cheaper, denser and provides other attributes appropriate for this usage.

Diving deeper into NAND flash memory, we find that the technology is offered in two flavors, single-level cell (SLC) and multi-level cell (MLC), each with different characteristics. SLC flash memory stores a single bit of information (0 or 1) within each memory cell. MLC flash memory stores two bits of information per cell by allowing up to four different levels of voltage to be stored (corresponding to 00, 01, 10 or 11 binary values).

This allows MLC to offer higher storage densities than SLC flash parts when using the same silicon fabrication geometries. As a result, SLC flash costs at least twice as much as MLC flash for a given amount of storage.
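The density advantage can be sketched in a few lines. The level-to-bits mapping below is illustrative (real parts use vendor-specific Gray codings); only the bits-per-cell figures come from the text.

```python
# Illustrative sketch: MLC doubles density by distinguishing four voltage
# levels per cell, each encoding a two-bit value. The exact mapping of
# levels to bit pairs varies by vendor; this one is hypothetical.
MLC_LEVELS = {0: 0b11, 1: 0b10, 2: 0b01, 3: 0b00}  # voltage level -> stored bits

def cells_required(capacity_bits: int, bits_per_cell: int) -> int:
    """Number of memory cells needed to hold a given capacity."""
    return capacity_bits // bits_per_cell

capacity = 256 * 10**9 * 8                    # a 256 GB part, in bits
slc_cells = cells_required(capacity, 1)       # SLC: one bit per cell
mlc_cells = cells_required(capacity, 2)       # MLC: two bits per cell

assert slc_cells == 2 * mlc_cells             # same capacity, half the cells
```

Same fabrication geometry, half the cells per byte: that is the source of MLC's cost advantage, and of SLC's roughly 2x price premium.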

Due to the differing memory cell usage between MLC and SLC, the technologies exhibit different characteristics. SLC is a higher endurance chip technology, often specified in the range of 100,000 erase-cycles, while MLC targets 10,000 erase cycles. In addition, due to the higher densities of MLC, and the multiple voltage levels, MLC is more prone to errors than similar SLC flash chips. Based on these factors, the use of MLC vs. SLC flash as a storage mechanism is an architectural design tradeoff between storage, reliability, durability and cost.

As mentioned, flash components are not particularly high-speed, but through intelligent design — simultaneous reads and writes of parallel components — flash can provide the capabilities that high-throughput applications require. With sophisticated optimization techniques, it’s a logical choice for video delivery.
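The parallelism argument reduces to simple arithmetic. The per-die rate and channel count below are illustrative assumptions, not figures for any specific part.

```python
# Hedged sketch: modest per-chip flash bandwidth aggregates into
# streaming-class throughput when the controller reads many dies in
# parallel. Both numbers here are hypothetical.
PER_DIE_READ_MBPS = 160      # assumed sustained read rate of one flash die
CHANNELS = 8                 # dies the controller reads simultaneously

aggregate_read_mbps = PER_DIE_READ_MBPS * CHANNELS
assert aggregate_read_mbps == 1280          # Mbps across the array

# Enough headroom for many simultaneous HD streams at, say, 8 Mbps each:
streams = aggregate_read_mbps // 8
assert streams == 160
```

The controller's job is scheduling those simultaneous reads (and the smaller share of ingest writes) so that no single die becomes the bottleneck.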

The target volume market for SSDs is laptop drives. Design goals for this broad market focus primarily on cost per gigabyte (GB). In addition, due to the cost sensitivity of this market, it is not necessary to over-design for enterprise-class reliability and fault resiliency. MLC can be reliably designed into the laptop SSD while retaining the cost goals of the design.

Alternatively, in the enterprise market, such as corporate databases, product architects focus on reliability and write-longevity, trading off absolute cost savings as appropriate. This has led to a slow adoption of MLC flash technology in the enterprise space.

Maximizing durability

In content delivery applications, the design goals differ slightly from those of the general IT market.

The ratio of cache-reads-to-writes is far higher in a streaming video cache than in typical IT applications. The flash memory sub-system design becomes an issue of optimizing flash read performance while allocating a ratio of bandwidth for cache writes that satisfies this market’s requirements. Design techniques can be employed to take advantage of this fact.

If we estimate that 95 percent of the flash memory part is used for streaming, then 5 percent remains for content ingest. Calculating for this ratio, flash storage can be tuned to reserve 200 Mbps of available read bandwidth and about 6.5 Mbps of available write bandwidth.

At this rate, a market-available 256 GB flash part will require 11.37 hours to be fully written. With a 10,000 erase-cycle limit, the part can sustain this write rate for 12.98 years before reaching its erase-cycle limit. Flash can thus easily meet the useful lifetime of a video cache platform.
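The endurance arithmetic above can be checked directly: the lifetime is simply the time for one full write multiplied by the rated erase-cycle count.

```python
# Reproducing the article's endurance math: a part that takes 11.37 hours
# to write completely, rated for 10,000 erase cycles, survives sustained
# writing for roughly 13 years.
FULL_WRITE_HOURS = 11.37     # time to fill the entire part once
ERASE_CYCLES = 10_000        # MLC endurance rating
HOURS_PER_YEAR = 24 * 365

lifetime_years = FULL_WRITE_HOURS * ERASE_CYCLES / HOURS_PER_YEAR
assert round(lifetime_years, 2) == 12.98
```

Since a video cache is read-dominated, the real write duty cycle is far below this sustained worst case, so the practical lifetime is longer still.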

Other design techniques may also be employed to achieve flash life span goals. One such technique is wear-leveling.

Wear-leveling techniques ensure that writes to the flash chips within the storage system are spread out evenly. This ensures that the 10,000 erase cycles a single flash component can sustain are consumed evenly across all the chips in the storage sub-system. As a result, no single chip reaches its threshold before the others, maximizing the overall storage product’s expected lifetime.
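A minimal version of the idea, and not any vendor's actual algorithm, is to direct each write to the least-worn block so erase counts stay balanced:

```python
# Hedged wear-leveling sketch: every write goes to the block with the
# fewest erases so far, so wear accumulates evenly across the array.
from heapq import heappush, heappop

class WearLeveler:
    def __init__(self, num_blocks: int):
        # (erase_count, block_id) min-heap: least-worn block on top
        self.heap = [(0, b) for b in range(num_blocks)]

    def write_block(self) -> int:
        """Pick the least-worn block, charge it one erase, return its id."""
        erases, block = heappop(self.heap)
        heappush(self.heap, (erases + 1, block))
        return block

wl = WearLeveler(num_blocks=4)
for _ in range(100):
    wl.write_block()

counts = sorted(c for c, _ in wl.heap)
assert counts == [25, 25, 25, 25]   # wear spread perfectly evenly
```

Production controllers add refinements such as static wear-leveling (relocating rarely rewritten "cold" data), but the balancing objective is the same.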

Storage reliability

Flash memory exhibits characteristics called read disturb errors and write disturb errors. These errors can cause bits to switch state, and therefore lose data, when adjacent memory cells are read or written. Due to the higher storage density of MLC (recall, each cell stores two bits by distinguishing four voltage levels), it is more likely to exhibit these errors than SLC. Many enterprise-class SSDs simply choose to use SLC memory to offset these errors. However, this design decision comes at a significant storage cost.

To compensate for these errors, in both SLC and MLC technology, flash memory chips and controllers may also employ error correction code (ECC) bits in their design. ECC bits allow the controller or chip to seamlessly correct errors on the fly during read operations. Flash controller design allows for varying degrees of ECC functionality to compensate for MLC’s higher error rates, achieving higher reliability with an overall lower cost per GB.
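A toy code illustrates the principle. Real flash controllers use far stronger BCH or LDPC codes, but the classic Hamming(7,4) code shows how parity bits let a single flipped bit be found and fixed during a read:

```python
# Toy single-error-correcting ECC in the spirit of what flash controllers
# do (actual controllers use much stronger codes). Hamming(7,4): 4 data
# bits, 3 parity bits, corrects any one flipped bit in the codeword.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4           # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4           # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4           # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # repair the disturbed bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                      # simulate a read-disturb bit flip
assert decode(codeword) == data       # error corrected transparently
```

The read path pays a small parity overhead per block; in exchange, disturb errors are invisible to the application, which is what makes MLC viable at all.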

Another practice that can be employed by flash memory controllers compensates for read and write disturb errors through scrubbing algorithms.

As ECC bits are used to detect and correct read errors, a flash controller can proactively rewrite a corrupted storage block once certain ECC error thresholds are reached. This act repairs the corrupted memory cells and further reduces the chance of errors being exposed outside the storage sub-system.
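The scrubbing loop can be sketched as a counter per block with a rewrite trigger. The threshold and block layout below are illustrative assumptions:

```python
# Hedged scrubbing sketch: track ECC-corrected errors per block and
# proactively rewrite a block once corrections cross a threshold, before
# disturb errors accumulate beyond what ECC can fix. The threshold value
# here is hypothetical.
SCRUB_THRESHOLD = 3   # corrected-error count that triggers a rewrite

class ScrubbingController:
    def __init__(self, num_blocks: int):
        self.corrected = [0] * num_blocks   # ECC corrections seen per block
        self.scrubbed = []                  # blocks rewritten so far

    def on_read(self, block: int, ecc_corrections: int):
        """Called after each read with the ECC corrections it required."""
        self.corrected[block] += ecc_corrections
        if self.corrected[block] >= SCRUB_THRESHOLD:
            self._rewrite(block)

    def _rewrite(self, block: int):
        # Rewriting refreshes every cell, clearing accumulated disturb
        # errors, so the block's correction counter starts over.
        self.scrubbed.append(block)
        self.corrected[block] = 0

ctrl = ScrubbingController(num_blocks=8)
for _ in range(3):
    ctrl.on_read(block=2, ecc_corrections=1)   # disturb errors trickle in
assert ctrl.scrubbed == [2] and ctrl.corrected[2] == 0
```

Each scrub consumes one erase cycle, so the threshold is a tuning knob: scrub too eagerly and wear increases, too lazily and uncorrectable errors become more likely.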

Today’s requirements

With purpose-built design, low-cost MLC flash technology provides unprecedented performance and reliability for streaming media applications. The storage density of MLC flash, together with the cost benefits of volume markets, offers a compelling storage solution to today’s dynamic content requirements.

With TV everywhere and anywhere now a common objective, optimizing storage for streaming applications has become an industry imperative.

Tom Rosenstein is VP product marketing and Chris Lawler is VP engineering of Verivue.
