Commentary by Steve Effros
What’s a “television”? Who’s a “television viewer”? These seem like straightforward questions, but even though entire industries are built on the answers, the terms are becoming almost impossible to define. Or, more precisely, different folks want to define them for different reasons, and therefore arrive at different answers!
It used to be simple. We had television broadcast stations (so-called “free TV,” which the public now pays billions of dollars for) and we had television sets to watch them on. The entire television advertising industry was premised on being able to determine who was watching what. That, of course, includes most local broadcasters, because after all, that’s their basic business: selling advertising. Ad prices were based on the “reach” of each show and the projected demographics of its particular audience. That’s still pretty much the way it works, but everyone knows that the projections and viewer counts are flat-out wrong. It’s just very hard to change the system, so the broadcasters have stuck with the creative math, and the advertising industry has reluctantly gone along because for a long time there was nothing better.
But now the whole system is falling apart. First, the question of what a “television” is. When my son went to college more than ten years ago, we bought him a computer, not a TV. That computer screen doubled as a “television set.” It had inputs for a cable box or a broadcast or satellite tuner, and he was able to watch what he wanted. Was he counted as a “television viewer”? Today we hear more and more that fewer folks are watching “television,” but more are watching “video” on screens variously described as “mobile” or “smartphones” or “tablets.” But essentially they are all very similar screens, and they are still receiving and displaying the same programs. Some of those programs are delivered via a linear “stream,” and others “on demand” from cloud servers. But so what? The viewer is still the viewer, the program is still the program, and yet some insist on making fine distinctions based on how they are being viewed. Why?
Well, it has a lot to do with various companies not wanting to see ad revenues shift along with the viewing market, however that is defined. And it also has to do with law and regulation failing to keep up with what is going on, so maintaining or changing definitions to fit the business plan or the regulatory structure has become the name of the game.
The FCC, for instance, wants to keep regulatory control of broadband, so it has to continue to claim that broadband is not being efficiently and widely delivered nationwide. If it were, the agency would have no rationale to regulate. The solution: keep changing the definition of “broadband” to insist on higher and higher benchmarks, thus maintaining jurisdiction.
The advertisers would prefer to pay less for “broadcast” program ads, and they’re happy with the older, shrinking measurements of “live” viewers. But technology has moved on, and lots of us use DVRs or servers to watch programs when we want, even if it’s days after the program was originally “broadcast.” Could that viewership technically be counted? Sure. In fact, it could be counted with scary precision, down to the individually owned device. But that would lead to all sorts of privacy, business, and regulatory issues, so actual precision would be very inconvenient for those who want to maintain the status quo. And so the game of definitions continues.