The Principal – Provocateur Advisory of BARC India shares significant insights on using the ratings data to benefit the broadcast industry
BestMediaInfo Bureau | Delhi | May 13, 2015
As Broadcast Audience Research Council (BARC) India enters its third week since it began releasing television viewership data on April 29, 2015, the ratings agency has shared with the industry significant insights on ‘robust data and resisting the temptation to slice it too thin’.
The insights have been authored by Paritosh Joshi, Principal – Provocateur Advisory, BARC India, for the benefit of the television industry at large. Read the insights below:
All maps need cautionary notes
Remember high school geography? One of the topics that I found engrossing was topography. Hills and valleys, plateaus and plains, deserts and deltas, synclines and anticlines; and to depict their relative elevations, contour lines that connected all points at the same height above mean sea level (MSL). Topography made it clear that any location on the map only made sense in context. A point in isolation offered no clues about itself. If you knew, however, that it lay along the banks of a river where it formed a horseshoe, that the river was perennial, originating from snow melt in the high mountains of the north, that there were wide, flat plains on either side of the river and a major bridge ran across connecting the two banks, you might reasonably conclude that this point was an important city in a bountiful, agriculture-based economy.
A digression? Not at all. If there is a single metaphor for what to make of television ratings – or any continuous measurement system for that matter – a topographic map would be it.
Consider a phrase used commonly to describe the media environment: Media Landscape. If media represent a landscape, measurement systems are the maps that enable you to navigate it.
It is only two weeks since you started seeing BARC’s long-awaited ratings. Why was there such urgency to compare them immediately to the legacy system? Precisely for this reason: BARC offered a set of data points with absolutely no context just yet. Scarcely surprising, you would agree, given that the ‘map’ that BARC’s ratings will draw of the landscape of ‘What India Watches’ will only fill out as data continue to flow, week after week. In its absence, reference points had to come from the only other map available.
Natural as this impulsive reaction was, it was also misguided. The temptation to continue doing this will persist for some more time, so it is useful to understand why it will remain inadvisable.
So, back to the map metaphor. Maps, being representations of a reality that is thousands of times larger and endlessly variegated, have to make several compromises. In this instance, the differences between BARC and the legacy system begin with the Establishment Study. In BARC’s case, this is based on a combination of the Census of India 2011, IRS 2013, TRAI data on television connectivity, and a set of interpolation and extrapolation algorithms that estimate population and segment sizes as of 2015. We are not privy to the exact equivalents for the legacy system. BARC also uses NCCS segmentation, while the legacy system uses the older, superseded SEC. Another significant difference arises from BARC’s much larger household panel and its intent to cover all of India, including rural India.
BARC is, in its first year, attempting to track the television viewing behaviour of over 150 million television-owning households by metering ~20,000 of them, chosen in accordance with a carefully designed sample that represents the universe in terms of geography, language, town or village class and NCCS segment. While that number sounds sizable, it amounts to roughly one home in every 7,500. Nor is the sample distributed uniformly: urban agglomerations like Mumbai, Delhi and Bangalore, being much more heterogeneous than the hinterland, need disproportionately larger samples to capture their diversity, while the largely homogeneous hinterland of Chhattisgarh or Telangana can do with a lower sampling proportion. This is evidently a simplifying assumption, but it is made inevitable by the pragmatics of establishing and managing a ratings system.
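As a quick back-of-the-envelope check, the sampling fraction implied by the figures quoted above works out as follows. The two constants come straight from the text; everything else is simple arithmetic:

```python
# Rough arithmetic behind BARC's first-year panel, using the figures
# quoted in the column: ~150 million TV-owning homes, ~20,000 metered homes.
TV_HOMES = 150_000_000
PANEL_HOMES = 20_000

homes_per_panel_home = TV_HOMES / PANEL_HOMES
print(f"One metered home stands in for roughly {homes_per_panel_home:,.0f} homes")
# -> One metered home stands in for roughly 7,500 homes
```

Each metered home thus speaks for several thousand unmetered ones, which is exactly why the allocation of those homes across heterogeneous and homogeneous markets matters so much.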
This brings me to another characteristic of maps. You cannot enlarge even the best maps endlessly. You reach a point where the map starts to pixelate and all you can see is a blur. Unfortunately, data do not blur automatically when users interpolate within them beyond the point of reason. A number is produced which has the false allure of exactitude and precision but is statistical nonsense. Let me illustrate. Take a market with 1,000 metered homes. Now imagine a channel that delivers a 1 per cent weekly reach in this market. You are already down to a mere 10 homes. Assuming that NCCS A is 20 per cent of the population, that leaves no more than two such homes in the sample. And since 1 per cent is the weekly, not daily, reach, each of those two homes need only tune into the channel once in the week, for a single minute, for the event to register in the ratings. Unfortunately, this physicality of the panel is entirely ignored when the data are parsed, or rather minced, to the point of absurdity.
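The instability of such thin slices is easy to demonstrate with a small simulation. The figures below (1,000 metered homes, a 20 per cent NCCS A cell, a channel with a true 1 per cent weekly reach) come from the illustration above; the simulation itself is a hypothetical sketch, not a description of BARC's actual methodology:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

PANEL_HOMES = 1_000   # metered homes in the market (from the example)
NCCS_A_HOMES = 200    # 20% of the panel falls in NCCS A
TRUE_REACH = 0.01     # the channel's true weekly reach: 1%

def estimated_reach(cell_size: int, true_reach: float) -> float:
    """Reach estimate from one simulated week: the share of homes in the
    cell that happen to tune in at least once."""
    hits = sum(1 for _ in range(cell_size) if random.random() < true_reach)
    return hits / cell_size

# Ten simulated weeks of 'NCCS A weekly reach' for the same channel.
weekly = [estimated_reach(NCCS_A_HOMES, TRUE_REACH) for _ in range(10)]
print([f"{w:.1%}" for w in weekly])
# With only ~2 expected tuning homes per week, a single household's
# behaviour moves the estimate by half a percentage point: the printed
# number looks precise, but week to week it is dominated by chance.
```

Run it a few times with different seeds and the ‘reach’ of the same channel in the same cell will swing between zero and several times its true value, which is the statistical nonsense the author is warning against.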
And so to the morals of this story
Moral No. 1: Compare apples to apples, not oranges. Given that apples only arrived in the market two weeks back, you need to be patient for a while.
Moral No. 2: Enlarge any picture beyond the scale for which it was designed and all you will get is inchoate noise. Resist the temptation to slice data too thin.
Resisting the temptation is difficult… but it should now be easier!