BARC lays down rules to prohibit misuse of TV ratings

Concerned over the frequent leadership claims of channels, the rating agency has laid out the ‘rules of the game’ that must be followed before making such claims


BestMediaInfo Bureau | Delhi | July 23, 2015


Broadcast Audience Research Council (BARC India) has come out strongly, exercising the powers vested in it by its constituents as a Joint Industry Body for the measurement of broadcast audiences, to prohibit the misuse of TV ratings.

The rating agency has laid out ‘Principles of Fair and Permissible Usage of Ratings’ in a document explaining the need for establishing the “rules of the game”.

This has become necessary because two or more channels have often claimed leadership on the basis of the same week's data. BARC is clear that its ratings cannot support multiple leadership claims from the data for a single week.

Following is the detailed note from BARC:

Viewership Research – A domain of Statistics

BARC India has been established in pursuit of the vision of measuring “What India Watches”. At current reckoning, India has over 153 million TV homes, roughly 77 million each in rural and urban India. No presently available technology can capture and report every home on a ‘census’ basis. Consequently, the only way of approaching the measurement task is by carefully recruiting and then closely tracking a representative sample drawn from this huge population.

The scale of the sample bears a direct relationship to the width and depth of coverage it can realistically provide; one comes at the cost of the other. The greater the width over which a sample is distributed, the less the depth of coverage that will be available for any particular geography. Statistics provides reliable techniques of sampling to best capture diversity in populations, and analytical techniques to quantify the errors in the estimates produced. It should be intuitive that errors tend to ‘average out’ across large aggregates but get amplified when small slices are examined. A well-designed sample makes the sampling logic, and the errors of estimate associated with it, explicit.
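The claim that errors average out across large aggregates but get amplified in small slices can be quantified with the textbook relative-standard-error formula for a sampled proportion. This is a standard statistical approximation, not a figure from the BARC note; the sample size and incidences below are invented for illustration:

```python
import math

def relative_se(p, n):
    """Relative standard error of an estimated incidence p from a
    simple random sample of size n: sqrt((1 - p) / (p * n))."""
    return math.sqrt((1 - p) / (p * n))

n = 10_000
print(f"{relative_se(0.30, n):.1%}")   # common event (30% incidence): ~1.5%
print(f"{relative_se(0.001, n):.1%}")  # rare slice (0.1% incidence): ~31.6%
```

At the same sample size, the error on the rare slice is roughly twenty times larger than on the common event, which is exactly why a panel that measures a big market well can still be too shallow for a niche one.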

Measurement of television viewership boils down to answering the following questions:

  • What was watched?
  • Who watched?
  • When and for how long was it watched?

In measurement jargon, these three questions map, respectively, to:

  • Content attribution
  • Reach
  • Time spent

The concept of ‘rating’ is merely the product of the second and third.

Rating = Reach x Time Spent
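As a toy illustration of this relationship (the numbers are invented, not BARC data): if 40% of the target universe watched at least part of a programme, and on average they watched half of its duration, the rating works out to 20%.

```python
def rating(reach, time_spent_share):
    """Rating as the product of reach (fraction of the universe that
    watched at all) and time spent (average fraction of the duration
    watched). A simplified sketch of the relationship stated above."""
    return reach * time_spent_share

print(rating(0.40, 0.50))  # 0.2, i.e. a rating of 20%
```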

BARC collects data and publishes measurement statistics on all these variables at both Household and Individual levels. Sampling ratios vary across different geographies and town classes. What may be measurable as a slice or segment in one market or geography may be too small to measure in another.

Challenges of sampling India

While India is only the second most populous country, its economic, ethno-cultural, geographic, social and demographic diversity is by far the most multi-hued on the planet. A wide and constantly expanding spectrum of television channels seeks to slice and segment this variegated audience. Disparate rates of economic advancement across linguistic/geographic segments are echoed in the range of broadcast content that courts them. More simply, greater prosperity cues greater choice. Though a large proportion of cable or DTH homes pay a monthly subscription, only a small portion of this reaches broadcasters. Not surprisingly, a numerically dominant majority of mostly small channels realizes nothing from subscriptions and is wholly advertising dependent.

By its nature, advertising is data driven. Ad placement is based on finding the right segment at the right time at the most competitive price. The first two considerations are all about audience measurement, while the third reflects commercial negotiation, which is itself inextricably linked with measurement.

This, then, is the great measurement conundrum. The more desperately a channel needs measurement to survive commercially, the harder it is to measure.

Priorities and Choices

Panel size, while designed to grow steadily over the years, is fixed at any given moment. BARC India entrusts the responsibility for setting measurement priorities and making allocation choices to its Technical Committee. The Committee comprises representatives drawn from the stakeholder community and must perform an intricate balancing act between keeping the coverage wide enough to justify the “What India Watches” vision and delving deep enough to find and measure the burgeoning ‘long tail’.

BARC India’s panel is already without precedent in terms of its coverage of Urban India. With its imminent expansion into Rural India, it will be entering virgin ground for Television measurement. As new markets get covered, or previously covered markets are put under higher magnification, many new audience segments, and by implication, content delivery opportunities are bound to be revealed. More measurement and better measurement will fire up the creative engine and a feedback loop will raise the bar further on future needs from the BARC India panel.

We are entering a virtuous cycle of better measurement leading to more content differentiation leading to even better measurement and so on.

Measurement and Comparison

The two are inseparable. The moment anything is measured, it becomes possible to compare it with anything else measured using the same metric. With television viewership, comparison is almost a reflex: any content producer or advertising inventory trader starts comparing her reach, time spent and ratings with those secured by her competitors as soon as the week's data are published.

On the one hand, this serves a crucial function in content evaluation and planning; on the other, it helps set prices for trading advertising inventory. In both instances, the key players are looking closely at their “Share of Market”: creative content professionals seek to lead or dominate share of time spent, at least within their genre and ideally across multiple genres; advertising sales people want to win the maximum and highest-value-per-viewer revenue and, by implication, starve their competition.

All of this is fine so long as it stays within the broadcast organisation. Issues begin only when these professionals use the data to establish their leadership before their respective ‘customer’ communities. When a television station announces that it is “Number 1” in its genre and offers BARC India data to substantiate this claim, the claim is no longer an internal matter but has entered public discourse.

Ratings Leadership

In its most essential sense, television measurement is just a special case of attempting to make sense of human behaviour. The constant battles between fickleness and loyalty, emotion and intellect, frivolity and seriousness play out vividly in the way in which we wield the remote. As is commonplace in Nature, order eventually arises from this chaos.

One aspect of this order is a marked propensity to inertia. Purchase behaviour, of which viewership behaviour is a special case, is known to fall into two broad patterns, ‘Repertoire’ and ‘Subscription’. ‘Repertoire’ purchasing is when a consumer has a set of acceptable, quasi-peer brands across which she switches. Conversely, ‘Subscription’ connotes a high level of loyalty to a single brand. In general, television viewing falls in the ‘Repertoire’ basket. Only the rarest content moves into ‘Subscription’, when it comes to be seen as ‘appointment viewing’.

It is in this context that ‘Leadership’ in television must be understood. A leader is not created overnight. A given moment, or a given day-part on a particular day, may show one channel ahead and another behind. This does not constitute leadership. Such a momentary blip is a very weak foundation on which to base a leadership claim.

The following tests must be applied before making a claim of leadership:

  • The period of comparison must cover at least four consecutive weeks of data.
  • The period of comparison must cover at least four consecutive clock-hours of data.
  • The tabulations used must be direct outputs of BARC India’s BMW user interface. Any number derived by extrapolating or interpolating BMW outputs is not permitted for use in the public domain.

While there can be no objection to how extensively a user may parse BARC India data for internal use (within reasonable limits, e.g. acceptable Relative Error levels), the rules established in the next section must be strictly adhered to in all commercial applications. Any use of BARC India data to support an appeal to viewers, advertisers or their designated agents and representatives constitutes commercial use.

Rules for commercial use of BARC India data

1. All BARC India data is based on a sample, not a census, of India’s television viewing population.

a). Samples produce estimates of population parameters that lie within a range or ‘interval’. The midpoint of the range is used as the point estimate, but what the sample actually produces is an ‘interval estimate’. The table at Annexure I illustrates this point.

b). Some events are commonplace in the population; others appear less often. The rarer an event is, the harder it is to detect in a sample. Here is an example. A Cricket match is viewed by 30% of all viewers in a population of 10 million. A Golf tournament is viewed by 0.1% of all viewers in the same population. A sample of 632 individuals would suffice to estimate the Cricket match's viewership with a 10% Relative Error, i.e. ±3 percentage points around the population parameter, or between 27% and 33%. To get the same relative accuracy for the Golf tournament, i.e. an estimate within ±0.01 percentage points, we would need a sample of over 263,000 individuals. However, if we were prepared to accept a 100% Relative Error, i.e. a range of ±0.1 percentage points, or 0%-0.2%, the sample size comes down sharply to 2,687 individuals. The table at Annexure II illustrates this point.
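The note does not state the confidence level behind its sample sizes. Assuming a normal approximation at roughly 90% confidence (z ≈ 1.645, an assumption on our part) reproduces the 632 figure exactly and comes close to the others; small differences would come from a different confidence level. A minimal sketch:

```python
import math

def sample_size(p, rel_err, z=1.645):
    """Sample size needed so the estimate of an incidence p stays within
    ±rel_err * p. z = 1.645 (~90% confidence) is an assumption; the note
    does not state the confidence level behind its figures."""
    return math.ceil(z**2 * (1 - p) / (rel_err**2 * p))

print(sample_size(0.30, 0.10))   # → 632, matching the Cricket figure
print(sample_size(0.001, 0.10))  # Golf at 10% relative error: ~270,000
print(sample_size(0.001, 1.00))  # Golf at 100% relative error: ~2,700
```

The driver of the explosion in sample size is the (1 − p)/p term: halving the incidence roughly doubles the sample needed for the same relative accuracy.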

c). Two events cannot be meaningfully contrasted if both are rare. Imagine comparing the Golf tournament cited above with a Chess championship also watched by 0.1% of the population. Assume that we are working with a sample of 2,703 to keep both estimates in the 0%-0.2% range. Let us say that the sample produces an estimate of 0.05% for Chess and 0.17% for Golf. It would be tempting to declare Golf more popular by a factor of 3:1, but this would simply be a trick played by the sample and a grievous falsification of reality. The table at Annexure III provides more examples of such ‘Paired Comparisons’, as they are called in Statistics.
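The trick the sample plays here can be made explicit with the standard-error formula for a sampled proportion (again a textbook calculation, with z ≈ 1.645 assumed as the confidence multiplier): around a true share of 0.1%, a sample of 2,703 produces estimates anywhere in roughly the 0%-0.2% band, so both 0.05% and 0.17% are entirely unremarkable outcomes.

```python
import math

def interval(p, n, z=1.645):
    """Approximate ~90% sampling interval for a true share p estimated
    from a simple random sample of size n. z is an assumption; the note
    states no confidence level."""
    se = math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - z * se), p + z * se)

lo, hi = interval(0.001, 2703)
# Both the Chess estimate (0.05%) and the Golf estimate (0.17%) fall
# inside this interval around a true share of 0.1%, so the apparent
# 3:1 gap between them is statistically meaningless.
print(f"{lo:.4%} to {hi:.4%}")
```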

2. BARC India data are best understood as ‘Time Series’ data and not ‘Point’ data.

a). Aggregating across periods, for example by using moving totals or moving averages, damps out random variability. Imagine two coins: one ‘true’, where heads and tails have exactly equal probability of 0.5 each, and the other ‘loaded’, where heads appear with a probability of 0.6 and tails with 0.4. Look at the table below to see how the coins behave after 1, 10, 20 and 50 tosses.


Even after 20 tosses, it is hard to determine which coin is more likely to be loaded. It takes 50 tosses to decide with some certainty. This example is exactly analogous to viewership behaviour across periods being ‘smoothed’ out by using moving, long period, totals. BARC India encourages you to use 4-, 8- or 12-weekly moving totals or moving averages when evaluating a proposition.
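The coin intuition can be checked exactly with the binomial distribution (a standard calculation, not a figure from the note): a perfectly fair coin matches the loaded coin's expected 60% heads rate about a quarter of the time over 20 tosses, so 20 tosses cannot reliably separate the two; over 50 tosses that probability falls sharply.

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k heads in n tosses of a coin with
    P(heads) = p, computed from the exact binomial distribution."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that a FAIR coin shows 60% or more heads:
print(round(p_at_least(12, 20, 0.5), 3))  # ~0.252 after 20 tosses
print(round(p_at_least(30, 50, 0.5), 3))  # much smaller after 50 tosses
```

This is the same reason BARC India recommends 4-, 8- or 12-weekly moving totals: longer aggregation windows make genuine differences separable from noise.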

b). Time series data provide insights that a single point does not. While unusual or extraordinary events will trigger the occasional spike in viewing, most viewing follows almost metronomically predictable patterns. The illustration below tracks overall viewership measured across the entire BARC India panel for four consecutive weeks between May and June 2015.


Every genre/type of content creates a mix of appointment and occasional viewing. Plotting the viewership across multiple weeks helps to visualize the direction in which its popularity is headed. Two points on the path may suggest a pattern contrary to the broad trend; only plotting multiple periods can reveal this. See points marked A. and B. on the charts below. Selective use of BARC India data to bestow an artificial advantage on a channel is not permitted.


c). ‘No. 1’, ‘Leader’, ‘Winner’ and suchlike labels make sense in an Olympic athletics event but only serve to mislead in the context of viewership measurement. When Usain Bolt, Yohan Blake, Justin Gatlin and others exploded from the starting blocks for the Olympic 100 Metres final in London on August 5, 2012, they were less than 10 seconds away from one of the most eagerly watched results in world athletics. Television viewership has no such finish line: viewers do not tune into a winning or losing channel. For a viewer, the channel she chooses to watch at a particular moment, however popular or unpopular it might be with the rest of the universe of viewers, wins her attention for as long as she stays on it.

As options multiply, programming targets ever more tightly defined audience/need combinations. Even the biggest entertainment channel may not appear at all in the viewing repertoire of an International News addict. Audiences can and will be defined in endless combinations of gender, age, NCCS segment, geography and town class. Even if two channels pick nearly identical target audiences, they will attempt to differentiate their content from one another. While some viewers may consistently pick one over the other, many will distribute their time across both.

Claims of leadership must meet the following standards:

  • Clear definition of target audience within BARC India audience taxonomy
  • Clear definition of comparison set
  • Period of comparison to cover at least 4 consecutive weeks
  • Period of comparison to cover at least 4 consecutive clock-hours
  • All data must be available directly and without interpolation or extrapolation from the BMW.
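The checklist above can be mechanised. The sketch below is hypothetical: BARC India publishes no such API, and every field name is invented for illustration; only the thresholds come from the standards listed above.

```python
def leadership_claim_ok(claim: dict) -> bool:
    """Check a leadership claim against the standards above. Field names
    are illustrative assumptions, not a BARC India specification."""
    return (
        bool(claim.get("target_audience"))           # defined in BARC taxonomy
        and bool(claim.get("comparison_set"))        # clearly named competitors
        and claim.get("consecutive_weeks", 0) >= 4
        and claim.get("consecutive_clock_hours", 0) >= 4
        and claim.get("direct_bmw_output", False)    # no inter-/extrapolation
    )

claim = {
    "target_audience": "Females 22-40, NCCS AB, Urban",
    "comparison_set": ["Channel X", "Channel Y"],
    "consecutive_weeks": 2,        # fails: fewer than 4 consecutive weeks
    "consecutive_clock_hours": 6,
    "direct_bmw_output": True,
}
print(leadership_claim_ok(claim))  # → False
```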

d). Audience shares can easily mislead, particularly when comparing small channels. Consider again the example of the Golf and Chess events in point 1.c above. Assume that random factors resulted in a panel composition with more Golf enthusiasts than Chess fans. Even after many weeks of measuring both genres of programming, Golf will continue to look optically larger than Chess. This might be illustrated thus in a pie chart.


Such a chart fails to meet the standards prescribed for comparison in point 2.c above and is not permissible under these rules.

3. Visual representations help the intended recipient of a commercial message. However, they can also be used to mislead, as shown in 2.d above. They must adhere to the following principles.

a). Axes of a graph must be orthogonal, at 90°

b). Scaling must be linear

c). The Y-axis must generally intercept the X-axis at value 0. If an alternative value is used, this must be clearly indicated both on the chart and in its legend
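A chart prepared for commercial use could be vetted against these principles with a simple check. The configuration fields below are invented for illustration; this is a sketch of the three principles, not a BARC tool:

```python
def chart_conforms(cfg: dict) -> bool:
    """Check a chart description against principles 3.a-3.c above.
    Field names are illustrative assumptions."""
    orthogonal = cfg.get("axes_angle_deg") == 90           # 3.a: axes at 90°
    linear = cfg.get("y_scale") == "linear"                # 3.b: linear scaling
    y0 = cfg.get("y_intercept", 0)                         # 3.c: intercept at 0,
    intercept_ok = (y0 == 0) or cfg.get("intercept_disclosed", False)  # or disclosed
    return orthogonal and linear and intercept_ok

# A truncated-axis chart without disclosure fails the check:
print(chart_conforms({"axes_angle_deg": 90, "y_scale": "linear",
                      "y_intercept": 40}))  # → False
```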

Annexure I

The table below highlights the upper and lower bands of a rating.


Annexure II

The table below highlights the sample size required for a rating with an acceptable relative error of 10%.


Annexure III

The table below highlights the variance in ratings.


