Interview: Sundeep Nagpal, Founder-Director of Stratagem Media, on IRS 2013

“The differences could be the consequence of sampling design – in terms of spread of the sample. The sample may not have been spread evenly enough across socio-cultural areas, resulting in a possible imbalance of readership”


BestMediaInfo Bureau | Delhi | February 17, 2014


Let us put a direct question to you first, as a media practitioner who has dabbled in research studies for over two decades. Which is the correct study – the new IRS 2013 or the previous IRS? In other words, which do you believe is a better representation of market reality?

I'm sure many of us have pondered over this question considerably in the last two weeks – and yet I wonder whether any of us has enough of a basis to draw a conclusion. Having said this, in my opinion the benefit of the doubt goes to the previous study. That's because it has remained the basis of decision making over the past decades. Of course, this does not mean that it has always been perfect – just that it has been consistent over several years, and was consequently considered to be more in line with observations of market reality. Most changes that it reflected over time were not radical and seemed quite rational.

I also understand the contrary point of view, which says that if we claim to be true professionals, then we must learn to accept change – but the unstated caveat is that any such change must be palpable.

There is another point. I think it's only natural for anyone to compare the findings of the two studies, despite the difference in methodology. Because, let's not forget that, after all, research, irrespective of methodology, is meant to reflect reality. And so it's only natural to compare the two and question which is 'more real'.

There are a few well-known newspapers that are certainly visible in some markets. According to the latest IRS, they seem to have no readers in those markets! How is that possible?

That's probably because their readership was not reported, for whatever reason, even though they may have been covered in the survey. In other words, it's difficult to accept that their readership has vanished in just one year.

What possible reasons, in terms of research design and sampling, could there be for such large differences, and how could these wide variances have been avoided?

Apart from the difference in measurement methodology, i.e., the DS-CAPI method, the differences could be the consequence of sampling design – in terms of the spread of the sample. The sample may not have been spread evenly enough across socio-cultural areas, resulting in a possible imbalance of readership.
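To make that point concrete, here is a minimal sketch – in Python, with entirely invented numbers and hypothetical areas, not figures from the study – of how a sample skewed across socio-cultural areas can pull a readership estimate away from reality, and how post-stratification design weights would correct it:

```python
# A minimal sketch (invented numbers, hypothetical areas) of how an uneven
# sample spread across socio-cultural areas can skew a readership estimate.

# True readership incidence in each area, and each area's population share.
population = {
    "Area-A": {"pop_share": 0.60, "incidence": 0.30},  # 30% read the title
    "Area-B": {"pop_share": 0.40, "incidence": 0.05},  # only 5% read it
}

# True overall readership, weighted by population share: 20%.
true_reach = sum(a["pop_share"] * a["incidence"] for a in population.values())

# Suppose fieldwork over-sampled Area-B: 70% of interviews landed there.
sample_share = {"Area-A": 0.30, "Area-B": 0.70}

# An unweighted estimate drifts toward Area-B's low incidence: 12.5%.
raw_estimate = sum(
    sample_share[a] * population[a]["incidence"] for a in population
)

# Post-stratification design weights restore the population proportions: 20%.
weighted_estimate = sum(
    sample_share[a]
    * population[a]["incidence"]
    * (population[a]["pop_share"] / sample_share[a])
    for a in population
)

print(f"True reach:           {true_reach:.1%}")
print(f"Unweighted estimate:  {raw_estimate:.1%}")
print(f"Weighted estimate:    {weighted_estimate:.1%}")
```

The point of the sketch is only that, without correct weighting, an estimate inherits the imbalance of the fieldwork rather than the shape of the population.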

Do you think the problems that have cropped up are a result of a lack of validation? Who should be held responsible if validation was not done at every step, and at the last stage before release of the final outcome?

I'm not entirely familiar with the details of any validation process that may have been deployed this time, but I do believe that validation – and perhaps even more so, periodic checks during the course of the fieldwork – can be a valuable input in navigating such a large-scale study.

And I do believe that validation is as integral as methodology, so it is the collective responsibility of the project designers/managers.

Do you think that MRUC was in too much of a hurry, as a result of which it couldn't dig into the details and notice the anomalies in the data?

Again, in this case, I'm not familiar with the resources that were deployed, so it would not be fair to comment on this.

If MRUC indeed noticed some unacceptable anomalies, why didn't it take them seriously?

We can only assume that the number and extent of anomalies were known prior to the release of the data, and if so, I would give them credit for their conviction in the approach.

Can this readership be correlated with ABC data? Would that be a way to identify anomalies more specifically?

For media practitioners, this has been a widely practised (and perhaps the best available) way to “cross-check” the data, but needless to say, even this has to be done very carefully to ensure accuracy – because, for example, sometimes not all editions of a newspaper are covered/reported in ABC.
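As a rough illustration of that cross-check, the sketch below – Python, with hypothetical titles, figures and thresholds, not actual IRS or ABC data – flags title-and-market combinations whose readers-per-copy ratio falls outside an assumed plausible band, while skipping editions that ABC does not cover rather than treating those gaps as anomalies:

```python
# A hedged sketch of the ABC cross-check described above. All titles,
# figures and thresholds are hypothetical, not actual IRS or ABC data.

# (title, market) -> surveyed readers, and ABC-audited copies in circulation.
readership = {("Daily X", "Mumbai"): 0, ("Daily Y", "Delhi"): 900_000}
circulation = {("Daily X", "Mumbai"): 150_000, ("Daily Y", "Delhi"): 300_000}

# An assumed plausible band of readers per copy; real norms vary by market.
MIN_RPC, MAX_RPC = 1.5, 8.0

for key, copies in circulation.items():
    if copies == 0:
        # The edition may simply not be covered/reported in ABC: skip it
        # rather than treat the gap as an anomaly.
        continue
    rpc = readership.get(key, 0) / copies
    if not MIN_RPC <= rpc <= MAX_RPC:
        title, market = key
        print(f"ANOMALY: {title} in {market}: {rpc:.2f} readers per copy")
```

A title showing healthy audited circulation but zero surveyed readers – the situation raised in the earlier question – would be flagged immediately by such a check.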

If this study were to be the de facto option for the industry, what are the likely after-effects on business?

I think we've all seen the immediate after-effect, but even if these issues are resolved, I feel that such hiccups do have an adverse effect on the sanctity of the decision-making process. I'm afraid that any loss of credibility can make advertisers more cynical.

If the data is voted to be unstable, or even selectively stable, then subjectivity would play a greater role in the media decision-making process and, consequently, not only lengthen the process but also make it more cumbersome.

Since claims would be harder to substantiate, pricing could dominate discussions between buyer and seller, price elasticity could increase and transparency could decline even further. This would commoditise media space/time, making it somewhat akin to the realty space.

And media practitioners would have to spend more time working outside their offices, observing behaviour patterns of media consumption.

To say the least, this would make the media business less attractive for qualified post-grads.

Now that a Gwalior district court has stayed the publication and use of IRS until next hearing, who will lose finally – the planners, the publishers or the data?

God help us! I think that this would be the loss of the entire industry.

While we all know that the new findings can be quite detrimental to Print media in general, are there other sections in the industry that are likely to benefit from this result?

The media buyers' role would become relatively more prominent.

Since you have been observing the industry for so long, do you feel the choice of Nielsen for the survey was not 100 per cent right, considering that this agency was under a cloud when it delivered the NRS results seven years ago?

Nielsen had done a good job with the NRS in the early 2000s, when the NRS was jointly conducted by three agencies under the leadership of Katy Merchant (IMRB), Partha Rakshit (Nielsen) and Ashok Sethi (TNS MODE).

As we all know, it's not difficult to go wrong with research. Don't we see so many opinion polls going completely awry in trying to predict election results, as in the recent case of the Delhi elections? Yet sometimes they've got it right, haven't they? So this leads me to believe that it's the people who can connect with reality who make the difference.

So, to answer your question, it's not the organisation as much as the people – the entire team involved in a study determines the quality of the output. And this is as true for any product in the world.

(Sundeep Nagpal is the Founder-Director of Stratagem Media Pvt. Ltd, a media specialist firm predominantly engaged in media analytics as well as media planning & buying for brands of mid-sized, owner-driven businesses. Stratagem Media has also partnered with research agencies on media studies and has conducted numerous training programmes on the technical concepts of media planning and the media sales business over the last 22 years.)

Info@BestMediaInfo.com
