It’s difficult to comprehend how much data we now have access to, and how much more is being created each day. It used to be that if your target market was in the 18-24 demographic, you might go to a TV network, establish which programs were most likely to appeal to that group, then target your ads to a specific broadcast time for best results. Simple. But the use of the term ‘specific’ in that example seems out of place now, a world away from the insights available via online data sources. Right now, from my home PC, I can locate fans of a specific TV show and use that info to build a list of target prospects that’d likely be more precise than any traditional demographic bracketing. Better than that, I can extend that data to find out what other topics they’re interested in, what content they’re reading, when they’re reading it, the exact words and phrases they use in discussion, and their general opinions on political or social events. And that’s just for starters.
But of course, that’s not likely to give me the same reach as a TV ad campaign targeted to my intended audience, right? TVs are in almost every house and are watched by millions of people, with ad breaks every ten minutes, so the potential for brand exposure to my target demographic is much higher with TV. But then again, we don’t know that for sure, do we?
An Inherent Vice
Traditional methods of audience measurement are fraught with pitfalls and assumptions based on the data available to the researching group. TV ratings, for example, are based on a very small sample of people chosen as participants. Ratings agencies choose a cross-section for their sample, and by all accounts those figures are indicative, but the actual number of participants is, contextually, minute. How minute? In the US, around 5,000 households are chosen to be part of the representative sample. The population of the US is 316.1 million, so in percentage terms, that sample size is only around 0.0016% of the total potential audience. Of course, you’d extrapolate that each household might have, say, four people, so that percentage is multiplied, but four times 0.0016% is still not a large sample. Measurement agencies do employ other processes to enhance their accuracy, and the data is generally accepted as a solid indicator of overall audience, but when that total audience is broken down into smaller demographic segments, the numbers become a little less reliable.
For example, in Australia, measurement company OzTAM has a total survey group of 6,665 homes, representing (at around four people per home) roughly 0.12% of the population. That sample size is considered quite big and provides a reliable baseline of overall viewing habits. But if you break that sample down further, the data becomes less representative. Of those 6,665 homes, some 2,000 represent regional Australia. Of the people living in those 2,000 households, let’s say 1,000 are aged between 18 and 25 years, and half of those might be female. The more you break down into specific segments, the smaller the sample you’re looking at, and it’s unlikely that 500 females aged 18-25 are indicative of the entire regional Australian population, estimated to be more than 8 million.
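The sample-size percentages above take only a few lines of arithmetic to check. Here’s a quick sketch; the four-people-per-home figure and the 23 million Australian population are illustrative assumptions, not numbers from the ratings agencies:

```python
# Rough sample-size arithmetic for the ratings panels discussed above.

us_households = 5_000          # US representative sample (households)
us_population = 316_100_000    # US population cited above

us_sample_pct = us_households / us_population * 100
print(f"US sample: {us_sample_pct:.4f}% of the population")  # ~0.0016%

au_homes = 6_665               # OzTAM survey homes
au_population = 23_000_000     # assumed approximate Australian population
people_per_home = 4            # assumed average household size

au_sample_pct = au_homes * people_per_home / au_population * 100
print(f"AU sample: {au_sample_pct:.2f}% of the population")  # ~0.12%

# Narrowing to a demographic segment shrinks the usable sample fast:
segment_people = 1_000 // 2    # say, regional females aged 18-25
regional_population = 8_000_000
print(f"{segment_people} people standing in for {regional_population:,}")
```

The point the arithmetic makes is the one in the text: a panel that’s a solid baseline overall can dwindle to a few hundred people once you slice it by region, age and gender.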
The same issues arise for all the various data-gathering methods we have; all are susceptible to a level of error, particularly when narrowing down to specific groups and subsets. This is not to criticise the data companies; there’s no better way to do what they do, and they’re always seeking to innovate and improve their numbers (Nielsen recently started tracking Twitter data to improve their TV ratings info). But the existing systems do have flaws, and there’s no way to know definitively what audience you’re reaching via traditional methods.
And then there was big data…
The advent of social media has unlocked an unprecedented amount of consumer information. And by ‘unprecedented’, I mean exactly that: no one has ever had access to the amount of data you have available to you right now. The possibilities are unexplored; there’s no way to fully understand how beneficial big data is or will be. Your targeting and focus are limited only by your imagination, and the more you track and correlate, the more user behaviours and buying signals you’ll find. It’s somewhat reminiscent of a sci-fi film: we’re moving towards a reality where the data we provide will fuel predictive algorithms that understand what we want and need before we even know we need it. Sounds far-fetched? LinkedIn have used algorithmic analysis, based on data from their 313 million+ users, to correlate the likely career paths of university students, based on their individual skills, attributes and interests. The results get more accurate every day, and as their data pool expands, it’s not a stretch to imagine that one day they’ll be able to accurately predict which career paths will lead people to their happiest, most fulfilling and most productive lives, whether those people know it or not.
Is that scary? Concerning? The thing with data is that it’s fact. It’s based on actual results, on actual preferences and details that people have logged. The thought that this information might be used to predict a person’s decision making is something of a concern – no one wants a computer to tell them what they should do with their life (and, of course, while data can correlate trends in large sets, everyone is an individual and capable of breaking the mould). But data does not lie. Patterns happen for a reason and the better you are at determining those reasons and translating them into actionable insights, the more successful you’ll be at mapping consumer pathways and addressing needs – maybe even before those needs exist.
From ‘broadcast’ to focussed
‘Broadcast’ is the term that perfectly illustrates the shift in targeting mindset. Broad. Cast. The very construction of the word relates to communicating as far and as wide as possible, spreading information broadly in the hope of hitting as many people as you can. And reach is a desirable outcome; the farther your message spreads, the more likely you are to hit your target audience. But we’re moving away from a broad messaging focus. Based on the digital breadcrumbs and data trails people leave online, we can learn exactly where people are spending their time and specifically what they’ll be interested in seeing.
Broadcast. It’s hard to grasp how different targeting and reach are, and will be, as we’re still in the midst of the shift, but the term ‘broadcast’ is ageing, a relic amid the rising clarity of the connected era. Broadcast is gathering round the radio and listening through hours of songs you hate in the hopes of hearing that one song you love. Focussed is saying the name of that song into your device and listening to it, right there and then. Broadcast is advertising in a magazine most likely to reach your target audience. Focussed is knowing exactly which sources your audience is reading, when they’re reading them, and getting your message in front of them at the time they’re most likely to see it. Broadcast is reaching out to anyone and everyone, as far and as wide as you can in the hopes of hitting those most likely to respond and buy your products. Focussed is reaching your exact target consumers at the precise right time in their purchase process in order to maximise conversions and build ongoing business relationships.
It’s difficult to break out of how we do things, of the way we’ve always done them, to imagine how our processes will change in future. But that future is near enough to see, if you’re looking. The connections are already being established; communities and affiliations are already being formed through updates and posts and tweets. Broadcasting, in marketing terms, is losing ground as the path to focussed marketing becomes clearer. How we refine our approach, and how we change established mindsets built through years of common process, is likely to be the more challenging aspect in the evolution of the communications landscape.