Late last year, I attended an education session on Facebook’s News Feed algorithm, conducted by a social media lecturer of relatively high standing in the field. The session sounded great – insight into how Facebook’s News Feed algorithm actually works, the ‘hows’ and ‘whys’ of what appears in your News Feed and what brands can learn and implement in order to boost their organic reach. Organic reach, as anyone with any exposure to social knows, has been declining at a rapid rate – brand Pages these days are lucky to reach 10% of their total fans with each of their Facebook posts.
The info session sounded like a great learning opportunity, a great way to get some insight into how to work with the algorithm to maximize Facebook performance.
Except, the information presented was largely wrong.
This person, who speaks and presents to a great many people on social media best practices, outlined strategies that were either outdated, ill-informed, or just plain incorrect, yet stated them as total fact. And as other attendees narrowed their eyes and nodded along, I felt like standing up and saying ‘no, that’s not right’. But then that would assume I was right, and given Facebook’s secrecy around the specifics of their News Feed algorithm and how it works, maybe I actually had it wrong. Maybe what was being presented here was the correct info.
In order to get to the bottom of this and clarify for all those looking to maximize the performance of their Facebook content, I did some research into what’s known about Facebook’s News Feed algorithm and how it selects what content will be shown to each user, every time they log on. And while we can’t know every specific factor that plays a part in how content is distributed on the platform, there are quite a few well established principles that clearly indicate the path to best performance.
First off, a bit of history.
When Facebook launched News Feed back in 2006 it was a straight-up, chronological feed of all the activity of your connections.
Remember that? The basic looking blue links, the green speech bubble comments.
The ‘Like’ button was introduced a year later, giving Facebook its first insight into what users were actually interested in. As Facebook became more popular and more people started using the service, the News Feed, logically, got more cluttered. Facebook started using those Likes (along with other measures including shares, comments and clicks) as indicative signals to prioritize the content appearing in each user’s News Feed, ensuring posts from Pages they’d indicated interest in appeared higher in their stream.
This worked for a while, but there were a couple of problems with this basic approach.
The first issue was that people clicked ‘Like’ for different reasons – funny cat pictures were getting heaps of Likes, and thus, flooding peoples’ News Feeds, while more serious news content, which people weren’t clicking ‘Like’ on (because they didn’t necessarily ‘Like’ it), was being totally buried. Publishing click-bait style headlines became a key tactic as these garnered lots of Likes and clicks, pushing them higher in News Feed ranks – eventually Facebook was at risk of losing their audience because people’s Feeds were being crowded with junk and there was no way, under that system, for Facebook to filter and uncover better, more relevant information for users.
In 2013, Facebook acknowledged it had a problem on this front and sought to correct it with a new algorithm that would uncover ‘high quality content’, the first iteration of the News Feed algorithm.
The second issue confronting The Social Network was that Facebook was getting big. Really big. People were adding more friends and Liking more Pages, meaning there was more and more competition for attention within the News Feed listings. But people only have so much time in the day to check their Facebook updates – according to Facebook, an average Facebook user is likely to have around 1,500 posts eligible to appear in their News Feed on any given day, but if people have more connections and Likes than average, that number could be more like 15,000.
Given this, it’s simply not possible for every user to see every single relevant post, based on their connection graph, every day. Facebook’s challenge with the algorithm was to create a system that uncovers the most relevant content each day, providing every user with the best possible experience in order to keep them coming back. But that would also, necessarily, mean that Facebook would have to show some content people had indicated an interest in while excluding other content that may also interest them. The system needed to be incredibly clever to get this balance right.
“If you could rate everything that happened on Earth today that was published anywhere by any of your friends, any of your family, any news source, and then pick the 10 that were the most meaningful to know today, that would be a really cool service for us to build. That is really what we aspire to have News Feed become.” – Chris Cox, Facebook’s chief product officer (to Time Magazine in July 2015)
These were the two major challenges facing Facebook in developing the News Feed algorithm, and despite the protestations of brands who were forced to sit idly by as their organic reach slowly declined (and who were rightly annoyed at Facebook for promoting Likes as a means of reaching audience, then reducing their relevance), the numbers show that Facebook’s machine learning curation process for the News Feed is actually working. In their most recent earnings report, The Social Network reported that engagement was now up to 46 minutes per day, on average, across Facebook, Instagram, and Messenger, with Monthly Active User numbers also continuing to rise.
The continued rise of Facebook shows that they’re getting the user-experience right – brands don’t like it, many users don’t even know it’s happening, but the News Feed algorithm is working as a means of rationalizing and boosting user activity.
This finding, in itself, highlights just how much Facebook understands about their users and their likely preferences.
Inside the Machine
So how does the News Feed algorithm actually work? While the company’s understandably tight-lipped about the specifics of the News Feed calculations (largely because it’s continually evolving), the basics have been communicated by Facebook several times over the years.
Back in 2013, when Facebook introduced the first version of the News Feed algorithm, The Social Network noted four key points of focus for people creating content on the platform:
- Make your posts timely and relevant
- Build credibility and trust with your audience
- Ask yourself, “Would people share this with their friends or recommend it to others?”
- Think about, “Would my audience want to see this in their News Feeds?”
Those core principles remain the fundamentals of the News Feed – in a 2014 interview with TechCrunch, Facebook News Feed Director of Product Management Will Cathcart outlined a similar listing for the ‘most powerful determinants of whether a post is shown in the feed’:
- How popular (Liked, commented on, shared, clicked) are the post creator’s past posts with everyone
- How popular is this post with everyone who has already seen it
- How popular have the post creator’s past posts been with the viewer
- Does the type of post (status update, photo, video, link) match what types have been popular with the viewer in the past
- How recently was the post published
Cathcart’s advice led to the development of this equation, which gives a basic overview of how News Feed prioritizes content:
(Image via TechCrunch)
Of course, as noted, there are many more factors than these at play, but at its most basic, this is the logic behind how Facebook chooses and displays the most relevant content to each user. But that system is always being refined.
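Facebook has never published the actual calculation, but the determinants above can be combined into a toy, EdgeRank-style scoring sketch. Everything here – the field names, the weights, the decay curve – is invented for illustration, not Facebook’s real code:

```python
import time
from dataclasses import dataclass

@dataclass
class Post:
    creator_affinity: float   # how often this viewer engages with the creator (0-1)
    type_weight: float        # weight for the post type (photo, video, link...)
    global_engagement: float  # Likes/comments/shares/clicks from everyone who saw it
    created_at: float         # unix timestamp of publication

def score(post: Post, now: float) -> float:
    """Toy EdgeRank-style score: affinity x type weight x engagement x time decay."""
    age_hours = (now - post.created_at) / 3600
    decay = 1.0 / (1.0 + age_hours)  # recency: newer posts decay less
    return post.creator_affinity * post.type_weight * post.global_engagement * decay

def rank_feed(posts, now=None):
    """Order the eligible posts from highest to lowest score."""
    now = now if now is not None else time.time()
    return sorted(posts, key=lambda p: score(p, now), reverse=True)
```

In this toy model, ranking the ~1,500 eligible posts reduces to sorting by that score and showing the user the top slice.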
Those refinements are borne of necessity – more people using Facebook means more content and more variables to take into account to ensure the best possible user experience for each individual. To get an insight into just how complex that equation is, take a look at the documentation behind Facebook’s ‘Unicorn’ social graph indexing system. While Unicorn was built to power Facebook’s Graph Search engine, the way that system works highlights just how many factors can come into play when trying to uncover the most relevant content for each user – particularly when you consider that a typical Facebook user’s relationship graph looks like this:
In the Unicorn documentation, Facebook refers to the number of ‘nodes’, signifying people and things, and ‘edges’, representing a relationship between two nodes:
“Although there are many billions of nodes in the social graph, it is quite sparse: a typical node will have less than one thousand edges connecting it to other nodes. The average user has approximately 130 friends. The most popular pages and applications have tens of millions of edges, but these pages represent a tiny fraction of the total number of entities in the graph.”
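To make the nodes-and-edges idea concrete, here’s a minimal, illustrative sketch of a sparse social graph stored as adjacency sets. The names and structure are invented for the example – Unicorn’s real index is far more sophisticated:

```python
from collections import defaultdict

class SocialGraph:
    """Minimal adjacency-set graph: nodes are people/Pages, edges are relationships."""

    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of connected nodes

    def add_edge(self, a, b):
        # Friendships are symmetric, so store both directions.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def degree(self, node):
        # For a typical user this stays small (~130 friends), i.e. the graph is sparse.
        return len(self.edges[node])

    def mutual(self, a, b):
        # Mutual connections: the kind of edge-intersection query Unicorn answers at scale.
        return self.edges[a] & self.edges[b]

g = SocialGraph()
g.add_edge("alice", "bob")
g.add_edge("alice", "carol")
g.add_edge("bob", "carol")
```

The sparsity Facebook describes is why adjacency sets (rather than a dense billions-by-billions matrix) make the problem tractable at all.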
And in the introduction to the document, Facebook notes that:
“Unicorn is designed to answer billions of queries per day at latencies in the hundreds of milliseconds”
Even without a full grasp of the technical complexities of such inter-connectivity, you can still imagine how complex Facebook’s algorithm needs to be to serve up the most relevant content, and how many potential variations need to be taken into account.
This is why it’s almost impossible to explain the full extent of how the algorithm works, and why Facebook largely avoids doing so. It also enables them to make changes without worrying about what they’ve said previously – if Facebook were to say ‘this is how the system works’, then make a change that altered that, brands that had structured their Facebook strategy around the previous rule would be disadvantaged (which is pretty much what happened with ‘Likes’ when they changed the rules). As such, the core principles noted above remain the driving force, and the key elements marketers should logically be focused on. The further complexities and refinements work to support these fundamentals – adhering to them should stand you in good stead.
In line with this, Facebook’s always seeking to refine and update the News Feed algorithm to better serve their users and deliver an evermore relevant on-platform experience. Time Magazine recently reported on how Facebook uses two primary devices to help refine and improve the News Feed algorithm – a team of around 20 engineers and data scientists who assess and evaluate the results of tests and updates to determine the best evolution of the system, and a group of some 700 reviewers, called Facebook’s ‘Feed Quality Panel’, who deliver real, human feedback on their News Feed results, which then help the data team make more informed choices.
“…[members of the Feed Quality Panel] write paragraph-long explanations for why they like or dislike certain posts, which are often reviewed in the News Feed engineers’ weekly meetings. Facebook also regularly conducts one-off online surveys about News Feed satisfaction and brings in average users off the street to demo new features in its usability labs.”
Through this process, combining feedback from real people and improved machine learning, Facebook is continually moving the News Feed algorithm forward and uncovering new best practices. This is why we see so many changes and updates to the algorithm rules – newer factors like ‘time spent reading’ are brought in as Facebook learns from user behavior. Content that people click ‘Like’ on before reading, for example, is not given as high a rating as content that’s Liked after reading (after a person has clicked through on a link), because if you’ve taken the time to read something and then Liked it, that’s considered a stronger endorsement of quality than a knee-jerk response to a headline. Such refinements are logical and thoroughly tested, and Facebook’s gone to efforts to underline that the way the system is weighted is entirely dictated by each individual user’s actions and preferences.
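As a purely hypothetical illustration of that kind of weighting – the threshold and weight values below are invented, not Facebook’s – the read-before-Like distinction could look like:

```python
def like_signal_weight(clicked_through, seconds_on_page, read_threshold=10.0):
    """Hypothetical weighting: a Like given after actually reading the linked
    content counts for more than a knee-jerk Like on a headline.
    All numbers here are invented for illustration."""
    if clicked_through and seconds_on_page >= read_threshold:
        return 1.0   # considered endorsement: clicked through and spent time reading
    return 0.3       # headline-only Like: a weaker quality signal
```

The point isn’t the specific numbers – it’s that the same user action (a Like) can carry different weight depending on the behavior around it.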
The way Facebook’s algorithm defines ‘high-quality’ in this sense is entirely user driven – if you like cat memes but hate posts from The New York Times, you’ll be shown more of the former and less, if any, of the latter.
“…there’s a line that we can’t cross, which is deciding that a specific piece of information – be it news, political, religious, etc. – is something we should be promoting. It’s just a very, very slippery slope that I think we have to be very careful not go down.” – Adam Mosseri, Project Management Director for News Feed
Due to this, it’s up to each individual brand and business to create content that appeals to their specific audience, and caters to that audience’s needs.
It’s worth noting too, in considering Facebook reach and how to work with the system to maximize reach and performance, that the actions users take after exposure to your content are far more important than them seeing it in the first place.
This was pointed out by Facebook marketing expert Jon Loomer, who noted that even if your Page reach has declined, that’s not really relevant – what is relevant is whether your website clicks have also declined as a result.
“Let’s assume for a moment that reach actually did drop. If all engagement remained healthy — including website clicks and conversions — what does that drop in reach mean? It would mean that Facebook was showing your content to people most likely to engage favorably — which is what we as marketers and users would want.”
It may just be that, as a consequence of Facebook improving their algorithm, your Page reach will inevitably drop, because your content’s being shown to a more targeted and focused audience based on their behaviors. And that isn’t necessarily a bad thing.
In all, the main thing to focus on in order to maximize Facebook reach and response is quality content, as defined by audience response. The more utility and value you can provide for your audience, the more likely they’ll want to see more information from you, which they’ll indicate through their Facebook actions – be those direct (Likes, shares, comments, clicks) or indirect (time spent viewing, word-of-mouth via off Page comments). Facebook’s tracking all of it, and in this sense, the core fundamentals of Facebook content remain the same as they did the day the News Feed algorithm was introduced back in 2013:
- Make your posts timely and relevant
- Build credibility and trust with your audience
- Ask yourself, “Would people share this with their friends or recommend it to others?”
- Think about, “Would my audience want to see this in their News Feeds?”
The News Feed is constantly evolving, but its fundamental principles remain the same. Understanding your audience is key to maximizing your Facebook reach.
The first examples of the new Twitter/Google partnership are starting to filter through, with Search Engine Land providing screenshots of confirmed Google tests of tweets in search results:
As you can see from this example, a search for ‘#maythe4thbewithyou’ on Google has provided results from Twitter, where the topic was trending. You can see too, the option to click through for more tweets. This example is via mobile, where the current testing is taking place, but it provides our first insight into how Google may be looking to incorporate real-time tweets.
The first question I had about the new Google/Twitter partnership was whether this would provide SEO value. If Google opted to show tweets high in search results, then definitely, there’d be SEO interest there – showing up at the top of the SERPs, in any form, is a big win for brands – and these early examples show that there is, indeed, clear SEO value. Twitter results may only appear for trending issues or maybe there’ll be a recognition value placed on Twitter activity to determine whether listing the Twitter results is likely relevant to the user query, but these screenshots show that it may be possible to reach high-visibility areas of Google’s SERPs via your Twitter presence.
This will invariably mean more brands will be investing more into their Twitter presence, as it significantly increases the audience reach potential of tweets. The change also underlines the fact that social search is going to be a significant battleground, and one that organisations will need to take into account.
A likely element of Google’s calculations on when and where to display tweets in search results will be the relevance of the person or people tweeting about the topic. In the example above, #maythe4thbewithyou was a trending hashtag (and the search is specifically for that hashtag), so it makes contextual sense that Google considers this relevant to the user’s search, and thus, would show the user tweets relevant to the topic. But in one of the other examples provided by Search Engine Land, the logic behind why the tweet was shown seems slightly different.
As you can see, beneath the first result, the search conducted was ‘mayweather pacquiao’ and a tweet from Gary Valenciano has appeared in the results. Gary Valenciano is a verified account with 2.43 million followers, so while the correlation between the tweet itself and the search term isn’t as clear as the first example, it does seem that a profile’s social clout will play a part in Google’s logic on what tweets to show and when. The first contention is supported again in the third example shown in Search Engine Land’s post:
Steve Benfey has 286 followers and isn’t verified, but #CarlyFiorina is a trending topic, so just like #maythe4thbewithyou, it’s the popularity of the topic that’s dictated its relevance in the SERPs, not the tweet originator. This would suggest there are at least two different logics dictating the appearance of tweets in search results – there’s a ‘Popular on Twitter’ break-out, which’ll show trending tweets related to the search query, and another option which shows related tweets based on the social standing of the tweeter (or possibly the engagement levels on the individual tweet).
In the case of trending topics, this is effectively word-of-mouth SEO. You’re getting a display of real-time discussion – the more discussion about the topic, the more likely the searcher will be shown tweet results in the SERPs. From a marketing perspective, this addition will likely increase the rate of newsjacking and brands trying to tag onto trending topics, as, if successful, they’ll get the double-benefit of appearing not only in the trending discussion on Twitter, but also in related Google search results. It’ll also highlight the importance of brand awareness efforts in regards to trending topics – imagine if you were searching for ‘Nike basketball shoes’ and a trending topic was how an NBA player’s Nikes fell apart on him during a game. That sort of discussion would be hard to ignore for a prospective customer – it’ll be more important than ever for brands to be monitoring Twitter trends to manage or remain aware of such occurrences in order to mitigate potential negative associations.
Of more marketing value, however, is option number two presented here – appearing in the search results based on tweet mentions from prominent users. This will amplify the importance of influencer marketing on Twitter – using the same example as above, what if you were searching for ‘Nike basketball shoes’ and a tweet from NBA star Kobe Bryant appeared high in the results, thanking Nike for making him such great sneakers? That could play a part in your decision making process, right? Of course, as with everything, staged responses or canned endorsements will be obvious to the searcher, and it’s likely people will filter out any such tweets that are overly promotional. But real responses, from real influencers on Twitter, might just have a whole new value proposition for brands, depending on how these tests play out.
The Sleeping Giant
Social search, elaborating on the context of your search results with the real-time discussion from social media platforms, is fast becoming a big deal. People are placing less trust in brand messaging these days, and a significant impetus for that change may be that they simply no longer have to. In times past, brands had more control over the flow of information: they told consumers what they wanted them to hear and managed the message according to their own strategic goals. But in the connected era, in which people have access to all the information, all the time, consumers can inform themselves. Studies have shown that people are already more than halfway along the purchase cycle before they even get in touch with brands – they’re not coming to your sales reps looking for more info the way they used to. The value is in relationships, in having a higher value proposition than the product itself. In this context, social search is more important than ever – because what’s more valuable than a recommendation from the people you know and trust?
The Google/Twitter partnership only underlines the rising importance of social search and of adding that additional context to the search process. But Facebook knows this too, and you can bet, they’ll be planning their own response.
Graph Search 2.0
Facebook Graph Search was largely seen as a failure. Or not a failure, as such, but a glitchy system that never quite delivered on its massive potential. Facebook acknowledged this – Mark Zuckerberg himself has noted that the results weren’t consistent. But just as Google and Twitter move to stake their claim on social search, Facebook will be looking to roll out Graph Search 2.0, and it will be a massive improvement on the first iteration.
Facebook’s been quite overt in its efforts to keep its audience within its own walls – most specifically with its push to get major publishers to post first-run content direct to Facebook. A big part of holding audience attention and maintaining user experience is search, giving users the ability to easily find what they want within the Facebook eco-system. Facebook has been cautious about how they roll out Graph Search due to privacy concerns and the need to protect the value of their treasured user data, but a new version of Graph Search will be coming soon. The Google/Twitter partnership will only hasten its arrival.
Whatever comes, it’s going to be interesting to see how the digital marketing world responds to having real-time tweets in Google search results. These first examples show that the new partnership could have significant implications, and will likely raise the value of Twitter as a marketing and brand-relevance platform. It’s an exciting development to watch, and I’m looking forward to seeing how it all comes together.
Not a great day for Twitter. After the micro-blogging giant’s first quarter earnings report was leaked an hour earlier than expected, Twitter stock dropped by 6%, and finished the day down close to 20%. The losses were on the back of a less than spectacular earnings report, where Twitter reported revenue of $436m – around $20m below estimates. Twitter’s CEO Dick Costolo, in the company’s official release, said the gap was ‘due to lower-than-expected contribution from some of our newer direct response products’ – these would include some of Twitter’s latest product offerings, like changes to direct messaging, native video sharing, and live-streaming, via Periscope. Twitter’s report also outlined the areas where growth has been solid – but of more interest at this stage is what this result will mean for the future development of the Twitter platform, particularly when considering the rate at which they’ve been pushing out changes in the last few months.
Change is as good as…
Twitter is pushing out more changes, additions and updates than ever before. Senior Vice President of Product Kevin Weil was appointed in October 2014, and his leadership has seen a significant shift in momentum for Twitter products. Whereas once there were long delays in testing before rolling out, Weil appears to have streamlined the process – this is evident in the array of changes we’ve witnessed, from new advertising options to improved embedding options in order to spread the reach of the platform’s properties. While every platform change is approached with some scepticism – every platform has its traditionalists, overly protective of their cherished user-experience – most of these updates have been integrated and adopted well by the growing Twitter community. The latest move on this front was the recent unveiling of Twitter’s new home page for non-users, an attempt to entice more people to sign up and build its overall audience.
While these changes have gone well within the overall scheme of things, one concern stemming from the latest results is that the company will be under pressure to move even faster and seek more ways to monetize the platform. The last thing Twitter users want is to see it go the path of Facebook and start restricting reach in order to incentivize ad buy-up, but that’s invariably one element that could be considered. This is where the delicate balancing act has to be maintained – how do you incentivize new users, monetize the audience you already have, and at the same time, maintain harmony amongst your existing user-base? It’s a challenge facing every major platform, and one which is in stark view for Twitter today as it weathers the backlash resulting from its numbers.
Plenty to smile about
But it’s not all bad news. Twitter’s official report actually painted a fairly strong picture, with monthly active users up 18% and advertising revenue up 72% year-over-year. There’s little doubt the company is in a good position – it’s not as if people are turning away from the platform – it’s just not moving at the rate many (including Twitter itself) had hoped. But there’s a range of solid options coming up that may help the company turn the results in its favour very quickly. The recent growth of its new live-streaming company Periscope is a big positive, particularly the rate at which it’s increasing its market dominance over early-released rival Meerkat. The first element of Twitter’s deal with Google has been announced, with Promoted Tweets now available via Google’s Doubleclick ad platform. The Google deal, in itself, is loaded with potential and could see a significant boost in new users and user engagement, particularly if there’s an SEO value linked to tweets. There’s also the additional search functionality likely to be included as part of the partnership, and the subsequent ad options that would go along with that capability. Twitter’s overall picture looks good, despite this tremor in investor confidence. But tremors can cause lasting impacts, and it will be interesting to see what happens next.
The next battleground
One of the biggest user concerns stemming from unsteady results is the fear that the platforms will change, and the service they know and love will be impacted. Twitter is acutely aware of this, and over time they’ve shown their understanding of the value of user-experience by not making large-scale changes and not tipping the balance too far in favour of ad dollars or new users. An imperative on every listed company is the need to increase revenue, a need which always puts pressure on the way things are. But social media networks know that users can and will migrate – attention is the true currency of the social media industry. As such, I wouldn’t expect to see massive changes in user-experience, though I am looking forward to seeing what new products and options come about in the coming months – particularly as a result of the new deal with Google. One of the next big battlegrounds will be social search, an area Facebook is already pushing into with the refinement of Graph Search. The Twitter/Google partnership is likely to be their biggest competition on this front, and as social search becomes more important, as people look to validate more of their search queries via their social graphs and groups, the competition in that sector will become significantly more intense. I, for one, am pretty interested to see where it goes.
I wrote a piece recently questioning whether the rise of social media has been a positive or negative for our overall levels of political engagement. The idea for that post came from the general level of ambivalence to a recent election in my home state, and how that same sense of lessened political impact seemed to be pervading through social networks and online conversations. The question, really, was about whether giving the audience more specific control over their news inputs would mean they would actively tune out content that was of little interest, and whether political news would suffer as a result.
My findings in that investigation were that social media is not necessarily lessening political engagement, but that political parties do need to consider where and how the audience is interacting in order to keep them engaged and maximise the potential of their messaging. In large part, it seems many political organisations have not advanced their communications and outreach strategies in line with the social media communications shift, and as such, they’re not reaching their audiences as effectively – a concern that will be exacerbated as the next generation of digital natives moves into more politically and socially aware phases of their lives. Failing to reach them on the platforms where they are most active will lead to political failure – the numbers do indicate that political campaigns that had received traction in social media were significantly more impactful and ensured wider awareness of local political issues.
In order to extend this further, I decided to investigate political and news engagement based on Google search trends – the news stories people are seeking to learn more about via online search. While not definitive, Google search patterns can provide an indicative measure of the public ‘pulse’, the issues of most relevance to any given region. By looking at what we’re searching for, I hoped to get an idea of which issues were gaining traction amongst Australian internet users and build an understanding of what that means for how we communicate and engage with digitally savvy audiences. What I found was both obvious and enlightening, in equal measure.
What People Want to Know
To start with, I wanted to get an idea of internet news trends – of the stories that have gained the most traction over time. My suspicion was that by looking through the most popular Google searches, year-on-year, I’d find that we are, indeed, far less politically engaged or news driven overall, as I suspected the charts would be increasingly filled with searches for Justin Bieber and One Direction as time went on. That wasn’t the case – the above chart looks at the most popular Google queries globally. I shaded each topic in a colour – blue for tech, pink for entertainment, orange for news and current affairs and green for sport. As you can see, if anything, people have been searching for news content more than ever in the last few years, which suggests the interest in news and current affairs is still strong, or at least on par with gossip and entertainment.
What I also found interesting was that tech queries on Google went way up in the mid-to-late 2000s, dominating search in 2006-07, but have died down since. Now, given the growth of social media since then, I don’t think this suggests people have become any less engaged in tech – I think it’s more likely that this exemplifies a change in search behaviour. These days, you’re much less likely to go to Google to search for ‘Facebook’ because everyone knows where to find it. Everyone accesses it through apps or links – social media and apps are definitely more prevalent now than they were in 2007, but the way we come to them has changed. That behavioural shift is indicative of the larger trend of how search is being used – it’s hard to say people are searching for news content more frequently in 2014, despite these numbers, because the way people come across news content online has totally changed.
In any event, looking at global trends only forms part of the overall picture – news stories that are relevant to people in Australia might be totally irrelevant on a global scale. Breaking down the search to a regional level would provide more indicative insight into how politically engaged Australians are.
First, I looked up the trending Google searches for Australia over the last four years. What was most interesting about this is how few local news stories made the cut – the mentions of ‘RFS’ and ‘AEC’ in 2013 relate to bushfires and elections, and the mention of ‘MyGov’ in 2014 is news related, but the rest is dominated by entertainment. This suggests that maybe we’re not seeking more information on important local issues – but then, I thought, generic search is probably not the most indicative measure of news engagement anyway. I switched the analysis to searches conducted in Google News instead – the news stories Australians have been seeking more information on in that same time period.
Again, not much local content in that list – I highlighted the local stories specifically to better exemplify the data. As you can see, we searched for ‘Julia Gillard’ and ‘Qantas’ in 2011, ‘Misogynist’ also relates to Gillard in 2012, and we have the ‘Melbourne earthquake’, but outside of that it’s all world news. The most searched for news content by Australians is rarely even about Australia – which is concerning, considering the impact local news issues have on our day-to-day lives. The question is, are we paying more attention to global news to the detriment of local issues?
The Currency of Clicks
Here’s the thing: there’s been much angst in recent times about the negative effect online media is having on journalism, and the quality of journalism in general. Just recently, Edelman published a study on modern media consumption, and part of their findings was that 75% of journalists now feel more pressure to think about their story’s potential to get shared on social platforms. Whether you like it or not, the media economy is now driven by the currency of clicks – the website that gets more traffic makes more money, and those signals, the stories that are generating clicks, are now being used to decide what stories get covered and what gets more attention. You see this every night in the evening news – there are far more entertainment and gossip-type stories making it into the broadcast because that’s the content that’s generating clicks online. News outlets want to provide audiences with what they want, therefore more of this content, of arguably lesser news-relevance, is being reported.
In considering this, and looking at the Google search data, what I think we’re seeing is the effect of a more connected global community. Social networks have provided us with unprecedented access to the global conversation – just last week, I tuned in and watched a building fire in Brooklyn being streamed live via Periscope. The connection is immediate, we’re more connected to the wider world than ever before, but as a result, our attention may be being dominated by global stories, while local issues fade into the background. To clarify this further, I sought to match up Australia’s news search habits with those of other nations to see whether we, as a smaller news nation, are seeing less local content than others.
A Question of Relevance
Using Google Trends, I looked up the most searched terms in Google News – Australia for the past 6 years. As you can see, the local issues (pink) were still not largely prevalent, with world news dominating in 2014.
I then compared that to the US:
Local news searches in pink.
Now, it makes sense, to a degree, that there would be more local news stories searched in the US, as many of these stories are of international relevance. But have a look at the political discussion in America. Politics features prominently, a lot more prominently than it does in the Australian topics.
In the UK, political issues also feature, though their news searches are dominated by sport, particularly in the latter years. But even when it is sport, that’s still local discussion, something largely absent from Australia’s news searches. For comparison, I charted the mentions of local news from each region:
Comparatively, the volume of local news searches in Australia is well down on the US and UK, especially when you add in local sport as an extension of local news content. What this suggests is that we are, in fact, becoming more global in our approach to news – which is undoubtedly a good thing, as greater global awareness leads to increased understanding overall. But our newfound connectedness with the global conversation may mean we’re becoming less engaged with the not-so-shiny, less attractive, more boring local news content. And those local issues need our attention and interest.
Relevance vs Popularity
So let’s say this is indicative, that we’re losing the local audience on the news and current affairs issues of significant relevance to them and their day-to-day lives. What then? What can we do to address the regional news attention deficit? The stories that people are clicking on and searching for are the ones they’re interested in, that engage them, so how can we make local politics or societal concerns more popular? This is a question that all political groups need to be considering – in no way should issues be made more divisive or sexy through artificial means, but there is a legitimate concern that local issues are going to receive less and less attention over time, and that’s incredibly bad news for the advancement and improvement of our immediate surroundings. Political groups need to be working to integrate social media and social media communications into their overall mix, into reaching their audiences where they increasingly are. Many are doing this, there’s a whole range of politicians who are actively engaged on social platforms, but there’s a definitive need for politicians to be using social media to connect with their audiences and increase awareness of issues. If the public loses interest in politics, we lose in general – we need our elected officials and leaders to be representing the views and interests of the wider community, and to be relating their messages back to the people in the methods and means they are most engaging with.
While only one part of the puzzle, the Google trends shown here indicate local news engagement is slipping. There’s no definitive answer as to how to combat this, but it’s a question all communicators should be considering – if global news items are dominating attention, how do we tell stories that raise attention and awareness among our audiences? How do we ensure important local issues remain at the height of public interest?
So Meerkating is now a thing. The immensely popular live-streaming app Meerkat has timed its rise to prominence in alignment with the annual South by Southwest Festival, leading to a perfect storm of Meerkats streaming from every talk, launch and dinner event. And it’s fun – it’s amazing to have such a level of access to the festival and its participants – the closest many of us, particularly those of us in other parts of the world, will ever get to actually being there and experiencing the event as it happens. I’ve loved jumping onto a Meerkat stream and getting Brian Fanzo’s perspective or Gary Vaynerchuk’s insight, all happening right there, as I watch. There’s a lot to like about Meerkat – but its time in the sun may be short-lived.
In January, Twitter purchased Periscope, a video-streaming service that offers the exact same capabilities as Meerkat, and then some. Twitter’s been working with Periscope since November 2014, and was reportedly polishing the beta version when Meerkat – which was built in just 8 weeks – was released into the social sphere. Reports thus far have indicated that Periscope operates in much the same way as Meerkat – it will function as a separate app and enable Twitter users to create live streams, the links to which are tweeted out to your Twitter followers (or to selected users). Periscope will also give users the chance to view live streams or watch previously recorded ones, something not on offer via Meerkat. Another point of difference is that comments posted on Periscope won’t show up in your Twitter stream – it’s not clear yet whether this is a positive or a negative. While it is odd seeing half-messages or seemingly random interactions show up in your Twitter stream – which are actually responses to a Meerkat that user is watching – those conversation fragments can also spark interest in checking out the link yourself. Time will tell what effect this has on viewers.
Reports have suggested that Periscope is a far more polished and functional affair – which makes sense, given the short dev time for Meerkat – but has Periscope missed the boat and enabled Meerkat to establish a following?
Riding the Blue Bird
There is one other thing working against Meerkat – it’s been built on the back of Twitter’s network. As stated in the Meerkat documentation, ‘everything that happens on Meerkat happens on Twitter’, and this could work against them as, effectively, a competing service. Already, Twitter’s moved to restrict Meerkat’s access to its social graph. While it’s unlikely Twitter would cut Meerkat off completely, building their network on Twitter’s land could prove problematic when Periscope does, eventually, get released – though some have also noted that this strategy may end up working in Meerkat’s favour.
The Race or the Service?
There was a question raised at a SXSW event over the weekend – an event I was watching via Meerkat – and it gets to the heart of the questions over the future of Meerkat and whether the app will exist long-term. The question, posed by Bryan Kramer, was:
My response to this is that the functionality of Meerkat is an extension of social connectivity – it brings everything another step closer. That’s really the ultimate goal of social media, to facilitate connections between people and groups and enable everyone to be part of the wider conversation. That’s the ethos that Mark Zuckerberg stands by, the mission to connect everyone and harness the power of collaboration to bring about real connection and, ideally, real change. In this vein, Meerkat is a perfect extension of such capacity – it’s the next step, allowing anyone to broadcast easily and in real-time to the rest of the world. And in that sense, the platform itself isn’t really the thing.
Whether it’s Periscope or Meerkat – or something else we haven’t even heard of – Meerkat’s live-streaming functionality is exciting and innovative, and it’s already got some of the world’s best social media minds enamoured and thinking about how to utilise it in new ways. While I anticipate Periscope being a great product, even if it does supersede Meerkat, time spent learning and seeing what you can do via Meerkat won’t be wasted. And maybe there’s room for both apps in the market – maybe some people will better align to the DIY feel of Meerkat and refuse to use Periscope even if it is better. It’s likely that this window of opportunity Meerkat’s been afforded will enable it to establish a loyal audience of some kind – but regardless of how it pans out, the important element to note here is the functionality, the new capability and capacity being offered by live-streaming video. Network capacity of the past would’ve meant such innovation was simply impossible. But now, you might get the opportunity to experience celebrity events from the front row, live streamed by your favourite celebrity him or herself – access you’d never have dreamed of, and a powerful vehicle for engagement and building community.
Rather than worrying about who’ll win the race, take a moment to take in the spectacle of the event. It’s a fun ride that’s worth getting onto.
Have you heard of LinkedIn’s University Finder app? It’s something LinkedIn released last year that aims to help students determine which university they should attend in order to reach their career aspirations, based on the job they want, the subjects they’re studying, the companies they’d like to work for and where they want to live. The app does this by utilising LinkedIn’s masses of user data, highlighting where people who’ve studied at different institutions have gone on to find employment.
While that functionality in itself is pretty great, it’s also one of the best ways to access LinkedIn’s comprehensive data banks. You see, LinkedIn is pretty guarded with their API. This is understandable – many users don’t want their personal career info to be available to anyone and everyone – but anonymised data doesn’t subvert any privacy restrictions. And that data, being able to sort and sift LinkedIn’s info, can have significant value for more than just prospective university students – here are a few ways you can use LinkedIn University Finder to better inform your own marketing and outreach efforts.
1. Where to Focus When Building Relationships with Future Decision-Makers
So, the filters of LinkedIn University Finder are:
Based on the selection you make here, the top universities for your chosen career preferences are displayed below:
Pretty simple, right? But access to all that data also means you can gain insights beyond academic recommendations. Let’s say, for example, you were in charge of an up-and-coming IT company and you wanted to boost your profile in order to attract the best candidates and enhance future business prospects. You could choose the specific area from the study field:
Then the specific location you’re interested in:
And the results will show the most popular universities for your chosen interest in your chosen region:
Now you know, based on actual employment histories, where the majority of students in your area of expertise are studying within this region. Using this, you can work with the relevant universities to establish connections with your business – develop sponsorship or graduate programs, arrange to do talks or work with students. All these efforts can help build connections with your business, boosting brand awareness and, ideally, making your company a desired employment option among top graduates. Even if winning over future employees isn’t your goal, by making connections with future business leaders at this stage, you’re helping establish connections for partnerships when those graduates progress to decision-making levels.
This is a long-term strategy which can help strengthen community and brand awareness and can help you gain an audience within the circles of your target customers, before they’ve reached that next stage.
2. What Skills to List on Your LinkedIn Profile
This is a quietly brilliant strategy, and one I can take absolutely no credit for. In a recent post on Mashable, Joshua Waldman outlined how you can use LinkedIn University Finder to identify which skills you should list on your LinkedIn profile to maximise your chances of gaining employment in your preferred industry and role. The strategy (which Joshua has detailed more comprehensively in his post) goes like this:
- Find the schools with the most graduates progressing to the roles and companies you want to work for
- Go to the LinkedIn Pages for those schools and filter their listings by the company and role
- This will then show you what skills those graduates have listed on their profiles, in order of frequency – these are the skills you want to be listing on your profile to increase your prospects (assuming you have the ability to back-up these skills, of course)
Waldman recommends doing this same research with several schools and job functions to get a more comprehensive idea of the skills those who are being employed in the roles you’re seeking are listing. Once you have a spreadsheet of all the listed skills, sort them by frequency of mentions, and you have a list of what you should be including on your profile to increase your chances of getting the role you want. Pretty clever, huh?
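Waldman's aggregation step – tallying the skills you've copied out of several school/role searches and ranking them by frequency – can be sketched in a few lines of Python. The skill lists below are purely hypothetical placeholders for whatever you pull out of the LinkedIn pages:

```python
from collections import Counter

# Hypothetical skill lists copied from graduate profiles across
# several school/role searches on LinkedIn.
skills_by_search = [
    ["SEO", "Content Marketing", "Google Analytics", "Copywriting"],
    ["Content Marketing", "SEO", "Social Media", "Email Marketing"],
    ["Google Analytics", "SEO", "Social Media", "PPC"],
]

# Flatten the lists and count how often each skill appears.
tally = Counter(skill for skills in skills_by_search for skill in skills)

# The most frequent skills are the ones worth listing on your own
# profile (assuming, of course, you can actually back them up).
for skill, count in tally.most_common(5):
    print(f"{skill}: {count}")
```

A spreadsheet does the same job, but if you're pulling skills from many searches, a quick script like this saves the manual sorting.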
3. Where to Target Your Ads
This data also, inadvertently, gives you an insight into where you should consider targeting your advertising. For instance, if you’re trying to reach marketing consultants in Melbourne, Australia, LinkedIn University Finder tells you that the most likely places those people are employed are:
I could then use this info to target my ads on Facebook or Twitter, or conduct research into whether I could advertise in their internal publications – find ways to reach them where they’re more likely to see it. Alternatively, it also highlights where prospects are not – if I do a search for people who’ve studied ‘Hair Styling/Stylist and Hair Design’ in the Melbourne area, I find that there really aren’t many of them listed on LinkedIn. Not a big surprise really, but if I were considering advertising on LinkedIn and I knew the job titles of the people I was targeting, I could enter that into the filters and work out whether there’s a sizeable enough audience on the platform to focus on.
There are a range of other ways to utilise LinkedIn University Finder’s data – the amount of professional insights available via LinkedIn is unparalleled, and being able to filter the information in a quick and user-friendly way like this can be extremely valuable. If you haven’t used University Finder yet, I recommend you check it out.
Do you ever come across a business profile or page and think ‘what the…? How did they get 3,000 followers?’ As with most things in life, if something seems fishy, there’s a good likelihood that it probably is, and with the fake social media profile industry worth hundreds of millions of dollars per annum, it’s not hugely surprising to find out many individuals and brands have taken this route. Like, a heap of them have – just take a look at the results from the recent Instagram fake profile purge, where a whole range of celebrities took big hits in their follower counts.
And it makes sense, having more followers and likes can definitely improve your brand position – if you’re looking for a service online and find two similar providers, one with 38 likes and another with 3,000, the latter one’s gonna’ stand out – but with the practice of buying followers and likes so widespread, it’d be great to also have a way to work out who’s telling the truth, right? Here are a couple of ways to work out if they’re telling fibs.
How to work out if someone’s Twitter followers are fake
Twitter is the open network, the one where people go to broadcast their thoughts and voice their opinions on the happenings of the world. As such, the biggest advantage of Twitter is that most of their data is publicly accessible, which makes it easier to work out what brands are doing, what strategies they’re employing – and also, whether they’re faking. It’s actually pretty easy to spot on Twitter, even without any significant investigation.
When looking through Twitter, it’s not uncommon for a celebrity to have a follower to following ratio that looks something like this:
Gotye’s not a prolific tweeter, and as such, he’s not following a heap of people. But he’s Gotye, he’s a world-renowned musician, and his fans are keen to hear whatever it is he has to say – hence, despite him not following back many folk, he still has 414,000 followers. That makes sense for a public figure with a large fan base, but when you come across a non-public figure, someone you’ve never heard of, with a similar follower/following ratio, that’s a pretty clear indicator that something’s amiss.
There are a couple of options for testing this on Twitter – Status People’s ‘Fake Follower Check’ is one, Social Bakers, too, has a free fake followers test you can use – but my favourite is Twitter Audit, also free, very quick and very easy to use. The difference between each of these, and why I prefer Twitter Audit, is the number of records they check to get an indication of how many fake followers each profile has.
Of course, the accuracy of each is relative to the amount of followers the subject has – the percentage of followers you’re testing decreases in-line with increases in follower count – but generally this data has been found to be indicative, when compared with tests on a more comprehensive scale.
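The sampling caveat can be put in rough numbers: for a simple random sample of n followers, the margin of error on the estimated fake percentage depends on n, not on the total follower count. A quick sketch, using the standard binomial approximation at 95% confidence:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """95% confidence margin for a proportion estimated from a simple
    random sample. Worst case is at proportion = 0.5; z = 1.96 is the
    standard 95% confidence multiplier."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A 5,000-follower sample pins the fake percentage down to roughly
# plus or minus 1.4 points, even for an account with millions of
# followers (assuming the sample really is random).
print(f"±{margin_of_error(5000):.1%}")
```

This is why a capped sample still gives an indicative figure for huge accounts – the estimate gets noisier per-percentage-point of coverage, but not much noisier in absolute terms.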
To conduct a Twitter Audit, you just enter the handle you wanna’ check, sign in with your Twitter credentials, and away you go. How the test works is, it takes a random sample of up to 5,000 of the person’s followers and looks at a range of factors for each – number of tweets sent, date of last tweet, follower/following ratio, etc. From this, the system determines which of those tested profiles are likely fake, then gives you a percentage and pie chart based on those findings:
There is, of course, a margin of error in this data, but it’s normally a fairly accurate indicator, particularly when analysing profiles with less than 5k followers.
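The exact rules Twitter Audit applies aren't public, but the kind of scoring described above – penalising accounts that rarely tweet, have gone quiet, and follow far more people than follow them back – can be sketched like this. The thresholds are my own illustrative assumptions, not the tool's:

```python
def looks_fake(tweets_sent, days_since_last_tweet, followers, following):
    """Crude heuristic: count red flags for a single follower account.
    Thresholds are illustrative assumptions, not Twitter Audit's."""
    score = 0
    if tweets_sent < 10:                                 # barely tweets
        score += 1
    if days_since_last_tweet > 180:                      # long dormant
        score += 1
    if following > 0 and followers / following < 0.1:    # skewed ratio
        score += 1
    return score >= 2  # two or more red flags -> likely fake

def fake_percentage(sample):
    """sample: (tweets, days_quiet, followers, following) tuples drawn
    from up to 5,000 of the audited profile's followers."""
    fakes = sum(looks_fake(*acct) for acct in sample)
    return 100 * fakes / len(sample)

# Three hypothetical followers of the profile being audited:
sample = [
    (2, 400, 3, 900),      # dormant, follows 900, 3 followers back
    (5000, 1, 1200, 800),  # active, balanced ratio
    (0, 9999, 0, 2000),    # never tweeted at all
]
print(f"{fake_percentage(sample):.0f}% likely fake")
```

Real services weigh many more signals (profile images, bio text, account age), but the shape of the analysis is the same: per-account red flags, rolled up into a sample percentage.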
To clarify and confirm the data further, you can conduct a manual check – paid tools like Followerwonk or Socialbro provide in-depth reports on follower growth over time. If you look up a profile and find a big drops or jumps in their follower numbers, like this:
Pretty safe to assume those followers didn’t all randomly switch off in the same week (unless, of course, there was an offending tweet or similar logical connection).
Using the available apps, it’s pretty easy to work out Twitter fakes. Twitter’s always working to eliminate illegitimate profiles, so we might one day see an Instagram-style purge, with a heap of celebrities taking hits. But till then, if you ever need confirmation, just run ‘em through a Twitter Audit, then sit back and scoff at their vanity.
How to work out if someone’s Facebook likes are fake
Facebook fakers are a little harder to pin down. Unlike Twitter, most of Facebook’s data is locked up or hidden behind privacy settings, making it a bit more difficult to determine, definitively, if someone’s cheating. There are a few ways to go about it, and while none of them will provide as clear a result as the Twitter audit options, they will give you some idea as to what’s going on with any given page.
Find out where their fans are from
So, let’s say that the Facebook page you’re looking at is a local business – they work within your local region, they’re not a subsidiary of a larger international corporation – the people they work with are, logically, going to be based in the local area. The people who sell Facebook likes tend to be based in developing nations – as noted in this piece from Copyblogger, most of the fake likes you’ll come across originate from Bangladesh, India, Egypt, Pakistan, Afghanistan, Syria and Indonesia. Now, that’s not to point the finger and say all of the ‘click farms’ in the world are based in these regions, but if our local business has a heap of likers from these nations, that’s a likely indication that they’re faking it. So how do you work this out?
Facebook’s graph search enables you to search for a heap of different parameters. The one we can use in this case is:
You insert the name of the business page at the end and it’ll give you a display of all the hometowns of people who like that page. The problem with this is that Graph Search results are sorted based on affinity – how they’re connected to you – not by total number, so you can’t necessarily determine where the majority of this page’s likes come from, but if it’s a local business and they have a range of the above mentioned nations among the hometowns of their followers, you may have reason to question why they’re showing up there.
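If you tally the hometowns surfaced by that Graph Search into rough per-country counts, the red-flag check for a local business reduces to something like the sketch below. The country list follows the Copyblogger piece cited above; the fan counts and the idea of a percentage threshold are my own assumptions:

```python
# Countries the Copyblogger piece identifies as common sources of
# purchased likes.
CLICK_FARM_COUNTRIES = {
    "Bangladesh", "India", "Egypt", "Pakistan",
    "Afghanistan", "Syria", "Indonesia",
}

def suspicious_share(fans_by_country):
    """Fraction of a page's fans located in common click-farm regions.
    fans_by_country: dict mapping country -> rough fan count."""
    total = sum(fans_by_country.values())
    flagged = sum(n for country, n in fans_by_country.items()
                  if country in CLICK_FARM_COUNTRIES)
    return flagged / total if total else 0.0

# A hypothetical local Australian business:
fans = {"Australia": 400, "Bangladesh": 1800, "India": 700, "UK": 100}
# For a purely local business, a large share from these regions is a
# red flag – any cutoff you pick is a judgment call, not a benchmark.
print(f"{suspicious_share(fans):.0%} of fans from likely click-farm regions")
```

Given the affinity-sorting caveat above, treat the counts as indicative rather than exact – the point is the shape of the distribution, not the precise numbers.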
Extra note: In this piece by Miguel Bravo, Bravo also suggests that the results of Graph Search queries like:
‘Pages likes by people who like [insert page name]’
‘Countries of people who like [insert page name]’
‘Languages of people who like [insert page name]’
can also produce telling results (and they definitely do in the example he’s provided).
Check their interaction versus their Likes
This is a more tenuous linkage, but it can provide some insight. So, if the page you’re looking at has 3,000 likes, you’d expect them to have a reasonable level of interaction on their posts, some discussion about their brand, right? You can do a quick assessment of their posts to see what sort of engagement they’re getting on each – fake profiles are not going to interact with posts, so if they’ve got a crazy amount of page likes but are getting no action on their updates, they may have bought likes. Or they’re not very good at understanding their audience.
By clicking on the actual ‘Like’ number on the page, you get a graph like this:
Now, dependent on other factors, this could be telling – a huge jump in likes on any given day indicates either a really popular post or promotion, or that the page has bought likes; you’ll only be able to determine which by cross-checking the data against the posts. The other metric to consider is ‘People Talking About This’ – in this case, I’d be a little suspicious, given they have 3.7k total page likes and a big boost in likes in the last week, yet only one person ‘talking about this’. Again, these are not definitive measures – they can often end up being fuel for your own conspiracy theories, where you’re really seeing what you want to see. But having a look at the numbers can be revealing on a page that’s clearly purchased fake likes.
Extra tip: Fake profiles tend to have no profile image, or odd-looking, copied images – this is another element to check to further your investigation.
Really find out where their fans are from
If you’re really serious about finding Facebook fakers, paid app Fanpage Karma will give you a breakdown of the location of any page’s likers.
This is one of the clearest indicators you can use to determine if the page has purchased likes – if the top countries are nations where the brand doesn’t even operate, that’s a fairly large red flag waving in your face.
On one hand, it’s frustrating that there’s not an easier way to determine Facebook fakers, as there is with Twitter, but on the other hand, it doesn’t really matter either way – if they’ve purchased fake likes, there’s not a heap you can do. I mean, you could, theoretically, go through their list of fans and report each fake profile one-by-one (which you can also do on Twitter), but obviously that’s pretty time consuming, and with Facebook already dealing with thousands of reports per hour, it’s hard to know if those efforts will actually have any effect. That, and the fact that some like sellers offer a ‘guarantee’, where they’ll replace removed spam accounts, lessens the potential impact of conducting your own faker crackdown. The ongoing updates to Facebook’s News Feed algorithm mean that purchasing likes will hurt pages more than help in the end, and Facebook’s always working to eliminate fakes where they can. While a higher number of likes looks better, as with most measurements in social, it’s only one part of the larger picture, one indicator of potential success. You might have ten total likes, and that could be more effective than a thousand, if those ten fans are engaged, paying clients, responsive to your messaging.
Quality Vs Quantity
And this is the key element in the popularity contest – the metrics only tell part of the story. While I can understand why businesses might consider boosting their numbers, metrics are only one element of the social marketing puzzle. What’s more, fake likes and followers hurt the core product of social platforms – there have already been questions about Twitter’s actual user numbers, with reports suggesting that 9% of profiles are fake. That sort of speculation hurts their brand sentiment and turns off potential investors – the fake profile industry is bad for social media business, and you best believe the platforms are doing all they can to identify and eradicate imposter accounts. As with Instagram, at any time you could see a similar cull on any platform – buying popularity could end up very embarrassing if you get caught out.
Any measurement is an indicator – Likes, followers, Klout, Kred – each, in itself, is something to consider, but the only way to confirm the true social credentials of a person or brand is to investigate them yourself. Look at their posts, their content, assess what they’re doing. There may be a logical reason why their numbers are the way they are. Or there may not. ‘Influence’ is relative – conducting your own analysis will show you who’s earned it and who’s bought something resembling what influence should be.
Ever since my early teens I’ve been a big basketball fan. I played football when I was young, but a playground accident (in which I broke both my arms at the same time) meant full-contact sports were off the cards for an extended period. During my recovery, I found basketball, and I never looked back. This was also right in the midst of the Michael Jordan-era – Charlotte Hornets jerseys were everywhere, Shaquille O’Neal was smashing backboards on TV. Basketball was blowing up in the early nineties, and like many passions of our formative years, it took hold and has stayed with me ever since.
One aspect that really captured my imagination was statistics. I collected NBA cards, pored over the numbers and info on each one. I developed an encyclopedic knowledge of useless facts about players and their outputs – you wanna’ know who had the best three-point field goal percentage in the 1992-93 season – I got you. Need to know the career averages of Bill Wennington? Right here. I wasn’t alone in this, there were a heap of people more informed and more detail-oriented than I, but what I didn’t know was that that very passion, that interest in obscure details and numbers, would one day change the very way the game was played.
Evolution Through Analysis
At the 2012 Sloan Sports Analytics Conference, cartographer Kirk Goldsberry gave a presentation on what he called CourtVision, an advanced basketball analytics system he’d put together in his spare time. Goldsberry had worked out how to extract data from ESPN’s shot charts – which showed where each player had made and missed shots from during each game – and he’d put all that data into a comprehensive set for each individual. He’d mapped every shot taken in every NBA game from 2006 to 2011, a huge data bank which, when filtered down to specific players, highlighted tendencies, weaknesses and strengths.
Basic field goal percentage data was nothing new – as noted earlier, any kid intoxicated by the smell of a freshly opened pack of basketball cards had some level of similar insight, but Goldsberry had taken it to the next level. He’d sought to show why this data was important, how it could be actioned. And as he presented, an audience full of NBA owners all sat forward in their seats.
Data analytics in sports has become the “in” thing in recent times. Growing from the success of Billy Beane’s “Moneyball”, analytics is now big business – virtually every major sports team now employs some level of data analysis in their preparation and evaluation process. And it makes sense – winning is everything in professional sports. More than pride or showmanship, it’s winning that makes money for pro athletes. Careers depend on it, clubs rely on the ability to perform. Winning teams get better attendances, more TV coverage, more success as a business overall. And it is just that – business. While it’s sports and it may not seem so different from your local leagues, where participation in itself is seen as a level of success, professional sport is a massive industry, and winning is a fundamental requirement. You’re either winning now or you have a plan to win in future. Or you’re done. With so much riding on the result, every little bit matters, every advantage you can get helps – if deflating the ball by one p.s.i can provide some tiny advantage, you best believe someone will try it.
With every detail under so much scrutiny, professional sports teams need to get things right. You could fly blind, stick with the way things have always been done – rely on your gut instinct, as many traditionalists still uphold. But the fact of the matter is that data has become a critical part of modern pro sports. Numbers don’t lie, statistics are fact, and while it takes more than mere numbers to build any actionable insights from the info, used well, data can unlock the secrets that lead to that one goal – winning.
Data vs Instinct
Goldsberry’s formulas, or variations of them, have been adopted by players and coaches all across the NBA. The actual results of this are difficult to definitively pin down, fuelling critics of the advanced statistics and data approach. Some, like TNT commentator and NBA Hall of Famer Charles Barkley, have come out strong with their opposition:
“All these guys who run these organizations who talk about analytics, they have one thing in common — they’re a bunch of guys who have never played the game, and they never got the girls in high school, and they just want to get in the game.”
Barkley’s view is simple – all the numbers and all the data have not yet led to a team winning a championship. And he’s right, but still, many clear winners have emerged.
Shane Battier defined his career by being a defensive specialist, someone whose sole aim was taking on the unglamorous task of shutting down opposition scoring threats. Battier was also an analytics advocate, someone who’d seen the power of numbers and had been using similar statistical correlations for some time. Battier became renowned for his success in stopping or slowing the game’s biggest stars, most notably Kobe Bryant. What Battier had determined was that Bryant was nowhere near as efficient when he shot from particular sections of the floor – so rather than work to stop Bryant outright, Battier tried to keep Bryant out of his hot spots and shepherd him into taking bad shots. The tactic was a success, but one which isn’t necessarily quantified in the box score.
This sort of basic extrapolation of the data highlights the subtleties of utilising performance statistics as a predictor of successful behaviour. The data itself was never going to alter the nature of the game, but the accumulation of those subtle complexities, when used and applied in the right way, can sway the outcome and deliver results. The problem is that you a) need to know the right data to analyse and action, and b) need the right personnel to action it. Those two variables are what leads to data being seen as an inexact science – generally, it’s not a case of 1 + 1 = 2 – it’s more like 1 (in the right scenario with the right preparation) + 1 (with the correct understanding of the specifics of the moment) = 2. This is where there’s some truth to the old ‘go with your gut’ way of thinking – you need people who can ‘go with their gut’, but that gut needs to be informed and to understand the variables of overall success.
For instance, let’s say you have the ball and your team’s down by one with only seconds remaining and you’re rushing up court for the last play when you spot your teammate open for a shot on your left. An informed, analytical mind will know how good that shot is, how good a shooter that player is at this stage of the game. Through understanding the shot charts, like Goldsberry’s CourtVision stats, the informed player can make a smarter decision and either execute or switch the play, and that quick thinking can win or lose the game. Such interpretation is both gut and analytics, and that’s more likely where you’ll see success in the world of data – human interpretation layered over informed insights. One without the other is an inferior approach.
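The late-game decision above can be framed as a simple expected-points comparison – the kind of calculation a shot chart makes possible. The percentages below are invented for illustration; they aren’t Goldsberry’s actual figures.

```python
# A toy sketch of the decision a CourtVision-style shot chart informs:
# compare the expected points of two options using (hypothetical)
# per-zone field-goal percentages for the players involved.

def expected_points(fg_pct: float, shot_value: int) -> float:
    """Expected points from a shot: make probability times shot value."""
    return fg_pct * shot_value

# Hypothetical late-game options:
my_contested_three = expected_points(0.28, 3)    # rushed, contested shot
teammate_open_corner = expected_points(0.42, 3)  # open corner specialist

# The informed play is whichever option yields more expected points –
# here, passing to the open teammate.
best_option = max(my_contested_three, teammate_open_corner)
```

The numbers are trivial, but the point stands: the data doesn’t make the pass for you – the player still has to read the moment and execute.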
New Ways of Working Require New Ways of Thinking
This is an important distinction in the intersection of big data and human analysis. Right now, the business world is trying to understand the implications of all this new data we’ve been given access to. The proliferation of social media has fed an explosion of online tracking and data systems, and most businesses haven’t yet been able to get a grasp on what all this new information means, where it might lead. We know it’s important – if professional sports teams are effectively entrusting their success to the numbers, then it’s surely valuable – but because there are so many variables, because it isn’t so black and white, many are opting to stick with the ‘go with your gut’ approach, the ‘we’ve done it this way for years’ ethos.
“So a heap of people on Facebook click ‘Like’ – so what?”
Established mindsets pose the biggest challenge to the possibilities of data, because it’s hard to see the logic when we’ve never been asked to look at things from a wider view. As with the quote above, a single person clicking ‘Like’ on your Facebook business page is virtually meaningless in the larger scheme. But we’re not talking about one thing. Often we go looking for simplicity because it’s what makes us comfortable, it’s logic we’re familiar with. But new ways of working require new ways of thinking, and we need to break out of what we know in order to break through.
Here’s an example in practice:
- Person A has 500,000 followers on Twitter. Person B has only 5,000.
- Person A has followed a heap of people and gained these followers over time by collecting as many people as possible, following whoever will follow back, actively seeking to up their follower count at every opportunity. Person B has never focussed on followers, but has instead focussed on community and having genuine interactions with the people to whom she’s connected.
- Person A has a Klout score of 55. Klout score, whether you agree with it or not, is an indicative measure of how many interactions a person has within their community, how many times they’re mentioned, the impact of their actual conversations. Person B has a Klout score of 75. This would suggest that despite Person B only having 1% of Person A’s following, Person B is actually more influential in their community and more likely to have her message reach a wider audience.
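The comparison above can be sketched in a few lines – the point being that raw follower count and effective reach are very different measures. The engagement rates below are invented for the sake of the example; they’re not derived from Klout’s actual methodology.

```python
# A hypothetical illustration of "effective reach" versus raw follower
# count. The engagement rates are assumed numbers, not real Klout data.

def effective_reach(followers: int, engagement_rate: float) -> float:
    """Estimate how many followers actually see and engage with a post."""
    return followers * engagement_rate

# Person A: huge audience, barely listening (assumed 0.5% engagement).
person_a = effective_reach(500_000, 0.005)

# Person B: small audience, genuinely engaged (assumed 60% engagement).
person_b = effective_reach(5_000, 0.60)

# Despite having 1% of the following, Person B's message reaches
# more genuinely engaged people than Person A's.
print(person_a)  # 2500.0
print(person_b)  # 3000.0
```

The exact rates are guesswork, but the shape of the result is the whole argument: who’s listening matters more than how many could, in theory, hear you.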
Knowing the above details, I’d be willing to bet large sums of money that most people would still pick Person A and his 500,000 followers to be their brand ambassador over Person B. Because Person A has the biggest reach. The fact that they’re not listening to him is largely irrelevant – because we’re used to seeing things as we know them. What we know is that reaching more people is better – years of marketing and advertising theory has taught us this. We know that the chance of reaching 500,000 is better than reaching 5,000, because the audience is so much bigger. So what if not all of them are listening to Person A – even if you can reach 1% you’re still beating Person B, right? Even though, through the logic detailed above, we can see that partnering with Person B is probably more likely to generate better results, the majority of people will still go with what they know. The unknown is exactly that, and despite our data getting more informed, our approach isn’t quite there yet.
Data Analysis and the Evolution of Expectation
So going back to Goldsberry’s CourtVision stats – what if there was a way to correlate that same info, but for people who are buying or are interested in your products? What if, rather than shots made and attempted, you were looking at actions taken online – pages liked, interests listed, relationships? One of those things in isolation is nothing – someone who buys your stuff also happens to like Nirvana, so what? But what if, like Goldsberry, you could collect a wide set of data, a range of actions and preferences, and map those on a chart which suggested that a person who undertakes certain, specific actions is highly likely to be interested in your stuff? You can do this. You can do this right now with Facebook data and Twitter info – you can correlate all the info from your pages and fans and you can build your own data sets that will map out the people most likely to be interested in buying from you. The trick is in finding the right data, the data you need.
For instance, correlating all the data from all the people who’ve liked your page might not be beneficial, because many people like pages for different reasons – they might be friends or family, they might have done so to enter a competition. Those people are going to skew your data, because they’re not the people who are most likely to buy. But you can narrow it down, specifically, to people who’ve made a purchase, to people who’ve interacted with your content. You can choose the specific info most indicative of your typical customers, then build your datasets based on that. As noted recently, Facebook likes can very accurately indicate a person’s personality or leanings, when applied on a wide enough scale – those findings are the perfect business case for conducting your own analysis and working out your own most relevant audience. Once you know this, you can target your marketing accordingly, you can focus your questions based on the queries amongst this sub-set, you can calibrate your focus around expanding your reach to people similar to this, people with the highest probability of being actual paying customers.
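A minimal sketch of the idea looks something like this: compare the page likes of known purchasers against everyone else, and surface the interests that are markedly over-represented among buyers. All the page names and numbers here are invented for illustration.

```python
from collections import Counter

# Hypothetical profiles: the pages liked by people who have bought
# from you, versus a comparison group who haven't.
purchaser_likes = [
    ["Nirvana", "Vinyl Records", "Coffee"],
    ["Nirvana", "Vinyl Records", "Running"],
    ["Vinyl Records", "Coffee"],
]
other_likes = [
    ["Running", "Coffee"],
    ["Football", "Coffee"],
]

def like_rates(profiles):
    """Share of profiles in a group that like each page."""
    counts = Counter(like for profile in profiles for like in set(profile))
    return {like: n / len(profiles) for like, n in counts.items()}

buyers = like_rates(purchaser_likes)
others = like_rates(other_likes)

# A like is treated as "indicative" when it is far more common among
# purchasers than among everyone else (threshold chosen arbitrarily).
indicative = {
    like: rate - others.get(like, 0.0)
    for like, rate in buyers.items()
    if rate - others.get(like, 0.0) > 0.5
}

print(sorted(indicative))  # ['Nirvana', 'Vinyl Records']
```

Note that ‘Coffee’ drops out, even though most purchasers like it – it’s equally common among non-purchasers, so it tells you nothing. That’s the whole trick of choosing the right data: indicative of your buyers, not just common among them.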
But that’s not broadcast reach, right? That’s not hitting the widest audience possible, which, as we know – as we’ve learned – is how to succeed and sell more stuff. And of course, that may well be the case – focus your dataset wrong or too narrow and you could miss out on an entire market of other buyer personas you’re not catering for by homing in on one group. Narrowing focus is a risk, and that risk is going to inflame oppositional forces, the old-school chiefs who know how things are done. This is the challenge of being an innovator, and has always been the challenge. You’re presenting a new way of thinking, and people aren’t necessarily going to like it. When you’ve achieved success by doing things a certain way, do you appreciate it when someone new comes in and suggests something different? No. Because you’ve done it, you’ve got the runs on the board, you have the experience, and experience is concrete. You know what works. Social media and big data are new, they’re different, and they’ve got a lot to prove – which means digital marketers, by extension, have a lot to prove also.
But it can be done. The stats and figures can be located and correlated, you can work out the most minute and specific details about your target customers, and those details will inform the future of your audience approach. As communications become more individual, as more and more people grow up online and develop their interactive and communicative skills via social media platforms, people are also growing to expect their voices will be heard. This is what social media is about, empowering people by giving everyone a voice – the brands that respect and listen to those individual voices will advance and move ahead, in line with customer expectation. Targeted advertising, for example, is becoming so specific that it’s scary – but to the next generation it won’t be scary, it’ll be how it’s always been. Brands responding in real-time will be standard, individual preferences will orchestrate the detail of each person’s media experience. What we know and have always known is evolving, whether we like it or not.
The possibilities of big data are amazing, the breadth of social media is hard to get your head around. But what we can say for sure is that people’s experiences and expectations are moving away from what we’ve always known. The businesses that can move with it, will.
Recently, I got to thinking about how social media and the transformational impact it’s having on our broader communications process might be affecting overall political awareness. This came up during the election lead-up in my home state – throughout much of the campaign the general consensus of people I spoke to was that they didn’t really have much of an opinion either way on who won. Of course, the people I spoke to are not indicative of everyone – a great many were very invested in the outcome – but in seeing the low levels of engagement around me, and the sense I got about the campaign overall, I wondered whether social might be lowering our levels of political engagement.
The arrival of social has given people a whole new way of consuming media. Online sources are now among the main players in news media, and through social media, people can now curate and customise their own info feeds. This enables people to choose which outlets they read, where they get their news from – and it also means people don’t need to see content they’re not interested in. For many, this may mean cutting out politics, which effectively weakens political influence and leads to a less politically engaged society overall – but is that what’s really happening?
The Numbers Don’t Lie
I sought to test my theory – if I was right, the easiest way to prove it would be to look at the rate of donkey and ‘informal’ votes in recent elections. If that rate was increasing significantly, year-on-year, that would suggest political engagement is falling, which would tie into my wider theory of the impact of social media. And in Australia it is – the rate of informal votes has jumped from 3.78% in 1998 to 5.55% in 2010, and it’s increased at every election except 2007, the year Kevin Rudd won the Australian Federal Election – in which the ‘#Kevin07’ hashtag formed a key element of his campaign. This aligns with my theory – people are overall less interested in politics, but the incorporation of a social media element into Rudd’s 2007 strategy may have actually countered that and kept those less interested more engaged.
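The year-on-year check described here is trivial to sketch. Only the 1998 and 2010 rates are taken from the figures above; the intermediate rates are approximate placeholders included purely to illustrate the shape of the pattern.

```python
# Informal vote rate (%) at Australian federal elections.
# 1998 and 2010 figures are from the text; the intermediate values
# are approximate, for illustration only.
informal_rate = {
    1998: 3.78,
    2001: 4.8,
    2004: 5.2,
    2007: 3.9,   # the '#Kevin07' election
    2010: 5.55,
}

years = sorted(informal_rate)

# Flag any election where the informal rate fell from the previous one.
decreases = [
    year
    for prev, year in zip(years, years[1:])
    if informal_rate[year] < informal_rate[prev]
]

print(decreases)  # [2007]
```

One exception in a rising trend isn’t proof of anything – which is exactly the flaw discussed next – but it’s the kind of quick sanity check any theory should be run through before it hardens into a conclusion.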
But there was a flaw. Yes, informal voting was increasing, but it’s been increasing at almost every election since compulsory voting was introduced (rates jumped in 1984, but that’s attributed to a change in the voting process). Looking at the data, and considering social media’s influence, any real impact from social engagement would only possibly be significant in the last ten years, and the higher 2007 result is among only three elections held within that time, so it’s hard to draw any definitive conclusions from those figures alone. State-based elections provided no definitive pattern either – informal rates had dropped in some, increased in others – there was nothing concrete in the numbers to conclude that the changing media habits, caused by social media, were impacting negatively on voter engagement. At least, not at this stage – in five years’ time, when the communications shift is really in full effect, we’re likely to have a better understanding of the potential impacts.
I found the same with US Presidential Elections – voter turnout in the United States has remained steady at around 55%, with an increase to 57.1% in 2008, the election in which social media was a key platform for eventual winner Barack Obama (labelled by some as ‘The Facebook Election’). Other nations too showed no significant patterns – while the case may be that people are less politically engaged, the sample size, at this stage, is too small to draw any solid conclusions – though the increases in participation relative to social media activity did indicate the importance of engaging audiences on new mediums.
Of wider concern with the shift towards more customizable media inputs is the potential spread of reinforcement theory. Reinforcement theory is where people seek out and selectively remember only information that supports their pre-existing beliefs. You see and hear this all the time – people will pick and choose certain aspects of an argument in order to support what they choose to believe. And it’s damaging – people who’re locked into certain thought processes are not beneficial to the advancement of rational debate – you can’t argue with a mind that’s not open, you can’t reason with a person who won’t listen. If you’re stuck in your view of how things are, and you align with that perspective as indisputable fact, then there’s no way that you’ll ever be able to empathise or re-align your view if new facts emerge. It’s one thing to stand up for what you believe – that’s something that should always be encouraged and supported – but it’s another to stand up for what you believe while being closed-off to any other point of view. There’s an onus on everyone to learn the facts, to educate ourselves on all aspects of any particular issue before we set forth on solidifying what our opinion will be. But too often we see people accept a narrow perspective, form a belief based on a limited amount of information, and then perpetuate negative influence through their own confirmation bias, seeking out sources that support their stance.
While people have always been able to do this to some degree – you listen to the same radio presenter regularly or read the same newspaper and you’re effectively applying your own reinforcement theory on some level – there is a level of concern that the customisation of our media consumption might actually narrow people’s worldly awareness. While social media and the web are great for connecting with likeminded people and building communities around shared beliefs, the potential negative of that is that it may also embolden the disenchanted and facilitate more siloed cultures around limited and narrow viewpoints. If you choose, you can create a news feed of totally one-sided perspectives and shut out everything else. Whereas in the past people would need to watch the nightly news to get an understanding of the events of the day, many people now rely solely on their social feeds for the same info, which reduces the breadth of information being shared. Is that a good outcome? Is that what will lead us to a more understanding, connected society?
‘Is This Thing On?’
There have been various studies on the impact of social media on political consciousness, particularly among younger generations. In general, the findings seem to indicate that social media is good for political engagement because more people are talking about a wider range of issues online – trending topics, for example, inspire more people to evaluate their opinions on a particular subject. What studies can’t conclusively deduce is what impact those increased discussions are having on our wider political awareness – that can only be evaluated, effectively, by voter participation, which, as noted, is inconclusive given the data at this stage. What is clear, however, is that it’s becoming increasingly important for political parties to understand the growing reliance on social platforms as a means for building and fostering political engagement. It may be that the time for political jargon is dying out – it’s much easier in the connected era for people to tune out anything that’s not engaging to them. Parliamentary Question Time, which is broadcast on TV in Australia, is a complex performance of political formalities and strategic doublespeak – you can easily see why people might opt to change the channel. The problem is, with the growing application of algorithms working to show users only the news relevant to them, based on their historical activity, the more people switch away from politics, the less likely they are to ever switch back. Given that, it’s crucial for politicians to understand where their constituents are at, what they’re discussing, and importantly, how they’re discussing the issues of relevance to them. Just like businesses, politicians can access the abundance of audience data being logged every day online – the opportunity to build an understanding of the electorate is available and accessible to them. But it may mean a change of tack for the modern-day politician, a move away from the spin of old and towards a more connected process.