Tech giants need to take more responsibility for the advertising that makes them billions, and business ethics is the only sensible way forward, argues Sarah Glozer.
Last week I was followed around the internet by a pair of shoes. I had looked at them online as a gift for my father-in-law, but he didn’t like them, and neither did I. Yet no matter what site I visited, there they were, staring at me in their full moccasin glory.
Facebook, which makes most of its $40bn-a-year revenue from digital advertising, has been in the firing line, but many other businesses across the digital advertising supply chain are now feeling the pinch.
These developments aim to offer a much-needed safety net for the digital advertising industry, often described as the "Wild West": a murky, lawless place where anything goes. Yet digital advertising is arguably a safer place to focus marketing spend now than ever before. A booming "ad verification" industry is restoring trust in the complex digital supply chain by ensuring that adverts are correctly placed and targeted.
Self-regulation is also gathering pace. The Internet Advertising Bureau now offers Gold Standard certification to firms striving for positive and safe digital advertising experiences, while the launch of a media industry coalition demonstrates collaborative efforts to increase transparency and accountability.
These developments are welcome and they are working. But their focus on brand safety overlooks some bigger questions.
First, there is the politicisation of advertising placement. For the brand of moccasins which stalked me online, it probably makes sense for their adverts not to appear next to toxic hate speech on YouTube. But what about next to a news article about alleged animal cruelty in the leather supply chain? Where would you draw the line?
Brands make these decisions constantly, through a complex process in which adverts are placed (or not) based on their association with "good" or "bad" keywords found in websites and articles.
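In its crudest form, this keyword-based placement logic can be sketched in a few lines of code. The sketch below is purely illustrative: the blocklist terms and the function name are my own assumptions, not any real platform's actual rules.

```python
# Hypothetical sketch of keyword-based brand-safety filtering.
# The blocklist is an illustrative assumption, not a real platform's list.
BRAND_BLOCKLIST = {"hate speech", "animal cruelty", "violence"}

def is_placement_safe(article_text: str) -> bool:
    """Return True if no blocklisted keyword appears in the article text."""
    text = article_text.lower()
    return not any(term in text for term in BRAND_BLOCKLIST)

print(is_placement_safe("Autumn fashion trends for 2018"))
# True: no blocklisted term, so the advert would be placed
print(is_placement_safe("Report alleges animal cruelty in leather supply chains"))
# False: "animal cruelty" matches, so the advert would be withheld
```

Even this toy version shows where the line-drawing happens: whoever chooses the words in the blocklist decides which journalism gets advertising revenue.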
In the responsive digital world we live in, every user experience is truly unique. What I see online is different to what you see. We can never truly know what strategy lies behind the adverts we are shown.
Second, what is the societal cost of brands steering away from important subjects including race and religion through advertising placement decisions?
For example, an article about 2018 fashion trends is a much safer place for the moccasin brand to advertise than one about animal cruelty. The message to the digital platform? Content that is pedestrian, not polarising, pays. This has powerful ramifications for democracy and freedom of speech. If content that fails brand tolerance tests is not commercially attractive, are particular narratives suppressed?
It is exactly this point that leads many to question the sustainability of digital advertising. Indeed, Facebook is experimenting with an ad-free subscription model. Perhaps content that brands consider "unsafe" will be pushed further behind paywalls.
Finally, in the increasingly automated world of targeted marketing, algorithms are not good at spotting context.
Said shoe brand might, therefore, choose not to be associated with content it considers inappropriate – such as commentary on “animal cruelty”. But algorithms cannot always differentiate content in a meaningful way. Is all content about “animals” problematic?
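The context problem is easy to demonstrate. A minimal, hypothetical sketch, assuming a filter that blocks on the single token "animal", shows how naive matching sweeps up benign content alongside the content the brand actually wanted to avoid:

```python
# Hypothetical illustration of why naive keyword matching misses context.
# The keyword and examples are assumptions for demonstration only.
def naive_flag(text: str, keyword: str = "animal") -> bool:
    """Flag any text containing the keyword, with no sense of context."""
    return keyword in text.lower()

print(naive_flag("Animal cruelty alleged in the leather supply chain"))
# True: the match the brand intended
print(naive_flag("Local animal shelter celebrates record adoptions"))
# True: a false positive, since a heart-warming rescue story is flagged too
```

Distinguishing the two sentences requires understanding what the article is about, not merely which words it contains, and that is precisely what simple keyword systems cannot do.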
When outcomes are unknown, brands will opt for the safest and potentially most sanitised option. This may mean that algorithms shift from being neutral tools to value-laden ones.
Worryingly, the only real way to overcome this algorithmic bias is for real live humans to verify content. We are now seeing an increasing number of “commercial content moderators” doing our online dirty work by policing social media sites and removing harmful and distressing imagery.
Often poorly paid – and walking the “safe” and “unsafe” tightrope in a matter of seconds – what these people see can lead to serious psychological repercussions. The human toll of the safety drive should not be underestimated.
All of this raises serious questions about the role of marketing in society, and the ethics of big tech. For many, self-regulation is not enough. Politicians have called on brands to curtail commercial relations with the tech giants to address safety concerns on their platforms.
I agree. We have to push every organisation along the digital supply chain harder. The DARE approach I advocate (Digital Advertising Responsibility and Ethics) focuses less on demonising and more on humanising business. It advocates two vital actions.
Firstly, promoting the work of the industry's ethical game-changers, who are encouraging a new definition of responsibility: the Financial Times, for example, which stepped away from Facebook following controversial identity checks on advertisers (a move Facebook is now reconsidering), or Nestlé, which is looking into funding sustainable cocoa sourcing through ethical ad buying.
Other brands, such as Vodafone, are moving digital advertising in-house to have more control. Such examples demonstrate the trust deficit currently operating within the digital advertising industry.
Secondly, we need a bigger role for ethics in the digital supply chain. Ethics begin where the law ends, weighing the right or wrong in any decision. Ethical thinking requires constant reflection in a changing digital landscape, rather than rules-based compliance. Yet while many advocate a code of ethics for the tech profession, I believe the key to moving the field forward will come through education, open discussion and individual reflection.
It is only by making progress in these directions that we will be able to shift away from the murky culture that currently dominates the filtering of the online world. It’s not about sticking the boot into big tech. More the moccasin.