March 23, 2017 by Paul Dughi
The boycott Google movement may be gaining ground. Ad Contrarian Bob Hoffman reports that AT&T has now announced it will pull all display ads from Google properties. This comes on the heels of some of the U.K.’s biggest advertisers, including the government and The Guardian newspaper, pulling their ads from Google and YouTube.
It all has to do with where their ads are being seen. Google’s ad networks are under scrutiny – as are other third-party platforms – for allowing ads to appear on scam sites, fake news sites, and alongside what advertisers consider offensive content. In some cases, ads have been shown on videos promoting anti-Semitism or terrorism.
I’ll say it again: working directly with brand-name publishers guarantees your ads will be seen in a quality environment. As good as Google is, ad networks have a built-in problem: they can’t reliably police every site in the network. The amount of content uploaded daily is staggering, and the crooks out there keep moving the goalposts. Imagine the magnitude of monitoring every item uploaded to every site out there, every second, and determining its validity. I don’t know that there is AI capable of that task.
“Some of the world’s biggest brands are unwittingly funding Islamic extremists, white supremacists and pornographers by advertising on their websites, The Times can reveal. Advertisements for hundreds of large companies, universities and charities, including Mercedes-Benz, Waitrose and Marie Curie, appear on hate sites and YouTube videos created by supporters of terrorist groups such as Islamic State and Combat 18, a violent pro-Nazi faction.” – The Times
Google is in major spin mode, apologizing and pledging to make changes. “Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values,” Philipp Schindler, Chief Business Officer for Google, said in a blog post. “For this, we deeply apologize. We know that this is unacceptable to the advertisers and agencies who put their trust in us.”
“We know advertisers don’t want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories.” – Philipp Schindler, Chief Business Officer for Google, via blog post.
Google has pledged:
- Safer defaults for brands, so that ads show on content that meets a higher standard of brand safety and excludes more objectionable content. That will be the default, with brands deciding whether to allow lower levels of scrutiny for broader reach.
- Simplified management of exclusions, making it easier to exclude specific sites and channels. Up to now, it’s been kind of a pain.
- More fine-tuned controls for better control over where ads will show.
Some other recent stories I’ve written on the issues with Ad Tech Fraud:
Biggest ad fraud in history? Hackers stealing $3 to $5 million a day using video ads
White Ops security researchers have exposed the most profitable and advanced ad fraud operation ever seen by the industry. Dubbed “The Methbot Operation” after references to “meth” in the code of the bot itself, a group of operators has siphoned off as much as $180 million from major U.S. media companies and brand advertisers.
Controlled by a single group based in Russia and operating out of data centers in the U.S. and the Netherlands, this “bot farm” generates $3 to $5 million in fraudulent revenue per day by targeting the premium video advertising ecosystem, according to cybersecurity firm White Ops.
Digital ad market could be 2nd largest revenue source for organized crime
In a letter to Federal Trade Commission (FTC) Chairwoman Edith Ramirez, the Senators — both members of the Senate Banking Committee — pointed to studies that have found rampant fraud in the $60 billion digital ad market.
“…one pair of experts discovering that as much as 98 percent of all ad clicks on major advertising platforms such as Google, Yahoo, LinkedIn and Facebook in a seven-day period were executed not by human beings, but by computer-automated programs commonly referred to as ‘botnets’ or ‘bots.’
These programs allow hackers to seize control of multiple computers remotely, providing them access to personal information as well as the ability to remotely install malware to engage in advertising fraud, entirely unbeknownst to the computer’s true owner.”
Fake websites & bots generated $1.5 million a month for one Florida fraudster
A Florida company was generating $1.5 million a month in online ads using fake websites, according to Ad Age. It wasn’t all that hard. They set up websites and listed them as places to advertise in automated ad-buying markets, including some of the industry’s biggest names. Then they hit the sites hard with bots (automated click programs) to drive up impressions. Finally, they simply disappeared, presumably with the money in hand.