More than 250 companies have now pulled their ads.  YouTube's losses already run into the millions of dollars, and analysts warn they could escalate into the hundreds of millions if things don't change.

Pepsi, Starbucks, and Walmart have suspended their YouTube advertising.  They join AT&T, Verizon, Johnson & Johnson, and Volkswagen, all of which pulled their ads because Google's site gives advertisers too little control over where those ads appear.  Automated placement programs don't always discriminate, so some ads have shown up alongside or ahead of racist, hateful, or morally questionable content.

Pharmaceutical company GSK, car rental company Enterprise, the UK government, and at least one major international ad agency have also pulled all ads.

“We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate.  Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.” – AT&T spokesman, via statement

Google has apologized and says it has put systems in place to prevent this from happening again, but that hasn't done much to assuage the doubts of major advertisers.

“The content with which we are being associated is appalling and completely against our company values.” – Walmart, via statement

I’ll say it again. Working directly with brand-name publishers guarantees your ads will be seen in a quality environment. As good as Google is, ad networks have a built-in problem: they can’t safely police every site in the network. The amount of content uploaded daily is staggering, and the crooks keep moving the goalposts. Imagine the magnitude of monitoring every item on every site out there… every second… and determining its validity. I don’t know that there is AI capable of that task.

“Some of the world’s biggest brands are unwittingly funding Islamic extremists, white supremacists and pornographers by advertising on their websites, The Times can reveal.  Advertisements for hundreds of large companies, universities and charities, including Mercedes-Benz, Waitrose and Marie Curie, appear on hate sites and YouTube videos created by supporters of terrorist groups such as Islamic State and Combat 18, a violent pro-Nazi faction.” – The Times

Google’s in major spin mode, apologizing and pledging to make changes.  “Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values,” Philipp Schindler, Chief Business Officer for Google, said in a blog post. “For this, we deeply apologize. We know that this is unacceptable to the advertisers and agencies who put their trust in us.”

“We know advertisers don’t want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories.” – Schindler, in the same blog post

Google has pledged:

  • Safer defaults for brands, so ads show only on content that meets a higher standard of brand safety and excludes more objectionable content. That will be the default, with brands deciding whether to allow lower levels of scrutiny for broader reach.
  • Simplified management of exclusions, making it easier to exclude specific sites and channels. Up to now, it’s been kind of a pain.
  • More fine-grained controls over where ads will show.

Some other recent stories I’ve written on the issues with Ad Tech Fraud:

Biggest ad fraud in history? Hackers stealing $3–$5 million a day using video ads

White Ops security researchers have exposed the most profitable and advanced ad fraud operation the industry has ever seen. Dubbed “The Methbot Operation” after references to “meth” in the code of the bot itself, the operation has siphoned off as much as $180 million from major U.S. media companies and brand advertisers.

Controlled by a single group based in Russia and operating out of data centers in the US and the Netherlands, this “bot farm” generates $3 to $5 million in fraudulent revenue per day by targeting the premium video advertising ecosystem, according to cybersecurity firm White Ops.
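To put those two figures side by side: the reported daily take and the reported total imply a rough timeframe for the operation. The dollar amounts below come from the article; the implied duration is my own back-of-the-envelope inference, not a figure from White Ops.

```python
# Sanity check on the Methbot figures quoted above.
# Dollar amounts are from the article; the duration is inferred, not reported.
daily_low, daily_high = 3_000_000, 5_000_000   # reported $3M-$5M per day
total = 180_000_000                            # reported ~$180M siphoned off

# At those daily rates, $180M implies roughly 36-60 days at full scale
# (or a longer run at a lower average rate).
days_min = total / daily_high
days_max = total / daily_low
print(f"{days_min:.0f}-{days_max:.0f} days")   # 36-60 days
```

In other words, the total is consistent with only a month or two of operation at peak rates, which is why researchers flagged it as the most profitable scheme seen to date.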


Digital ad market could be 2nd largest revenue source for organized crime

In a letter to Federal Trade Commission (FTC) Chairwoman Edith Ramirez, two Senators — both members of the Senate Banking Committee — pointed to studies that have found rampant fraud in the $60 billion digital ad market.

“…one pair of experts discovering that as much as 98 percent of all ad clicks on major advertising platforms such as Google, Yahoo, LinkedIn and Facebook in a seven-day period were executed not by human beings, but by computer-automated programs commonly referred to as ‘botnets’ or ‘bots.’”

These programs allow hackers to seize remote control of multiple computers, giving them access to personal information as well as the ability to remotely install malware and engage in advertising fraud, entirely unbeknownst to the computers' true owners.


Fake websites & bots generated $1.5 million a month for one Florida fraudster

A Florida company was generating $1.5 million a month in online ad revenue using fake websites, according to Ad Age. It wasn’t all that hard. They set up websites and listed them as places to advertise in automated ad-buying markets, including some of the industry’s biggest names. Then they hit the sites hard with bots (automated click programs) to drive up impressions. Then they simply disappeared, presumably with the money in hand.