In marketing parlance, "corrective advertising" refers to ads that a company must run in order to correct mistaken impressions created by prior advertising.

After a torrid week or so, YouTube could do with some "corrective advertising" of its own.

That's because the video upload and sharing behemoth has attracted a considerable amount of negative publicity since it emerged that its clients' ads were appearing next to objectionable video content.

It all began when the Times of London reported on St Patrick's Day that taxpayer-funded British government agency adverts were unwittingly funding extremists on the video platform.

Its investigation revealed that rape apologists, anti-Semites and banned hate preachers were receiving money from the ads through YouTube's revenue-sharing model.

According to the Times, a YouTube poster typically receives $7.60 for every 1,000 views of an advert displayed alongside their videos.

Multiply that by the hundreds of thousands of views that some of the videos had attracted – half a million views works out at around $3,800 – and it translated into a nice revenue stream for the recipients.

The response from the advertisers, including the Home Office, Royal Navy, Royal Air Force and Transport for London, was swift.

They pulled their ads from the platform pending reassurances from YouTube – sending its parent Google a clear, and clearly understood, message that this was not acceptable.

Their reaction was quickly mirrored by large corporate advertisers around the world.

AT&T, Lyft, Johnson & Johnson, Verizon, GlaxoSmithKline, Volkswagen, Toyota, BBC, Ford and many others were among those to react similarly, with plenty of others currently seeking urgent clarification from YouTube.

Here too, the big advertising houses reacted with annoyance, with Core Media going so far as to pull all its ad campaigns from YouTube.

The problem centres on so-called programmatic advertising – an automated, software-based system that allows companies to bid for or buy digital ad slots and, in the process, specify the profile of the audience they want to reach.

The ads then find, and in some cases follow, targets on the internet, ensuring maximum exposure at minimal cost and fuss.

Despite a complicated intermediary structure, it is in theory an excellent solution for advertisers – except, that is, when their ads end up appearing beside offensive content that is likely to harm their brand.
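Purely as an illustration – the function and field names below are hypothetical and bear no relation to how Google's actual systems are built – the placement decision can be thought of as a simple matching-and-screening step:

```python
# Hypothetical sketch of a programmatic ad placement check.
# None of these names reflect Google's real systems; they simply
# illustrate matching an ad's target audience to a video while
# screening out unsuitable content categories.

BLOCKED_CATEGORIES = {"hate_speech", "extremism", "harassment"}

def should_serve(ad_profile: dict, video_metadata: dict) -> bool:
    """Return True if the ad may appear against this video."""
    # 1. Audience match: does the video's audience overlap the ad's target?
    wanted = set(ad_profile.get("target_audience", []))
    actual = set(video_metadata.get("audience_segments", []))
    if not wanted & actual:
        return False

    # 2. Brand-safety screen: relies on the platform's own classification
    #    of the video. If extremist content is never labelled as such,
    #    this check silently passes.
    categories = set(video_metadata.get("content_categories", []))
    return not (categories & BLOCKED_CATEGORIES)


# A correctly labelled extremist video is rejected...
print(should_serve(
    {"target_audience": ["18-34", "uk"]},
    {"audience_segments": ["18-34"], "content_categories": ["extremism"]},
))  # False

# ...but the same video with missing labels slips through.
print(should_serve(
    {"target_audience": ["18-34", "uk"]},
    {"audience_segments": ["18-34"], "content_categories": []},
))  # True
```

The weakness is in the second step: the brand-safety screen is only as good as the platform's classification of the content, and a video that hasn't been flagged sails straight through – which is, broadly, the failure mode at the heart of this affair.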

Google says it has safeguards in place to prevent that from happening, but as the events of the past few days show, they don't work effectively.

"What we do is, we match ads and the content, but because we source the ads from everywhere, every once in a while somebody gets underneath the algorithm and they put in something that doesn’t match," said the chairman of Google parent, Alphabet, Eric Schmidt in an interview with Fox Business Network.

"We’ve had to tighten our policies and actually increase our manual review time, and so I think we’re going to be okay."

We’ll see.

To be fair, Google has been reasonably quick out of the blocks on this, apologising to companies who have been damaged by the affair and outlining steps it is taking to remedy the situation.

It said it was taking a tougher stance on hateful, offensive and derogatory content, and removing ads more effectively from content that attacks people on the grounds of race, religion, gender or other characteristics.

The company also plans to introduce new tools to allow advertisers to more easily and consistently manage where their ads appear across YouTube and the web.

Its intent may be genuine, and granted, monitoring the 400 hours of video being uploaded each minute is a mammoth task.

But it does all smack more than a little of a case of nudging the stable door shut when the horse is already in the next county.

Either the company knew this problem was happening and chose to ignore it on the basis that advertisers weren't raising it as an issue.

Or it didn't know, because it wasn't watching user-uploaded content – or its own products and services – closely enough.

Whatever the answer, it doesn't look good and is likely to have cost the company significant lost revenue.

It has also knocked 4%, or well over $20bn, off the company's stock market value in the past week.

Major advertisers will rightly be watching closely to ensure the issue is resolved before they dip their toes back in the market.

They will also rightly be asking searching questions of other platforms where their ads appear – Facebook, for example.

It has come under not-unrelated scrutiny recently over the profits being made by fake news websites using the social network to spread their material.

Tech companies need to stop hiding behind fig-leaf excuses like "rapid growth makes it difficult" or "the right to free speech has to be protected", and take responsibility for ridding their platforms of hateful and racist speech.

YouTube is thought to be a major cash cow for Google, because of the explosion in online video consumption.

Indeed, the digital advertising market was worth $178bn last year, up 17% on 2015, making it a hugely lucrative space to operate in, particularly when video is involved.

But right now, traditional media outlets like TV, radio and newspapers must be rubbing their hands, as they prepare to mop up some of that spend courtesy of a temporary (or perhaps longer-term) bounce as advertisers shy away from online channels.

Video may have killed the radio star.

But right now online video is killing itself.

Comments welcome via Twitter to @willgoodbody