A couple of years ago, most of us marketers hadn’t the foggiest idea of how to use AI for marketing. Suddenly, we’re writing LinkedIn posts about the best prompts to whisper into ChatGPT’s ear.
There’s plenty to celebrate about the sudden infusion of artificial intelligence into our work. McKinsey reckons AI will unlock $2.6 trillion in value for marketers.
But our rapid adoption of AI may be getting ahead of important ethical, legal, and operational questions, leaving marketers exposed to risks we never had to think about before (e.g., can we be sued for telling AI to “write like Stephen King”?).
There’s a metric ton of AI dust in the marketing atmosphere that won’t settle for years. No matter how hard you squint, you won’t make out every potential pitfall of using large language models and machine learning to create content and manage ads.
So our goal in this article is to survey the most prominent risks of using AI for marketing from a high level. We’ve rounded up expert advice on how to mitigate these risks, and we’ve added plenty of links so you can dig deeper into the questions that concern you most.
Risk #1: Machine learning bias
Sometimes machine learning algorithms produce results that unfairly favor or disfavor someone or something. It’s called machine learning bias, or AI bias, and it’s a pervasive problem, even with the most advanced deep neural networks.
It’s a data problem
It’s not that AI networks are inherently bigoted. It’s a problem with the data that’s fed into them.
Machine learning algorithms work by identifying patterns to calculate the probability of an outcome, like whether or not a particular group of shoppers will like your product.
But what if the data the AI trains on is skewed towards a particular race, gender, or age group? The AI will come to the conclusion that those people are a better match and skew ad creative or placement accordingly.
Here’s an example. Researchers recently tested for gender bias in Facebook’s ad targeting systems. The investigators placed an ad to recruit delivery drivers for Pizza Hut, and a similar ad with the same qualifications for Instacart.
The existing pool of Pizza Hut drivers skews male, so Facebook showed those ads disproportionately to men. Instacart has more women drivers, so ads for their job were placed in front of more women. But there’s no inherent reason that women wouldn’t want to know about the Pizza Hut jobs, so that’s a big misstep in ad targeting.
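To see the mechanics in miniature, here’s a toy sketch in Python, using synthetic data and scikit-learn’s off-the-shelf logistic regression (nothing like a production ad system). Because one group’s past clicks are under-recorded in the training data, the model learns to score that group as a worse match, even though the underlying interest is identical:

```python
# Toy illustration, not a real ad platform: a model trained on skewed
# historical data learns to favor the over-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

group = rng.choice([0, 1], size=n)   # 0 = group A, 1 = group B
interest = rng.random(n)             # true interest: identical across groups

# Historical quirk: group B saw the ad far less often, so far fewer of
# their clicks ever made it into the training data.
clicked = (interest > 0.5) & ((group == 0) | (rng.random(n) < 0.2))

model = LogisticRegression().fit(np.column_stack([group, interest]), clicked)

# Same high interest, different group label, very different prediction:
for g, label in [(0, "group A"), (1, "group B")]:
    prob = model.predict_proba([[g, 0.8]])[0, 1]
    print(f"{label}: predicted click probability = {prob:.2f}")
```

The fix isn’t a smarter model. It’s more representative data, plus regular audits of who the model actually recommends your ads reach.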
AI bias is common
The problem extends way beyond Facebook. Researchers from USC looked at two large AI databases and found that over 38% of the data in them was biased. ChatGPT’s documentation even warns that its algorithm may associate “negative stereotypes with black women.”
Machine learning bias has several implications for marketers, the least of which is poor ad performance. If you’re hoping to reach the most potential customers possible, an ad targeting platform that excludes large chunks of the population is less than ideal.
Of course, there are bigger ramifications if our ads unfairly target, or exclude, certain groups.
How to avoid AI bias
So what do we do when our AI tools run amok? There are a couple of steps you can take to make sure your ads treat everyone equitably.
First and foremost, make sure a human reviews your content, writes Alaura Weaver, the senior manager of content and community at Writer. “While AI technology has advanced significantly, it lacks the critical thinking and decision-making abilities that humans have,” she explains. “By having human editors review and fact-check AI-written content, they can ensure that it’s free from bias and follows ethical standards.”
Human oversight will reduce the risk of negative outcomes in paid ad campaigns, too.
“Currently, and perhaps indefinitely, it is not advisable to let AI completely take over campaigns or any form of marketing,” says Brett McHale, the Founder of Empiric Marketing. “AI performs optimally when it receives accurate inputs from organic intelligence that has already accumulated vast amounts of data and experiences.”
Risk #2: Factual fallacies
Google recently cost its parent company $100 billion in market value when its new AI chatbot, Bard, gave an incorrect answer in a promotional tweet.
Google’s goof highlights one of the biggest limitations of AI, and one of the biggest risks for marketers using it: AI doesn’t always tell the truth.
AI hallucinates
Ethan Mollick, a professor at the Wharton School, recently described AI-powered systems like ChatGPT as an “omniscient, eager-to-please intern who sometimes lies.”
Of course, AI isn’t sentient, despite what some may claim. It doesn’t intend to deceive us. It can, however, suffer from “hallucinations” that lead it to just make stuff up.
AI is a prediction machine. It looks to fill in the next word or phrase that’ll answer your query. But it’s not self-aware; AI doesn’t have gut-check logic to know if what it’s stringing together makes sense.
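To make that concrete, here’s a drastically simplified “prediction machine” in Python: a toy two-word (bigram) model. Real LLMs are vastly more sophisticated, but the failure mode rhymes: the model picks whichever word most often follows the current one, with no notion of whether the resulting sentence is true.

```python
# A toy "prediction machine": a bigram model that always picks the most
# statistically common next word. It tracks word patterns, not facts.
from collections import Counter, defaultdict

corpus = ("argentina won the 1978 world cup . "
          "france won the 2018 world cup .").split()

# Count which word follows which in the training text.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

# Ask it to continue "france..." one word at a time.
word, sentence = "france", ["france"]
while word != "." and len(sentence) < 10:
    word = nxt[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))
# -> "france won the 1978 world cup ."  Fluent, confident, and wrong:
#    the model stitched together frequent word pairs, not facts.
```

Notice that every sentence in the tiny training corpus was true; the falsehood came from the stitching, not the source material.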
Unlike bias, this doesn’t seem to be a data problem. Even when the network has all the right info, it can still tell us the wrong thing.
Consider this example, where a user asked ChatGPT, “how many times did Argentina win the FIFA World Cup?” It said once and referenced the team’s 1978 victory. The user then asked which team won the 1986 World Cup.
The chatbot admitted it was Argentina, offering no explanation for its earlier gaffe.
The troubling part is that AI’s erroneous answers are often written so confidently that they blend into the text around them, making them seem completely plausible. They can also be comprehensive, as detailed in a lawsuit filed against OpenAI, in which ChatGPT allegedly concocted an entire story of embezzlement that was then shared by a journalist.
How to avoid AI’s hallucinations
While AI can lead you astray with even single-word answers, it’s more likely to go off the rails when writing longer texts.
“From a single prompt, AI can generate a blog or an eBook. Yes, that’s amazing – but there’s a catch,” Weaver warns. “The more it generates, the more editing and fact-checking you’ll have to do.”
To reduce the chances that your AI tool starts spinning hallucinatory narratives, Weaver says it’s best to create an outline and have the bot tackle it one section at a time. And then, of course, have a person review the facts and stats it adds.
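If your team drafts content programmatically, that outline-first advice translates into a simple loop. Here’s a minimal sketch assuming the official OpenAI Python SDK; the model name, outline, and prompts are all illustrative placeholders, not prescriptions:

```python
# A minimal sketch of the outline-first workflow, assuming the official
# OpenAI Python SDK (pip install openai). The model, outline, and prompts
# are illustrative placeholders, and a human still edits every section.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

outline = [
    "Why ad-targeting data gets skewed",
    "How to audit a campaign for bias",
    "A pre-launch checklist",
]

draft = []
for section in outline:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you've vetted
        messages=[{
            "role": "user",
            "content": (
                f"Write roughly 150 words for a blog section titled "
                f"'{section}'. Stay on that topic only, and don't cite "
                f"statistics unless you can name the source."
            ),
        }],
    )
    draft.append(response.choices[0].message.content)

# Hand the assembled draft to a human editor for fact-checking.
print("\n\n".join(draft))
```

Constraining each request to one section keeps the model on a short leash, and it gives your editor bite-sized chunks to fact-check instead of a monolithic essay.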
Risk #3: Misapplication of AI tools
Every morning we wake up to a new crop of AI tools that seemingly sprouted overnight like mushrooms after a rainstorm.
But not every platform is built for all marketing functions, and some marketing challenges can’t (yet) be solved by AI.
AI tools have limitations
ChatGPT is a great example. The belle of the AI ball is fun to play with (like asking it to explain, in the style of the King James Bible, how to remove a peanut butter sandwich from a VCR). And it can churn out some surprisingly well-written short-form answers that bust up writer’s block. But don’t ask it to help you do keyword research.
ChatGPT fails here because of its relatively old data set, which only includes information from before 2022. Ask it to offer keywords for “AI marketing” and its answers won’t jibe with what you find in dedicated tools like Twinword or ContextMinds.
Likewise, both Google and Facebook have new AI-powered tools to help marketers create ads, optimize ad spend, and personalize the ad experience. A general-purpose chatbot can’t solve those challenges.
Google announced a slew of AI upgrades to its search and ads management products at the 2023 Google Marketing Live event.
You can overuse AI
If you give an AI tool a singular task, it can over-index on just one goal. Nick Abbene, a marketing automation expert, sees this often with companies focused on improving their SEO.
“The biggest problem I see is using SEO tools blindly, over-optimizing for search engines, and disregarding customer search intent,” Abbene says. “SEO tools are great for signaling content quality to search engines. But ultimately, Google wants to match the searcher’s ask.”
How to avoid misapplication of AI tools
A wrench isn’t the best option for pounding nails. Likewise, an AI writing assistant may not be good for creating web pages. Before you go all in on any one AI option, Abbene says to get feedback from the tool’s builder and other users.
“In order to avoid mis-selection of AI tools, understand if other marketers are using the tool for your use case,” he says. “Feel free to request a product demo, or trial it alongside some other tools that offer the same functionality.”
Websites like Capterra let you quickly compare multiple AI platforms.
And once you find the right AI tool stack, use it to aid the process, not take it over. “Don’t be afraid to use AI tools to augment your workflow, but use them just for that,” Abbene says. “Begin each piece of content from first principles, with quality keyword research and understanding search intent.”
Risk #4: Homogeneous content
AI can write an entire essay in about 10 seconds. But as impressive as generative AI has become, it lacks the nuance to be truly creative, leaving its output often feeling, well, robotic.
“While AI is great at producing content that’s informative, it often lacks the creative flair and engagement that humans bring to the table,” Weaver says.
AI is made to imitate
Ask a generative AI writing bot to pen your book report, and it’ll easily spin up 500 words that competently explain the main theme of The Catcher in the Rye (assuming it doesn’t hallucinate Holden Caulfield as a bank robber).
It can do that because it’s absorbed thousands of texts about J. D. Salinger’s masterpiece.
Now ask your AI pal to write a blog post that explains a concept core to your business in a way that encapsulates your brand, audience, and value proposition. You might be disappointed. “AI-generated content doesn’t always account for the nuances of a brand’s personality and values and may produce content that misses the mark,” Weaver says.
In other words, AI is great at digesting, combining, and reconfiguring what’s already been created. It’s not great at creating something that stands out against existing content.
Generative AI tools are also not good at making content engaging. They’ll happily churn out huge blocks of words with nary an image, graph, or bullet point to give weary eyes a rest. They won’t pull in customer stories or hypothetical examples to make a point more relatable. And they’d struggle to connect a news story from your industry to a benefit your product provides.
How to avoid homogeneous content
Some AI tools, like Writer, have built-in features to help writers maintain a consistent brand personality. But you’ll still need an editor to “review, and edit the content for brand voice and tone to ensure that it resonates with the audience and reinforces the organization’s messaging and objectives,” Weaver advises.
Editors and writers can also see an article like other humans will. If there’s an impenetrable block of words, they’ll be the ones to break it up and add a little visual zhuzh.
Use AI content as a starting point—as a way to help kickstart your creativity and research. But always add your own personal touch.
Risk #5: Loss of SEO
Google’s stance on AI content has been a little murky. At first, it seemed the search engine would penalize posts written with AI.
More recently, Google’s developer blog said that AI-generated content is OK in its book. But that approval comes with a significant caveat: only “content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness” will impress the human search raters who continually evaluate Google’s ranking systems.
Trust is clutch for SEO
Among Google’s E-E-A-T factors, the one that rules them all is trust.
We’ve already discussed that AI content is prone to fallacies, making it inherently untrustworthy without human supervision. It also fails to meet the supporting requirements because, by nature, it isn’t written by someone with expertise, authority, or experience on the topic.
Take a blog post about baking banana bread. An AI bot will give you a recipe in about two seconds. But it can’t wax poetic on the chilly winter days spent baking for its family. Or talk about the years it spent experimenting with various types of flour as a commercial baker. Those perspectives are what Google’s search raters look for.
It seems to be what people crave, too. That’s why so many of them are turning to real people on TikTok to learn things they used to search for on Google.
How to avoid losing SEO
The great thing about AI is it doesn’t mind sharing bylines. So when you do use a chatbot to speed up content production, make sure you reference a human author with credentials.
This is especially true for sensitive subjects like healthcare and personal finance, which Google calls Your Money or Your Life (YMYL) topics. “If you’re in a YMYL vertical, prioritize authority, trust and accuracy above all else in your content,” advises Elisa Gabbert, Director of Content and SEO for WordStream and LocaliQ.
When writing about healthcare, for example, have your posts reviewed by a medical professional and reference them in the post. That’s a strong signal to Google that your content is trustworthy, even if it was started in a chatbot.
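If you want that reviewer credit to be machine-readable, structured data is one common approach. The sketch below just uses Python to assemble illustrative schema.org JSON-LD; every name and URL is a placeholder, so check Google’s current structured data guidelines before shipping anything like it:

```python
# Illustrative only: emit schema.org JSON-LD naming a human author and a
# medical reviewer for a health article. Names and URLs are placeholders.
import json

page = {
    "@context": "https://schema.org",
    "@type": "MedicalWebPage",  # a WebPage subtype that supports reviewedBy
    "headline": "How to Manage Seasonal Allergies",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "url": "https://example.com/authors/jane-doe",
    },
    "reviewedBy": {
        "@type": "Person",
        "name": "Dr. John Smith, MD",
    },
}

# Paste the output into the page's <head> inside a
# <script type="application/ld+json"> tag.
print(json.dumps(page, indent=2))
```

Pair the markup with a visible byline and reviewer bio on the page itself; the two should always tell the same story.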
Mitigate the risks of using AI for marketing
AI is advancing at an incredible rate. In less than a year, ChatGPT has already seen significant boosts in its capabilities. It’s impossible to know what we’ll be able to do with AI in even the next six to twelve months, nor can we anticipate the potential problems.
Here are several ways you can improve your AI marketing outcomes while avoiding some of the most common risks:
- Have human editors review content for quality, readability, and brand voice
- Scrutinize each tool you use for security and capability
- Regularly review AI-directed ad targeting for bias
- Assess copy and images for potential copyright infringement