11 ways AI can transform your content strategy and why it might not

Wendy Gittleson

In early 2023, I sat in a windowless conference room with 200 or so of my SaaS colleagues. The topic at hand was a new product release built on top of a platform from OpenAI, the company behind ChatGPT.

The product, then called “AI Assistant,” wasn’t ChatGPT as the public knows it. Each customer had their own private instance trained on their own data, ensuring confidentiality.

The company’s Chief Product Officer, an engaging speaker even when talking about less groundbreaking technology, was particularly animated. I, however, was apprehensive and even a little worried. Whispers about AI’s potential to replace writers, designers, coders, and other professionals had already stirred anxiety. Many of us in that room wondered: Was our future at risk?

Two years later, my perspective has shifted. I’ve decided to embrace generative AI (GenAI) tools like ChatGPT. Instead of fearing obsolescence, I actively explore how these tools can enhance my work. AI has become an invaluable ally in my career.

Here are 11 ways AI enhances my work as a content strategist, along with a few caveats. But first, let’s take a quick look at how we got here.

A very brief history of AI

AI is hardly a new technology. The idea was first explored by Alan Turing in 1950 in a seminal paper titled “Computing Machinery and Intelligence,” which pondered whether machines could think.

Just two years later, Arthur Samuel answered Turing’s question by developing a checkers program that learned the game through play rather than through manual programming.

By 1955, John McCarthy coined the term “artificial intelligence.”

AI in practice — the early years

The next decade and a half brought us a handful of innovations, including an industrial robot used on General Motors assembly lines, the first chatbot, a chess program driven by machine learning (ML), and even the first autonomous (but limited) vehicle.

In 1979, the American Association for Artificial Intelligence, now known as the Association for the Advancement of Artificial Intelligence (AAAI), was founded. The AI dam might not have broken then, but 1980-1987 gave us our first AI boom.

AI boom 1.0 and the winter that followed

While it would still be decades before most AI innovations became available to the public, the 1980s brought us early versions of language translation, consumer customization, and a driverless car that could go 55 miles per hour on empty roads.

In 1987, an “AI Winter” hit. People lost interest. Funding for the exceedingly expensive research all but dried up and progress dramatically slowed.

The long thaw

Fast forward to 1997: IBM’s Deep Blue defeated chess champion Garry Kasparov, reigniting AI’s momentum. The 2000s brought voice recognition, emotion-simulating software, and AI-powered virtual assistants like Siri.

By 2012, Google AI demonstrated image recognition capabilities, and by 2018, AI was surpassing humans in language processing tests. OpenAI’s release of GPT-3 in 2020 brought generative AI into the mainstream, sparking debates and opportunities.

Is AI a friend or foe to content creators?

Is AI out to steal my job or make me better? I can’t predict the future, but here are the ways AI is helping me become more valuable to my clients now.

The 11 ways AI elevates content strategy and writing

  1. Helping create audience personas — AI can’t automatically create audience personas for you unless it can pull data from your CRM (customer relationship management) platform. However, if you upload CRM data, it can help you define your audience, identify patterns and trends, research publicly available competitor strategies, and suggest audiences you may not have identified (see the sketch after this list).
  2. Content planning — Once your AI instance has the key data “in hand,” it can suggest content ideas based on customer personas, industry trends, audience objectives, and relevant keywords.
  3. Creating content calendars — When is just as important as what. AI can help you build publishing schedules and map topics to channels and timelines.
  4. SEO optimization — AI can provide keyword research and suggest titles and meta descriptions.
  5. Brand voice development — AI can analyze content that has worked for you to help establish brand guidelines and consistent messaging.
  6. Conducting performance reviews — AI can analyze existing content for strengths, weaknesses, and opportunities.
  7. Gap identification — What topics or areas have you neglected in your content library?
  8. Data interpretation — AI can interpret content performance reports from Google Analytics, Semrush, or whatever tools you utilize, and suggest improvements.
  9. A/B testing — AI can recommend headlines, CTAs, and formats to help optimize engagement for emails, blog posts, newsletters, and more. 
  10. Competitive analysis — AI can even analyze your competitors’ content strategy and suggest ways to make yours stand out.
  11. Finding resources (my favorite) — AI can recommend books, articles, and experts to follow, providing a constant source of inspiration. 
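
To make the first item on this list concrete, here is a minimal sketch of what “upload CRM data and ask for personas” can look like, using the official OpenAI Python SDK. The file name, column headers, model name, and prompt are placeholders I made up for illustration; swap in whatever your own stack actually uses.

```python
# Minimal sketch: draft audience personas from a (hypothetical) CRM export.
# Assumes OPENAI_API_KEY is set in the environment and that the CSV has
# job_title, industry, and deal_size columns; adjust to your real export.
import csv

from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()

with open("crm_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Summarize a sample of the CRM rows into plain text for the prompt.
summary = "\n".join(
    f"{r['job_title']} | {r['industry']} | {r['deal_size']}" for r in rows[:200]
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a B2B content strategist."},
        {
            "role": "user",
            "content": (
                "From the CRM rows below, propose three audience personas. "
                "For each, list goals, pain points, preferred channels, and "
                "content topics.\n\n" + summary
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The output is a starting point for a strategist to refine, not a finished persona document.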

AI’s risks and limitations 

Despite its strengths, AI is not magic, especially when it comes to content creation. In fact, its training methods are why it is a great tool for helping create content, but also why it may never write great, or even good, content. 

How AI is trained

GenAI is trained on massive amounts of data. Like your favorite search engine, GenAI has gathered enough data to answer most of your questions, but it’s not a search engine. It voraciously consumes written content, yet it can’t read. 

GenAI studies human language patterns, contexts, and ideas so it can mimic and predict them. It studies grammar, syntax, context, tone, and style to produce the most appropriate responses, but it’s definitely not human.
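
If you want to see that prediction at work, here is a tiny example using the small open-source GPT-2 model through Hugging Face’s transformers library. The prompt is something I invented; any phrase will do.

```python
# Minimal sketch: next-word prediction with GPT-2 via Hugging Face transformers.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "A good content strategy starts with"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])

# The model doesn't "understand" content strategy; it continues the prompt
# with whatever its training data makes statistically likely.
```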

Human trainers fine-tune GenAI’s output to help find errors, identify gaps, and make the model more accurate and ethical, but there’s no way for them to keep up with everything it produces.

Moore’s Law and AI training 

You may be familiar with Moore’s Law. In 1965, Gordon Moore predicted that the number of transistors on a chip (and with it, computing capacity) would double roughly every two years. While many argue that Moore’s Law is dead, he wasn’t completely off the mark.

When Moore published his theory, it was estimated that there was about a terabyte of digital data in the world. Today, a terabyte fits on a flash drive, and the amount of digital data worldwide is estimated at about 120 zettabytes, which is equal to 120 billion terabytes.
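
For anyone double-checking those units, one zettabyte is a billion terabytes, so the arithmetic works out like this (a quick back-of-the-envelope check, nothing more):

```python
# 1 ZB = 10**21 bytes and 1 TB = 10**12 bytes, so 1 ZB = 10**9 TB.
ZB_IN_TB = 10**9
estimated_data_zb = 120              # rough worldwide estimate cited above
print(estimated_data_zb * ZB_IN_TB)  # 120,000,000,000 TB, i.e. 120 billion
```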

To be fair, most data was analog in Moore’s day, but the rise of the internet, social media, smartphones, video streaming, and cloud computing is the primary driver of that growth.

Data overload is good and bad for AI

Imagine a person who has read every book in the Library of Congress. They would undoubtedly be one of the best-informed people on the planet. Now, imagine if that same person got half their information from the internet and social media.

AI’s training is largely indiscriminate. It’s almost as likely to learn from misinformation as from credible sources. It picks up cues about human behavior from hostile social media exchanges as well as friendly ones. I’ve even seen AI misspell words.

You may have heard of AI hallucinations: confident-sounding responses that are simply wrong. AI doesn’t make those responses up out of thin air, exactly. It learns from its teachers.

Unfortunately, there are simply not enough AI supervisors to ensure accuracy, quality, and professional tone. Even when an AI response seems accurate, it might have biases that come from biased training. 

AI plagiarism

Many say that all AI-generated content is plagiarized because it takes its “ideas” from other publicly available sources and repackages them as new content. But it doesn’t always even repackage them.

Engineers at the University of Mississippi and Penn State analyzed 210,000 GPT-2-generated texts and compared them to 8 million training documents. They found multiple occurrences of three types of plagiarism: direct copying of content, paraphrasing, and copying ideas from text without proper attribution.

The New York Times is currently in a legal battle with OpenAI and Microsoft, alleging that the companies illegally used copyrighted Times content for training. The Times further alleged that OpenAI deleted evidence its legal team had spent 150 hours gathering.

While this might not directly affect you now, it could. If plagiarized content were to appear in any of your marketing materials, you could land in legal hot water. There are several plagiarism checkers on the market, but they’re AI models trained to look for matching words, not meaning.

If, for example, your company manufactured dog collars, it wouldn’t be unreasonable to imagine that your direct competitors would use many of the same words in their marketing. That doesn’t necessarily mean they’re plagiarizing you. It might only mean that they write a lot about dogs and collars. 

Any savvy middle school student can outwit a plagiarism checker by rewriting existing content with direct synonyms. 
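
Here is a toy illustration of that word-matching blind spot. It is not how any particular commercial checker works; it only shows that shared vocabulary and shared meaning are different things. The sentences are invented.

```python
# Toy sketch: word-overlap (Jaccard) similarity, the "matching words" approach.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

original = "our leather dog collars keep large dogs safe and comfortable"
synonym_rewrite = "these hide canine neckbands keep big hounds secure and cozy"
competitor = "durable dog collars for large dogs built to keep them comfortable"

print(round(jaccard(original, synonym_rewrite), 2))  # low score, same idea
print(round(jaccard(original, competitor), 2))       # higher score, no copying
```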

My final thoughts

GenAI is still a relatively new technology with a lot of unknowns and it’s growing far faster than regulators or trainers can keep up. AI has tremendous potential for data analytics, ideation, and more, but its content writing capabilities still need a dominant human hand to ensure accuracy, quality, and originality. 

AI helps me almost daily, but it doesn’t write for me. If you’d like to know more uses for GenAI, contact me. I’d be happy to share some tips. 

