How AI amplifies ‘fauxthenticity’

By Kath Pay

Chad is quickly becoming a valuable member of my agency’s team for our planning, creative development and optimisation. You won’t find his photo on our company website, however. That’s because “Chad” is the name we’ve given to our private version of ChatGPT.

Plus, he’s still a work in progress. We’re training him on our content, our brand voice, and our unique and proprietary needs. While he’s a fast learner, we find we need to keep an eye on him because he sometimes gets things wrong when he checks outside sources.

That happens when he picks up bad information. It could be wrong, biased, self-serving, too general, outdated, or irrelevant for our needs, or just too flash-in-the-pan trendy. We tolerate Chad, though. For one thing, he won’t drink the last cup of coffee and then neglect to make another pot.

Could he ever become a senior staff member? Probably not. He needs a lot of oversight, and that’s as it should be. Chad and his fellow chatbots in training in our private instance of ChatGPT aren’t there to do our work for us. They’re there to help us do our work better.

Chad’s work also highlights a new problem that AI poses in email marketing – the ever-wider circulation of bad information and the appearance of authenticity it can give.

The consequences are significant. If we as marketers rely on bad information or unqualified advisers for everything from general how-tos to strategic planning, we could end up making disastrously wrong decisions that could cost revenue, customers, and jobs.

‘Fauxthenticity’ makes everyone an expert

Chad’s struggle with bad information highlights the limitations and dangers of over-relying on large language models (LLMs) and under-supervising them for authenticity and accuracy.

Email marketing is susceptible to this potential double whammy. We don’t have the same canon of knowledge as traditional general marketing does. Email pioneers learned from each other through in-person or virtual conversations and advice columns like what you read here in Only Influencers.

That’s the upside of our DIY learning environment. But the downside is that it fuels a universe of misinformation and passed-along knowledge not backed up with facts or evidence.

This makes email thought leadership prone to the peril of “fauxthenticity,” a term I wish I had coined because it perfectly captures the problem – a quality attributed to content that appears to be authentic but isn’t.

I found it in this quote from writer and philosopher Julian Baggini:

“Our problem is not that we have too much individuality, but we have the wrong kind, an ersatz version that leaves us closer to the dystopia of uniformity than we dare to believe. I call this a condition of faux-authenticity or ‘fauxthenticity’.”

With the increased use of generative AI to create blog posts and the like, incorrect information passed off as fact is one obvious problem. Another is that anyone can now claim to be an expert, even if their only skills are searching and cutting and pasting, with no evidence that their information is correct.

Fauxthenticity in action: the email open rate

Conventional email wisdom holds that when you test subject lines, the one with the highest open rate is the winner. But I find time and time again that a high open rate doesn’t always generate the most conversions or the highest campaign revenue.

A higher open rate doesn’t equal higher conversions. If conversions are the objective for your campaign, then judging the test by open rate means you’ve potentially optimised your email for the wrong result.
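To see why this matters, here’s a minimal sketch in Python using invented numbers: variant A wins the test if you judge by opens, but variant B wins on the conversions the campaign was actually sent to produce.

```python
# Invented numbers for two subject-line variants sent to equal-sized splits.
variants = {
    "A": {"sent": 10_000, "opens": 3_200, "conversions": 64},
    "B": {"sent": 10_000, "opens": 2_500, "conversions": 95},
}

for name, v in variants.items():
    open_rate = v["opens"] / v["sent"]
    conversion_rate = v["conversions"] / v["sent"]
    print(f"Variant {name}: open rate {open_rate:.1%}, "
          f"conversion rate {conversion_rate:.2%}")

# Variant A: open rate 32.0%, conversion rate 0.64%
# Variant B: open rate 25.0%, conversion rate 0.95%
# A "wins" on opens, but B drives roughly 50% more conversions.
```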

Yes, the open rate is easy to track. But it’s well known for being unreliable and can mask a campaign’s true performance.

Apple’s Mail Privacy Protection feature, which preloads images and registers opens whether or not the subscriber actually reads the email, finished off the open rate as anything more than a trend indicator. Yet I still see advice, whether generated by humans or AI, claiming it’s the yardstick for testing campaign success.

3 stages of thought leadership: regurgitation, adaptation, innovation

Let me take a moment here to explain how information gets passed along, because understanding that process is one step in evaluating the information you use in your work.

1. Regurgitation

This first stage is about passing along information learned from others. It’s the beginner’s level of thought leadership. Most people who aspire to be experts, or who are paid to make their companies appear that way, start here.

When you don’t have much practical experience, it’s easy to do a quick search on popular topics like “email success metrics” or “email ROI” and cobble something together. As long as you use authoritative sources, you can produce something valuable for people who know less than you do.

The problem arises when you include information that’s wrong or out of date, and you – like my AI friend Chad – don’t know from experience that the information is bad. So it goes into your blog post, or your company newsletter, or your Slack/LinkedIn post, and the bad information gets a new lease on life.

Here’s where AI takes the whole regurgitation game up several notches: including bad information in the content or recommendations it spits out can give that information an undeserved patina of authority and authenticity – qualities that can make it rank high on search engines like Google.

When you rely on AI-generated content without fact-checking it with reliable and trustworthy sources or applying critical thinking, you pass on this bad information and give the original source material credit it doesn’t deserve. Someone who picks up your information and shares it perpetuates the problem.

After years of sifting through email-related content on the web and other sources, I find most of what I come across is in this stage.

2. Adaptation

As you become more experienced in the ways of email marketing, you learn to filter out for yourself what’s true and what isn’t. You can apply critical thinking when consuming information. You know, for example, that a campaign with a high open rate can still end up with a low conversion rate, and that opens should not be used as a proxy for conversions.

You’ve either defined, or you’re well on the way to defining, whether you are a purist or a pragmatist marketer and thought leader, and your content supports this.

Instead of just passing along links and quoting others, your material begins to evolve and reflect your own experiences through outlets like your own case studies, use cases and research reports.

You still call upon others’ work, but you add your own interpretations. Further, you participate in face-to-face or virtual discussions, learn from others, apply what you learn, and report on the results.

3. Innovation

Now you’re in front of the pack. With your experience, previous writings and speaking engagements, and your varied interests that influence your email philosophy, you can look ahead and spotlight what we need to know. You challenge the status quo, bring new concepts and philosophies, and change people’s mindsets.

In my thought-leadership model, “fauxthenticity” can flatten distinctions among these three groups. AI compounds this problem by making it relatively easy to produce a full speech on a topic without any practical experience.

This can increase the volume of misinformation on the web and hasten its spread as others pick it up and share it without question.

A fourth consideration winds its way through all three stages and changes as you move from regurgitation to innovation: attribution.

  • Regurgitators might link to the source material but not mention who provided it, especially if they’re trying to build a reputation or career in thought leadership.
  • Adapters will sometimes attribute the sources of their inspiration, whether to support or dispute them, or to align themselves with a respected authority on the topic and lend more credibility to their content.
  • Innovators, who likely produce the source material that everyone else quotes, borrows, or steals, are the most likely to mention who or what got them thinking.

Attribution is important because email has many rules and so-called “best practices,” and not all of them deserve the name. Combine this with underbudgeted, overworked email marketers, and you have the perfect storm.

I’ve worked with many email marketers who say they seek advice on the web and latch onto a practice, even if it’s misinformed or archaic. If all authors were required to attribute their advice and state its source, it would be much easier for a marketer to decide whether that content stands up to scrutiny and is worth heeding.

4 reasons why AI makes fauxthenticity a bigger problem

At this point, you might be shaking your head and saying, “I still don’t get the problem, Kath. Search engines have always turned up bad info. Why does AI make it worse?”

I have four answers to that question. But I also asked Chad for his views because he can be amazingly self-aware of his limitations and doesn’t pout if I disagree with him.

1. Noisier echo chamber: AI content that gets reused from one chatbot session to the next keeps that information in circulation, especially if its popularity boosts a source’s ranking and perceived authenticity. According to Chad, this can create an echo chamber of outdated practices.

Generative AI chatbots can amplify the problem: they pick up flawed information from other sources, you produce content based on it, other sources then pick up your content, and all of it gets passed off as true. It’s a vicious circle, perpetuated when marketers don’t fact-check results.

Chad says, “If AI continues to learn from existing content that may not always be updated or correct, there’s a risk of perpetuating outdated or ineffective practices. This could create a feedback loop where the same untested or misleading strategies are continuously recycled and reinforced.”

2. More face-value content: Chatbots don’t necessarily evaluate a source material’s trustworthiness. It’s your responsibility to vet what your chatbot tells you. Critical thinking is required.

Chad says, “As AI-generated content becomes more common, the ability to critically evaluate and test the information becomes even more crucial. Marketers will need to be more skeptical and discerning, relying on empirical evidence and testing rather than taking any practice at face value.”

3. Bad raps: When you rely on generative AI to create content for you, you run the risk that others will criticize your work for being incorrect, outdated, or plagiarized. That damages the reputation you are trying to build and taints everything else you say, even if it’s authentic and authoritative.

4. More bad decisions: One reason generative AI has become so popular is that you can use it for everything from creating an individual campaign to developing a complete email strategy for your brand or company to analysing campaign and program data. It’s all too tempting to use the material exactly as your chatbot session writes it.

However, you can shoot yourself in the virtual foot if you rely on outdated, false, or irrelevant material derived from a chatbot.

This is where fauxthenticity becomes a real danger. It’s bad enough when it creeps into articles and blog posts. But if you use that material to create campaigns, build strategies or advise others, it can be a disaster.

Chad says, “The ease of generating content with AI can lead to a significant increase in the volume of available content. While this can democratize content creation and provide a platform for more voices, the quality and accuracy of this content can vary, making it harder for marketers to discern reliable from unreliable practices.”

Finally: Protect yourself from bad AI content

These tips can help you not just with AI content but also with searches you run on your own.

Structure your initial prompts so that your chatbot consults specific sources or searches for the terms most relevant to your needs. Instruct it to attribute its claims and provide source links, which you then review.

Then go back and initiate another set of prompts or conversations that refine what you’re looking for and give your chatbot a better idea of where to go or what not to do.
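If you run a private GPT through the API rather than the chat interface, this kind of instruction can live in the system prompt. Here’s a minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative examples, not our actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt: require attribution and reviewable source links.
SYSTEM_PROMPT = (
    "You are an email marketing research assistant. "
    "Consult primary sources where possible (platform documentation, "
    "original research). For every claim, name the source and include a "
    "link a human can review. If you cannot attribute a claim, say so."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What does Apple's Mail Privacy "
                                    "Protection mean for measuring opens?"},
    ],
)
print(response.choices[0].message.content)
```

Review the links it returns yourself. The instruction makes attribution easier to check; it doesn’t make the answer true.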

Another solution: See whether a content source’s author attributes an idea to a person or links to another source, such as a case study or research report. Does the content even have an author?

If the content has links, check them out before using them. They could be years out of date. This is especially important when quoting statistics. I have lost count of the news stories that quote email’s eye-popping ROI but end up citing the same decades-old research time after time.
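As a first pass on a pile of cited links, a small script can at least confirm that each page still resolves and show any date the server reports. This is my own illustration, assuming the Python requests library and a hypothetical URL; it doesn’t replace actually reading the source.

```python
import requests

def check_source(url: str) -> None:
    """Report whether a cited link still resolves and any date the server gives."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        modified = resp.headers.get("Last-Modified", "not reported")
        print(f"{url} -> HTTP {resp.status_code}, Last-Modified: {modified}")
    except requests.RequestException as exc:
        print(f"{url} -> unreachable ({exc})")

check_source("https://example.com/email-roi-study")  # hypothetical URL
```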

My consultancy is seeing a good return on the time and money we invested in Chad and other specialist GPTs. They make our daily work lives easier and more efficient. But we train every GPT that touches email marketing on our extensive library of original, Holistic-written content, learnings, and insights.

We don’t use material pulled from the web and pass it off as our own. We’re not afraid to go up against the conventional wisdom if we have evidence otherwise, no matter how many others might disagree with us.

But we always look over our AI workers’ virtual shoulders. We check everything they do against our own experience and firsthand knowledge so that we can give our customers trustworthy, reliable insights and advice.

P.S.: Chad’s only contributions to this post are the quotes I attribute to him. The rest is mine!

Originally posted on OnlyInfluencers

Ready to ensure your email marketing remains authentic in the age of AI? Contact Holistic Email Marketing today for expert guidance on leveraging AI while maintaining accuracy and trust.