The Uncanny Valley Challenge for AI Marketing
The "uncanny valley" remains a challenge across both visual and written creative. What does that mean for marketers?
AI-generated content is unquestionably getting better, thanks in large part to the significant leaps forward in both text and visual content outputs we’re seeing in the most recent platform releases from Anthropic, OpenAI, and Google.
As good as it's getting, the persistent "uncanny valley" effect remains something every marketer needs to keep in mind when planning campaign creative, messaging, and content.
The conversation about AI's "uncanny valley" in marketing, and especially advertising, has focused heavily on visuals: the distorted hands, the too-smooth faces, the just-on-the-edge-of-creepy symmetry. But that's really only half the challenge marketers face. Research over the past year or so also shows the uncanny valley effect extends to AI-generated copy, content, and marketing communications.
Audiences can often tell when they're reading LLM-generated text, even if they can't quite articulate why. And it's their response that we need to be concerned about.
The Visual Challenge We Know About
The 2024 and 2025 holiday advertising seasons made the visual uncanny valley impossible to ignore. Coca-Cola attempted AI-generated holiday commercials two years running. The 2024 version featuring people and trucks drew immediate backlash: "soulless," "creepy," "artificial." For 2025, they pivoted to anthropomorphic animals, this time admiring AI-generated trucks, to avoid human close-ups.
The technical execution improved, but the public reception did not. Admittedly, a good bit of the backlash appears to have come from within the creative and marketing industries, with the latter calling out the vast production effort and costs involved, which seemed to counter the notion that AI-generated creative is more efficient.
NielsenIQ research from December 2024 quantified what we're seeing. Consumers consistently rated AI-generated video ads as more "annoying," "boring," and "confusing" than traditionally produced visual creative. Even AI output deemed high quality did not leave as strong an impression as conventional advertising. The research found AI-generated ads proved taxing: instead of absorbing the message, audiences spent mental energy trying to figure out what looked...off.
Or as the NN Group put it: “Audiences can perceive when the narrative is shaped around what the technology can do rather than what the story should be.”
The Text Challenge We Should Also Keep in Mind
What tends to get less attention in industry media is that the uncanny valley effect extends to AI-generated text content as well, and audiences are getting better at detecting it.
A 2025 study by Hookline found 82% of Americans can spot AI-generated content at least some of the time. For younger consumers, perhaps unsurprisingly, that number climbs to 88%. A Bynder study from 2024 showed 50% of consumers can correctly identify AI-generated copy when comparing ChatGPT output to human-created content based on the same input brief.
People tend to identify specific patterns. Overuse of em-dashes, ChatGPT's signature "tell" (one that, as a lifelong fan of the em-dash, I personally mourn), of course gets plenty of attention, but so do uniform sentence structure, bland neutrality, overly formal language lacking personality, and content that's technically correct but emotionally vacant.
An MIT thesis from 2025 confirmed the uncanny valley effect surfaces explicitly in text-only interactions. Participants consistently rated chatbots engineered to fall in the uncanny valley lowest in anthropomorphism, animacy, likeability, and perceived intelligence.
The business impact is measurable. Research published in the Journal of Business Research (October 2024) documented an "AI-authorship effect" where AI-generated marketing communications reduce perceived authenticity and trigger what researchers call "moral disgust," negatively affecting word-of-mouth and customer loyalty.
The Disclosure Problem
The Nuremberg Institute for Market Decisions found that merely labeling content as AI-generated triggered negative reactions: consumers saw labeled AI content as "less natural and less useful" even when it was identical to versions labeled as human-made. Only 21% of respondents trust AI companies and their promises, and only 20% trust AI itself, suggesting this is less about advertising and more about the general unease with AI that persists among the wider population.
The EU's upcoming requirement to label AI-generated content, along with domestic US examples like New York's new AI-in-advertising transparency law, puts marketers in a strategic bind. Transparency is meant to build trust, but research suggests it may do the opposite, at least for the models studied in recent research.
What This Means for CMOs
The common thread across visual and text-based uncanny valley problems is inherently a human one, rooted in the almost-impossible-to-mimic human ability to tease out emotional cues such as expressions, movement, timing, word choice, and rhythm. The technology is undoubtedly improving. Anyone who has used modern platforms such as Nano Banana from Google, Sora from OpenAI, or Claude Opus 4.5 from Anthropic in recent months, and who remembers the original ChatGPT and MidJourney models, knows just how far things have progressed. But as Campaign Asia noted, generative models can produce polished output; they still cannot create emotional coherence.
The practical implications:
For visual content: AI-generated creative may work in lower-stakes contexts. For brand advertising where emotional connection matters, we're still in a period where the technology may create more problems than it solves.
For written content: The 82% detection rate should give pause to anyone publishing unedited AI copy at scale. The efficiency gains disappear if your audience mentally flags your content as inauthentic before they finish reading. I haven't found similar research focused on B2B buyers consuming content in a business context, where tolerance for AI may be different, but the old adage that even B2B decision-making is emotional at its core dictates a cautious approach here.
For disclosure strategy: If you're required to label AI-generated content, pair it with products or contexts where AI feels like a natural fit and not an unwelcome intrusion. The NIM research found AI-generated ads were more accepted when promoting high-tech products, for example, suggesting there are situations where the uncanny valley doesn’t matter quite as much.
For the hybrid approach: The research consistently shows that AI works better as a starting point than a finished product. Use it for ideation, drafts, and versioning. Keep human judgment in the loop for anything customer-facing where emotional resonance matters.
The brands continuing to push fully AI-generated creative despite documented backlash may be making a calculated bet that controversy drives visibility, or that the expected savings from production efficiency are worth the risk. Those are valid tradeoffs to consider, but ones where CMOs need to fully understand both sides.
Note: Portions of this post were written by Claude Opus 4.5, as something of a personal test to see how valid the uncanny valley concerns remain in text using the latest and most advanced models. Some paragraphs are entirely from Claude, most are hand-written. If you can't tell them apart it's either a positive sign of the quality of the latest Claude model, or a stinging indictment of my own writing style!