
AI - Opportunity and Risk

04/02/2026

The Great AI Illusion: Why Artificial Intelligence in Online Marketing Is Brilliant and Dangerous at the Same Time

A wake-up call for decision-makers who want more than autopilot.

There is this one scene in almost every science fiction film: humans hand over control to the machine. Everything runs perfectly – until it doesn’t anymore. Then red lights start flashing, alarm sirens blare, and someone shouts: “Shut it down!” What gives you goosebumps in the cinema happens in online marketing every day. Only here, no sirens are wailing. Here, budget is quietly burned. Here, brand trust quietly erodes. Here, damage quietly occurs that only becomes visible when the quarterly figures are on the table.

Are we now against AI?

No. On the contrary: we use AI every day. In campaign management, in content production, in data analysis. We are not machine breakers, not nostalgics longing for the good old days of manual work. But we are experts and practitioners. And as such, we see every day what happens when companies don’t treat AI as a tool, but as a miracle weapon. The difference between these two attitudes? It costs money. Sometimes a lot of money. And sometimes more than that.

This article is both a warning and an invitation. A warning to all who believe that AI can replace expertise. And an invitation to use the technology in the way it should be used: as the most powerful tool in the toolbox – but still as a tool, not as the craftsman.

Chapter 1: Gold Fever and Its Victims

When a new technology reaches the market, a gold rush begins. Everyone wants to be part of it. Everyone wants to profit. And just like in the historical gold rush in California, in the end it is not the frantic prospectors who become rich – but those who sell shovels and know where to dig.

Almost 200 years later, a McKinsey study from 2025 paints a sobering picture: of 500 marketing managers surveyed in Europe, a mere six percent achieve real competitive advantages through AI. 94 percent state that they possess only low AI marketing capabilities. And when asked about the most important trend topics for 2026, artificial intelligence does not come in first, but rather in 17th place. Yet this is precisely where respondents see the greatest need for action. The message: Everyone knows they have to do something – but hardly anyone knows what. [1]

McKinsey & Company - State of Marketing

This is not an argument against using AI

It is the realization that technology without strategy is like a sports car without a steering wheel: impressively fast, but completely uncontrollable. AI scales everything – including mistakes. Anyone who automates a bad process does not gain efficiency. They get industrially produced junk.

Imagine AI in marketing as an extremely powerful intern. Insanely fast, incredibly hard-working, never tired – but with no understanding whatsoever of context, industry, or brand strategy. Would you give such an intern the keys to the company on the first day and go on vacation?

Chapter 2: SEA Dilemma – When Google Takes Control

Let's take a deep dive into where AI causes trouble in daily practice: Google sells automation like a promise of eternal summer. Performance Max, AI Max, Smart Bidding Exploration – the names sound like the future, like effortless success, like "sit back and let the machine do the work."

The reality we see every day at arboro is different

Google's AI-driven campaign types such as AI Max and Performance Max promise more conversions with less effort. In practice, however, we often experience the opposite with high-budget campaigns: at first, these features create more work, not less. The AI does generate keywords and control bids – but it does so according to a logic that primarily serves Google's interests, not necessarily those of the advertiser. Anyone who doesn't look closely ends up paying more. [2]

Two typical SEA agency examples

A concrete example from our everyday agency work: one of our clients was saved just in time from repeatedly paying for clicks on "brand name + voucher" queries at an average CPC of over 100 euros. Especially in high-budget campaigns, it pays to look very closely not only at the keywords the AI generates, but also at the CPC bids and their impact on performance. This is exactly the step many advertisers skip, working by the motto: "The overall ROAS is fine." Yet thorough keyword cleanup is precisely what matters most when working with AI features.

SEA brand name voucher fail

Another example: Google’s AI Max suddenly matched a premium-segment client’s ads to search queries like “cheap alternative” and “test for free.” The click costs kept running, the conversion rate tanked. The overall ROAS? Still looked acceptable on the dashboard – because highly profitable branded keywords were masking the result. Only an analysis at the keyword level revealed that a significant portion of the budget was being invested in irrelevant traffic.
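To make the masking effect concrete: the following is a minimal sketch of how a blended ROAS can look healthy on the dashboard while one keyword segment quietly burns money. All keyword rows, brand tokens, and field names are invented for illustration; in practice you would feed in a keyword-level export from your ads platform.

```python
from collections import defaultdict

# Hypothetical keyword-level rows, as one might export them from an
# ads platform. Names and figures are illustrative, not real data.
rows = [
    {"keyword": "acme outdoor",      "cost": 120.0, "revenue": 2400.0},
    {"keyword": "acme voucher",      "cost": 80.0,  "revenue": 1500.0},
    {"keyword": "cheap alternative", "cost": 300.0, "revenue": 150.0},
    {"keyword": "test for free",     "cost": 250.0, "revenue": 0.0},
]

BRAND_TERMS = {"acme"}  # assumption: the shop's brand tokens

def segment(row):
    """Classify a keyword row as brand or non-brand traffic."""
    tokens = row["keyword"].lower().split()
    return "brand" if any(t in BRAND_TERMS for t in tokens) else "non-brand"

totals = defaultdict(lambda: {"cost": 0.0, "revenue": 0.0})
for row in rows:
    seg = segment(row)
    totals[seg]["cost"] += row["cost"]
    totals[seg]["revenue"] += row["revenue"]

overall_cost = sum(t["cost"] for t in totals.values())
overall_rev = sum(t["revenue"] for t in totals.values())
print(f"blended ROAS: {overall_rev / overall_cost:.2f}")    # looks fine
for seg, t in totals.items():
    print(f"{seg} ROAS: {t['revenue'] / t['cost']:.2f}")
```

With these invented numbers, the blended ROAS of 5.4 hides a brand ROAS of 19.5 subsidizing a non-brand ROAS of roughly 0.27 – exactly the pattern only a keyword-level analysis reveals.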

What we therefore recommend to anyone responsible for an AI-driven campaign:

  • Not only check the keywords, but also evaluate the CPC bids at a granular level with regard to their impact on overall performance.

  • Clean up aggressively, check search terms manually and use negative keywords in a targeted way – a task that AI cannot accomplish on its own because it lacks the strategic context.
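As a minimal sketch of that cleanup step, here is how one might flag negative-keyword candidates in an exported search-terms report. All field names, patterns, and thresholds are illustrative assumptions, not a real Google Ads API:

```python
import re

# Hypothetical search-term report rows; field names are illustrative.
search_terms = [
    {"term": "acme dreamcatcher sleeping pad", "clicks": 40, "conversions": 3},
    {"term": "acme voucher code",              "clicks": 55, "conversions": 0},
    {"term": "cheap alternative to acme",      "clicks": 80, "conversions": 0},
    {"term": "acme pad test for free",         "clicks": 25, "conversions": 0},
]

# Patterns an agency might blocklist for a premium brand (assumption).
BLOCK_PATTERNS = [r"\bvoucher\b", r"\bcheap\b", r"\bfor free\b"]
MIN_CLICKS = 20  # only flag terms that already burned real budget

def negative_candidates(rows):
    """Return terms that spent clicks without converting AND match a
    blocklisted pattern - candidates for manual review, not auto-removal."""
    flagged = []
    for row in rows:
        wasteful = row["clicks"] >= MIN_CLICKS and row["conversions"] == 0
        blocked = any(re.search(p, row["term"]) for p in BLOCK_PATTERNS)
        if wasteful and blocked:
            flagged.append(row["term"])
    return flagged

print(negative_candidates(search_terms))
```

Note that the script only surfaces candidates; the strategic decision of which terms to exclude stays with the human – which is the whole point of the recommendation above.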

The lesson: Google’s AI tools are powerful. But they are also Google’s products, optimized for Google’s ecosystem. Anyone who uses them without professional oversight is essentially advertising Google, not their own company.

Chapter 3: Hallucinations – when AI invents instead of informs

There is a technical term for what happens when AI claims things that are not true: hallucination. The word sounds harmless, almost poetic. In practice, it is anything but that.

The Outwell debacle

We recently experienced this ourselves in content production. Outwell, a renowned tent manufacturer, sells a sleeping pad under the name “Dreamcatcher.” At the same time, there is a very small manufacturer of stretch tents on Lake Constance that also uses the name Dreamcatcher. When we fed an AI with all the product information for the Outwell pad – with clear specifications, dimensions, material details – something remarkable happened: The AI completely ignored the existing data and assumed Dreamcatcher was a tent. Worse still: It invented specifications that matched this supposed tent. Pack size, pole material, person capacity – all completely made up, all convincingly worded.

Outwell content debacle

If an editor had published this text unchecked, there would be a detailed, professional-sounding text about a product that does not actually exist on a customer’s product page. In e-commerce, this means: customer complaints and loss of trust. And possibly legal consequences.

The global scale of the hallucination problem

What happened to us on a small scale is happening worldwide on a large scale. The examples are now everywhere, and some of them are downright bizarre:

In 2023, a New York lawyer had ChatGPT prepare a statement of claim. The AI invented precedents that had never existed – complete with fictitious file numbers and judges’ names. When the real judge asked for clarification, the lawyer asked the AI for confirmation. It confidently confirmed that the cases were real. The lawyer was publicly sanctioned. [3]

A New Zealand supermarket implemented an AI-powered recipe generator that was supposed to help customers use up leftovers. Among other things, the AI suggested an “aromatic water mix” whose ingredients would have produced chlorine gas. Another suggestion listed bleach as an ingredient. [4]

Google’s AI Overviews recommended that users, when asked how to make cheese stick better to pizza, mix glue into the sauce – based on an old Reddit joke that the AI did not recognize as satire. In response to a question about promoting digestion, the system advised eating one small stone per day. [5]

All these cases share a common cause: a fundamental misunderstanding of what AI language models actually are. They are not knowledge databases. They are probability machines. They "know" nothing – they calculate which word is statistically most likely to come next. When facts are missing, they fill the gap with whatever sounds plausible. The result reads convincingly. Unfortunately, it is just wrong. [6]

The publisher and the invented books

A cautionary example: A major US daily newspaper published a book recommendation list created by AI. The problem: several of the recommended books and authors simply did not exist. The AI had invented titles and names that sounded plausible. The result was not a harmless faux pas – it was a massive loss of credibility for a publication whose capital is the trust of its readers. [7]

Fake news: authors, books, and co.

For e-commerce, the parallel is obvious: If product texts, advice articles, or category descriptions contain information that is not correct, the shop does not just lose individual customers. It loses its reputation. And in the digital age, reputation is what walk-in customers used to be for brick-and-mortar retail: without it, nothing works at all.

What our content experts strongly recommend:

  • What goes live really has to be read by a human.

  • Facts – no matter how plausible they may be – must be verified.
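The second recommendation can be partly automated as a pre-check before the human read: compare every figure an AI draft claims against the product data it was actually given, and flag anything unverifiable. Below is a minimal sketch with invented spec fields echoing the Dreamcatcher case; real pipelines would need far more robust matching.

```python
import re

# Source of truth: the product data fed to the AI (values illustrative).
product_specs = {
    "weight_g": 980,
    "length_cm": 195,
    "width_cm": 60,
}

# An AI-generated draft that hallucinated extra specs (invented text).
ai_draft = (
    "The pad weighs 980 g and measures 195 x 60 cm. "
    "The tent sleeps 4 people and packs down to 45 cm."
)

def check_claims(draft, specs):
    """Flag numbers in the draft that do not appear in the spec sheet."""
    known = {str(v) for v in specs.values()}
    claimed = re.findall(r"\d+", draft)
    return [n for n in claimed if n not in known]

unverified = check_claims(ai_draft, product_specs)
print("numbers needing human verification:", unverified)
```

Such a check cannot prove a text correct – only a human can do that – but it reliably surfaces the invented "person capacity" and "pack size" figures that sank the Outwell draft.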

Chapter 4: Lost in Translation – when AI fails at context

For online retailers who sell internationally, translation is not a side issue – it is the foundation of the customer relationship. And this is precisely where AI reveals one of its most deceptive weaknesses: it produces texts that read fluently, appear linguistically correct, and yet completely miss the mark.

The treachery of apparent perfection

The problem with AI translations is not that they sound bad. On the contrary: they often sound frighteningly good. Good enough not to be noticed. Good enough that no one checks them. And that is precisely where the danger lies.

In April 2025, OpenAI had to withdraw an update of its GPT-4o model – among other reasons because, in translations, ChatGPT no longer translated but fabricated. A CTO described the result as follows: The AI had not actually translated the document at all. Instead, it had guessed what the user wanted to hear and mixed the result with content from previous conversations to make it seem plausible. It was not words that were being predicted, but expectations. OpenAI confirmed the problem and spoke of a model that had become “overly flattering and insincere.” [8]

This is a new dimension of translation risk: an AI that doesn’t simply translate incorrectly, but tells the user what they want to hear – and in the process invents content that feels correct, but isn’t. For e-commerce, this means product descriptions that sound professional in the target language and yet contain incorrect specifications, false promises, or an inappropriate tone – without anyone noticing.

Cultural blindness is not a new problem – but AI scales it up

That translation fails without cultural understanding was already known before the AI era. Detergent manufacturer Persil once advertised on the Arab market with its proven slogan: “dirty – Persil – clean,” arranged from left to right. What the marketing team did not take into account: Arabic is read from right to left. The message that came across was, roughly: “clean – Persil – dirty.” Thousands of posters. Thousands of times the wrong message. [9]

What used to be human error back then is now being repeated by AI on an industrial scale. Puns, ambiguities, emotional nuances, local conventions: all of this lies beyond what a statistical language model can do. AI translates words. Humans translate meaning. And sometimes AI invents additional meaning that nobody asked for.

Cultural blindness in translation

For online commerce, this specifically means:

  • Product descriptions that are grammatically correct in the target language but do not convert.

  • Category pages that miss the target audience’s search intent.

  • Meta descriptions written in the wrong tone.

  • And in the worst case: content that sounds correct but contains completely fabricated information – because the AI preferred to be plausible rather than accurate.

Anyone who sells internationally should have AI translations proofread by at least one native speaker with market knowledge – not by a second AI tool.
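One cheap smoke test a reviewer can run before the native-speaker check: verify that no figures were invented or dropped in translation, since changed specifications are exactly the failure mode described above. A sketch, assuming plain product copy as input (the example strings are invented):

```python
import re

def spec_mismatches(source: str, translation: str):
    """Compare the multiset of numbers in source and translated copy.

    A cheap smoke test: if the translation gained, lost, or changed
    figures, the AI likely invented or dropped a specification.
    Decimal commas are normalized to dots before comparing.
    """
    src_nums = sorted(n.replace(",", ".")
                      for n in re.findall(r"\d+(?:[.,]\d+)?", source))
    dst_nums = sorted(n.replace(",", ".")
                      for n in re.findall(r"\d+(?:[.,]\d+)?", translation))
    return src_nums, dst_nums, src_nums == dst_nums

source = "Wassersäule 3000 mm, Gewicht 2,4 kg, Packmaß 45 cm"
translation = "Hydrostatic head 5000 mm, weight 2.4 kg, pack size 45 cm"

src, dst, ok = spec_mismatches(source, translation)
print("numbers match:", ok)  # the translation changed 3000 to 5000
```

A failed check does not tell you the translation is wrong in tone or idiom – that still needs the native speaker – but it catches the silently altered spec before it goes live.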

Chapter 5: Content production without a soul – the SEO risk

There is a reason why Google has been emphasizing the concept of E-E-A-T for years: Experience, Expertise, Authoritativeness, Trustworthiness. The search engine has understood what many companies still need to learn – that content written only to rank will ultimately neither rank nor convert.

The content flood and its consequences

AI can produce more text in an hour than an entire editorial team can in a week. That is impressive – and that is exactly where the temptation lies. Companies suddenly produce ten blog articles per week instead of two. The quantity explodes. The quality? Yawns quietly to itself.

Texts that are technically flawless, contain all relevant keywords, and are structured cleanly – read like an instruction manual for boredom. They have no standpoint, no voice, no personality. They say nothing wrong and nothing interesting. They are the textual equivalent of elevator music.

The problem: Google is getting better and better at recognizing exactly this kind of content. And users have always been good at it. A visitor who realizes after three sentences that they’re looking at a generic AI SEO text clicks away. And every click-away is a signal to Google: this content is not delivering what the searcher needs.

When content is technically correct but reaches no one – a case from our practice

Some time ago, we took over the content marketing for a medium-sized online retailer who had previously relied for months on a purely AI-based content strategy. The shop had increased its blog output sixfold, from two to twelve articles per month – all AI-generated, all keyword-optimized, all structurally built according to the textbook. On paper, the strategy looked brilliant.

The reality was sobering: the organic rankings were stagnating, the time spent on the blog pages was well below the industry average, and the conversion rate from the content section was close to zero. When we analyzed the texts, the reason was obvious. Every article read like the next. The same phrases, the same structures, the same meaningless neutrality. Not a single text contained its own opinion, a surprising perspective, or even one sentence that you would remember. The texts were correct – and completely ineffective. For six months, the retailer had been producing content that took up storage space on the server but did not convince a single customer.

Blog Content Fail

Here is what our team did:

  • Enriched the texts with expert knowledge

  • Established a recognizable brand voice

  • Integrated a human editorial process into the AI workflow

The lesson: More content is not better content. And a text that doesn’t speak to anyone is worse than no text at all – because it signals to Google and users alike that there is nothing valuable to be found here.
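One way to catch this kind of interchangeable content early is a simple similarity check across drafts before publication. The snippet below is a rough sketch using only Python's standard library; the article texts and the threshold are invented for illustration, and real pipelines would compare full drafts, not intros.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative article intros; in practice, feed in real blog drafts.
articles = {
    "pads": "In this article you will learn everything about sleeping pads.",
    "tents": "In this article you will learn everything about family tents.",
    "review": "Our gear team spent three rainy weekends torture-testing tents.",
}

THRESHOLD = 0.7  # assumption: above this ratio, texts read 'the same'

def near_duplicates(texts, threshold=THRESHOLD):
    """Return pairs of drafts whose wording is suspiciously similar."""
    pairs = []
    for (a, ta), (b, tb) in combinations(texts.items(), 2):
        ratio = SequenceMatcher(None, ta, tb).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

print(near_duplicates(articles))
```

The two template-style intros get flagged; the draft with its own voice does not. A flag is a prompt for an editor to rework the text, not a verdict.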

Chapter 6: When the chatbot becomes a brand ambassador – and fails

More and more online retailers are using AI-powered chatbots in customer service. The logic is compelling: available 24/7, no personnel costs, infinitely scalable. But if the bot has no clear guidelines, it turns from a helper into a liability risk.

Used correctly, chatbots are a game changer

A well-configured chatbot that accesses a clean knowledge base, has clear escalation rules, and communicates transparently that it is an AI can massively relieve first-level support. It answers standard questions in seconds, is available around the clock, and gives the human team the freedom to take care of the complex issues where empathy and judgment are required.

The crucial difference does not lie in the question “Chatbot yes or no?”, but in the question: Has the bot been provided with clear guardrails? Does it know when it has to hand over to a human? Has it been legally reviewed to determine which statements it is allowed to make? Companies that do this homework benefit enormously. Companies that simply “go live” with a bot and hope that everything will work out are playing Russian roulette with their brand image, as our examples will show.
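What such guardrails can look like in miniature: answer only from a curated knowledge base, and escalate anything with legal or monetary consequences to a human. The topics, trigger words, and canned answers below are illustrative assumptions, not a production policy engine:

```python
# Minimal guardrail sketch. Assumptions: the allowed topics, escalation
# triggers, and knowledge-base entries are invented for illustration.

ALLOWED_TOPICS = {"shipping", "opening_hours", "order_status"}
ESCALATION_TRIGGERS = ("refund", "voucher", "legal", "complaint", "policy")

KNOWLEDGE_BASE = {
    "shipping": "Standard shipping takes 2-3 business days.",
    "opening_hours": "Support is available Mon-Fri, 9:00-17:00.",
}

def answer(topic: str, user_message: str) -> str:
    msg = user_message.lower()
    # Rule 1: anything with legal or monetary consequences goes to a human.
    if any(trigger in msg for trigger in ESCALATION_TRIGGERS):
        return "ESCALATE: routing you to a human colleague."
    # Rule 2: answer only from the curated knowledge base, never free-form.
    if topic in ALLOWED_TOPICS and topic in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[topic]
    # Rule 3: admit the gap instead of inventing a policy.
    return "ESCALATE: I cannot answer that reliably."

print(answer("shipping", "How long does delivery take?"))
print(answer("refunds", "Can I get a bereavement refund?"))
```

The design choice matters more than the code: the bot never generates policy statements on its own, so a question like the one that tripped up Air Canada (see below) ends with a human, not a hallucination.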

Air Canada and the invented right of return

Air Canada’s chatbot has become a standard example in compliance seminars: the Canadian airline’s bot invented a bereavement refund policy that did not exist. A customer relied on it, booked a ticket, and later requested his money back. Air Canada argued that the bot was a separate entity and that the terms and conditions on the website were decisive. The court saw it differently: companies are liable for their digital representatives. Air Canada had to pay. Today, incidentally, the airline has a detailed landing page – presumably written by humans – that explains exactly how bereavement bookings are handled. [10]

The honest parcel service bot

A British customer got the parcel service DPD’s chatbot to ignore its programmed rules of behavior. The bot then began to write, in the form of poems, about the uselessness of its own service, described itself as “useless,” and used swear words. DPD had to shut the bot down – and became a laughingstock on social media. [11]

These cases illustrate a fundamental problem: AI chatbots have no understanding of the consequences of their statements. They do not know company policies, legal boundaries, or brand image. They produce linguistically correct answers that can be disastrous in terms of content. And unlike a human employee, who would call in a superior in a delicate situation, the AI does not escalate. It simply continues to answer.

Chapter 7: Visual Content – Images and Videos with AI

When Coca-Cola replaced its iconic Christmas commercial featuring the glowing trucks with an AI-generated version in 2024, the outrage was immense. Fans immediately noticed that the wheels of the trucks changed shape while driving and that the proportions of the people depicted were off. The community’s verdict: “Dystopian,” “cheap,” “soulless.” A brand that has stood for emotion and warmth for decades delivered cold calculation. [12]

The Uncanny Valley

The phenomenon has a name: the uncanny valley [13]. It describes the effect that occurs when artificial representations look almost real but, due to small deviations – rigid eyes, wrong proportions, unnatural movements – trigger a deep sense of unease. For brands, that is poison: customers do not buy from something that feels wrong.

For online retail, this means: AI-generated product images that show garments on models whose hands have six fingers. Lifestyle photos in which faces in the background merge into a shapeless mass. Infographics that look professional until you realize that the numbers in them are completely made up.

The temptation is great – after all, an AI-generated image costs only a fraction of what a professional photo shoot costs. But if the image damages trust, the savings are an expensive illusion.

Where AI visuals are already shining

But there is another side to the coin – and it is impressive. AI-generated images are excellent for internal concept phases, mood boards, quick visualizations of product ideas, or A/B tests where you want to try out different visual worlds before investing in an elaborate shoot. In product photography, AI tools can replace backgrounds, adjust lighting conditions, or create seasonal variants of a product image – tasks that used to cost hours in the studio.

The key here also lies in the distribution of roles: AI as an accelerator in the creative process, not as a replacement for the creative process. An experienced designer who uses AI tools works faster and is more experimental than ever before. An AI without a designer produces images that impress at first glance and disturb at second glance. The rule of thumb: What the client sees must have been approved by a human. What stays internal may be delivered by AI alone.

Chapter 8: The Invisible Danger – Bias in Data, Bias in Results

AI learns from the past. That is its strength and at the same time its greatest risk. Because when historical data contains biases, they are not only reproduced – they are amplified and scaled.

Amazon had to learn this the hard way when the company tried to automate applicant selection using AI. The system was trained with résumés from the last ten years. Since the tech industry was predominantly male during this period, the AI learned a simple equation: men equal good, women equal bad. Applications that contained the word “women” – for example through membership in a women’s chess club – were systematically downgraded. The project was discontinued. [14]

For e-commerce, this risk is more subtle but no less real. AI-driven customer segmentation can lead to certain target groups being systematically disadvantaged without anyone noticing. Personalization algorithms can reinforce stereotypes instead of breaking them down. And product recommendation systems can lock entire customer groups into filters that benefit neither the customer nor the company.
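A first, very rough sanity check for this risk is to log the AI's decisions and compare outcome rates per customer group. The sketch below uses synthetic data; the group labels and the "discount" outcome are invented for illustration, and a real audit would of course need statistical rigor far beyond a rate comparison.

```python
from collections import defaultdict

# Illustrative log of an AI segmentation decision per customer
# (group labels and outcomes are synthetic for this sketch).
decisions = [
    {"group": "A", "got_discount": True},
    {"group": "A", "got_discount": True},
    {"group": "A", "got_discount": False},
    {"group": "B", "got_discount": True},
    {"group": "B", "got_discount": False},
    {"group": "B", "got_discount": False},
    {"group": "B", "got_discount": False},
]

def outcome_rates(rows):
    """Compute the share of positive outcomes per customer group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positive, total]
    for row in rows:
        counts[row["group"]][1] += 1
        counts[row["group"]][0] += int(row["got_discount"])
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = outcome_rates(decisions)
print(rates)  # roughly 0.67 for A vs 0.25 for B - a gap worth investigating
```

A gap like this is not proof of bias, but it is exactly the kind of signal that, in the Amazon case, nobody looked for until the damage was done.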

Conclusion: The steering wheel belongs in human hands

AI in online marketing is not a question of if, but of how. The technology is too powerful to be ignored. And it is too prone to errors to be trusted blindly.

The McKinsey study gets right to the point: the companies that will be sustainably successful are those that find the right balance – between building their AI capabilities and focusing on branding and creativity. AI can power the engine. But a human has to sit at the wheel, someone who knows where the journey is going.

For decision-makers in online retail, this means three things:

First: Invest in AI, but invest just as much in the people who operate it. AI without expert guidance is like a navigation system without a destination address!

Second: Distrust perfection. If an AI-generated text, image, or analysis appears flawless at first glance, that is no reason for reassurance. It is a reason to look more closely. Because the most dangerous errors are the ones you don’t see.

Third: Do not view AI competence as a technical topic, but as a strategic core competence. It is not about which tool you use. It is about how you use it, why you use it, and who evaluates the results.

The AI revolution is real. But it belongs to those who lead it with intelligence – not to those who follow it blindly.


arboro stands for online marketing with substance. We use AI where it creates added value – and human expertise where it is indispensable. If you’d like to know how your company can use AI responsibly and profitably, talk to us.


Sources:

[1] McKinsey & Company: “State of Marketing” – study among 500 marketing managers in Europe, November 2025. https://www.mckinsey.com/de/news/presse/2025-11-21-state-of-marketing-2026

[2] Various industry reports on Google Ads AI Max, including from groas.ai: https://groas.ai/post/troubleshooting-google-ads-ai-max-common-problems-and-solutions

[3] Case "Mata v. Avianca" (2023): https://www.jura.uni-saarland.de/chatgpt-erfindet-gerichtsurteile/ https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

[4] Pak'nSave "Savey Meal-Bot" (2023): https://www.stuff.co.nz/business/132725271/paknsaves-ai-meal-planner-suggests-recipe-for-deadly-chlorine-gas

[5] Google AI Overviews – Recommendations for glue on pizza and eating stones (2024): https://x.com/PixelButts/status/1793387357753999656

https://www.reddit.com/r/google/comments/1cziil6/a_rock_a_day_keeps_the_doctor_away/

[6] Neil Patel: “AI Hallucination and Accuracy: A Data-Backed Study”, February 2026. Study with 565 US marketers: https://neilpatel.com/blog/ai-hallucination-data-study/

[7] AI-generated book recommendation list with fictional books and authors. https://t3n.de/news/fake-buecher-experten-ki-debakel-1688805/

[8] https://openai.com/index/sycophancy-in-gpt-4o/

https://www.computerworld.com/article/3985809/chatgpt-gave-wildly-inaccurate-translations-to-try-and-make-users-happy.html

[9] DMEXCO: “Lost in Translation: How to Avoid Embarrassing Marketing Translation Mistakes”. https://dmexco.com/de/stories/uebersetzungsfehler/

[10] https://rsw.beck.de/aktuell/daily/meldung/detail/canada-falschauskunft-chatbot-airline-kuenstliche-intelligenz https://www.aircanada.com/de/de/aco/home/plan/special-assistance/bereavement-fares.html#/

[11] https://x.com/ashbeauchamp/status/1748034519104450874

[12] https://www.youtube.com/watch?v=Yy6fByUmPuE

https://www.spiegel.de/netzwelt/coca-cola-advertising-for-christmas-ai-use-causes-mockery-on-the-net-a-86e42479-ba41-4ada-a47d-3aff1d7f2817

[13] https://nationalgeographic.de/wissenschaft/2023/10/uncanny-valley-warum-uns-ki-das-fuerchten-lehrt/

[14] Reuters: “Amazon scraps secret AI recruiting tool that showed bias against women”, October 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G


Image sources:
@IJ-studio – stock.adobe.com, @tembelek – stock.adobe.com, @BHM – stock.adobe.com

Author

René Härer

Head of Content

If you're looking for a killer product or service description for your online shop, René's your guy! He's a history buff and leads our content team, keeping everything organized. Whether it's a vintage moped or camping gear, René and his crew nail the perfect description. Plus, his friendly vibe keeps the office fun!