
Generative Engine Optimization: The New Tech Hustle or a CX Reality?

The digital marketing landscape is undergoing a tectonic shift. Since the ascent of Google, search engine optimization has been the undisputed king of visibility. The industry grew into an eighty-billion-dollar behemoth built entirely on gaming Google's search results. Marketers optimized keywords, built backlinks, and structured their websites to appease a single, dominant algorithm. Times are changing rapidly. We are now entering the era of Generative Engine Optimization, and the rules of engagement have shifted completely.

In our latest CRMKonvos episode, Ralf and I sat down with Noriko Yokoi. Noriko holds a PhD from the London School of Economics and is co-founder of 3cubed.ai. We discussed this exact transition. The shift from traditional SEO to Generative Engine Optimization is leaving many enterprise buyers completely confused. Marketers are flooding LinkedIn and other platforms with colorful PDFs trying to explain the difference. The reality is far less glamorous and much more chaotic. Large language models like ChatGPT, Claude, and Gemini are changing how consumers find information. Consequently, brands are scrambling to ensure their products show up in AI-generated answers.

TL;DR

If you want to watch the full CRMKonvo, please go ahead here (optimized for smartphones) or here (optimized for tablets/computers).


Else, be my guest and continue to read.

Or do both …

At the core of the problem lies an absolute lack of transparency. Traditional SEO relies on concrete metrics. Google publishes keyword volumes, so marketers know exactly how many people search for specific terms. Generative AI platforms provide nothing comparable, nothing at all, in fact. The big AI companies keep their vaults tightly sealed. They do not share search volumes, prompt frequencies, or exact ranking methodologies. This creates a massive void in the market, and new startups are rushing in to fill it. They offer dashboards and visibility scores that attempt to quantify the unquantifiable.

The Black Box of Generative Visibility

Noriko pointed out the fundamental difference between traditional search and generative search. SEO was always about ranking links on a page. You wanted to be the top blue link. Or at least be on the first page. Generative AI does not give you a list of links. It gives you a single, synthesized answer. It might cite three or four sources behind that answer. If you are not one of those cited sources, you practically do not exist.

This creates a fierce competition for authority. The LLMs scour the internet for sources they deem highly authoritative and citable. They look for deep expertise and comprehensive knowledge. Ironically, this is where the modern marketing machine fails spectacularly. For years, marketers have produced short, punchy, keyword-stuffed articles. They created marketing fluff designed to catch a quick click. LLMs absolutely hate fluff. They prefer deep, substantive content written by recognized experts.

This brings us to the cynic's view I raised during the episode. Ironically, LLMs are great at creating fluff. LLM-generated content is akin to instant mediocrity. We are seeing a flood of AI-generated articles hitting the web. The internet is drowning in average, uninspired, LLM-generated text. If a brand wants to stand out to an LLM, it therefore cannot rely on generative AI to write its core thought leadership. You cannot feed instant mediocrity to an algorithm that is actively hunting for unique expertise. The brands that win will be the ones that invest in genuine, human-led research and deep-dive analysis.

The Measurement Dilemma and the Red Face Test

How do you measure success in a non-deterministic system? If you ask a search engine the same query five times, you get the same result, well, almost, as there is AI behind search engines as well, but you get the picture. Search engines are essentially deterministic; LLMs are inherently probabilistic. If you ask an LLM the same prompt five times, you might get different answers with different citations. This makes building a reliable tracking tool incredibly difficult.
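One way to get a feel for this instability is simply to repeat a prompt and count how consistently each source gets cited. The sketch below is illustrative only: `ask_llm` is a hypothetical stand-in for a real LLM API call (here it just simulates non-determinism with random sampling), and the domain names are made up.

```python
import random
from collections import Counter

def ask_llm(prompt: str) -> list[str]:
    """Hypothetical stand-in for a real LLM API call.
    Returns the list of sources the model cites for this prompt."""
    pool = ["vendor-a.com", "vendor-b.com", "analyst-site.com", "wiki.org"]
    # Simulate non-determinism: each run cites a different subset of sources.
    return random.sample(pool, k=3)

def citation_stability(prompt: str, runs: int = 5) -> dict[str, float]:
    """Ask the same prompt several times and report, per source,
    the fraction of runs in which it was cited at all."""
    counts = Counter()
    for _ in range(runs):
        counts.update(set(ask_llm(prompt)))  # de-duplicate within a run
    return {src: n / runs for src, n in counts.items()}

stability = citation_stability("What is the best CRM for mid-size companies?")
for source, rate in sorted(stability.items(), key=lambda kv: -kv[1]):
    print(f"{source}: cited in {rate:.0%} of runs")
```

A source cited in 100% of runs is a stable authority for that prompt; one cited in 20% of runs is the coin-flip visibility that makes these tracking tools so hard to trust.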

Noriko explained that the current crop of Generative Engine Optimization tools uses a mix of methodologies to make sense of this unpredictability. Some rely on opt-in user panels to gather real-world prompt data. Others use synthetic prompts: they program bots to ask the LLMs thousands of questions and scrape the citations that come back. They compile this data into a share-of-voice metric for brands. It is still an educated guess, a pin dropped into a massive ocean of data.
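The aggregation step these vendors perform can be sketched in a few lines. This is a minimal, assumed interpretation of "share of voice" (the fraction of prompts in which a domain is cited at least once); the scraped citation lists and domain names below are entirely made up for illustration.

```python
from collections import Counter

# Illustrative, made-up scrapes: each entry is the list of domains
# an LLM cited in its answer to one synthetic prompt.
scraped_citations = [
    ["acme.com", "rival.com", "news-site.com"],
    ["rival.com", "blog.example"],
    ["acme.com", "rival.com", "analyst.org"],
    ["rival.com", "news-site.com"],
]

def share_of_voice(citation_lists: list[list[str]]) -> dict[str, float]:
    """Share of voice = fraction of prompts in which a domain was cited."""
    total = len(citation_lists)
    counts = Counter()
    for citations in citation_lists:
        counts.update(set(citations))  # count each domain once per prompt
    return {domain: n / total for domain, n in counts.items()}

sov = share_of_voice(scraped_citations)
print(sov["rival.com"])  # → 1.0: cited in every synthetic prompt
print(sov["acme.com"])   # → 0.5: cited in half of them
```

Note what the sketch cannot fix: the numbers are only as representative as the synthetic prompts themselves, which is exactly why the result remains an educated guess.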

These services are not cheap. Companies are charging anywhere from a few hundred to several thousand dollars a month for these insights. Enterprise buyers are essentially paying to find out if another machine likes them. Before you write that check, you must apply what Noriko called the red face test. You need to look at the methodology behind these tools. Ask the vendor exactly how they calculate their visibility scores. If they cannot give you a straight answer without turning red in the face, you should walk away. You must be comfortable with the level of guesswork involved before committing your scarce CX budget.

Writing for the Machine

We have reached a bizarre inflection point in content creation. For years, we wrote for human readers. Then we started writing for search engine crawlers. Now, we must write specifically to educate LLMs. And don't be under the illusion that they all work the same. They don't. Nor is there a dominant player yet, the way Google dominates search. Asking Gemini, Claude, and ChatGPT the very same question, "Which are the most used LLMs? Rank by number of monthly users?", they agree that the leading three players are Meta AI, ChatGPT, and Gemini, not necessarily in that order, though, and the usage numbers are fairly similar. So, there are at least three engines to consider.

During our conversation, I asked Noriko if authors now need to change their writing style. Should we stop addressing our human audience and start addressing Claude and Gemini directly? Her answer was a partial yes. To become a cited authority, your content must be structured in a way that an LLM can easily digest. You have to break complex topics down into clear, distinct sub-segments. You must avoid superficial marketing jargon and instead provide deep, factual knowledge. Part of this, by the way, is exactly what would make a human editor, or an LLM for that matter, flag your content as potentially AI-generated.

Still, this requires a massive shift in corporate marketing strategy. The days of publishing five shallow blog posts a week are over. Brands need to publish comprehensive, authoritative pillars of content. They need to earn their citations by being the most credible source on a specific topic. The machine will ignore you if you offer nothing but noise.

Tools like Profound, Peak AI, and others are emerging as the early leaders in this measurement space. They are trying to standardize a chaotic environment. It is advisable to use tools like these to get a baseline understanding of your AI visibility. However, you must treat their metrics as directional indicators rather than absolute facts. The ecosystem is simply too volatile for absolute certainty.

The Reality Check for Buyers: Stop Feeding the Hype Machine

Let’s make some sense out of the chaos. Enterprise AI buyers are constantly bombarded with pitches promising magical visibility. The shift to Generative Engine Optimization can’t be ignored, but it is deeply misunderstood. You cannot buy your way to the top of an LLM response (yet?) with a simple subscription. You must fundamentally change your organization's approach to data and information. Here are the three main takeaways you need to implement immediately.

Integration Realities Must Guide Strategy. Do not purchase a standalone visibility tool expecting it to magically fix your customer experience, or to bring you into an LLM’s response. Let alone all of them. These tools provide directional data at best. You must integrate these insights into your broader CRM and CX architecture. A visibility score is useless if it does not translate into better customer interactions and ultimately improved conversions. The technology must serve your overarching strategy; it cannot become a siloed, self-serving vanity metric.

Data Quality Over Generative Hype. The market is obsessed with using AI to generate content. This is a fatal error. LLMs prioritize depth, expertise, and authoritativeness. If you feed your channels with recycled marketing fluff, you will achieve nothing but instant mediocrity. Algorithms will bypass you in favor of sources providing genuine knowledge. Invest your budget in subject matter experts who produce high-quality content. The machines look for truth; make sure you provide it.

The Human-in-the-Loop Necessity. Generative AI is inherently unpredictable. It hallucinates and changes answers based on invisible updates. But it is here to stay. You cannot put your brand reputation entirely in the hands of automated dashboards. Like your vendors, you must pass the red face test, too. Therefore, demand transparency from vendors regarding their methodologies. Keep skilled human analysts in the loop to interpret the data. Trust the tools, but strictly verify outputs before making strategic CX decisions.
