Agent commerce diagnostic

Can ChatGPT or Claude buy from your Shopify store?

Paste your domain. In under a minute, see what an AI agent sees when it tries to find a product, pick a variant, and reach checkout — and exactly where it gives up.

Built for the next checkout channel. Claude Desktop, ChatGPT's Operator, and Perplexity Comet are starting to drive purchases. Most stores are not ready, and nothing in your current analytics stack will tell you that.

Free scan · 30 seconds · no sign-up

Want a snapshot of where you stand? Scan your store.

Paste your domain. Serge crawls your store in about thirty seconds (fully deterministic, no AI in the scan loop) and returns a quick snapshot: where agents may be blocked, what to inspect first, and suggested fixes your team can verify. No sign-up. Hand the result to your front-end lead.

The scan is the starting point. The deeper view comes from session visibility, replay, and the briefing.

The three failure patterns

Where agents actually give up on Shopify stores

Across the stores we scan, the same handful of patterns produce most of the failed checkouts. Each one is a few lines of theme code. Each one is invisible to GA4 and to human session-replay tools.

01 / Variant selector

The size or colour picker is not a real button

Pattern

Many Shopify themes render variant pickers as a roleless `div` with an `onClick` handler — no `role`, no accessible name, no underlying `select` element. The visible label says "Small Medium Large" but the DOM exposes those words as plain text inside generic divs.

Why agents stall

Browser-use agents (Operator, Comet, ChatGPT Atlas, Claude via Playwright MCP) read the accessibility tree before they look at pixels. A roleless div looks like prose. The agent sees "Small Medium Large" as a sentence about a product, not as three options it can pick. It tries to click "Add to cart" without selecting a variant, gets blocked, doesn't know why.

Fix

Use a `button` with `role="radio"`, `aria-checked` set per option, and a clear `aria-label` (`Size: M`, `Size: L`). Or a real `select` element. Either works. The Dawn theme (Shopify's default) ships the right pattern out of the box; many premium themes do not.
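For intuition, here is a minimal sketch of how a deterministic check for this pattern can work. The parser below is illustrative, built on Python's standard library, and is not Serge's actual scanner; the idea is simply to flag clickable elements that expose no role or accessible name.

```python
from html.parser import HTMLParser

class VariantPickerCheck(HTMLParser):
    """Flag clickable elements with no role and no accessible name.

    A roleless <div onclick=...> reads as plain prose in the
    accessibility tree; a <button role="radio" aria-label=...>
    reads as a selectable option."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        clickable = "onclick" in a
        # Native buttons/selects and explicitly-roled elements are fine.
        accessible = tag in ("button", "select") or "role" in a
        if clickable and not accessible:
            self.findings.append(f"<{tag}> with onClick but no role")

bad = '<div class="swatch" onclick="pick(1)">Small</div>'
good = '<button role="radio" aria-checked="false" aria-label="Size: S">S</button>'

checker = VariantPickerCheck()
checker.feed(bad + good)
print(checker.findings)  # → ['<div> with onClick but no role']
```

A real scanner would also follow `aria-labelledby`, keyboard handlers, and custom elements, but the core test is the same: does the accessibility tree expose this as an option or as prose?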

02 / Cart state

The cart only exists in React state

Pattern

Slide-out cart drawers are a UX upgrade for human shoppers. They open via local component state, no route change, no `/cart` page navigation. The cart's contents live in a Zustand store or React Context; nothing in the DOM mirrors what is in it.

Why agents stall

When the agent gets to step 3 of "add the laptop, then the case, then check out," it needs to verify each item landed in the cart. With drawer-only cart UI, the agent has no readable signal: the drawer collapses on navigation, the URL never changes, and the page DOM does not expose `Offer` data. The agent re-adds the same item, doubles the line item, and gives up.

Fix

Mirror cart state to `/cart` (Shopify's default page works). Expose line items in a JSON-LD `Offer` block. The drawer can stay; just do not make it the only source of truth.
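As a sketch of the machine-readable side, the snippet below assumes a hypothetical cart page that mirrors its line items into a JSON-LD `ItemList` of `Offer`s, and shows how trivially that surface can be read back without ever opening the drawer. Markup and extractor are illustrative, not a prescribed schema.

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect <script type="application/ld+json"> blocks: the
    machine-readable surface an agent can read without the drawer."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True

    def handle_data(self, data):
        if self._in_ld:
            self.blocks.append(json.loads(data))
            self._in_ld = False

# Hypothetical cart page exposing line items as schema.org Offers:
page = '''<script type="application/ld+json">
{"@type": "ItemList", "itemListElement": [
  {"@type": "Offer", "name": "Laptop sleeve", "price": "39.00"},
  {"@type": "Offer", "name": "USB-C cable", "price": "12.00"}]}
</script>'''

ex = JsonLdExtractor()
ex.feed(page)
items = ex.blocks[0]["itemListElement"]
print([(i["name"], i["price"]) for i in items])
```

Anything that can be parsed this cheaply can be verified this cheaply, which is exactly what the agent needs at step 3 of a multi-item task.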

03 / Checkout posture

Cookie banner + bot challenge + cursor-locked field

Pattern

The agent reaches `/checkout` and immediately faces three things: a Cloudflare bot challenge, a cookie consent banner with a tiny dismiss link, and a checkout form that hides the email field until the banner is dismissed. Each one is fine for humans. Stacked, they are an obstacle course.

Why agents stall

Cloudflare's standard "Managed Challenge" looks for cursor entropy — humans wiggle, agents don't. Cookie banners often use heuristic interaction detection that flags single-shot clicks. Form fields that appear after a state change require the agent to re-read the page, which costs tokens and time.

Fix

Configure Cloudflare to allow signed agents (Web Bot Auth, in beta as of Q1 2026; OpenAI's signed agent and Anthropic's signed agent are early entrants). Move the consent dismiss to a stable selector with `role="button"` and an accessible name. Render the email field at first paint.
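The "first paint" part of that fix is checkable statically. A minimal sketch (the class and sample HTML are illustrative): run the server-rendered checkout HTML through a parser and see whether the email field exists before any client-side state change.

```python
from html.parser import HTMLParser

class FirstPaintEmailCheck(HTMLParser):
    """Does the server-rendered HTML already contain the email field,
    i.e. visible to an agent at first paint, before any state change?"""
    def __init__(self):
        super().__init__()
        self.email_field_present = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "email":
            self.email_field_present = True

# Hypothetical first-paint HTML for a checkout page:
checkout_html = '<form><input type="email" name="email" autocomplete="email"></form>'
check = FirstPaintEmailCheck()
check.feed(checkout_html)
print(check.email_field_present)  # → True
```

If this check fails on the raw HTML but the field appears in a headless browser after scripting, the field is gated behind a state change and the agent pays the re-read cost.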

These are the three we see most often. The full scan checks twenty-three patterns and ranks them by likelihood of stopping a real agent.

Why GA4 won't surface this

GA4 sees the arrival. Not the failure.

GA4 can detect AI-mediated traffic with a custom channel group. The setup is straightforward (we have a separate guide). Even with that setup running, the journey itself stays opaque.

  • Headless browser agents — ChatGPT Agent, Operator, Comet, Atlas — navigate via direct URL fetch. No referrer. GA4 logs them as "Direct."
  • Loamly's analysis of 446,000 AI-driven sessions found 70.6% arrived with no referrer at all. Invisible to a referrer-based regex by design.
  • GA4 catches arrival, sometimes UTM, sometimes referrer. It cannot tell you what happened on the product page. A visit that ended with "agent gave up at the variant selector" looks identical to a visit that ended with "agent decided not to buy."
  • Hotjar, Contentsquare, FullStory record human cursor traces. Agents do not move cursors. Heatmaps of agent sessions are blank.

There is a gap in every analytics stack between arrival and conversion. For human shoppers the gap is small — clicks and scrolls fill it in. For agents the gap is the whole story.
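To make the "Direct" problem concrete, here is a toy referrer classifier. GA4's real channel-grouping rules are more elaborate, and the patterns below are invented for illustration, but the structural point is identical: a no-referrer session falls through every pattern, whatever actually happened on the page.

```python
import re

def ga4_channel(referrer):
    """Toy referrer-based channel grouping. A session arriving with
    no referrer cannot match any regex and lands in "Direct"."""
    if not referrer:
        return "Direct"  # where most AI-driven sessions land
    if re.search(r"(chatgpt|perplexity|claude)\.", referrer):
        return "AI"
    return "Referral"

# A browser agent navigating by direct URL fetch sends no referrer:
print(ga4_channel(None))                    # → Direct
print(ga4_channel("https://chatgpt.com/"))  # → AI
```

No amount of regex sophistication fixes this: the classifier's input is empty for roughly seven in ten agent sessions.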

What the scan returns

What you get back, in 30–90 seconds

Score

0–100. The headline is whether an agent could complete the purchase task. The detail is which of the twenty-three checks contributed.
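For a rough mental model of how check results can roll up into a single number, here is a toy weighted-deduction score. The check names and weights are invented for illustration; Serge's actual weighting is not shown here.

```python
def score(findings, weights):
    """Start at 100 and subtract the weight of each failed check,
    flooring at 0. findings maps check name -> failed (True/False)."""
    total = 100 - sum(w for check, w in weights.items() if findings.get(check))
    return max(total, 0)

# Hypothetical weights for three of the checks described above:
weights = {
    "roleless_variant_picker": 25,
    "drawer_only_cart": 20,
    "no_jsonld_offer": 10,
}
print(score({"roleless_variant_picker": True}, weights))  # → 75
```

The useful property of this shape is traceability: every point lost maps back to one named check and its fix line.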

Findings

Ranked list of issues, with the failure pattern named, the affected URL, and a fix line.

Agent's-eye view

What the accessibility tree exposed at the moment the scan ran. The same surface a real browser agent reads.

Fix list

Per finding: the line of theme code or the Shopify setting to change. Sized for an FE engineer to ship in an afternoon.

Free scan · 30 seconds · no sign-up

Want a snapshot of where you stand? Scan your store.

Paste your domain. Serge crawls your store in about thirty seconds (fully deterministic, no AI in the scan loop) and returns a quick snapshot: where agents may be blocked, what to inspect first, and suggested fixes your team can verify. No sign-up. Hand the result to your front-end lead.

The scan is the starting point. The deeper view comes from session visibility, replay, and the briefing.

Common questions

Before you run the scan

Does this work for non-Shopify stores?

Yes. The scan is platform-agnostic. WooCommerce, BigCommerce, custom Next.js builds, Magento — all of them get the same checks. The page is Shopify-titled because that's where most of our scans land, but nothing about the scoring is Shopify-specific.

Does it run a real AI agent?

No, not on the free scan. The free scan is deterministic — it crawls a sample of pages, checks the DOM and accessibility tree against twenty-three patterns, and returns the score. Live agent runs (Claude Desktop, Operator, GPT Agent against your real checkout) are part of Investigate Mode, the paid product.

Will it set off our bot protection?

Single fetch per page. We respect robots.txt — if your robots.txt blocks crawlers, the scan stops. The user-agent identifies itself as `SergeBot` (the `/bot` page documents the IP ranges). No volumetric load, no scraping, no follow-up requests.
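You can verify the robots.txt behaviour yourself with Python's standard library. The rules below are a hypothetical robots.txt, and `SergeBot` is matched by its documented user-agent token:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: SergeBot may crawl everything but /checkout,
# while all other crawlers are blocked site-wide.
robots = """
User-agent: SergeBot
Disallow: /checkout
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots.splitlines())
print(rp.can_fetch("SergeBot", "/products/laptop-sleeve"))  # → True
print(rp.can_fetch("SergeBot", "/checkout"))                # → False
```

A `Disallow: /` under `User-agent: *` with no `SergeBot` group would stop the scan entirely, which is the "your robots.txt blocks crawlers" case above.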

What if my score is high?

Then your store is in the minority and you are early on this. The score moves over time as agents get more capable and as your theme is updated; we recommend re-scanning every six weeks while the agent ecosystem is still settling.

What can't this scan tell me?

It can't tell you how much agent traffic you actually receive — that's the tracking snippet's job. It can't tell you which specific agents fail and which succeed — that's Investigate Mode. It can tell you whether your store, as currently shipped, is structurally ready for the next checkout channel.

Will fixing these issues hurt SEO or human conversion?

No. Every check we run is grounded in either WCAG accessibility, schema.org product markup, or normal HTML semantics. Fixes that help an AI agent also help screen readers, search crawlers, and sometimes Lighthouse scores.