For e-commerce teams

Session visibility · Replay · Daily briefing

Can an AI agent buy from you?

Serge shows where Claude, ChatGPT, and other AI agents arrive, stall, retry, and give up on your store, so your team can fix the structural issues blocking agent conversion. Replay, briefing, and a fast scan in one product.

Scan your store · 30 seconds · no signup

Scan now.

No credit card, no account. Share the result with your team.

One Claude task · 96.8s

The agent got to checkout, but it wasn't clean.

A real shopping task, rendered as five steps.

The agent recovered from two interface frictions and completed the cart — a run the current analytics stack would dump in the "direct" bucket and never explain. This is the artifact Serge produces for every agent session.

User prompt · 10:01

"Find a black backpack under CHF 100 in stock on yourstore.ch and add it to my cart."

Claude task · 28 Mar · 10:01
yourstore.ch
  1. Search "black backpack" · results loaded
  2. Filter controls · in-stock + under CHF 100 · retried
  3. Search results · duplicate add-to-cart labels
  4. Commuter Backpack 20L · CHF 79 · selected
  5. Add to cart · cart confirmed
✓ Best in-stock match added after 2 recoveries · Hard to explain in GA4
Run time
96.8s end-to-end
Recoveries
2 issues navigated
Product
20L backpack · CHF 79

Two-way mirror · same page, two views

The page your shopper sees. The page an agent sees.

A product page is two documents at once. The human view renders for the eye. The agent view renders for the accessibility tree. Serge measures the gap between them.

Human view

The shopper sees this

Commuter Backpack 20L

Water-resistant · padded laptop sleeve · two outer pockets

CHF 79 · was CHF 99
Charcoal · Olive · Sand
Add to cart

Glance signals

  • Hero photo · Bag on a desk, product-forward
  • Price glance · Discount visible in 0.4s
  • Swatches · Three colour options, clickable
  • CTA · Big ink button above the fold

Agent view

Claude / ChatGPT see this

<body>
<main role="main">
<h1>Commuter Backpack 20L</h1>
<div class="product-hero"></div> · no <img alt>, no schema.org/ImageObject
<span>CHF 79</span> · price not marked up as <data value>
<div class="variants"> · roleless <div> · variant selection unreachable
<div>Charcoal</div>
<div>Olive</div>
<div>Sand</div>
</div>
<a onclick="addToCart()">Add to cart</a> · CTA is an anchor with onclick · no form semantics
</main>
</body>

Structural signals

  • schema.org/Product · Missing · agent falls back to heuristics
  • <h1> · "Commuter Backpack 20L" · readable
  • ARIA role · Swatches are <div> not <button> · no keyboard nav
  • Form semantics · CTA renders as <a> with onClick · no implicit submit
  • Inventory · Not exposed in markup · only via fetch post-load

Every issue flagged on the right is a reason an agent fails to complete the task a human started.
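The missing schema.org/Product markup is usually the smallest fix on that list. A minimal JSON-LD sketch for this example page (the image URL is a hypothetical placeholder; the values mirror the demo product, not a required shape):

```html
<!-- Minimal schema.org/Product in JSON-LD · values mirror the demo page; image URL is illustrative -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Commuter Backpack 20L",
  "image": "https://yourstore.ch/images/commuter-backpack-20l.jpg",
  "offers": {
    "@type": "Offer",
    "price": "79.00",
    "priceCurrency": "CHF",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

With this in the head, an agent can read name, price, and stock state without falling back to heuristics.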

Free scan · 30 seconds · no signup

Want a snapshot of where you are? Scan your store.

Paste your domain. Serge crawls your store in about thirty seconds — completely deterministic, no AI in the scan loop — and hands back a fast snapshot: where agents may get blocked, what to inspect first, and suggested fixes for your team to verify. No signup. Forward the result to your front-end lead.

The scan is the starting point. The deeper picture comes from session visibility, replay, and briefing.

Agent session replay

Watch a Claude agent try to buy from you.

Click Replay on any flagged session. Serge reconstructs the exact arrival, retries, failure, and fix so your team sees what happened before the revenue disappears into "direct."

Public preview

Sample data, rendered through the same session UI and narrative view layer as the live product.

See live demo
Inspection sequence · arrival → failure → reasoning → fix

01

Arrival traced

02

Failure isolated

03

Reasoning recovered

04

Fix handed off

Replay outcome · One replay resolves the full chain: arrival, failure, reasoning, and the concrete fix your team should ship next.

Session 4128 · Claude · Buyer
Flagged · cart abandoned
#1 · +0.00s · GET / · 200
#2 · +0.00s · GET /products · 200
#3 · +0.00s · GET /products/laptops · 200
#4 · +0.00s · GET /products/laptops/laptop-x-15 · 200
#5 · +0.00s · READ product page · 0 chars · OK
#6 · +0.00s · FIND size selector · "Choose size" · 404
#7 · +0.00s · READ product page · 0 chars (retry) · OK
#8 · +0.00s · FIND size selector · "Choose size" · 404
#9 · +0.00s · END session abandoned · "I can't find the size option" · GIVEUP

Inferred reasoning · 0.00 confidence

Claude walked the standard browse path to a laptop product page in under two seconds, read it, then tried to find the size selector in the accessibility tree. The element didn't exist there — the React <SizeSelector> renders without role or aria-expanded — so Claude retried, failed again, and abandoned the cart at +0.00s.

What to do

Add role="combobox", aria-expanded, and an accessible name to your <SizeSelector>. One small PR, then rerun the replay to confirm the agent gets past this control.
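One way the fixed control might render. This is a sketch assuming a listbox-style dropdown; your <SizeSelector> internals and option values will differ:

```html
<!-- Sketch of an accessible size selector: role, accessible name, and expanded state
     all exposed in the accessibility tree; option values are illustrative -->
<div role="combobox"
     aria-expanded="false"
     aria-haspopup="listbox"
     aria-controls="size-listbox"
     aria-label="Choose size"
     tabindex="0">
  Choose size
</div>
<ul id="size-listbox" role="listbox" hidden>
  <li role="option">13-inch</li>
  <li role="option">15-inch</li>
</ul>
```

The agent's FIND step looks for exactly this: a control with a role and the accessible name "Choose size".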

GA4 sees

Nothing. No referrer.

Hotjar sees

Nothing. No mouse.

Attribution sees

A "direct" visit.

Serge sees

Every step. Every retry. The reasoning.

Problem

You can't optimize agent traffic if you can't see it.

Three specific blind spots, spread across the same tools every e-commerce team already runs. None of them sees agent traffic.

Analytics

GA4 doesn't see it.

Agents arrive via headless browsers without referrers. GA4 dumps every Claude session into the "direct" bucket — if it sees them at all. The funnel never knew they were there.

0 agent sessions in your GA4 funnel today.

CRO

Hotjar records humans.

Contentsquare, FullStory, Lucky Orange — they capture mouse moves and form fills. An agent has neither. Your session-replay tool runs and the agent visits are simply missing.

0 agent sessions in your replay tool today.

Competition

The site that works gets the sale.

When Claude can't complete checkout on your store, it tries the next one. The customer never finds out which site failed. You don't either — until your monthly numbers slip and nobody can explain why.

Unknown sales lost to a competitor today. We can't tell you. Neither can your tools.

Product capabilities

Observe. Analyze. Diagnose. Ship the fix.

01 · Observe

Every agent session, live.

A 6 KB script tag classifies every visit, separates agent sessions from human sessions, and routes each one to the right dashboard. Claude, GPT-5, Perplexity, Gemini — they all show up in under an hour.

Live classification feed
claude · /products · 14:32
gpt-5 · /pricing · 14:33
perplexity · /api/inventory · 14:34
claude · /checkout · 14:34

Platforms seen

00

Flagged now

00

Median arrival

0.0s

Why it matters

The first job is not attribution. It's simply proving that the visits are there, that they are different, and that someone on the team can inspect them immediately.

02 · Analyze

Build the funnel GA4 won't.

Every session assembles into an agent funnel: landing → browse → product → variant → cart → checkout. Drop-offs surface per stage, per platform, per day. No instrumentation.

Landing · 0%
Product browse · 0%
Variant select · 0%
Add to cart · 0%
Checkout · 0%

03 · Diagnose

Replay any session on demand.

Every flagged session gets a Replay button. Serge runs a real Claude agent through the same path on your live site and captures every step. Your engineers get a shareable URL.

0 flagged today · tap to replay
#4127 · Claude · 0.0s · cart abandoned
#4128 · GPT-5 · 0.0s · page not parsed
#4129 · Claude · 0.0s · checkout stalled
#4130 · Perplexity · 0.0s · search dead-end
#4131 · Claude · 0.0s · variant unreachable

04 · Fix

Ship one PR. Re-measure.

Every failure ships with a suggested fix and an implementation hint your engineers can act on. Re-run the replay after the deploy and confirm the agent gets past the blocker.

Fix loop

One broken control can remove an entire agent path. That is why the first fix tends to feel outsized: it restores the same component across every product page that uses it.

Suggested fix · validate with replay
Add role="combobox" and aria-expanded to your <SizeSelector>.

After deploy

Re-run the same path and confirm the agent gets to cart.

Book a demo · See live demo · Runs on a demo workspace. No signup.

Versus your current stack

Human analytics tools weren't built for agent behavior.

What your current stack covers

Hotjar · Contentsquare · FullStory · Lucky Orange · GA4 · Mixpanel · Amplitude

All of them measure humans well. None of them see agents.

Hotjar · FullStory · GA4 vs Serge

Built for
Hotjar · FullStory · GA4: Human sessions · mouse moves, form fills, click patterns
Serge: Agent sessions · HTTP requests, a11y tree reads, element lookups

What you see
Hotjar · FullStory · GA4: What humans clicked and where they left
Serge: What agents tried, why they failed, where they retried

Session visibility
Hotjar · FullStory · GA4: Agents land without a referrer and end up in "direct"
Serge: Every agent session classified by platform, end-to-end

Failure diagnosis
Hotjar · FullStory · GA4: Explains where humans leave
Serge: Explains why the agent couldn't reach the next step

What it optimizes
Hotjar · FullStory · GA4: Human UX · copy, layout, friction
Serge: Agent task completion · structure, semantics, machine-readability

Where Serge sits

GEO tools measure whether ChatGPT mentions you upstream. CRO tools measure what humans do on your site. Serge measures what happens when ChatGPT sends a customer to your site and the agent tries to buy on their behalf.

Upstream: Athena, Profound, Scrunch, Otterly, Semrush AI · Downstream: Dreamdata, HockeyStack, Bizible

Use cases

One failure pattern we can show clearly. Four workflows we're validating next.

PDP comprehension

Fix variant-selector failures first.

Custom React variant selectors often render without a role or accessible name, which makes them hard for agents to operate. It is the clearest failure pattern in our replay demos and internal benchmarks, and the fix is usually small: expose the control semantically, then re-run the replay.

Mechanism proof

0 attrs

role, accessible name, and state often decide whether an agent can operate a custom control.

Product discovery

Find products agents should find but don't.

Serge measures which products are reachable by each platform and which sit invisible behind JavaScript the agent can't execute.

· Next in validation

Checkout + handoff

Spot where agent journeys stall before conversion.

Forms that require explicit labels, add-to-cart buttons that are divs with onClick, inventory data locked in client state.
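A div-with-onClick add-to-cart button can usually be replaced by a real form, which gives an agent form semantics and an implicit submit to operate. A sketch, where the action path and field names are assumptions, not a prescribed API:

```html
<!-- A native form exposes submit semantics the agent can act on
     ("/cart/add" and the field names are illustrative assumptions) -->
<form action="/cart/add" method="post">
  <input type="hidden" name="product_id" value="laptop-x-15">
  <label for="qty">Quantity</label>
  <input id="qty" name="quantity" type="number" value="1" min="1">
  <button type="submit">Add to cart</button>
</form>
```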

· Next in validation

Platform benchmarking

Track how each agent platform performs.

Claude, GPT-5, Perplexity, and Gemini navigate differently. A page one of them can complete is sometimes a dead end for another.

· Next in validation

Agent traffic as a channel

Own the one surface where agent traffic shows up at all.

Not a column in GA4, not a tab in Hotjar — a standalone dashboard with weekly trends, platform breakdowns, and anomaly flagging.

· Next in validation

Outcomes

What your Head of E-commerce reports at quarter end.

Directional until we have a signed case study to cite. Every outcome below maps to a specific capability — no fabricated lift percentages.

Visibility

Turn the "direct" bucket in GA4 into a real cohort.

Agents land without referrers and end up mis-attributed as direct traffic. Serge runs alongside GA4 and gives you the real split: which direct-bucket sessions were agents, which platform, which page they landed on.

direct · 0 sessions → agents · 0 · humans · 0

Conversion

Recover agent sessions that were giving up silently.

Every finding points at a page, a platform, and the step the agent got stuck on. You prioritize on measured leakage, not hunches.

Directional · not a committed metric

CRO velocity

Ship fewer PRs, each with larger impact.

Agent failures tend to cluster on structural issues — one component blocks thousands of sessions. The first fix is usually the biggest, because one missing ARIA role can take out a whole platform at once.

0 PR → lifts every agent session that touched the broken component

Readiness

Start measuring before you need the numbers.

We don't have a historical baseline for agent traffic share because nobody's been measuring it. The stores that start tracking now will have trend data before the rest of the market does — and trend data is what wins the conversation with your CFO.

Directional · not a committed metric

Pricing

Start with one store, one snippet, one clear first read.

Public pricing is built for teams proving the channel exists. Larger retailers can start with a hands-on pilot while the product is still early.

Launch

CHF 149 / mo

One store testing whether agent traffic is real and where it breaks

Grow

CHF 499 / mo

Multi-brand or multi-site teams comparing sessions, issues, and trends

Scale

CHF 1,499 / mo

Teams running replay at higher volume with deeper retention and reporting

Pilot program

Need hands-on setup, weekly reviews, or custom reporting?

We run a small pilot program for teams that want founder support while the product is still early: setup help, live replay walkthroughs, and tighter feedback loops than the self-serve plans.

Talk to us about a pilot

Full tier details and feature comparison → /pricing

Watch the product first. Bring it to your store when you want the real trace.

Start with the live demo if you want to understand the product shape. Book the founder walkthrough when you want Claude run against your actual store and the first fix list in the same meeting.

What you leave with

A real agent trace, the first structural blockers, and a clear answer on whether Serge is worth wiring into your stack now.

Founder walkthrough

01

Run a live Claude path against your site.

02

Inspect the blockers with replay and reasoning.

03

Leave with the first fix list and install path.

FAQ

Common questions