For e-commerce teams
Session visibility · Replay · Daily briefing
Can an AI agent buy from you?
Serge shows where Claude, ChatGPT, and other AI agents arrive, stall, retry, and give up on your store, so your team can fix the structural issues blocking agent conversion. Replay, briefing, and a fast scan in one product.
Scan your store · 30 seconds · no signup
Scan now.
No credit card, no account. Share the result with your team.
One Claude task · 96.8s
The agent got to checkout, but it wasn't clean.
A real shopping task, rendered as five steps.
The agent recovered from two interface frictions and completed the cart — a run the current analytics stack would dump in the "direct" bucket and never explain. This is the artifact Serge produces for every agent session.
"Find a black backpack under CHF 100 in stock on yourstore.ch and add it to my cart."
- 01 · results loaded
- 02 · in-stock + under CHF 100 retried
- 03 · duplicate add-to-cart labels
- 04 · CHF 79 selected
- 05 · cart confirmed
Two-way mirror · same page, two views
The page your shopper sees. The page an agent sees.
A product page is two documents at once. The human view renders for the eye. The agent view renders for the accessibility tree. Serge measures the gap between them.
Human view
The shopper sees this
Commuter Backpack 20L
Water-resistant · padded laptop sleeve · two outer pockets
Glance signals
- Hero photo · Bag on a desk, product-forward
- Price glance · Discount visible in 0.4s
- Swatches · Three colour options, clickable
- CTA · Big ink button above the fold
Agent view
Claude / ChatGPT see this
```html
<body>
  <main role="main">
    <h1>Commuter Backpack 20L</h1>
    <div class="product-hero">                 ⟶ no <img alt>, no schema.org/ImageObject
      <span>CHF 79</span>                      ⟶ price not marked as <data value>
      <div class="variants">                   ⟶ div roleless — variant selection unreachable
        <div>Charcoal</div>
        <div>Olive</div>
        <div>Sand</div>
      </div>
      <a onclick="addToCart()">Add to cart</a> ⟶ CTA is an anchor with onclick — no form semantics
    </div>
  </main>
</body>
```

Structural signals
- schema.org/Product · Missing · agent falls back to heuristics
- <h1> · "Commuter Backpack 20L" · readable
- aria role · Swatches are <div> not <button> · no keyboard nav
- Form semantics · CTA renders as <a> with onClick · no implicit submit
- Inventory · Not exposed in markup · only via fetch post-load
Every issue flagged on the right is the reason the agent fails to complete the task the human started.
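The same page could expose every one of those signals with a few structural changes. A minimal sketch of what that might look like — class names, SKU, and microdata values are illustrative, not Serge's prescribed output:

```html
<main>
  <article itemscope itemtype="https://schema.org/Product">
    <h1 itemprop="name">Commuter Backpack 20L</h1>
    <img itemprop="image" src="hero.jpg"
         alt="Commuter Backpack 20L, charcoal, on a desk">

    <!-- Price and stock in the markup itself, not behind a post-load fetch -->
    <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
      <data itemprop="price" value="79">CHF 79</data>
      <meta itemprop="priceCurrency" content="CHF">
      <link itemprop="availability" href="https://schema.org/InStock">
    </div>

    <!-- Swatches as real buttons: focusable, announced, operable -->
    <fieldset class="variants">
      <legend>Colour</legend>
      <button type="button" aria-pressed="true">Charcoal</button>
      <button type="button" aria-pressed="false">Olive</button>
      <button type="button" aria-pressed="false">Sand</button>
    </fieldset>

    <!-- CTA as a form submit: implicit semantics instead of an onclick anchor -->
    <form method="post" action="/cart/add">
      <input type="hidden" name="sku" value="BP-20L-CHARCOAL">
      <button type="submit">Add to cart</button>
    </form>
  </article>
</main>
```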
Free scan · 30 seconds · no signup
Want a snapshot of where you are? Scan your store.
Paste your domain. Serge crawls your store in about thirty seconds — completely deterministic, no AI in the scan loop — and hands back a fast snapshot: where agents may get blocked, what to inspect first, and suggested fixes for your team to verify. No signup. Forward the result to your front-end lead.
The scan is the starting point. The deeper picture comes from session visibility, replay, and briefing.
Agent session replay
Watch a Claude agent try to buy from you.
Click Replay on any flagged session. Serge reconstructs the exact arrival, retries, failure, and fix so your team sees what happened before the revenue disappears into "direct."
Public preview
Sample data, rendered through the same session UI and narrative view layer as the live product.
See live demo

01
Arrival traced
02
Failure isolated
03
Reasoning recovered
04
Fix handed off
Replay outcome

One replay resolves the full chain: arrival, failure, reasoning, and the concrete fix your team should ship next.
Inferred reasoning · 0.00 confidence
Claude walked the standard browse path to a laptop product page in under two seconds, read it, then tried to find the size selector in the accessibility tree. The element didn't exist there — the React <SizeSelector> renders without role or aria-expanded — so Claude retried, failed again, and abandoned the cart at +0.00s.
What to do
Add role="combobox", aria-expanded, and an accessible name to your <SizeSelector>. One small PR, then rerun the replay to confirm the agent gets past this control.
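A minimal sketch of what that PR could contain, assuming a dropdown-style selector — the exact markup depends on how your `<SizeSelector>` actually renders:

```html
<div role="combobox"
     aria-label="Size"
     aria-expanded="false"
     aria-haspopup="listbox"
     aria-controls="size-listbox"
     tabindex="0">
  M
</div>
<ul id="size-listbox" role="listbox" hidden>
  <li role="option" aria-selected="false">S</li>
  <li role="option" aria-selected="true">M</li>
  <li role="option" aria-selected="false">L</li>
</ul>
```

With the role, name, and expanded state in place, the control appears in the accessibility tree the agent reads — the same tree screen readers use.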
GA4 sees
Nothing. No referrer.
Hotjar sees
Nothing. No mouse.
Attribution sees
A "direct" visit.
Serge sees
Every step. Every retry. The reasoning.
Problem
You can't optimize agent traffic if you can't see it.
Three specific blind spots in the three tools every e-commerce team already runs. None of them sees agent traffic.
Analytics
GA4 doesn't see it.
Agents arrive via headless browsers without referrers. GA4 dumps every Claude session into the "direct" bucket — if it sees them at all. The funnel never knew they were there.
0 agent sessions in your GA4 funnel today.
CRO
Hotjar records humans.
Contentsquare, FullStory, Lucky Orange — they capture mouse moves and form fills. An agent has neither. Your session-replay tool runs and the agent visits are simply missing.
0 agent sessions in your replay tool today.
Competition
The site that works gets the sale.
When Claude can't complete checkout on your store, it tries the next one. The customer never finds out which site failed. You don't either — until your monthly numbers slip and nobody can explain why.
Unknown sales lost to a competitor today. We can't tell you. Neither can your tools.
Product capabilities
Observe. Analyze. Diagnose. Ship the fix.
01 · Observe
Every agent session, live.
A 6 KB script tag classifies every visit, separates agent sessions from human sessions, and routes each one to the right dashboard. Claude, GPT-5, Perplexity, Gemini — they all show up in under an hour.
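Serge's classifier isn't public, but the idea can be sketched as a toy heuristic over cheap client-side signals — the marker strings and signal names below are assumptions for illustration only:

```python
# Toy agent/human classifier: user-agent markers plus behavioral signals.
# Marker strings are illustrative, not Serge's actual detection list.
AGENT_MARKERS = {
    "claude": "Claude",
    "gptbot": "ChatGPT",
    "chatgpt": "ChatGPT",
    "perplexity": "Perplexity",
    "gemini": "Gemini",
    "headlesschrome": "Headless browser",
}

def classify_session(user_agent: str, has_referrer: bool, pointer_events: int):
    """Label a visit as ('agent', platform) or ('human', None)."""
    ua = user_agent.lower()
    for marker, platform in AGENT_MARKERS.items():
        if marker in ua:
            return ("agent", platform)
    # Headless-looking: arrived with no referrer and produced no pointer activity.
    if not has_referrer and pointer_events == 0:
        return ("agent", "unknown")
    return ("human", None)
```

Real-world detection is harder (agents can spoof user agents), which is why behavioral signals like the absence of pointer activity matter as a fallback.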
Platforms seen
00
Flagged now
00
Median arrival
0.0s
Why it matters
The first job is not attribution. It's simply proving that the visits are there, that they are different, and that someone on the team can inspect them immediately.
02 · Analyze
Build the funnel GA4 won't.
Every session assembles into an agent funnel: landing → browse → product → variant → cart → checkout. Drop-offs surface per stage, per platform, per day. No instrumentation.
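The aggregation itself is simple once sessions are classified. A sketch using the stage names above (the session data shape is assumed):

```python
from collections import Counter

STAGES = ["landing", "browse", "product", "variant", "cart", "checkout"]

def agent_funnel(sessions):
    """Count sessions reaching each stage.

    A session whose furthest stage is 'cart' is counted as having
    reached landing, browse, product, and variant as well.
    """
    furthest = Counter(s["furthest_stage"] for s in sessions)
    counts, running = [], 0
    for stage in reversed(STAGES):
        running += furthest[stage]
        counts.append((stage, running))
    return counts[::-1]
```

Drop-off per stage is then just the difference between adjacent counts, sliceable by platform or day.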
03 · Diagnose
Replay any session on demand.
Every flagged session gets a Replay button. Serge runs a real Claude agent through the same path on your live site and captures every step. Your engineers get a shareable URL.
04 · Fix
Ship one PR. Re-measure.
Every failure ships with a suggested fix and an implementation hint your engineers can act on. Re-run the replay after the deploy and confirm the agent gets past the blocker.
Fix loop
One broken control can remove an entire agent path. That is why the first fix tends to feel outsized: it restores the same component across every product page that uses it.
After deploy
Re-run the same path and confirm the agent gets to cart.
Versus your current stack
Human analytics tools weren't built for agent behavior.
What your current stack covers
All of them measure humans well. None of them see agents.
| | Hotjar · FullStory · GA4 | Serge |
|---|---|---|
| Built for | Human sessions — mouse moves, form fills, click patterns | Agent sessions — HTTP requests, a11y tree reads, element lookups |
| What you see | What humans clicked and where they left | What agents tried, why they failed, where they retried |
| Session visibility | Agents land without a referrer and end up in "direct" | Every agent session classified by platform, end-to-end |
| Failure diagnosis | Explains where humans leave | Explains why the agent couldn't reach the next step |
| What it optimizes | Human UX — copy, layout, friction | Agent task completion — structure, semantics, machine-readability |
Where Serge sits
GEO tools measure whether ChatGPT mentions you upstream. CRO tools measure what humans do on your site. Serge measures what happens when ChatGPT sends a customer to your site and the agent tries to buy on their behalf.
Upstream: Athena, Profound, Scrunch, Otterly, Semrush AI · Downstream: Dreamdata, HockeyStack, Bizible
Use cases
One failure pattern we can show clearly. Four workflows we're validating next.
PDP comprehension
Fix variant-selector failures first.
Custom React variant selectors often render without a role or accessible name, which makes them hard for agents to operate. It is the clearest failure pattern in our replay demos and internal benchmarks, and the fix is usually small: expose the control semantically, then re-run the replay.
Mechanism proof
0 attrs
role, accessible name, and state often decide whether an agent can operate a custom control.
Product discovery
Find products agents should find but don't.
Serge measures which products are reachable by each platform and which sit invisible behind JavaScript the agent can't execute.
· Next in validation
Checkout + handoff
Spot where agent journeys stall before conversion.
Forms that require explicit labels, add-to-cart buttons that are divs with onClick, inventory data locked in client state.
· Next in validation
Platform benchmarking
Track how each agent platform performs.
Claude, GPT-5, Perplexity, and Gemini navigate differently. A page one of them can complete is sometimes a dead end for another.
· Next in validation
Agent traffic as a channel
Own the one surface where agent traffic shows up at all.
Not a column in GA4, not a tab in Hotjar — a standalone dashboard with weekly trends, platform breakdowns, and anomaly flagging.
· Next in validation
Outcomes
What your Head of E-commerce reports at quarter end.
Directional until we have a signed case study to cite. Every outcome below maps to a specific capability — no fabricated lift percentages.
Visibility
Turn the "direct" bucket in GA4 into a real cohort.
Agents land without referrers and end up mis-attributed as direct traffic. Serge runs alongside GA4 and gives you the real split: which direct-bucket sessions were agents, which platform, which page they landed on.
direct · 0 sessions → agents · 0 · humans · 0

Conversion
Recover agent sessions that were giving up silently.
Every finding points at a page, a platform, and the step the agent got stuck on. You prioritize on measured leakage, not hunches.
Directional · not a committed metric
CRO velocity
Ship fewer PRs, each with larger impact.
Agent failures tend to cluster on structural issues — one component blocks thousands of sessions. The first fix is usually the biggest, because one missing ARIA role can take out a whole platform at once.
0 PR → lifts every agent session that touched the broken component

Readiness
Start measuring before you need the numbers.
We don't have a historical baseline for agent traffic share because nobody's been measuring it. The stores that start tracking now will have trend data before the rest of the market does — and trend data is what wins the conversation with your CFO.
Directional · not a committed metric
Pricing
Start with one store, one snippet, one clear first read.
Public pricing is built for teams proving the channel exists. Larger retailers can start with a hands-on pilot while the product is still early.
Launch
CHF 149 / mo
One store testing whether agent traffic is real and where it breaks
Grow
CHF 499 / mo
Multi-brand or multi-site teams comparing sessions, issues, and trends
Scale
CHF 1,499 / mo
Teams running replay at higher volume with deeper retention and reporting
Pilot program
Need hands-on setup, weekly reviews, or custom reporting?
We run a small pilot program for teams that want founder support while the product is still early: setup help, live replay walkthroughs, and tighter feedback loops than the self-serve plans.
Talk to us about a pilot

Full tier details and feature comparison → /pricing
Watch the product first. Bring it to your store when you want the real trace.
Start with the live demo if you want to understand the product shape. Book the founder walkthrough when you want Claude run against your actual store and the first fix list in the same meeting.
What you leave with
A real agent trace, the first structural blockers, and a clear answer on whether Serge is worth wiring into your stack now.
Founder walkthrough
01
Run a live Claude path against your site.
02
Inspect the blockers with replay and reasoning.
03
Leave with the first fix list and install path.
FAQ