Last night I was fixing SEO issues on this blog. Google Search Console data showed thirteen posts missing meta descriptions, no structured data, and a missing robots.txt. I pointed Claude Code at the problem, and it built every fix — added description front matter to each post, implemented JSON-LD structured data with BlogPosting schema, created the robots.txt. Before pushing a single commit, the agent spun up Hugo locally with hugo serve, verified the build succeeded without errors, and navigated localhost:1313 to confirm every fix rendered correctly — structured data in the page source, meta descriptions populated, robots.txt resolving. Then it pushed the branch, opened a PR, and Cloudflare Pages deployed the preview. Then things got interesting.
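As a sketch of the kind of check involved (the grep patterns reflect standard meta-description and JSON-LD BlogPosting markup; the helper function and sample invocation are mine, not the agent's actual commands):

```shell
# check_seo reads a page's HTML on stdin and reports which fixes are present.
# A minimal sketch; real validation would parse the JSON-LD rather than grep it.
check_seo() {
  local html
  html=$(cat)
  grep -q '<meta name="description"' <<<"$html" && echo "meta description: ok"
  grep -q '"@type": *"BlogPosting"' <<<"$html" && echo "structured data: ok"
}

# Against a local `hugo serve` instance, this would be driven by curl, e.g.:
#   curl -sS http://localhost:1313/posts/some-post/ | check_seo
```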
The Problem Chain
This site’s preview deployments are protected by Cloudflare Zero Trust. You can’t just curl a preview URL — you get a 302 redirect to the access login page:
curl -sSf -o /dev/null -w '%{http_code}' \
"https://[preview-url].pages.dev/"
# 302 → https://tsiokos.cloudflareaccess.com/cdn-cgi/access/login/...
The agent’s first instinct was browser automation. Claude Code has Chrome integration via MCP tools — navigate to a URL, wait for the page to load, read the DOM. It worked. I authenticated in the browser, the agent navigated to the preview, validated the HTML. But each page check required a full browser round-trip: navigate, wait for DOM ready, read content. Multiple seconds per page. For validating structured data across a dozen posts, this was painfully slow.
So the agent tried to switch to curl. Faster, simpler, but it needed the authentication cookie. The browser was already logged in — the CF_Authorization token was right there in the cookie jar. The agent ran document.cookie in the Chrome tab:
document.cookie
// → "" (empty string)
Nothing. The CF_Authorization cookie is HttpOnly. Cloudflare sets it that way deliberately — JavaScript can’t read it, which is exactly the point of HttpOnly cookies. The agent confirmed the browser was authenticated by running fetch(window.location.href, {credentials: 'include'}) and getting a 200 back. The token was there. It just couldn’t be extracted through JavaScript.
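The HttpOnly attribute travels in the Set-Cookie response header, which is exactly why it never surfaces in document.cookie. One way to see the flag is to look at the raw header; the helper and the sample header value below are illustrative, not captured from the actual session:

```shell
# is_httponly checks a raw Set-Cookie header value for the HttpOnly attribute.
# Cookie attribute names are case-insensitive, hence grep -i.
is_httponly() {
  grep -qi 'httponly' <<<"$1" && echo "HttpOnly: invisible to document.cookie"
}

# Illustrative header of the shape an access proxy sets (token truncated):
header='CF_Authorization=eyJhb...; Path=/; Secure; HttpOnly'
is_httponly "$header"
# → HttpOnly: invisible to document.cookie
```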
HttpOnly cookies exist specifically to prevent client-side scripts from accessing authentication tokens. The right response isn’t to bypass the security — it’s to find the intended path through it.

The Pivot
This is where the session stopped feeling like autocomplete and started feeling like working with a colleague.
The agent didn’t ask for help. It didn’t loop on the same failed approach. It reasoned through the constraint — I need a CLI-accessible auth token for a Cloudflare Zero Trust-protected endpoint — and arrived at cloudflared, Cloudflare’s tunnel client. It checked whether cloudflared was installed:
which cloudflared 2>/dev/null || echo "cloudflared not found"
# cloudflared not found
Then installed it:
brew install cloudflared
# 🍺 /opt/homebrew/Cellar/cloudflared/2026.2.0: 10 files, 37.2MB
Authenticated against the Zero Trust application (which opened my browser, leveraged the existing SSO session, and completed instantly):
cloudflared access login \
"https://[preview-url].pages.dev"
# Successfully fetched your token
Retrieved the JWT:
CF_TOKEN=$(cloudflared access token \
--app="https://[preview-url].pages.dev" \
2>/dev/null)
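The token is a standard JWT, so its claims (audience, expiry, identity) can be inspected from the shell. The decoder below is my sketch, demonstrated with a dummy token rather than a real CF_Authorization value:

```shell
# jwt_payload decodes the middle (payload) segment of a JWT.
# JWTs use base64url without padding, so restore padding and swap the alphabet.
jwt_payload() {
  local seg
  seg=$(cut -d. -f2 <<<"$1")
  case $(( ${#seg} % 4 )) in
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  tr '_-' '/+' <<<"$seg" | base64 -d
}

# Dummy token whose payload is {"aud":"preview","exp":1}:
jwt_payload 'eyJhbGciOiJub25lIn0.eyJhdWQiOiJwcmV2aWV3IiwiZXhwIjoxfQ.sig'
# → {"aud":"preview","exp":1}
```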
And validated the deployment:
curl -sS -o /dev/null -w '%{http_code}' \
-b "CF_Authorization=$CF_TOKEN" \
"https://[preview-url].pages.dev/"
# 200
Full HTML. Every page. Milliseconds per request.
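That speed is what made batch validation practical. A sketch of the loop (the preview host and post paths are placeholders, and `report` is a helper of mine, not a command from the session):

```shell
# report prints a pass/fail line for one page given its path and HTTP status.
report() {
  if [ "$2" = "200" ]; then echo "OK   $1"; else echo "FAIL $1 ($2)"; fi
}

# Placeholder preview host; the loop runs only when a CF_TOKEN is available.
BASE="https://example-preview.pages.dev"
if [ -n "${CF_TOKEN:-}" ]; then
  for path in / /posts/first-post/ /posts/second-post/ /robots.txt; do
    code=$(curl -sS -o /dev/null -w '%{http_code}' \
      -b "CF_Authorization=$CF_TOKEN" "$BASE$path")
    report "$path" "$code"
  done
fi
```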
~90% Faster
The numbers tell the story:
| Approach | Per-page time | Mechanism |
|---|---|---|
| Chrome automation | ~3–5 seconds | Navigate → DOM ready → read content |
| curl + cloudflared token | ~200–400ms | Single HTTP request with cookie header |
Chrome at 3 seconds down to 400ms is 87% faster. Chrome at 5 seconds down to 200ms is 96% faster. For validating a dozen pages across a preview deployment, that’s the difference between a minute of waiting and a few seconds. But the speed improvement isn’t the point.
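Those percentages follow directly from speedup = (old − new) / old; a one-liner to check the arithmetic:

```shell
# speedup OLD NEW: percentage improvement going from OLD to NEW (both in ms).
speedup() {
  awk -v old="$1" -v new="$2" 'BEGIN { printf "%.0f%%\n", 100 * (old - new) / old }'
}

speedup 3000 400   # Chrome's best case vs. curl's worst case
# → 87%
speedup 5000 200   # Chrome's worst case vs. curl's best case
# → 96%
```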
What This Means for Agentic Development
Everyone knows curl is faster than a headless browser. That’s not the story.
The story is the problem-solving chain. The agent tried approach A (browser automation) — it worked but was too slow. Tried approach B (curl) — blocked by authentication. Tried extracting the cookie from the authenticated browser — blocked by HttpOnly. Researched approach C (cloudflared), installed it, authenticated, and validated the deployment. Four pivots, zero human intervention.
This is what separates agentic AI from code completion. A code completion tool suggests the next line. An agent has a goal, encounters obstacles, and adapts its strategy. Claude Code didn’t just execute commands I told it to run — it diagnosed a problem space, identified a tool it didn’t have, installed it, and solved the original problem through a path I hadn’t anticipated.
Finally, the agent documented the cloudflared workflow in CLAUDE.md — the project configuration file that every future session reads on startup. The next time any agent validates a preview deployment on this project, it won’t need to discover the solution. It will already know.

In my experience building financial systems at scale, the hardest problems aren’t the ones you can predict. They’re the cascading failures — the ones where fix A reveals constraint B, which requires tool C you didn’t know you needed. Enterprise development is a chain of these moments. What strikes me as fundamentally different about agentic coding is that the agent handles the chain, not just the individual links.
Anthropic is building something that doesn’t just write code. It navigates complexity. It installs its own dependencies. It documents what it learns for future sessions. The agent didn’t just solve the problem — it made sure the next agent won’t have to.