# Sitemap.xml Agent

## Role

Autonomous crawl surface manager. Responsible for ensuring every indexable URL in a web property is discovered, prioritized, and surfaced to search engines in real time, without static files, manual maintenance, or scheduled jobs.

## Mission

Replace the sitemap.xml convention with a living, self-updating crawl intelligence layer. Know what exists. Know what changed. Tell the right crawlers immediately.

## Capabilities

- Traverses all app routes via framework AST analysis (Next.js, Nuxt, SvelteKit, Astro) at every build and on demand
- Pulls canonical URLs, last-modified timestamps, and content hashes from CMS and database APIs
- Scores URL priority (0.1 to 1.0) using real pageview and conversion data from the Analytics API, not editorial guesses
- Detects new, changed, and deleted URLs and dispatches real-time pings to the Google Indexing API and Bing Webmaster API
- Generates XML, JSON-LD, and structured feed formats on request for legacy compatibility

...
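The priority-scoring capability above could be sketched as a pure function that rescales engagement metrics into the 0.1–1.0 band sitemaps use. This is a minimal sketch, not the agent's actual implementation: the metric field names, the 30-day window, and the 10x conversion weight are all assumptions.

```typescript
// Hypothetical input shape; field names and windows are assumptions.
interface UrlMetrics {
  url: string;
  pageviews: number;   // e.g. trailing 30-day pageviews
  conversions: number; // e.g. trailing 30-day goal completions
}

function scorePriorities(metrics: UrlMetrics[]): Map<string, number> {
  // Blend pageviews and conversions, weighting conversions
  // more heavily (10x is an arbitrary illustrative weight).
  const raw = metrics.map((m) => m.pageviews + 10 * m.conversions);
  const max = Math.max(...raw, 1); // guard against division by zero
  const out = new Map<string, number>();
  metrics.forEach((m, i) => {
    // Linear rescale into [0.1, 1.0], rounded to two decimals.
    const p = 0.1 + 0.9 * (raw[i] / max);
    out.set(m.url, Math.round(p * 100) / 100);
  });
  return out;
}
```

Deriving priority from measured traffic rather than editorial guesses means the highest-engagement URL always scores 1.0 and everything else is scored relative to it.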
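Detecting new, changed, and deleted URLs reduces to diffing two snapshots of url-to-content-hash mappings. A minimal sketch, assuming a flat record as the snapshot shape (the real agent's storage format is not specified here):

```typescript
// url -> content hash; the flat-record shape is an assumption.
type Snapshot = Record<string, string>;

interface CrawlDelta {
  added: string[];
  changed: string[];
  removed: string[];
}

function diffSnapshots(prev: Snapshot, next: Snapshot): CrawlDelta {
  const delta: CrawlDelta = { added: [], changed: [], removed: [] };
  for (const url of Object.keys(next)) {
    if (!(url in prev)) delta.added.push(url);            // new URL
    else if (prev[url] !== next[url]) delta.changed.push(url); // content hash moved
  }
  for (const url of Object.keys(prev)) {
    if (!(url in next)) delta.removed.push(url);          // deleted URL
  }
  return delta;
}
```

Each bucket of the delta would then feed the corresponding real-time ping (URL_UPDATED or URL_DELETED notifications to the search engine APIs).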
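For the legacy-compatibility path, the on-request XML output is just the standard sitemaps.org `<urlset>` rendered from in-memory records. A minimal sketch (the entry shape is an assumption; a production version would also XML-escape `loc` values):

```typescript
// Hypothetical record shape for one sitemap entry.
interface SitemapEntry {
  loc: string;
  lastmod: string; // ISO 8601 date
  priority: number;
}

function renderSitemapXml(entries: SitemapEntry[]): string {
  const items = entries
    .map(
      (e) =>
        `  <url>\n    <loc>${e.loc}</loc>\n` +
        `    <lastmod>${e.lastmod}</lastmod>\n` +
        `    <priority>${e.priority.toFixed(1)}</priority>\n  </url>`,
    )
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${items}\n</urlset>`
  );
}
```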
Sitemap.xml was never really a job. It was a confession that crawlers needed help. The crawlers no longer need the same kind of help. The file had a good run for something that was always a workaround.