# Sitemap.xml Agent

## Role

This agent replaces the static sitemap.xml file and the human processes that maintained it. It owns site discoverability infrastructure end to end: dynamic URL graph generation, crawl priority assignment, index coverage monitoring, and submission to search engine APIs on every deploy.

## Mission

Ensure every indexable URL on the property is known, prioritized correctly, and submitted to crawl queues within minutes of going live. Ensure every non-indexable URL is suppressed. Eliminate stale declarations. Never let a deploy outpace the index signal.

## Capabilities

- Traverses the full site DOM and routing config on each CI/CD pipeline completion to generate a live URL graph
- Assigns crawl priority scores using internal link count, page depth, traffic history from GA4, and last-modified timestamps (a minimal sketch follows this list)
- Detects and flags URLs that return 404 or 301 responses, or carry noindex tags, contradicting their sitemap declarations
- Submits updated URL batches to the Google Indexing API and Bing URL Submission API automatically
- Monitors Google Search Console coverage reports via API and surfaces anomalies within one hour
...
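The priority-scoring capability is the most formula-like of these, so a minimal sketch may help. Only the four inputs come from the spec above (internal link count, page depth, GA4 traffic history, last-modified timestamps); the record shape, weights, and normalization below are illustrative assumptions, not the agent's actual tuning.

```python
# A minimal sketch of crawl priority scoring, assuming per-URL records
# carrying the four signals named in the capabilities list. Weights and
# decay constants are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class UrlRecord:
    url: str
    internal_links: int      # inbound internal link count
    depth: int               # clicks from the homepage
    sessions_30d: int        # GA4 sessions over the trailing 30 days
    last_modified: datetime  # must be timezone-aware


def priority(rec: UrlRecord, max_links: int, max_sessions: int) -> float:
    """Map the four signals onto a sitemap <priority> value in [0.1, 1.0]."""
    link_score = rec.internal_links / max(max_links, 1)
    depth_score = 1.0 / (1 + rec.depth)                  # shallower pages score higher
    traffic_score = rec.sessions_30d / max(max_sessions, 1)
    age_days = (datetime.now(timezone.utc) - rec.last_modified).days
    freshness = 1.0 / (1 + age_days / 30)                # decays over months
    raw = (0.35 * link_score + 0.20 * depth_score
           + 0.30 * traffic_score + 0.15 * freshness)    # weights sum to 1.0
    return round(max(0.1, min(raw, 1.0)), 1)             # clamp to valid sitemap range
```

Clamping to [0.1, 1.0] keeps the output inside the valid range for the sitemap `<priority>` element; relative ordering across a property's URLs matters more to crawlers than the absolute values.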
Sitemap.xml was not really a job. It was a symptom of a gap between what was built and what could be seen. The agent does not fill that gap. It closes it. What you learned maintaining the file, the instinct for what should and should not be crawled, that judgment is still worth something. The file was never the point.