Glossary

How to Check When Google Last Crawled a Webpage or Site

There are several ways to discover when Google last crawled your website and its specific webpages; this guide walks through the fastest methods—Google Search Console’s Coverage and URL Inspection tools, the cached page view, server logs and log analyzers, and simple search operators—so you can quickly verify crawl dates, diagnose indexing issues, and confirm updates have been picked up.

Last Crawl Date (Google)

The Last Crawl Date (Google) is the most recent date and time Googlebot fetched a specific URL or site for crawling/indexing, indicating when Google last retrieved the page’s content.

Checking Manually with the URL Inspection Tool

Open Google Search Console, select the property, paste the exact URL (the full URL, including the protocol and any trailing slash) into the URL Inspection field at the top, and press Enter.



What you’ll see



  • Indexing status: whether the URL is indexed.

  • Last crawled (page fetch) date: Googlebot’s most recent fetch of that specific URL; the companion “Crawled as” value shows which crawler (smartphone or desktop Googlebot) made the fetch.

  • Live Test: request a live fetch to check current crawlability and rendering (useful after updates).

  • Coverage and enhancements: summary of blocked resources, indexing issues, and mobile/AMP/structured data problems found during the last crawl.

  • Request indexing button: if eligible, ask Google to recrawl and (re)index the page.



How to interpret and use it



  • If Last crawled is recent, Google has fetched the current content; if it’s old, changes may not be reflected in search.

  • Use Live Test for immediate verification of crawlability or rendering — note it does not guarantee a full recrawl for indexing.

  • After a successful Live Test, click Request indexing to queue a recrawl; this does not guarantee instant indexing.

  • If the URL is not indexed, review the shown reasons (noindex, canonical to another URL, robots.txt blocks, crawl anomalies), fix them, then re-request indexing.



Limitations



  • Last crawled reflects the fetch date for that exact URL only, not site-wide crawl activity.

  • Request indexing actions are subject to Google’s processing and may be rate-limited.

  • URL Inspection shows Google’s view at the time of the last fetch or live test; server logs provide definitive crawl timestamps for precise historical records.
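Beyond the manual steps above, the Search Console URL Inspection API exposes the same data programmatically, including the last crawl time. The snippet below is a minimal sketch, not part of the original guide: it assumes you already hold an OAuth 2.0 access token for a verified property, and the token, site URL, and page URL are placeholders.

```python
# Minimal sketch: read the last crawl time via the Search Console URL Inspection API.
# Assumes an existing OAuth 2.0 access token with the Search Console scope;
# ACCESS_TOKEN, SITE_URL, and PAGE_URL are placeholders.
import requests

ACCESS_TOKEN = "ya29.example-oauth-token"          # placeholder token
SITE_URL = "https://www.example.com/"              # verified property (domain properties use sc-domain:example.com)
PAGE_URL = "https://www.example.com/some-page/"    # the URL to inspect

resp = requests.post(
    "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL},
    timeout=30,
)
resp.raise_for_status()

status = resp.json()["inspectionResult"]["indexStatusResult"]
print("Verdict:     ", status.get("verdict"))         # e.g. PASS / NEUTRAL / FAIL
print("Coverage:    ", status.get("coverageState"))   # human-readable indexing state
print("Last crawled:", status.get("lastCrawlTime"))   # RFC 3339 timestamp of Googlebot's last fetch
print("Crawled as:  ", status.get("crawledAs"))       # e.g. MOBILE or DESKTOP
```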

Using Request Indexing




What it is: In Google Search Console’s URL Inspection tool, Request Indexing asks Google to re-crawl and reconsider a specific URL for indexing sooner than it might otherwise. It triggers Googlebot to fetch the page and re-evaluate content, canonical, and indexing signals.



When to use: After publishing or updating important pages (content changes, metadata, schema, canonical fixes), after fixing indexing issues, or when you want quick verification that Google recognizes a change.



How to use:



  • Open Search Console and select the property.

  • Paste the exact URL into the URL Inspection box and run the inspection.

  • Review the result (coverage, canonical, mobile usability, enhancements).

  • If you want a recrawl, click Request Indexing and wait for confirmation.



What to check immediately after: Use Live Test in URL Inspection to confirm what Googlebot sees right now. Review Coverage to confirm indexing status after a short time. The Last Crawl Date is the most recent date and time Googlebot fetched the URL, indicating when Google last retrieved the page’s content.



Limits and caveats:



  • Requesting indexing does not guarantee immediate indexing or ranking — it only asks Google to re-crawl and reconsider the URL.

  • There are rate limits and quotas per property (Google doesn’t publish exact numbers). Use it selectively for high-priority pages.

  • If a URL is blocked by robots.txt, has noindex, or is canonicalized to another URL, the request will have limited effect until those issues are fixed.

  • The Indexing API is available only for specific content types (for example, JobPosting and live stream structured data) — it is not a general replacement for Request Indexing.



Best practices:



  • Fix on-page and technical issues first (remove noindex, fix robots.txt blocks, ensure the correct canonical).

  • Request indexing for single high-priority pages or after major fixes; use sitemap submission for larger batches of URLs.

  • Use Live Test before requesting to ensure Googlebot sees the final content.

  • Monitor Coverage, URL Inspection, and server logs after the request to confirm crawling and indexing.



Alternatives:



  • Resubmit or update a sitemap in GSC for broader URL sets.

  • Use server logs or a log analyzer to verify Googlebot fetches.

  • For supported content types, consider the Indexing API for faster programmatic updates (a minimal sketch follows this list).
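For those supported content types, the Indexing API call itself is a single POST. The sketch below is illustrative only: it assumes a service account with the Indexing API scope has already been configured and an access token obtained, and the token and URL are placeholders.

```python
# Minimal sketch: notify Google of an updated URL via the Indexing API.
# Only valid for supported content types (e.g. JobPosting, BroadcastEvent livestreams).
# Assumes an OAuth 2.0 access token with the https://www.googleapis.com/auth/indexing
# scope (typically from a service account); ACCESS_TOKEN and the URL are placeholders.
import requests

ACCESS_TOKEN = "ya29.example-service-account-token"

resp = requests.post(
    "https://indexing.googleapis.com/v3/urlNotifications:publish",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "url": "https://www.example.com/jobs/widget-engineer",  # placeholder URL
        "type": "URL_UPDATED",  # use URL_DELETED when a page has been removed
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # echoes the notification metadata on success
```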


Troubleshooting Crawling and Indexing Issues with GSC



  1. Quick checks to see the last crawl




    1. URL Inspection: paste the exact URL in Search Console > URL Inspection. Review the Last crawled date and time, indexing status, canonical URL, mobile usability, and structured data. Use Request indexing if updated content needs a re-crawl.




    2. Coverage report (now labeled “Pages” under Indexing): Search Console > Indexing > Pages shows indexed and not-indexed states with reasons. Open specific entries to see example URLs and last crawl-related details.




    3. Sitemaps: Search Console > Sitemaps shows when the sitemap was last read and the count of submitted versus indexed URLs (see the API sketch after this list).




    4. Crawl stats: Search Console > Settings > Crawl stats shows site-wide crawl activity, pages crawled per day, kilobytes downloaded, and response codes.
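The sitemap’s last-read time can also be pulled programmatically. The sketch below uses the classic Search Console (Webmasters v3) sitemaps endpoint and is an assumption-laden example: it presumes an existing OAuth 2.0 access token, and the property and sitemap URLs are placeholders.

```python
# Minimal sketch: read when Google last downloaded a sitemap via the
# Search Console (Webmasters v3) sitemaps endpoint. Assumes an OAuth 2.0
# access token with the Search Console scope; all URLs are placeholders.
from urllib.parse import quote
import requests

ACCESS_TOKEN = "ya29.example-oauth-token"
SITE_URL = "https://www.example.com/"
SITEMAP_URL = "https://www.example.com/sitemap.xml"

endpoint = (
    "https://www.googleapis.com/webmasters/v3/sites/"
    f"{quote(SITE_URL, safe='')}/sitemaps/{quote(SITEMAP_URL, safe='')}"
)
resp = requests.get(endpoint, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=30)
resp.raise_for_status()

info = resp.json()
print("Last downloaded:", info.get("lastDownloaded"))  # when Google last read the sitemap
print("Last submitted: ", info.get("lastSubmitted"))
print("Errors/warnings:", info.get("errors"), "/", info.get("warnings"))
```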






  2. Common causes of crawling or indexing failures




    1. robots.txt blocking Googlebot or disallowed paths




    2. Noindex meta tag or X-Robots-Tag header (a quick self-check sketch follows this list)




    3. Blocked by server (403/401) or slow/5xx responses




    4. Canonical tags pointing elsewhere or duplicate content




    5. Redirect chains or broken redirects




    6. Sitemap errors or missing updated URLs




    7. Crawl budget issues on very large sites




    8. Mobile usability problems or pages that fail to render properly for Googlebot (often accompanied by poor Core Web Vitals)




    9. URL parameters causing duplicates or crawl traps




    10. hreflang misconfigurations




    11. Manual action or security issues (check Search Console > Security & Manual Actions)
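The first few causes above can be screened quickly from your own machine, without waiting for a recrawl. The sketch below is a rough local check under assumptions (placeholder URL, raw HTML only, no JavaScript rendering); it reports the HTTP status, any X-Robots-Tag header, and any meta robots directive, and is not a substitute for the URL Inspection tool.

```python
# Minimal sketch: screen a URL for common indexability blockers —
# non-200 responses, an X-Robots-Tag header, and a meta robots noindex tag.
# The URL is a placeholder; this checks your server's response, not Google's view.
import re
import requests

URL = "https://www.example.com/some-page/"

resp = requests.get(URL, timeout=30, allow_redirects=True)

print("Final URL:   ", resp.url)
print("Status code: ", resp.status_code)                        # should be 200 for indexable pages
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "none")) # e.g. "noindex" blocks indexing

# Rough check for <meta name="robots" content="..."> in the raw HTML
# (assumes the name attribute comes before content; JS-injected tags are not seen here).
meta = re.search(
    r"<meta[^>]+name=['\"]robots['\"][^>]+content=['\"]([^'\"]+)['\"]",
    resp.text,
    re.IGNORECASE,
)
print("Meta robots: ", meta.group(1) if meta else "not present")
```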






  3. Step-by-step troubleshooting checklist




    1. Verify with URL Inspection:




      1. Confirm whether the URL is on Google and note the Last crawled timestamp.




      2. Open View crawled page to see rendered HTML and resources; ensure critical JS and CSS load.






    2. Check Coverage details:




      1. Identify why URLs are excluded (soft 404, redirected, blocked by robots, duplicate, canonicalized).




      2. For errors (server error, redirect error), fix the server or configuration and revalidate in Search Console.






    3. Inspect robots.txt and meta/X-Robots:




      1. Check the robots.txt report (Search Console > Settings > robots.txt) to confirm Google can fetch your robots.txt and to spot rule errors; test individual URLs against the rules locally (see the sketch after this step).




      2. Remove Disallow or noindex if indexing is desired.
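To test individual URLs against your live robots.txt locally, Python’s standard-library robot parser is enough. This is a small sketch with placeholder URLs; note that its wildcard handling can differ slightly from Google’s own parser.

```python
# Minimal sketch: test whether Googlebot is allowed to fetch specific URLs
# according to the live robots.txt. Standard library only; URLs are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for url in [
    "https://www.example.com/",
    "https://www.example.com/private/report.html",
]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'ALLOWED' if allowed else 'BLOCKED'}  {url}")
```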






    4. Review the sitemap:




      1. Ensure the sitemap is updated, valid, and contains canonical URLs.




      2. Resubmit the sitemap after bulk changes.






    5. Check server and logs:




      1. Confirm 200 responses for crawlable pages. Fix 5xx spikes and slow responses.




      2. Review server logs for Googlebot activity and frequency (see the log-check sketch after this step).
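Server logs are the definitive record of Googlebot fetches. The sketch below assumes an Apache/Nginx combined-format access log at a placeholder path: it lists requests claiming to be Googlebot and verifies each client IP with a reverse-then-forward DNS lookup, the method Google recommends for confirming genuine Googlebot traffic.

```python
# Minimal sketch: extract requests claiming to be Googlebot from a combined-format
# access log and verify each client IP via reverse DNS (the hostname should end in
# googlebot.com or google.com and forward-resolve back to the same IP).
# The log path and format are assumptions; adjust to your server.
import re
import socket

LOG_PATH = "/var/log/nginx/access.log"  # placeholder path
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3})')

def is_real_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check for genuine Googlebot."""
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False

with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "Googlebot" not in line:
            continue
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, ts, method, path, status = m.groups()
        verified = "verified" if is_real_googlebot(ip) else "UNVERIFIED"
        print(f"{ts}  {status}  {method} {path}  ({ip}, {verified})")
```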






    6. Resolve rendering and resource blocking:




      1. Allow Googlebot to fetch JS and CSS. Use the rendered screenshot in URL Inspection to verify.






    7. Fix redirects and canonical tags:




      1. Eliminate redirect chains; redirect directly to the final URL with a single 301 (see the sketch after this step).




      2. Ensure the canonical points to the intended indexable URL.
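Redirect chains are easy to audit from the command line. The sketch below follows a placeholder URL with the requests library and prints every hop with its status code, so chains and temporary (302/307) hops stand out.

```python
# Minimal sketch: follow a URL's redirects and print each hop, making redirect
# chains and non-301 hops easy to spot. The URL is a placeholder.
import requests

URL = "https://example.com/old-page"

resp = requests.get(URL, allow_redirects=True, timeout=30)

for hop in resp.history:  # every intermediate redirect response
    print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
print(f"{resp.status_code}  {resp.url}  (final)")

if len(resp.history) > 1:
    print(f"Redirect chain of {len(resp.history)} hops; point the first URL "
          "directly at the final destination with a single 301.")
```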






    8. Address mobile and performance issues:




      1. Fix mobile usability errors and improve LCP, CLS, and INP (INP replaced FID as a Core Web Vital); retest and request indexing.






    9. Handle duplicates and parameters:




      1. Use rel="canonical" or robots rules to prevent crawl waste; note that the legacy URL Parameters tool in Search Console has been retired.






    10. Use Request indexing sparingly:




      1. After fixes, use Request indexing on critical pages. For bulk updates, resubmit the sitemap and monitor.






    11. Monitor for manual actions and security issues:




      1. Check Security & Manual Actions in Search Console; resolve issues and submit a reconsideration request if needed.








  4. When to escalate




    1. Persistent 5xx errors or crawl timeouts: involve hosting or DevOps for server tuning and CDN configuration.




    2. Complex JavaScript rendering issues: have developers implement server-side rendering or dynamic rendering.




    3. Large-scale indexing gaps: analyze crawl budget, eliminate waste (duplicate pages, parameter traps), and consider splitting the site into multiple sitemaps.






  5. Metrics to monitor after fixes




    1. URL Inspection Last crawled updates




    2. Coverage index count and error trends




    3. Sitemap indexation ratio (indexed versus submitted)




    4. Crawl stats (pages per day, response bytes)




    5. Search performance: impressions, clicks, and ranking changes






  6. Quick fix examples




    1. robots.txt accidentally blocked: remove Disallow and test; then request reindexing.




    2. Noindex present: remove the meta noindex and request indexing.




    3. 5xx spikes: fix the server error, then revalidate affected URLs in Coverage.




    4. JavaScript resources blocked: allow resources and re-crawl; verify rendered content in URL Inspection.






  7. Recommended routine




    1. Weekly: monitor Coverage and Performance; check sitemap status.




    2. After updates: use URL Inspection for critical pages; resubmit the sitemap for bulk changes.




    3. Monthly: review Crawl stats, Core Web Vitals, and server logs.





