“Crawled – currently not indexed” in Google Search Console (usually in the Indexing > Pages report under “Excluded” or “Not indexed”) means:
Googlebot successfully crawled (visited and downloaded) your page, but Google has chosen not to add it to its search index right now. The page won’t appear in Google search results.
This is not necessarily an error; it’s often Google’s deliberate algorithmic decision. The “currently” part is key: the page may get indexed later if conditions improve, so there’s no need to panic or resubmit every URL. Google’s documentation explicitly states: “It may or may not be indexed in the future; no need to resubmit this URL for crawling.”
Common Reasons Why This Happens
Google prioritizes crawling and indexing high-quality, useful content. Pages get deprioritized or excluded if:
- Low-quality or thin content — Short, duplicate, auto-generated, or low-value pages (e.g., boilerplate text, poor E-E-A-T: Experience, Expertise, Authoritativeness, Trustworthiness).
- Duplicate or near-duplicate content — Similar pages across your site or compared to others (even if not exact copies).
- Poor user experience / technical issues — Slow loading, bad mobile usability, intrusive interstitials, too many ads, or render-blocking resources.
- Crawl budget limits — On large/new/low-authority sites, Google crawls but doesn’t index everything to avoid wasting resources (especially if your site is slow or has many low-value pages).
- Site structure problems — Weak internal linking (pages aren’t linked well from important pages), orphan pages, or no strong signals of importance.
- Content not meeting Google’s quality thresholds — Pages that are outdated or spammy, or that don’t satisfy search intent.
- Other factors — Algorithm updates (such as the helpful content system or core updates) sometimes cause de-indexing waves; temporary Google-side glitches also happen (rarely).
This status spiked for many sites in mid-2025 (e.g., after core updates), but it’s common and often fixable with the improvements below.
How to Investigate and Fix It
- Go to Google Search Console > Indexing > Pages
- Click “Crawled – currently not indexed” to see the list of URLs.
- Sort by last crawled date to spot patterns.
- Use URL Inspection tool (top of GSC):
- Enter a sample affected URL.
- Click “Test Live URL” → Check:
- Crawl status (should be successful).
- Indexing allowed? (No “noindex” tag or robots.txt block; a quick self-check is sketched after this list.)
- Mobile usability and Core Web Vitals.
- Final rendered page (does it look good?).
- If everything technical is fine → the problem is likely content quality.
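
For the “indexing allowed?” items above, here is a minimal self-check in Python (standard library only) that tests a URL for a robots.txt block, an X-Robots-Tag: noindex header, and a meta robots noindex tag. The sample URL is a placeholder, and the regex-based meta parsing is deliberately simplistic; treat it as a sketch, not a crawler.

```python
import re
import urllib.request
import urllib.robotparser
from urllib.parse import urlparse

GOOGLEBOT_UA = "Googlebot"

def indexing_allowed(url: str) -> bool:
    parsed = urlparse(url)

    # 1. robots.txt: a blocked page shows up under a different GSC status,
    #    but it's worth ruling out before blaming content quality.
    rp = urllib.robotparser.RobotFileParser(
        f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    )
    rp.read()
    if not rp.can_fetch(GOOGLEBOT_UA, url):
        print("Blocked by robots.txt")
        return False

    # 2. Fetch the page and look for "noindex" in both the
    #    X-Robots-Tag header and the <meta name="robots"> tag.
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        body = resp.read().decode("utf-8", errors="replace")

    if "noindex" in header.lower():
        print("noindex via X-Robots-Tag header")
        return False
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']*)["\']',
        body, re.IGNORECASE,
    )
    if meta and "noindex" in meta.group(1).lower():
        print("noindex via meta robots tag")
        return False
    return True

print(indexing_allowed("https://example.com/some-page"))  # placeholder URL
```

If this returns True for an affected URL, the exclusion is almost certainly a quality/priority call on Google’s side rather than a technical block.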
- Prioritize fixes (start with highest-impact pages):
- Improve content quality — Add depth, unique value, better structure (headings, images, internal links), update outdated info, target user intent. Make it helpful and authoritative.
- Remove or noindex low-value pages — Thin pages and tag/archive pages with little content → add a noindex meta tag or X-Robots-Tag header (if they shouldn’t be indexed). Note that a robots.txt disallow alone won’t deindex a page: Google has to be able to crawl the page to see the noindex.
- Strengthen internal linking — Link to these pages from high-authority pages (homepage, pillar pages).
- Improve site speed & UX — Optimize images, reduce redirects, fix Core Web Vitals issues.
- Fix duplicates — Use canonical tags properly, consolidate similar content.
- Submit a sitemap — Only include important, canonical URLs (200 OK, good content); avoid submitting junk pages. A small sitemap-builder sketch follows this list.
- Request indexing (sparingly) — For your best or most improved pages: URL Inspection > “Request Indexing”. Don’t spam it across hundreds of URLs; Google rate-limits these requests.
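
To illustrate the “only canonical, 200 OK URLs” rule from the sitemap step, here is a rough Python sketch that filters a hypothetical URL inventory and writes a minimal sitemap.xml. The candidate list, user agent, and regex-based canonical check are illustrative assumptions, not a production pipeline.

```python
import re
import urllib.request
from xml.sax.saxutils import escape

# Hypothetical inventory; in practice, pull this from your CMS or database.
candidate_urls = [
    "https://example.com/",
    "https://example.com/pillar-page",
    "https://example.com/tag/thin-archive",
]

def keep(url: str) -> bool:
    req = urllib.request.Request(url, headers={"User-Agent": "sitemap-builder"})
    try:
        with urllib.request.urlopen(req) as resp:
            if resp.status != 200:
                return False
            html = resp.read().decode("utf-8", errors="replace")
    except OSError:  # covers HTTPError/URLError (4xx/5xx, DNS failures, ...)
        return False
    # Drop pages whose rel=canonical points elsewhere (duplicates).
    m = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        html, re.IGNORECASE,
    )
    return m is None or m.group(1).rstrip("/") == url.rstrip("/")

entries = "\n".join(
    f"  <url><loc>{escape(u)}</loc></url>" for u in candidate_urls if keep(u)
)
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n</urlset>\n"
    )
```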
- After fixes:
- Monitor for weeks or months (re-crawling takes time); a small monitoring sketch using the URL Inspection API follows this list.
- Click “Validate fix” on the issue in GSC (it validates the whole group of URLs sharing that reason).
- Some pages index naturally as crawl budget allows; others may stay excluded if Google deems them low-value.
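
Rather than re-checking URLs one at a time in the GSC interface, you can watch index status programmatically with the Search Console URL Inspection API. The sketch below assumes google-api-python-client is installed and that you have OAuth credentials for a verified property; SITE_URL and PAGES are placeholders, and the response field names should be verified against the official v1 API reference before you rely on them.

```python
from googleapiclient.discovery import build

SITE_URL = "https://example.com/"             # your verified GSC property
PAGES = ["https://example.com/pillar-page"]   # URLs you improved

def coverage_states(credentials):
    """Yield (url, coverage state, last crawl time) for each watched page."""
    service = build("searchconsole", "v1", credentials=credentials)
    for url in PAGES:
        result = service.urlInspection().index().inspect(
            body={"inspectionUrl": url, "siteUrl": SITE_URL}
        ).execute()
        status = result["inspectionResult"]["indexStatusResult"]
        # coverageState is the human-readable status, e.g.
        # "Crawled - currently not indexed" or "Submitted and indexed".
        yield url, status.get("coverageState"), status.get("lastCrawlTime")
```

Run it on a schedule (e.g., daily from cron) and log the results; pages moving from “Crawled – currently not indexed” to an indexed state confirm the fixes are taking hold.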
If most of your site is affected (e.g., a sudden spike to thousands of URLs), check for recent site changes, algorithm impacts, or crawl errors elsewhere. For new sites and large e-commerce catalogs (e.g., localized page variants), this status is very common until authority builds.