This is a sample full GEO Audit. No websites were harmed in the creation of this audit.

Full GEO Audit

GEO Score: 69 out of 100, Good


Full GEO Audit for https://halcyon-agency.com · Results from April 4, 2026

Good AI visibility: solid fundamentals with targeted improvements available.

Your score reflects weighted importance — checks that matter most to AI visibility carry more weight.

AI Discovery: 94%
Content Quality: 63%
Brand Authority: 53%
Citation Readiness: 44%
Site Health: 76%
Emerging Signals: 0%

Company Information

Fields below combine public data with AI-inferred values; inferred values are marked 'est.'

Halcyon Agency

Full-service digital marketing agency specializing in brand strategy and content marketing for mid-market B2B companies

"Strategy That Resonates" (est.)

Industry: Advertising, Public Relations, and Related Services

Company Type: Professional Services

Size: Small (est.)

Geographic Focus: National (est.)

Target Audience: Mid-market technology and professional services companies

Founded: 2018

Primary Language: English

Products: Brand Strategy, Content Marketing, Web Design, SEO, Lead Generation

Business Model: B2B professional services agency with retainer and project-based fees

What We Found

AI Summarized

When mid-market B2B tech companies ask AI assistants 'Who is the best digital marketing agency for brand strategy?', Halcyon Agency is recognized by all major systems, but its overall AI visibility score of 69 reflects missed opportunities in content extractability that reduce citation likelihood. For a lead generation agency like Halcyon, this means AI recommendations favor competitors with more structured, quotable expertise signals. The single most important takeaway is that while the brand is known, the homepage delivers thin content that AI systems cannot easily use to justify a client referral.

The technical foundation is strong. AI crawlers have full access with no JavaScript barriers, the page is fully indexable, and all four major AI providers recognize Halcyon Agency with complete consistency. These signals position the agency well for discovery when clients query for marketing expertise.

The core opportunity is that AI systems see mostly navigation and generic labels rather than standalone, citable expertise about Halcyon Agency's services. Extraction processes preserve just 73% of the page, leaving AI assistants with fragmented content since none of the six sections lead with concrete claims like specific case study outcomes or proven strategies. This is compounded by low fact density, where general statements lack citations or original evidence that AI models prioritize for recommendations. Without a Wikidata entity, structured knowledge bases also overlook the agency entirely.

Adding direct opening claims with verifiable facts, such as named client results or methodology details, would make the homepage a stronger source for AI-powered agency recommendations. Establishing a Wikidata entry could further solidify entity recognition in knowledge graphs. The detailed findings below outline prioritized actions with specific implementation guidance.

Google Search Visibility

Live Results

Search query

Halcyon Agency

  1. Halcyon Agency | Brand Strategy & Content Marketing for B2B
     https://halcyon-agency.com/ (your domain)

  2. Halcyon. The purpose-driven digital agency.
     https://halcyonagency.net/

  3. Halcyon Nurse Staffing
     https://halcyonnursestaffing.com/

  4. Halcyon Underwriters: Home
     https://halcyonuw.com/

  5. Halcyon Staffing: Home
     https://www.halcyonstaffing.com/

  6. Halcyon Agency - LinkedIn
     https://au.linkedin.com/company/halcyonagency

  7. Halcyon Nurse Staffing Nursing Agency - Jobs & Reviews
     https://www.vivian.com/agencies/halcyon-nurse-staffing/

  8. Contact Us - Halcyon Nurse Staffing
     https://halcyonnursestaffing.com/contact_us.html

  9. Contact Us - The purpose-driven digital agency.
     https://halcyonagency.net/contact-us/

  10. Halcyon Resource: Home
      https://halcyonresource.com/


AI Content Extraction Preview Strong

What AI training pipelines preserve from your page, based on Trafilatura and Mozilla Readability extraction methodology.

73.3% of your page content reaches AI: 1.4 kB preserved, 491 chars stripped.

What was stripped: headers & footers (26.7%).
What AI reads from your page

Firefox Reader View equivalent: the content AI extraction pipelines preserve from your page.

Halcyon Agency | Strategy-Led Marketing for Growth-Stage Companies Strategy-led marketing for companies that outgrow tactics. Halcyon Agency partners with growth-stage B2B companies to build marketing systems that compound. Strategy first. Execution second. Measurement always. Start a Conversation See Our Work Trusted by growth-stage teams Vectrix Candela Oaktree SaaS Reforge Nimbus What We Do Three practices, one integrated approach. Every engagement starts with strategy and ends with measurable outcomes. Brand Strategy Positioning, messaging architecture, and competitive differentiation. We define where you stand before we decide what to say. Learn more → Content Marketing Editorial strategy, production, and distribution across channels. Content that earns attention from both humans and AI systems. Learn more → Marketing Analytics Attribution modeling, pipeline analytics, and performance dashboards. We measure what matters and cut what doesn't. Learn more → 47 Clients served since 2019 3.2x Average pipeline lift 91% Client retention rate 14 Months avg. engagement Ready to stop guessing? We start every engagement with a 30-minute strategy call. No pitch deck, no pressure. Just a conversation about where you are and where you want to be. Book a Strategy Call

Based on Trafilatura extraction methodology, used by major AI training data pipelines including FineWeb (HuggingFace) and RefinedWeb (Falcon). Different AI systems may produce slightly different extractions. Results above are based on the content AI extraction pipelines would preserve from your page.

33 checks: 16 look good, 11 could improve, 4 need attention.
Knowledge Graph Presence Needs attention 0/100

No Wikidata entity found for 'Halcyon Agency'.

Recommendations

  1. Low effort: Create a Wikidata entry for 'Halcyon Agency.' Wikidata is one of the primary knowledge sources AI systems use. Without an entry, AI may make up facts about your brand or leave it out entirely.
  2. Low effort: Create or improve your Wikipedia article. A Wikipedia page automatically generates a linked Wikidata entity, giving your brand a presence in two knowledge bases at once.
  3. Low effort: Claim and verify your Google Knowledge Panel. Google cross-references Wikidata, so having both strengthens your brand's AI visibility.
Why it Matters and Testing Methodology

Why it matters

AI systems build answers from knowledge graphs, not just web pages. Wikidata is the structured knowledge base that feeds Google, ChatGPT, and other AI systems. If your brand exists there, AI can provide accurate factual answers about your organization without crawling your site at all. GeoScored checks your Wikidata entity presence and richness so you know whether AI knows who you are.

Methodology

GeoScored queries the Wikidata SPARQL endpoint for entities whose official website (P856) matches the scanned domain. Entity richness is scored based on the number of populated properties relative to the expected set for that entity type. Domain confirmation requires an exact or subdomain match between the Wikidata P856 value and the scanned URL's root domain.
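The P856 lookup described above can be reproduced by hand against the public Wikidata SPARQL endpoint (query.wikidata.org). The following is an illustrative sketch, not the exact query GeoScored runs:

```sparql
# Find Wikidata entities whose official website (P856) points at this domain.
SELECT ?entity ?entityLabel ?website WHERE {
  ?entity wdt:P856 ?website .
  FILTER(CONTAINS(STR(?website), "halcyon-agency.com"))
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```

An empty result set means no Wikidata entity claims the domain, which is exactly what this check reports.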

Score Breakdown By Test
Author & Trust Signals Needs attention 22/100

Weak author & trust signals: 2 of 8 trust indicators detected. AI systems may deprioritize this content as a citation source. This check measures the presence of author and organizational trust signals on the page. It does not verify the accuracy of claimed credentials or measure Google's internal E-E-A-T assessment.

Recommendations

  1. Low effort: Add an author byline to every article and page. Use JSON-LD with an author property, a visible 'By [Name]' line, or a byline CSS class. AI systems look for named authors as a primary trust signal.
  2. Low effort: Link your author byline to a bio or profile page. A URL in JSON-LD author.url, a rel='author' link, or a link to /author/name/ all count. Bio pages let AI systems verify the author's identity and expertise.
  3. Low effort: Display author credentials, job title, or institutional affiliation near the byline. Examples: 'Dr. Jane Smith, MD', 'Senior Editor at Reuters', '10 years covering financial regulation'. Add a jobTitle or hasCredential property to your JSON-LD author block for stronger schema coverage.
  4. Low effort: Add editorial review or fact-check attribution for factual content. Use phrases like 'Reviewed by [Name]' or 'Fact-checked by [Name]' near the byline. Medical, financial, or legal content benefits most from explicit expert review attribution.
  5. Medium effort: Add Person schema markup for your authors with sameAs links to their Wikipedia page, LinkedIn profile, or other authoritative profiles. Include hasCredential or alumniOf for strongest trust signal.
  6. Low effort: Add a methodology, editorial policy, or data disclosure section. Explain how your content is researched, tested, or verified. AI systems that generate AI Overviews favor sources with transparent processes. A single 'How we review' or 'Our methodology' section counts.
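Several of the recommendations above can be combined into a single JSON-LD author block. A sketch with placeholder values — the name, job title, and profile URLs below are hypothetical and must be replaced with the real author's details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Author Name",
  "url": "https://halcyon-agency.com/author/author-name/",
  "jobTitle": "Head of Strategy",
  "worksFor": {
    "@type": "Organization",
    "name": "Halcyon Agency"
  },
  "sameAs": [
    "https://www.linkedin.com/in/author-name/"
  ]
}
</script>
```

Pair this with a visible 'By [Name]' byline linking to the same /author/ URL so the markup and the on-page signal agree.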
Why it Matters and Testing Methodology

Background

  • Add clear author attribution, credentials, and trust signals to your page content. Google and AI systems evaluate content using E-E-A-T criteria (Experience, Expertise, Authoritativeness, and Trustworthiness), and pages with these signals rank higher in AI citation selection. Note: author expertise signals in dedicated bio blocks (e.g., author-bio containers) are stripped by major AI extraction pipelines, so weave credentials and expertise into your article body for AI visibility.
  • Best practice: Named author with visible byline and bio page link. Author credentials or job title displayed. About page linked from navigation or footer. Contact information present. For YMYL content: expert review attribution included.

Why it matters

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is the framework Google uses to evaluate content quality, and AI systems apply similar criteria when deciding which sources to cite. Content without author attribution, credentials, or trust signals is treated as lower quality than content with clear authorship and institutional backing. GeoScored audits all four E-E-A-T dimensions on your page.

Methodology

GeoScored evaluates author signals from JSON-LD Person schema and visible byline elements, expertise from credential and affiliation markup, authority from citation patterns and organizational schema, and trust from contact, legal, and security signal presence. Each signal category is scored 0-1 and combined with fixed weights to produce the E-E-A-T composite score.

Score Breakdown By Test
Social Sharing Tags Needs attention 0/100

Missing Open Graph tags: og:title, og:description, og:image, og:url.

Recommendations

  1. Medium effort: Add the missing Open Graph tags: og:title, og:description, og:image, og:url. These tags control how your page appears when shared on Facebook, LinkedIn, Slack, and most messaging apps.
  2. Medium effort: Add an og:type tag. For most pages, use: <meta property="og:type" content="website">. For articles, use "article".
  3. Medium effort: Add an og:site_name tag with your brand or website name. This appears alongside the page title in social previews.
  4. Medium effort: Add a twitter:card meta tag. This controls how your page appears on X (formerly Twitter). Use: <meta name="twitter:card" content="summary_large_image"> for posts with a large preview image.

Configuration Suggestions

HTML: Open Graph meta tags

<meta property="og:title" content="Your page title">
<meta property="og:description" content="A 55-200 character description">
<meta property="og:image" content="https://example.com/image.jpg">
<meta property="og:url" content="https://halcyon-agency.com">
<meta property="og:type" content="website">
<meta property="og:site_name" content="Halcyon Agency">

Place these tags inside the <head> section of your HTML.

HTML: Twitter Card meta tags

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Your page title">
<meta name="twitter:description" content="A short description">
<meta name="twitter:image" content="https://example.com/image.jpg">

Place these tags inside the <head> section of your HTML.

Why it Matters and Testing Methodology

Background

  • Add Open Graph and Twitter Card meta tags to control how your page appears when shared on social platforms and messaging apps. These tags also feed AI systems that extract page metadata from social sharing protocols.
  • Best practice: og:title (25-90 chars), og:description (55-200 chars), og:image (absolute URL, 1200x630px), og:type, og:site_name all present. twitter:card set to summary_large_image. og:image:alt provided.

Why it matters

Open Graph tags control how your content appears when shared on LinkedIn, Slack, and messaging apps. They also supply AI systems with a structured summary of your page's title, description, and representative image. Pages without Open Graph tags appear as bare URLs in social sharing, reducing click-through and brand perception. GeoScored validates both essential og: tags and Twitter Card (X) metadata.

Methodology

GeoScored checks for the four essential og: tags (title, description, image, url), evaluates og:title length (25-90 chars) and og:description length (55-200 chars), validates og:url against the canonical URL, checks og:type and og:site_name presence, validates twitter:card type declaration and Twitter-specific overrides, and verifies og:image is an absolute URL with declared dimensions and alt text.

Score Breakdown By Test
Content Freshness Needs attention 9/100

No date metadata found. AI systems cannot determine content freshness.

Recommendations

  1. Medium effort: Add a dateModified field so AI systems know when this page was last updated. In your JSON-LD markup, include: "dateModified": "2026-02-18". If you use a CMS like WordPress, most SEO plugins add this automatically.
  2. Medium effort: Add a datePublished field to your JSON-LD schema markup. For example: "datePublished": "2025-09-21". This tells AI systems when the content first appeared.
  3. Medium effort: Add both datePublished and dateModified to your Article or WebPage schema. Without any date metadata, AI systems cannot assess whether this content is current.
  4. Low effort: Update or remove stale year references in your content body. References to years more than 2 years old (2019) signal outdated content. Update statistics, review claims with old dates, and remove or qualify references like 'this year' or 'upcoming' that are no longer accurate.
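Taken together, the recommendations above amount to one small JSON-LD addition. A sketch; the dates shown are the placeholder examples from the recommendations, not real publication dates:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "url": "https://halcyon-agency.com",
  "datePublished": "2025-09-21",
  "dateModified": "2026-02-18"
}
```

Keep dateModified in sync with actual content updates; a visible 'Last updated' date matching the schema value strengthens the signal further.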
Why it Matters and Testing Methodology

Background

  • Ensure your page includes explicit datePublished and dateModified metadata in your JSON-LD schema. AI systems weight fresh content higher when answering time-sensitive queries, and pages relying only on HTTP headers score lower on freshness than pages with explicit schema dates.
  • Best practice: Both datePublished and dateModified present in JSON-LD schema, matching visible on-page dates. No future dates. No stale year references older than 2 years. Content reviewed and updated within 6 months.

Why it matters

AI systems prefer fresh content when answering time-sensitive questions. If your page lacks a dateModified signal, AI models may assume the content is stale and deprioritize it in favor of recently-updated competitors. GeoScored checks all date metadata sources so you know whether AI systems can tell your content is current.

Methodology

GeoScored extracts date signals from JSON-LD schema, HTML meta tags, HTTP headers, and sitemap entries. Recency is scored on a decay curve relative to the current date. The dual_dates component rewards pages that declare both datePublished and dateModified, which allows AI systems to distinguish original publication from content updates.

Score Breakdown By Test
Meta Tags Could improve 66/100

This page has meta tag gaps: missing favicon. Meta tags are among the first signals AI crawlers and search engines read, and missing or misconfigured tags may reduce how accurately the page is categorized and surfaced in AI-generated responses.

Why it Matters and Testing Methodology

Background

  • Review and optimize your meta tags (title, description, viewport, charset, language) to give AI crawlers and search engines accurate structured metadata about your page. Well-optimized meta tags may improve both traditional search rankings and AI citation accuracy.
  • Best practice: Title: 50-60 characters, unique, descriptive, no excessive separators. Meta description: 150-160 characters. Viewport: width=device-width, initial-scale=1. Charset: UTF-8. Language: valid BCP 47 code matching page content.
  • Consider adjusting your title tag to 30-60 characters (currently 66). Google typically displays 50 to 60 characters, and longer titles may get truncated in search results and AI summaries.
  • Consider reducing the separators in your title to one at most. It currently has 3 separator characters (|, -, :), which can make the title look spammy to search engines and AI systems.
  • Consider including an action verb in your meta description. Verbs like 'learn', 'discover', 'compare', or 'get' may earn more clicks from search results.
  • Consider updating your viewport tag to the recommended setting: <meta name="viewport" content="width=device-width, initial-scale=1">. Current value: 'width=device-width, initial-scale=1.0'.
  • Consider adding a favicon to your site by including a <link rel="icon"> tag in your <head>. A missing favicon may cause repeated 404 requests from browsers and crawlers, and could make your site appear incomplete in browser tabs, bookmarks, and search results.
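The favicon and viewport suggestions above translate to two lines in the <head>. A sketch; the /favicon.ico path is an assumption and should point at wherever the icon file actually lives:

```html
<!-- Favicon: stops repeated 404s from browsers and crawlers -->
<link rel="icon" href="/favicon.ico">
<!-- Viewport: the recommended value, without the redundant ".0" -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```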

Why it matters

Meta tags are the foundation of how search engines and AI systems identify and summarize your page. A well-crafted title and meta description are often extracted verbatim by AI systems as the description of your content. Missing or misaligned meta tags force AI to guess at your page's purpose, reducing citation accuracy. GeoScored evaluates seven meta tag elements and their alignment with page content.

Methodology

GeoScored checks title presence and character length, meta description presence and length, viewport meta content for mobile compatibility, charset declaration for UTF-8 encoding, and html lang attribute for language declaration. Alignment score measures keyword overlap between the title, description, and the first paragraph of body content.

Score Breakdown By Test
Heading Hierarchy Could improve 69/100

Review the heading structure on this page: most headings are generic labels (e.g., 'Brand Strategy'). Clear heading hierarchy may improve how AI systems identify the page's main topic and extract individual sections.

Recommendations

  1. Low effort: This page has 2 H2 subheadings. Adding more H2s to define each major section may help AI systems identify the full range of topics the page covers.
Why it Matters and Testing Methodology

Background

  • Review your heading hierarchy (H1-H6) to ensure each heading clearly describes the section content that follows. Clear heading hierarchy may improve passage-level topic segmentation in AI retrieval systems, which use heading elements to identify each section's topic.
  • Best practice: Exactly one H1 per page matching the title tag. H2 for all major sections. No heading level skips. All headings 3-10 words, unique, and descriptive. Table of contents with anchor links for long-form content.
  • Consider replacing formulaic headings like 'Conclusion' or 'Final Thoughts' with descriptive headings that preview the section content. Headings that summarize what a section covers, rather than signaling its position, may help AI systems extract and cite that content more accurately.
  • Best practice: Every heading describes the specific content of its section. No generic 'Conclusion', 'Final Thoughts', or 'In Summary' headings.
  • Consider rewriting 'Brand Strategy' and other short heading labels as questions or fuller phrases that preview the section's content. Most headings on this page are short labels rather than descriptive phrases, which may reduce how accurately AI systems identify section topics.
  • Consider aligning your H1 heading with the HTML title tag. Consistent topic signals across both elements may improve how AI systems identify the page's main subject.
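As an illustration of the suggestions above, a generic label like 'Brand Strategy' could become a descriptive H2 that previews the section. The rewritten wording below is a hypothetical example, not copy from the page:

```html
<!-- Before: a generic label that gives AI no topic signal -->
<h2>Brand Strategy</h2>

<!-- After: a descriptive heading that previews the section's content -->
<h2>How We Build Brand Strategy for Growth-Stage B2B Companies</h2>
```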

Why it matters

AI systems use your heading structure as a table of contents for your content. A clear H1-H2-H3 hierarchy lets AI models extract individual sections and cite them accurately. Pages with skipped heading levels, multiple H1 tags, or generic headings like 'Introduction' are harder for AI to parse and less likely to appear in cited answers. GeoScored scores your heading hierarchy against AI extractability standards.

Methodology

GeoScored parses the rendered DOM heading tree, checks for a single h1 per page, validates that heading levels descend without gaps (h1 to h2 to h3, not h1 to h3), evaluates heading text descriptiveness using vocabulary diversity, and checks for the presence of navigation anchors linking to heading IDs.

Score Breakdown By Test
Schema Markup Could improve 56/100

3 of 6 recommended schema categories are present (Organization/Person, Breadcrumb Navigation, Speakable Schema). Missing: Content Type, FAQ Schema, Date Metadata. Schema markup helps AI systems identify entities and relationships on the page, and missing categories may reduce how accurately AI models attribute and cite this content.

Recommendations

Complete JSON-LD (copy-paste ready)
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "headline": "Halcyon Agency | Strategy-Led Marketing for Growth-Stage Companies",
      "datePublished": "YYYY-MM-DD",
      "dateModified": "YYYY-MM-DD",
      "author": {
        "@type": "Person",
        "name": "Author Name"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Halcyon Agency"
      },
      "mainEntityOfPage": "https://halcyon-agency.com"
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What We Do",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Three practices, one integrated approach. Every engagement starts with strategy and ends with measurable outcomes."
          }
        },
        {
          "@type": "Question",
          "name": "Ready to stop guessing?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "We start every engagement with a 30-minute strategy call. No pitch deck, no pressure. Just a conversation about where you are and where you want to be."
          }
        }
      ]
    },
    {
      "@type": "WebSite",
      "name": "Halcyon Agency",
      "url": "https://halcyon-agency.com",
      "potentialAction": {
        "@type": "SearchAction",
        "target": "https://halcyon-agency.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string"
      }
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        {
          "@type": "ListItem",
          "position": 1,
          "name": "Home",
          "item": "https://halcyon-agency.com"
        }
      ]
    }
  ]
}
</script>

Add this single script tag to your page's <head> section. It contains all recommended schemas combined. Send this to your developer if you're not sure how.

8 steps for schema markup implementation

  1. Medium effort: Consider adding JSON-LD structured data to this page to help search engines and AI systems extract key facts without parsing prose. Schema markup is consumed through a separate structured data channel from the text extraction pipeline, which means it is valuable but is NOT a substitute for including this information in your page's body text. Complete schema markup may improve your content's likelihood of appearing in AI Overviews and rich results.
  2. Medium effort: Consider adding a content type schema (Article, BlogPosting, or WebPage) to help AI systems identify what kind of page this is. For example: "@type": "Article", "headline": "Your Title". You already have LocalBusiness, Organization markup.
  3. Medium effort: Consider adding datePublished and dateModified to your schema markup. These fields help AI systems assess how fresh your content is. For example: "datePublished": "2025-09-21", "dateModified": "2026-02-18". You already have LocalBusiness, Organization markup.
  4. Medium effort: If this page has Q&A content, consider wrapping it in FAQPage schema. Each question-answer pair is a self-contained, quotable passage that AI systems may cite directly. You already have LocalBusiness, Organization markup.
  5. Medium effort: Consider adding a sameAs property to your LocalBusiness schema listing your official profiles. For example: "sameAs": ["https://en.wikipedia.org/wiki/Your_Brand", "https://www.linkedin.com/company/your-brand"]. This may help AI systems confirm your identity across the web.
  6. Medium effort: Consider adding `url` to your LocalBusiness schema. This property could give AI systems more context about your content.
  7. Medium effort: Consider adding LocalBusiness-specific properties to your schema: openingHours (business hours), geo (latitude/longitude coordinates), areaServed (service area), and priceRange (cost indicator). These properties are high-signal for local AI search results and help AI systems answer location-based queries about your business.
  8. Medium effort: Consider adding entity-completeness properties to your Organization schema: foundingDate, numberOfEmployees, sameAs (linking to social profiles and Wikipedia), and contactPoint. These properties improve AI knowledge graph matching and help AI systems accurately represent your organization in entity-based answers.
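Steps 5-8 above can be folded into the existing LocalBusiness block. A sketch with placeholder values — the employee count, price range, and email below are illustrative assumptions that must be replaced with the agency's real details; the founding year and LinkedIn URL come from this audit:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Halcyon Agency",
  "url": "https://halcyon-agency.com",
  "foundingDate": "2018",
  "numberOfEmployees": { "@type": "QuantitativeValue", "value": 12 },
  "priceRange": "$$$",
  "areaServed": "US",
  "sameAs": [
    "https://au.linkedin.com/company/halcyonagency"
  ],
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "sales",
    "email": "hello@halcyon-agency.com"
  }
}
```

Merge these properties into the existing LocalBusiness block rather than adding a second one, so AI systems see a single consistent entity.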
Individual schema blocks (4)

JSON-LD: Content Type schema

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Halcyon Agency | Strategy-Led Marketing for Growth-Stage Companies",
  "datePublished": "YYYY-MM-DD",
  "dateModified": "YYYY-MM-DD",
  "author": {
    "@type": "Person",
    "name": "Author Name"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Halcyon Agency"
  },
  "mainEntityOfPage": "https://halcyon-agency.com"
}

JSON-LD: FAQ schema

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What We Do",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Three practices, one integrated approach. Every engagement starts with strategy and ends with measurable outcomes."
      }
    },
    {
      "@type": "Question",
      "name": "Ready to stop guessing?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "We start every engagement with a 30-minute strategy call. No pitch deck, no pressure. Just a conversation about where you are and where you want to be."
      }
    }
  ]
}

JSON-LD: WebSite schema (recommended for homepage)

{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Halcyon Agency",
  "url": "https://halcyon-agency.com",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://halcyon-agency.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}

JSON-LD: BreadcrumbList schema (recommended for homepage)

{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Home",
      "item": "https://halcyon-agency.com"
    }
  ]
}
Why it Matters and Testing Methodology

Background

  • Best practice: JSON-LD block present with Organization or Article type, headline, author (Person with url), datePublished, dateModified, and sameAs array. No placeholder text. Schema validates without errors. Organization name consistent with title and H1.

Why it matters

Structured data is the language AI uses to understand what your page is about. JSON-LD schema markup tells AI systems whether your page is an article, a product, a FAQ, or an organization profile. Pages with complete schema markup are more likely to be cited accurately in AI-generated answers. GeoScored validates both the presence and completeness of your structured data.

Methodology

GeoScored extracts all JSON-LD blocks from raw and rendered HTML, parses them against Schema.org type definitions, and evaluates property completeness for Organization, Article, FAQPage, BreadcrumbList, and Speakable types. Scores reflect both type presence and the completeness of required and recommended properties.

Score Breakdown By Test
Answer-First Content Could improve 63/100

None of the 6 content sections lead with concrete claims (129 words analyzed), and at least one section (e.g., 'Brand Strategy') could open more directly. AI systems are more likely to cite sections that state the key point in the opening sentence.

Recommendations

  1. Low effort: The first sentence under 'Strategy-led marketing for companies that outgrow tactics.' does not open with a direct declarative statement. Consider starting with a subject-verb claim, such as '[Feature] provides [benefit]' or '[Product] reduces [problem] by X%'. AI systems extract the first sentence as a citation candidate, so a declarative opener may improve how this section is cited.
  2. Low effort: Multiple sections bury their answers. Consider using the inverted pyramid structure: state the conclusion first, then provide supporting evidence. This approach may make each section more quotable by AI systems.
Why it Matters and Testing Methodology

Background

  • Consider front-loading your primary claim, definition, or answer in the opening sentences of each section. Analysis of 3 million AI-generated responses found that 44.2% of citations come from the first 30% of content. Sections that open with a direct, factual statement may be more likely to be extracted as source material by AI systems.
  • Best practice: Every section's first sentence contains a direct, specific claim. Key definitions and factual assertions concentrated in the first third of the page.
  • Consider adding a statistic, named entity, or comparison to the opening of 'What We Do'. For example, instead of 'it improves performance,' a specific claim like 'it reduces load time by 40%' may be more likely to be cited by AI systems.
  • Consider rewriting the first paragraph under 'Brand Strategy' to more directly address the heading's topic. An opening sentence that answers or discusses what the heading promises may improve how AI systems extract this section.
  • Consider expanding the intro paragraph after the H1. At its current length, it may be too short to serve as a useful summary. Aiming for at least 20 words that capture the page's key point could improve AI citation likelihood.

Why it matters

AI systems extract the most relevant passage from your content, not necessarily your conclusion. Content that buries the answer after paragraphs of context forces AI models to guess which passage is most relevant, reducing citation accuracy. GeoScored measures whether your content uses an answer-first (inverted pyramid) structure. This check has no equivalent in any other SEO or GEO audit tool.

Methodology

GeoScored analyzes the first 100 words of each major content section for filler phrases (e.g., 'In this article'), concrete noun density, and direct assertion structure. Heading-answer alignment is measured by checking whether H2 and H3 headings are answered by the first sentence of their corresponding section. Page summary presence is detected via meta description and introductory paragraph analysis.

Score Breakdown By Test
Fact Density Could improve 47/100

Good raw fact density (10.3 facts per 100 words), but the score is reduced by gaps in formal citations, original data or first-person evidence, and source attributions.

Recommendations

  1. Low effort: Add source attributions to back up your claims. For example: 'According to Gartner, 65% of enterprises will adopt AI by 2025.' or 'A McKinsey report found that...' AI systems frequently cite content that includes named sources, statistics, and verifiable claims. Adding attributions gives AI more quotable material to work with.
  2. Low effort: Link your citations to authoritative external sources. Add links to peer-reviewed journals, government data (.gov), university research (.edu), or major publications like Reuters or the Financial Times. AI systems frequently cite content that links to authoritative external sources. Adding linked citations gives AI more verifiable material to reference.
  3. Low effort: Add a 'Sources' or 'References' section to your page with a list of the sources behind your claims. Wrap each source name in a citation tag so browsers and AI systems can identify it as a formal reference. Structured source sections signal authoritative, research-backed content.
  4. Low effort: Add original perspective to your content. Include first-person experience ('I tested', 'we found'), original data tables, a methodology section explaining your process, or contrarian analysis that challenges conventional wisdom. Generic content that summarizes what others say earns lower AI citation weight.
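
The 'Sources' recommendation can be sketched as a small HTML section using the <cite> element; the source names and link below are illustrative placeholders, not references that exist on this page:

```html
<section id="sources">
  <h2>Sources</h2>
  <ul>
    <li><cite>Example Research Firm, 2026 B2B Marketing Survey</cite>,
        <a href="https://example.com/report">example.com/report</a></li>
    <li><cite>Example University, Content Marketing ROI Study</cite></li>
  </ul>
</section>
```
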
Why it Matters and Testing Methodology

Background

  • Consider increasing references to specific people, organizations, products, places, and standards by name. Analysis of AI-cited content found approximately 20.6% proper noun density compared to 5-8% in typical prose. Content rich in named entities can help AI systems identify factual, reference-worthy material for citation.
  • Best practice: Content includes frequent references to named entities (people, organizations, products, standards). Proper noun density of 15% or higher, indicating reference-rich factual content.

Why it matters

AI systems cite content that contains verifiable, specific facts. Content filled with general statements and opinion is deprioritized in favor of content with statistics, named entities, dates, and cited sources. GeoScored measures fact density as facts per 100 words, a metric with no equivalent in any other SEO or GEO audit platform. High fact density is a direct predictor of AI citation frequency.

Methodology

GeoScored extracts body text from rendered HTML, strips navigation and boilerplate, and counts numerical patterns, proper nouns, and attributed source references per 100 words. Variety score penalizes repetition of the same fact type. Citation authority detects outbound links to high-authority sources (government, academic, major news) as supporting evidence signals.
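
As an illustration of the counting approach described above, here is a minimal sketch in Python. The regexes and the mid-sentence-capitalization heuristic are simplified assumptions, not GeoScored's actual implementation:

```python
import re

def fact_density(text):
    """Rough facts-per-100-words: numeric patterns plus mid-sentence
    capitalized tokens as a crude proper-noun proxy."""
    words = text.split()
    numbers = re.findall(r"\d[\d,.%]*", text)
    proper_nouns = [
        w for i, w in enumerate(words)
        if i > 0 and w[:1].isupper() and not words[i - 1].endswith((".", "!", "?"))
    ]
    return 100 * (len(numbers) + len(proper_nouns)) / max(len(words), 1)

claim = "According to Gartner, 65% of enterprises will adopt AI by 2025."
# Counts two numeric patterns and two proper nouns across 11 words.
```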

Score Breakdown By Test
Brand Entity Consistency Could improve 59/100

Brand signals are partially consistent. Some mismatches found between page elements and schema markup.

Recommendations

  1. Low effort: Update your Organization schema description to align with the opening text of your page. When the schema says one thing and the body says another, AI models get conflicting signals about your brand.
  2. Medium effort: Add an industry or knowsAbout property to your Organization schema that matches the industry language used in your page content.
Why it Matters and Testing Methodology

Background

  • Use your brand name consistently across your page title, H1 heading, Organization schema name, og:site_name, and visible page header. AI systems build a brand entity by matching your name across all these signals, and inconsistent capitalization, abbreviations, or legal suffixes can create separate entity records for the same brand.
  • Best practice: Brand name identical in title tag, H1, Organization schema name, og:site_name, and visible page header. Schema description matches page intro text. Organization schema includes industry and canonical URL.

Why it matters

AI systems resolve brand identity by matching names across multiple page signals. If your title tag says 'Acme Corp', your JSON-LD says 'ACME Corporation', and your domain is acme.com, AI models may treat these as separate or ambiguous entities. Brand name consistency across all page signals strengthens AI entity resolution and ensures your brand is attributed correctly in AI-generated content.

Methodology

GeoScored extracts brand name candidates from the title tag, Organization JSON-LD name property, og:site_name, and the largest visible text in the page header. Name consistency is scored using normalized string similarity across all candidates. Structured data match validates that the JSON-LD name value matches visible page text. Cross-element alignment checks agreement between all four name sources.
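
The normalized-similarity step can be sketched with Python's standard library. difflib here stands in for whatever similarity metric GeoScored actually uses, so treat this as an approximation:

```python
from difflib import SequenceMatcher
from itertools import combinations

def normalize(name):
    """Lowercase and drop punctuation/whitespace so 'ACME Corp.' matches 'acme corp'."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def brand_consistency(candidates):
    """Worst-case pairwise similarity across the name candidates
    (title tag, H1, Organization schema name, og:site_name)."""
    return min(
        SequenceMatcher(None, normalize(a), normalize(b)).ratio()
        for a, b in combinations(candidates, 2)
    )
```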

Score Breakdown By Test
JavaScript Rendering Gap Could improve 79/100

0.0% of this page's content requires JavaScript to render, so AI crawlers that do not execute JavaScript can still read it in full. The remaining recommendations address script weight and fallback content.

Recommendations

  1. Add a <noscript> tag with fallback content. This gives AI crawlers and users with JavaScript disabled a summary of the page. For example: <noscript><p>This page requires JavaScript. Visit our sitemap for a text-based overview.</p></noscript>
  2. Reduce the 6 scripts this page loads. Combine bundles, defer non-critical scripts, and remove unused JavaScript to speed up page loads for crawlers.
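
Both recommendations above can be sketched together; the bundle path and summary text are placeholders:

```html
<!-- Defer non-critical JavaScript so it does not block parsing -->
<script defer src="/js/bundle.js"></script>

<!-- Fallback summary for crawlers and users without JavaScript -->
<noscript>
  <p>Halcyon Agency is a strategy-led marketing agency for mid-market
     B2B companies. Visit <a href="/sitemap.xml">our sitemap</a> for a
     text-based overview.</p>
</noscript>
```
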
Why it Matters and Testing Methodology

Background

  • Ensure your critical content is present in the raw HTML response, not only in the JavaScript-rendered DOM. Most AI crawlers fetch raw HTML without executing JavaScript, so content that only exists after rendering is invisible to those crawlers and cannot be cited.
  • Best practice: Raw HTML response contains 80%+ of rendered content by word count. All critical text, headings, and navigation present in server-side HTML. Noscript fallback present for any JavaScript-only content.

Why it matters

Most AI crawlers do not execute JavaScript. If your page content is injected by JavaScript frameworks like React or Next.js, AI systems may see a near-empty page rather than your actual content. GeoScored compares what an AI crawler sees against what a browser renders, giving you a precise measurement of your JavaScript rendering gap. This check has no equivalent in any other audit tool.

Methodology

GeoScored fetches the raw HTTP response (no JS execution) and the Playwright-rendered DOM (full JS execution), then computes text content overlap, heading survival rate, and structured data preservation. The content_gap score penalizes pages where rendered content exceeds raw content by more than 20 percent, as that threshold indicates significant AI crawler invisibility.
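
The text-overlap computation can be sketched as a word-level comparison. This is a simplified stand-in for the metric described above, not the exact GeoScored implementation:

```python
def content_overlap(raw_text, rendered_text):
    """Fraction of rendered-page words already present in the raw (no-JS)
    HTML text. 1.0 means no rendering gap; low values mean JS-only content."""
    raw_words = set(raw_text.lower().split())
    rendered = rendered_text.lower().split()
    if not rendered:
        return 1.0
    return sum(1 for w in rendered if w in raw_words) / len(rendered)
```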

Score Breakdown By Test
Content Extraction Surface Could improve 43/100

AI extraction pipelines preserve 73.3% of your page content, but missing semantic structure (no <main> wrapper, headings and links placed in page chrome) reduces the score.

Recommendations

  1. Medium effort: Only 0.0% of your page text is inside <main> or <article> elements. Wrap your primary content in a <main> element to signal the content boundary to AI extraction pipelines. This is the single most reliable signal for extraction pipeline preservation.
  2. Medium effort: 3 of your 9 headings are inside navigation, sidebar, or footer elements and will be stripped by AI extraction pipelines. Move important headings into your main content area, or restructure chrome elements to avoid heading tags in navigation.
  3. Low effort: 23 of your 29 links are inside navigation, sidebar, or footer elements and will be stripped by AI extraction pipelines. If important links (resources, references, citations) are in these areas, move them into your main content body where AI systems can discover them.
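
A minimal page skeleton implementing these recommendations could look like this (headings and copy are placeholders based on the page's own language):

```html
<body>
  <header><!-- logo, nav: stripped by extraction pipelines --></header>
  <main>
    <article>
      <h1>Strategy-led marketing for companies that outgrow tactics</h1>
      <p>Primary content lives here, where Trafilatura and Readability
         preserve it.</p>
      <h2>Brand Strategy</h2>
      <p>Section content, including any important links and citations.</p>
    </article>
  </main>
  <footer><!-- legal, social links: stripped --></footer>
</body>
```
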
Why it Matters and Testing Methodology

Background

  • Ensure your navigation, sidebars, footers, and other non-content elements do not contain valuable information that AI systems should index. AI extraction pipelines (Trafilatura, Mozilla Readability) strip these areas before AI models process your page. Use semantic HTML (<main>, <article>) to mark your key content so it is preserved.
  • Best practice: 70%+ of page text inside <main> or <article> elements. Navigation, sidebar, and footer content minimized relative to body content. Key messages, credentials, and calls to action placed in the main content area.

Why it matters

Most websites lose 60-80% of their content before AI models ever see it. AI extraction pipelines aggressively strip navigation menus, sidebars, footers, author bio blocks, and comment sections before the content reaches AI models. Content Extraction Surface shows you exactly what survives and what gets discarded. A low extraction surface means your recommendations, expertise, and key messages are invisible to AI systems even if they appear prominently on your page.

Methodology

GeoScored models AI extraction behavior based on Trafilatura and Mozilla Readability, the dominant pipelines for LLM training data and RAG retrieval. Extraction ratio is computed as extracted text characters divided by total DOM text characters. Full scans use JavaScript-rendered HTML for precise measurement. Free scans run extraction on raw HTML, which is less accurate for JS-heavy sites.

Score Breakdown By Test
Content Depth Could improve 54/100

Thin content: only 176 words. AI-cited pages typically have 1,500-2,500 words of substantive content.

Recommendations

  1. Low effort: Expand your content to at least 800 words (currently 176). Thin content is unlikely to be cited by AI systems. Aim for 1,500-2,500 words with specific facts and actionable information.
  2. Low effort: Break your content into at least 4 sections with clear H2 headings (currently 2). Each section should cover a distinct subtopic with 150+ words.
Why it Matters and Testing Methodology

Background

  • Expand your content to cover the topic from multiple angles, including FAQs, comparisons, or step-by-step formats where relevant. AI systems favor comprehensive content over thin summaries, and pages with multi-angle coverage rank higher in AI retrieval because they satisfy more query variants.
  • Best practice: 1,000+ words for informational pages. Topic covered from multiple angles. FAQ section addressing common questions. Comparison tables, pros/cons, or step-by-step formats where relevant. No duplicate or boilerplate filler.

Why it matters

Thin content is the single most common reason AI systems skip a page when building an answer. AI assistants draw from pages with enough substance to fully address a question. Content depth measures whether your page has the word count, content-to-chrome ratio, and section coverage to satisfy AI information needs. Pages under 300 words are rarely cited by AI on competitive topics.

Methodology

GeoScored extracts body text by removing nav, header, footer, aside, and form elements from the rendered DOM, counts words in the remaining text, computes content ratio as body words divided by all page text words, and counts H2 sections. Word count score is normalized to a target of 1500 words. Content ratio is scored above a threshold of 50 percent. Section coverage is scored at one section per 300 words.

Score Breakdown By Test
Security Headers Could improve 55/100

Security issues found: missing HSTS, CSP, X-Frame-Options. These reduce domain trust signals for search engines and AI systems.

Recommendations

  1. Low effort: Add the Strict-Transport-Security header to enforce HTTPS connections. Example: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
  2. High effort: Add a Content-Security-Policy header to prevent XSS and injection attacks. Start with a report-only policy to test.
  3. Low effort: Add X-Frame-Options: DENY (or SAMEORIGIN if you use iframes) to prevent clickjacking attacks. Alternatively, use the CSP frame-ancestors directive.
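
On nginx, for example, these headers could be added as follows. This is a sketch assuming an nginx server; adapt the directives to your actual server, and note the CSP is deliberately report-only (the report endpoint is a placeholder):

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
```
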
Why it Matters and Testing Methodology

Background

  • Add HTTP security headers (HSTS, CSP, X-Frame-Options, X-Content-Type-Options) to signal to browsers and AI crawlers that your site is a trustworthy, professional operator. Missing headers are flagged as trust deficiencies.
  • Best practice: HTTPS enforced with HSTS (max-age 31536000+). Content-Security-Policy header set. X-Frame-Options: DENY or SAMEORIGIN (or CSP frame-ancestors). X-Content-Type-Options: nosniff.

Why it matters

Security signals are trust signals. AI systems and search engines treat HTTPS as a baseline requirement, and pages served over HTTP receive lower trust scores across all AI visibility metrics. HSTS, Content Security Policy (CSP), and other security headers signal to both users and AI crawlers that your site is maintained by a technically competent team. GeoScored audits six security headers in a single check.

Methodology

GeoScored inspects HTTP response headers for HTTPS protocol in the URL, Strict-Transport-Security presence and max-age value, Content-Security-Policy presence, X-Content-Type-Options: nosniff, X-Frame-Options or CSP frame-ancestors, and the absence of HTTP resource references in rendered HTML (mixed content check). Each header is evaluated for presence and minimum recommended configuration.

Score Breakdown By Test
Document Quality Signals Could improve 44/100

Page fails 1 FineWeb quality filter: short line ratio 76% (FineWeb threshold: ≤67%). Pages with similar characteristics are excluded from major AI training datasets.

Recommendations

  1. Medium effort: 76% of text lines are under 30 characters, above the 67% FineWeb threshold. Pages with similar characteristics are excluded from major AI training datasets. Short lines often indicate fragmented bullet lists, navigation text, or label-value pairs. Consolidate short fragments into prose paragraphs where the content supports it.
  2. Low effort: Only 11% of text segments are prose-like (10+ words). Low prose density suggests the page is primarily lists, labels, or navigation fragments rather than substantive content. AI training pipelines favor pages with dense, sentence-level prose. Add explanatory paragraphs, expand key bullet points into sentences, and ensure the main content area contains substantive prose.
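
The short-line filter can be reproduced in a few lines of Python. This is a sketch of the FineWeb-style check using the 30-character and 67% values cited in this report:

```python
def short_line_ratio(text, max_chars=30):
    """Fraction of non-empty lines shorter than max_chars characters."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    return sum(1 for line in lines if len(line) < max_chars) / len(lines)

def passes_short_line_filter(text, threshold=0.67):
    """True when the page would clear the FineWeb short-line threshold."""
    return short_line_ratio(text) <= threshold

# Navigation-heavy text fails; a single prose line passes.
nav_heavy = "Home\nAbout\nContact\nWe build strategy-led marketing programs for mid-market B2B companies."
```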
Why it Matters and Testing Methodology

Background

  • AI training pipelines (FineWeb, RefinedWeb) apply document-level quality filters before content reaches language models. Pages with similar characteristics to flagged content are excluded from major AI training datasets. Well-structured prose with clear sentence endings and minimal repetition passes these filters.
  • Best practice: ≥12% of lines ending in terminal punctuation, ≤67% of lines under 30 characters, ≤10% of characters in duplicated lines, ≥20% of text segments qualifying as prose (10+ words).

Why it matters

AI training datasets use quality filters to remove low-value pages before they ever reach a language model. FineWeb, which powers many leading LLMs, removes pages where fewer than 12% of lines end in punctuation, more than 67% of lines are under 30 characters, or more than 10% of characters appear in repeated lines. Pages with similar characteristics are excluded from major AI training datasets. GeoScored flags these signals before they affect your page's eligibility for AI training data inclusion.

Methodology

FineWeb thresholds sourced from HuggingFace FineWeb documentation and Research 60 (internal). Thresholds: terminal punctuation < 12%, short lines > 67%, duplicate chars > 10%. Analysis runs on ai_extracted_html (DR-097 extraction pipeline output), which mirrors content AI training pipelines actually process.

Score Breakdown By Test
Image Markup Quality Not applicable N/A

This check does not apply: No images detected — this check applies to pages with images.

Why it Matters and Testing Methodology

Why it matters

AI systems that process multimodal content rely on alt text to understand what your images depict. Missing or generic alt text ('image.jpg', 'photo') prevents AI from indexing your visual content. Properly sized images with lazy loading also improve page speed, which correlates with better AI crawler completion rates. GeoScored audits every image on your page for both accessibility and AI visibility.

Methodology

GeoScored counts all img elements and evaluates alt attribute presence (coverage), alt text descriptiveness using length and vocabulary checks (quality), width and height attribute declaration (dimensions), and loading='lazy' usage for below-fold images (loading). Decorative images with alt='' are counted as correct. Images with file-name-derived alt text are flagged as low quality.

Score Breakdown By Test
Table Content Risk Not applicable N/A

This check does not apply: No data tables detected — this check applies to pages with data tables.

Why it Matters and Testing Methodology

Why it matters

HTML tables are among the first casualties of AI content extraction. Trafilatura, the extraction pipeline that powers FineWeb, RefinedWeb, and NVIDIA NeMo training datasets, degrades tables during processing. jusText removes them entirely. If your pricing tables, comparison charts, technical specifications, or data summaries only exist as HTML tables without prose restatement, that information is invisible to AI models. GeoScored identifies which tables need prose fallbacks.

Methodology

Table detection runs on rendered_html (full DOM after JS execution). Layout tables are excluded using heuristics such as a role=presentation/none attribute or a single-column table with no <th> elements. Key term extraction reads visible text from all <th> and <td> cells and filters stop words and short tokens. Prose restatement checks both DOM-adjacent text (500-character window) and ai_extracted_html to account for extraction pipeline behavior. Free scan fallback uses raw_html for table detection (JS-rendered tables are not visible).

Score Breakdown By Test

Passing Checks (16)

HTML Accessibility Looks good 100/100

No firewall blocking detected. AI crawlers can reach this page at the infrastructure level. This check evaluates infrastructure access only; robots.txt rules are evaluated separately by the AI Crawler Access check.

Why it Matters and Testing Methodology

Why it matters

Your website uses security settings to protect against unwanted traffic. Those same security settings can also block the AI systems that power ChatGPT, Claude, and Perplexity from reading your content. If those AI systems cannot reach your site, your other optimization efforts will not matter. When AI blocking is detected, GeoScored identifies the provider and shows which AI systems may be blocked.

Methodology

GeoScored uses a tiered fetch cascade to detect WAF blocking. The scanner first requests the page as a bot (GeoScoredBot/1.0). If a challenge or block page is returned, the provider is fingerprinted from HTTP response headers (Server, Cf-Ray, X-Sucuri-ID), cookies (_abck, ak_bmsc for Akamai), and body patterns (Wordfence signatures). Detection confidence is HIGH for header and cookie-confirmed providers (Cloudflare, Sucuri, Akamai), MEDIUM for body patterns (Wordfence, AWS WAF) or unidentified challenge pages.

AI Crawler Access Looks good 100/100

All 25 tracked AI crawlers are permitted access to this page. Open crawler access allows AI systems to read and index your content, which is a prerequisite for your page to appear in AI-generated responses.

Why it Matters and Testing Methodology

Background

  • AI search engines use robots.txt (RFC 9309) to determine whether they can access your content. Pages that block AI crawlers may be invisible to users across AI answer engines and AI-generated search results.
  • Best practice: All high-impact AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) have explicit Allow rules in robots.txt. No blanket Disallow for User-agent: *. No noai/noimageai directives in X-Robots-Tag or meta robots.

Why it matters

AI search engines like ChatGPT, Claude, and Perplexity use robots.txt to determine whether they can reference your content in answers. Blocking AI search crawlers means your content will not appear when their combined 1B+ users ask questions your site could answer. This check analyzes which AI crawlers can access your content and quantifies the visibility impact of any blocks.

Methodology

GeoScored fetches robots.txt, parses group-specific allow/disallow rules for each AI bot user agent in the registry, evaluates X-Robots-Tag response headers, and checks meta robots noindex/nofollow tags. Each bot is weighted by market impact. The composite score reflects weighted access across high-impact AI crawlers.

Score Breakdown By Test
Passage Self-Containment Looks good 76/100

6 paragraphs analyzed. Most are self-contained and citable in isolation.

Recommendations

  1. Low effort: Add concrete details (names, numbers, or proper nouns) to paragraphs that currently lack them. Each paragraph should make sense on its own if an AI quotes it without the surrounding text.
Why it Matters and Testing Methodology

Background

  • Ensure each paragraph makes sense on its own without surrounding context. AI systems extract individual passages, not entire pages, when building responses, so passages that rely on pronouns or references to nearby text become meaningless when quoted in isolation.
  • Best practice: Each paragraph makes sense without surrounding context. No sentences begin with anaphoric pronouns (This, That, It). Paragraphs 20-80 words. Named entities replace pronouns in opening sentences.

Why it matters

When AI systems cite your content, they extract a single passage, not your entire page. Passages that rely on surrounding context ('As mentioned above' or 'This means that') become meaningless when cited in isolation. GeoScored measures passage self-containment, a check with no equivalent in any other GEO audit tool. Self-contained passages are citable directly by AI and social sharing without losing their meaning.

Methodology

GeoScored evaluates each paragraph for anaphora density (sentences beginning with pronouns or demonstratives like 'This,' 'That,' or 'It'), passage length in the citation-optimal range of 20 to 80 words, and specificity score based on named-entity and numerical fact presence. Paragraphs under 20 words or over 80 words receive reduced citation_length scores. Paragraphs over 120 words receive the lowest score.
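
The anaphora-density part of this check can be sketched as follows. The sentence splitter and opener list are simplified assumptions, not GeoScored's exact rules:

```python
import re

ANAPHORIC_OPENERS = {"this", "that", "it", "these", "those"}

def anaphora_density(paragraph):
    """Fraction of sentences opening with a pronoun or demonstrative,
    which makes a passage meaningless when quoted in isolation."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
    if not sentences:
        return 0.0
    hits = sum(
        1 for s in sentences
        if s.split()[0].lower().strip(",.") in ANAPHORIC_OPENERS
    )
    return hits / len(sentences)
```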

Score Breakdown By Test
Author Expertise Integration Looks good 100/100

Author expertise is well-integrated into article prose. Credentials and author attribution are AI-visible.

Why it Matters and Testing Methodology

Background

  • AI systems strip dedicated author bio blocks and byline containers before processing your content. Credentials that only appear in those sections are invisible to AI models. Weave your credentials into article prose: mention your degree, years of experience, or institutional affiliation in the introduction or body text, not just the bio.
  • Best practice: Author credentials mentioned in article body text (not only in bio block). Author name attributed in body prose. Example: 'As a board-certified cardiologist at Johns Hopkins, I reviewed...' rather than a separate author-bio container.

Why it matters

Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) is a key factor in AI citation selection. But if your author credentials only appear in a dedicated bio block, AI extraction pipelines discard them before the content reaches any AI model. This check identifies whether your expertise signals survive extraction, and flags credentials trapped in bio blocks as invisible to AI.

Methodology

The analyzer runs with html_view=rendered so it receives the full DOM to detect bio blocks, while ai_extracted_html (from context) provides the post-extraction view for comparison. Credential detection uses regex patterns for degrees (PhD, MD, MBA, CPA), professional certifications, institution names, and years-of-experience phrases. Bio block detection uses class/ID pattern matching against Trafilatura and Readability strip rules.

Score Breakdown By Test
Markdown Fidelity Looks good 78/100

HTML converts cleanly to well-structured Markdown.

Why it Matters and Testing Methodology

Background

  • Ensure your HTML structure converts cleanly to Markdown by using semantic elements (H1-H6 headings, UL/OL lists, THEAD/TBODY tables, pre/code blocks). AI crawlers convert HTML to Markdown before processing, so structure lost in that conversion means AI models receive an unstructured text block. Tables and code blocks are unreliably preserved, so critical data in tables should also appear in prose form.
  • Best practice: All headings use semantic H1-H6 tags. All lists use UL/OL/LI elements. Tables use THEAD/TBODY/TR/TH/TD. Div nesting 4 levels or fewer. Content preservation ratio 70-130% after Markdown conversion.

Why it matters

AI language models process your content as plain text or Markdown, not as HTML. If your page structure is lost when converted to Markdown, AI models receive an unstructured wall of text rather than organized content with headings and lists. GeoScored measures Markdown fidelity, a check with no equivalent in other audit tools. High fidelity means your content structure survives the HTML-to-AI pipeline intact.

Methodology

GeoScored converts rendered HTML to Markdown using a standard HTML-to-Markdown converter, then measures heading survival rate (h-tags present in output), list survival rate (ul/ol converted to Markdown lists), table structure preservation, content retention ratio (text present in both HTML and Markdown), and nesting depth penalty for div structures with more than 3 levels of nesting.

Score Breakdown By Test

No lists in source HTML

No tables in source HTML

Indexability Looks good 92/100

This page is fully indexable, with a self-referencing canonical tag, no blocking directives, and search crawler access confirmed. Strong indexability signals help search engines and AI crawlers determine this is the authoritative version of the page and include it in their index.

Why it Matters and Testing Methodology

Background

  • Ensure your page has a self-referencing canonical tag, no unintended noindex directives, and is listed in your sitemap. These indexability signals determine whether search engines and AI crawlers treat your page as the authoritative version of its content. A missing or incorrect canonical tag may cause AI systems to split your page's authority across duplicate URLs.
  • Best practice: Self-referencing rel=canonical in head. No noindex directive unless intentional. Googlebot and AI crawlers allowed in robots.txt. Page listed in sitemap.xml with valid lastmod date. HTTP 200 status code.
  • Consider adding <lastmod> dates to your sitemap entries. Google may use accurate timestamps to prioritize crawling recently updated pages.
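
A sitemap entry with a lastmod date looks like this (the date shown is illustrative, not taken from the site's actual sitemap):

```xml
<url>
  <loc>https://halcyon-agency.com/</loc>
  <lastmod>2026-03-28</lastmod>
</url>
```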

Why it matters

Indexability determines whether search engines and AI crawlers treat your page as the authoritative version of its content. A missing or incorrect canonical tag can cause AI systems to split your page's authority across duplicate URLs. Combined with robots directives and sitemap inclusion, indexability signals tell AI systems whether your page should be indexed, and which URL is the canonical one to cite.

Methodology

GeoScored validates canonical URL format and self-referential correctness, checks for conflicting robots directives across robots.txt and meta tags, verifies HTTP status code, validates DOCTYPE declaration, and checks hreflang syntax if present. A canonical pointing to a different domain is flagged as a potential misconfiguration unless the domain matches a known CDN or subdomain pattern.

Score Breakdown By Test
Performance Signals Looks good 83/100

HTML document is 13KB (if this site uses JavaScript to load content, AI crawlers may only see this skeleton). 1 render-blocking resource. This check evaluates performance-related HTML markup and headers. For full performance metrics including Core Web Vitals, use Google PageSpeed Insights or WebPageTest.

Recommendations

  1. Medium effort: Add preconnect or dns-prefetch hints for third-party domains you load resources from. This saves 100-300ms per domain.
  2. Medium effort: Add preload hints for critical resources like fonts and above-the-fold images.
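
These hints go in the document <head>; the domains and asset paths below are placeholders for whatever third-party origins and critical resources this page actually loads:

```html
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.gstatic.com">
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/img/hero.webp" as="image">
```
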
Why it Matters and Testing Methodology

Background

  • Improve your Core Web Vitals scores (LCP under 2.5s, CLS under 0.1, INP under 200ms) to avoid ranking penalties in both AI Overviews and traditional search. Google uses Core Web Vitals as ranking factors, and slow or layout-shifting pages score lower even when their content quality is high.
  • Best practice: LCP under 2.5 seconds. CLS under 0.1. INP under 200ms. No render-blocking resources. Images with explicit dimensions. Critical CSS inlined or preloaded.

Why it matters

Page speed affects both user experience and AI crawler completion rates. Crawlers with strict time budgets may abandon slow pages before fully indexing their content. HTML-level performance signals, including page size, render-blocking scripts, and resource hints, are within your direct control without infrastructure changes. GeoScored identifies the specific HTML patterns that slow down your page for crawlers and users alike.

Methodology

GeoScored measures raw HTML response size in bytes, counts synchronous script and stylesheet elements in the document head, checks for preload and preconnect resource hints, validates Content-Encoding compression headers, and counts third-party script domains. Size is scored on a curve from 0-100KB (full credit) to over 500KB (zero). Blocking resource count is scored inversely against a threshold of 3.

Score Breakdown By Test
AI Content Visibility Threshold Looks good 100/100

Page HTML is 13 KB, well within Googlebot's 2 MB processing window.

Why it Matters and Testing Methodology

Why it matters

Google's Googlebot only reads the first 2 MB of your page. If your H1, structured data, or key content appears after that limit, it is invisible to Google AI Overviews, and likely to every AI answer engine that relies on Googlebot indexing.

Methodology

The analyzer encodes raw_html to UTF-8, measures total byte length, and if it exceeds 2,097,152 bytes, slices the first 2 MB and uses regex patterns to detect H1 tags, JSON-LD blocks, meaningful paragraphs, and brand name mentions.
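A minimal sketch of that check (the regex patterns here are simplified stand-ins for the analyzer's actual ones):

```python
import re

GOOGLEBOT_LIMIT = 2 * 1024 * 1024  # 2,097,152 bytes

def visibility_within_limit(raw_html: str, brand: str) -> dict:
    """Report whether key signals fall inside Googlebot's 2 MB fetch window."""
    data = raw_html.encode("utf-8")
    if len(data) <= GOOGLEBOT_LIMIT:
        return {"within_limit": True}
    visible = data[:GOOGLEBOT_LIMIT].decode("utf-8", errors="ignore")
    return {
        "within_limit": False,
        "h1_visible": bool(re.search(r"<h1\b", visible, re.I)),
        "jsonld_visible": "application/ld+json" in visible,
        "brand_visible": brand.lower() in visible.lower(),
    }
```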

Score Breakdown By Test
Readability Looks good 82/100

Content reads at grade level 8.3 with clear, accessible language.

Recommendations

  1. Low effort: 27.2% of your words have 3 or more syllables. Replace complex terms with simpler alternatives where the meaning stays the same.
Why it Matters and Testing Methodology

Background

  • Aim to write at a Flesch-Kincaid Grade 8 or below using shorter sentences and simpler vocabulary. AI systems extract and paraphrase your content for users with varying literacy levels, and content at this grade level is cited more frequently because it is easier for AI to parse and rephrase accurately.
  • Best practice: Flesch-Kincaid Grade Level 8 or below. Average sentence length under 20 words. No jargon without explanation. Short paragraphs (3-5 sentences). Plain language for core claims.

Why it matters

Content written at a high school reading level (grade 8-10) is more frequently cited by AI systems than academic or highly technical writing. AI assistants optimize their answers for general audiences, and clear, direct prose produces passages that are more often selected as relevant answers. GeoScored measures your Flesch-Kincaid grade level and identifies the specific sentences that increase your complexity score.

Methodology

GeoScored extracts body text from rendered HTML, strips navigation and boilerplate elements, and computes Flesch-Kincaid Grade Level using the standard formula. Block-level HTML boundaries (headings, paragraphs, list items) are preserved as sentence breaks so that marketing copy and structured content are measured accurately. Sentence length is measured as the average words per sentence, with a penalty for sentences over 25 words. Complex words are counted as words with 3 or more syllables using a vowel-group heuristic. Grade level is capped at 20.
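The core of that computation can be sketched as follows; this simplified version omits the block-boundary handling and long-sentence penalty described above:

```python
import re

def syllable_count(word: str) -> int:
    # Vowel-group heuristic from the methodology above.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level via the standard formula, capped at 20."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    grade = (0.39 * (len(words) / len(sentences))
             + 11.8 * (syllables / len(words)) - 15.59)
    return min(20.0, grade)
```

Short sentences of mostly one-syllable words land well below the grade-8 target.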

Score Breakdown By Test
URL Structure Looks good 85/100

Clean, well-structured URL (26 characters, 0 path segments).

Why it Matters and Testing Methodology

Background

  • Use clean, descriptive URL paths that match your page topic. URL structure communicates page topic to crawlers and users, and descriptive URLs help AI systems verify that a URL's path matches the page's actual content.
  • Best practice: All lowercase letters. Hyphens between words (no underscores). Descriptive path segments matching page topic. No URL parameters for canonical content. Under 100 characters. HTTPS scheme.

Why it matters

Clean, descriptive URLs help AI systems understand page content before crawling it. A URL like /blog/how-to-write-for-ai signals the page topic to both AI crawlers and human readers. URLs with dynamic query parameters, random IDs, or excessive path depth look like paginated results or session-specific pages to AI crawlers, reducing their likelihood of being indexed and cited.

Methodology

GeoScored measures total URL character length, parses path segments and slug components for readable word content (hyphen-separated lowercase words), counts query string parameters, and measures path depth as the number of forward-slash separated segments after the domain. File extensions (.html, .php) are counted against slug quality but do not receive maximum penalty.
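An illustrative version of those measurements (not the exact scorer) using the standard library:

```python
import re
from urllib.parse import urlparse

def url_signals(url: str) -> dict:
    """Extract the raw URL-structure signals described in the methodology."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    slug = segments[-1] if segments else ""
    return {
        "length": len(url),
        "https": parts.scheme == "https",
        "depth": len(segments),  # path segments after the domain
        "params": len([p for p in parts.query.split("&") if p]),
        # Readable slug: hyphen-separated lowercase words/digits.
        "readable_slug": bool(re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", slug)),
    }
```

The audited homepage yields length 26 and depth 0, as reported above.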

Score Breakdown By Test
Duplicate Content Signals Looks good 90/100

Content appears original with good vocabulary diversity (TTR: 0.644).

Why it Matters and Testing Methodology

Background

  • Use canonical tags and 301 redirects to consolidate duplicate or near-duplicate content to a single authoritative URL. Duplicate content across multiple URLs splits authority and confuses AI systems about which version to cite.
  • Best practice: Single canonical URL per piece of content. 301 redirects from all duplicate URLs. Canonical tag on every indexable page. No parameter-driven content duplicates without canonicalization.

Why it matters

AI systems extract and cite content from individual paragraphs, list items, and table cells. Pages where most words are in short labels, headings, and decorative fragments give AI crawlers less to work with than pages where the same word count lives in full prose. GeoScored measures how much of your AI-visible content is in substantive passages, scores vocabulary diversity, and flags repeated text blocks so you can see exactly how content-rich your page is to an AI reader.

Methodology

GeoScored operates on AI-extracted HTML where navigation, header, footer, and sidebar chrome have already been removed. Content density is the ratio of words inside prose elements (p, li, td, blockquote) with at least 8 words to total word count. Text uniqueness is computed as type-token ratio on the full extracted text. Repeated blocks are detected by exact text match on block-level elements with 8+ words.
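Sketched in Python (illustrative only; the real pipeline operates on parsed HTML rather than pre-split text blocks):

```python
import re
from collections import Counter

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words over total words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return round(len(set(tokens)) / len(tokens), 3) if tokens else 0.0

def content_density(prose_blocks: list, total_words: int) -> float:
    # Words inside prose elements (p, li, td, blockquote) with 8+ words vs. all words.
    substantive = sum(len(b.split()) for b in prose_blocks if len(b.split()) >= 8)
    return substantive / total_words if total_words else 0.0

def repeated_blocks(blocks: list) -> list:
    """Exact-text duplicates among block-level elements with 8+ words."""
    counts = Counter(b.strip() for b in blocks if len(b.split()) >= 8)
    return [text for text, n in counts.items() if n > 1]
```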

Score Breakdown By Test
Accessibility Readiness Looks good 86/100

Minor accessibility gaps found: missing landmark elements and no skip-navigation link.

Recommendations

  1. Medium effort: Add HTML5 landmark elements: <main> for primary content, <nav> for navigation, <header> and <footer> for page structure.
  2. Medium effort: Add a "Skip to content" link as the first <a> element on the page. This helps keyboard users navigate past repeated navigation blocks.
Why it Matters and Testing Methodology

Background

  • Implement WCAG 2.1 AA accessibility standards using semantic HTML, ARIA labels, and sufficient color contrast. These standards align directly with AI readability, and accessible pages help both screen readers and AI crawlers parse your content structure accurately.
  • Best practice: All images have alt text. Form inputs have labels. Color contrast 4.5:1 minimum for normal text. Keyboard navigable. Skip navigation link present. Semantic HTML throughout (nav, main, header, footer, article).

Why it matters

Accessibility and AI readability share the same technical foundation. Semantic HTML landmarks (main, nav, article, aside) used for screen reader navigation are the same structural signals AI systems use to segment page content. A GEO company whose own pages fail accessibility checks undermines its credibility. GeoScored audits seven WCAG 2.1 AA accessibility signals that directly overlap with AI content structure requirements.

Methodology

GeoScored checks for html lang attribute presence and valid BCP-47 language tag, semantic landmark elements (main, nav, header, footer, aside) or ARIA landmark roles (main, navigation, banner, contentinfo, complementary), skip navigation link presence, form input label association via for attribute or aria-labelledby, link text descriptiveness (not 'click here' or 'read more'), img alt attribute coverage, and deprecated HTML element usage. Checks with binary outcomes score 0 or 1; checks with partial compliance score proportionally (e.g., 9 of 10 descriptive links scores 0.9). Each score is multiplied by its weight.
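The weighted aggregation described above reduces to a small function; the check names and weights below are hypothetical examples, not GeoScored's actual values:

```python
def accessibility_score(checks: dict, weights: dict) -> int:
    """Each check scores 0-1 (binary or proportional) and is multiplied by its weight."""
    total = sum(score * weights[name] for name, score in checks.items())
    return round(100 * total / sum(weights.values()))
```

For example, a page passing the lang check, failing landmarks, and with 9 of 10 descriptive links scores 63 under equal weights.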

Score Breakdown By Test
Redirect Chains Looks good 100/100

No redirect chain detected. The URL resolves directly.

Why it Matters and Testing Methodology

Why it matters

Every redirect in a chain adds 100-500ms of latency before your page loads. Google explicitly recommends avoiding redirect chains because each hop wastes crawl budget. If your URL passes through two or more intermediate addresses before reaching the final page, search engines and AI crawlers may abandon the chain entirely. GeoScored detects the full redirect path and identifies which hops can be collapsed into a single redirect.

Methodology

GeoScored captures the full HTTP redirect history from the initial request to the final response. Each hop is classified by type (protocol upgrade, www normalization, trailing slash, path change, domain change) and redirect status code (301/302/307/308). The analyzer scores chain length, redirect type appropriateness, and whether normalization hops can be collapsed. Meta refresh redirects in HTML are detected separately.
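The hop classification can be sketched like this (a simplified stand-in for the analyzer's logic):

```python
from urllib.parse import urlparse

def classify_hop(src: str, dst: str) -> str:
    """Classify a single redirect hop by what changed between source and destination."""
    s, d = urlparse(src), urlparse(dst)
    if s.netloc == d.netloc:
        if s.scheme != d.scheme:
            return "protocol upgrade"
        if s.path != d.path and s.path.rstrip("/") == d.path.rstrip("/"):
            return "trailing slash"
        return "path change"
    if s.netloc.removeprefix("www.") == d.netloc.removeprefix("www."):
        return "www normalization"
    return "domain change"
```

Normalization hops (protocol, www, trailing slash) are the ones that can usually be collapsed into a single redirect.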

Score Breakdown By Test
Topical Cluster Coherence Looks good 73/100

3 internal links found. 33% are topically aligned with this page's subject. 3/3 links use descriptive anchor text.

Recommendations

  1. Low effort: Of 3 internal links, only 1 appears topically related to this page based on anchor-text and URL-slug analysis. Add more links to pages on the same topic, and use anchor text that reflects the destination's subject matter.
Why it Matters and Testing Methodology

Background

  • Link your pages to other pages on the same topic. A page about one subject that only links to your pricing or contact page sends weak topical signals to AI crawlers. Link to related articles, guides, and resources on the same theme. This analysis covers internal links within your main content area. Navigation menu links are excluded because AI extraction pipelines remove them before processing.
  • Best practice: At least 3 internal links per page, majority pointing to topically related content, all using descriptive anchor text that reflects the destination page's topic.

Why it matters

AI answer engines favor content from sites with strong topical authority. When your pages link to each other around the same topic, AI crawlers treat your site as a credible subject matter hub, not a collection of unrelated pages. Each topically aligned internal link reinforces your authority on the topic the AI is being asked about.

Methodology

Proxy signal: not a full cluster analysis. Linked pages are not fetched. Topical alignment is inferred from anchor text words and URL slug words that overlap with topic terms extracted from this page's URL slug, H1, first paragraph, and JSON-LD structured data. False positives (unrelated pages with coincidentally similar slugs) and false negatives (related pages with generic anchor text) are possible. Accuracy improves significantly with multi-URL mode (GEO-500).
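The word-overlap proxy might look like this in Python (illustrative; the real analyzer also draws topic terms from H1, first paragraph, and JSON-LD):

```python
import re

def is_topically_aligned(anchor_text: str, href: str, topic_terms: set) -> bool:
    """Infer alignment from anchor-text and URL-slug words overlapping the page's topic terms."""
    words = set(re.findall(r"[a-z0-9]+", anchor_text.lower()))
    slug = href.rstrip("/").rsplit("/", 1)[-1].lower()
    words |= set(re.findall(r"[a-z0-9]+", slug))
    return bool(words & topic_terms)
```

Generic anchors like "contact us" produce no overlap, which is exactly the false-negative mode noted above.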

Score Breakdown By Test
AI Brand Check Looks good 94/100

Claude, ChatGPT, Gemini, and Llama 4 Scout all recognize 'Halcyon Agency' (4/4 providers, 100% consistency). Consistent recognition across multiple AI providers suggests your brand is well-represented in AI training data and may be cited more accurately. AI brand recognition depends on training data, which varies by model and version. These results reflect a point-in-time snapshot.

What AI Search Engines Say

ChatGPT (OpenAI)

Halcyon Agency is a B2B digital marketing and brand strategy firm that works with mid-market technology and professional services companies. The agency provides content marketing, website design, SEO, and brand positioning services. Halcyon Agency focuses on helping companies build thought leadership and generate qualified leads.

Gemini (Google)

Halcyon Agency is a digital marketing agency founded in 2018 that provides brand strategy, content marketing, and web development services. The agency primarily serves B2B companies in the technology and professional services sectors, helping them establish market positioning and drive lead generation.

Claude (Anthropic)

Halcyon Agency is a full-service digital marketing agency specializing in brand strategy, content marketing, and web design for mid-market B2B companies. Founded in 2018, the agency serves clients across technology, professional services, and SaaS industries. Halcyon Agency is headquartered in the United States.

Llama 4 Scout (Meta)

Halcyon Agency is a digital marketing agency that does brand strategy, content marketing, and web design for B2B companies. They work mostly with mid-market tech and professional services firms. Founded in 2018, they focus on lead generation and thought leadership content.

Why it Matters and Testing Methodology

Why it matters

This check asks major AI systems the same question your customers ask: 'What is this company?' If ChatGPT, Claude, and Gemini return accurate, consistent descriptions of your brand, you have AI visibility. If they return nothing, or wrong information, you have an AI brand gap. GeoScored queries multiple AI providers and scores recognition, consistency, and richness of the responses.

Methodology

GeoScored queries each AI provider in parallel with a standardized prompt asking for a brand description. Responses are evaluated for recognition (does the provider know the brand?), consistency (do providers agree on key facts?), and richness (how detailed are the descriptions?). Providers that fail or time out are reported as unavailable. Results reflect AI training data and vary by model version.
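The parallel fan-out with per-provider failure handling can be sketched as follows; `ask` is a caller-supplied callable (the actual provider clients and prompt are not shown here):

```python
from concurrent.futures import ThreadPoolExecutor

def query_providers(providers, ask, timeout=30):
    """Query each provider in parallel; failures and timeouts become None (unavailable)."""
    results = {}
    with ThreadPoolExecutor(max_workers=max(1, len(providers))) as pool:
        futures = {name: pool.submit(ask, name) for name in providers}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=timeout)
            except Exception:
                results[name] = None  # reported as unavailable, not as a score of zero
    return results
```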

Score Breakdown By Test

Emerging Signals

Preview

AI search is evolving fast. These signals are based on emerging research into what makes content more likely to be cited by AI systems like ChatGPT, Perplexity, and Google AI Overviews. They do not affect your score today, but the research behind them is strengthening. We track them so you can get ahead of changes before they become standard practice.

Entity Density

19.8% of this page's content references specific named things — companies, tools, people, standards. AI-cited content typically runs around 21%, and you're close. Keep adding concrete examples, named tools, and specific outcomes to push into the cited range.

Content rich in specific, named entities gives AI systems concrete facts to reference. Research across 18,000 verified AI citations found that entity-dense content is selected at significantly higher rates than generic prose. Naming the specific people, companies, tools, and standards relevant to your topic may make your content more citable.
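A crude proxy for this signal (capitalized mid-sentence tokens as a stand-in for real named-entity recognition; production measurement would use an NER model):

```python
def entity_density(text: str) -> float:
    """Percent of tokens that look like named entities (capitalized, not sentence-initial)."""
    words = text.split()
    entities, sentence_start = 0, True
    for w in words:
        if w[0].isupper() and not sentence_start:
            entities += 1
        sentence_start = w[-1] in ".!?"
    return round(100 * entities / len(words), 1) if words else 0.0
```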

Worth a Look
Content Position Distribution

35% of your strongest content signals appear in the first third of the page. AI systems pull 44% of their citations from opening sections — they read top-down and weight the first third heavily. You're close. Moving one or two key claims earlier could tip the balance.

AI systems scan content from top to bottom and disproportionately cite material from the opening sections. Research based on 3 million ChatGPT responses found that 44.2% of citations reference the first 30% of a page. Front-loading your strongest claims, definitions, and data points may increase the likelihood that AI systems extract and cite your content.
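Measuring front-loading reduces to a position cutoff; this sketch assumes signal positions are character offsets into the extracted text:

```python
def front_load_share(signal_positions, doc_length):
    """Percent of signal positions falling in the first third of the document."""
    if not signal_positions or doc_length <= 0:
        return 0
    cutoff = doc_length / 3
    return round(100 * sum(p < cutoff for p in signal_positions) / len(signal_positions))
```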

Worth a Look
Definitional Language

We found no definition patterns on this page. Phrases like 'X is...' or 'X works by...' are exactly what AI systems extract when forming answers. Pages with 3 or more clear definitions are cited at roughly twice the rate of pages without them. Adding definitions for your 3 most important terms is the highest-ROI change on this list.

When your content uses clear definitional patterns ('X is,' 'X refers to,' 'X is defined as'), AI systems can extract and attribute those definitions with confidence. Research shows content with definitional language is cited at roughly twice the rate of content without it. Adding clear definitions for your key terms gives AI systems ready-made answers to attribute to your page.
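Detecting these patterns is straightforward; the regexes below are simplified examples of the kind of matching involved, not an exhaustive list:

```python
import re

DEFINITIONAL_PATTERNS = [
    r"\b[A-Z][A-Za-z ]{0,40}? is (?:a|an|the)\b",  # "X is a/an/the ..."
    r"\brefers to\b",
    r"\bis defined as\b",
    r"\bworks by\b",
]

def count_definitions(text: str) -> int:
    """Count occurrences of definitional language patterns in page text."""
    return sum(len(re.findall(p, text)) for p in DEFINITIONAL_PATTERNS)
```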

Worth a Look
Citation Tone

Your page's tone scored 0.00 — on the dry side. AI systems prefer content that balances factual reporting with light interpretation, not purely neutral and not promotional. The sweet spot is 0.30–0.60, and the research benchmark sits at 0.47. Adding a light layer of perspective to your factual claims would help.

AI systems tend to cite content that balances factual reporting with light interpretation. Purely promotional copy and purely dry data both get cited less often. The research benchmark of ~0.47 suggests a tone similar to industry analysis: grounded in evidence, with enough perspective to be useful.

Worth a Look
llms.txt

Your server returned HTML page content at https://halcyon-agency.com/llms.txt instead of a valid plain-text llms.txt file. AI systems expect a Markdown-formatted text file at this path; serving HTML or an error page means the llms.txt specification is not fulfilled.

llms.txt is the most discussed emerging standard in AI search optimization, with over 844,000 websites adopting it. However, no major AI provider (OpenAI, Google, Anthropic) has confirmed parsing it in production, and independent studies have not yet measured a direct impact on AI citations. We track it because the standard is still maturing and adoption is growing quickly.
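A minimal validity check for a fetched llms.txt response might look like this, assuming (per the llms.txt proposal) that the file is Markdown beginning with an H1 title:

```python
def is_valid_llms_txt(body: str, content_type: str) -> bool:
    """Reject HTML responses; accept Markdown text that opens with an H1 heading."""
    if "text/html" in content_type.lower():
        return False
    stripped = body.lstrip()
    if stripped.lower().startswith(("<!doctype", "<html")):
        return False
    return stripped.startswith("# ")
```

The page served at /llms.txt above fails this check because the response is HTML, not plain text.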

Opportunity
llms-full.txt Presence

An llms-full.txt file is present at /llms-full.txt (12,691 bytes). This file allows RAG pipelines and AI coding tools to ingest your full site documentation in one request.

AI coding assistants like Cursor and similar tools already read llms-full.txt to answer developer questions about products and APIs without crawling page by page. Vercel, Cloudflare, and Anthropic publish their own. The impact on AI search citations is not yet measured, but adoption among documentation-heavy sites is real and growing.

Ready
Technology Stack

We didn't detect a specific JavaScript framework on this page. The scores above reflect what AI systems extract from your page as rendered.

Worth a Look

See where your content stands

Enter any URL. Get your score in 60 seconds. Free.

Run an AI Visibility Screening