How It Works

AccessGuard Scanner evaluates your website against WCAG 2.1 AA — the accessibility standard that governs ADA compliance for both federal Title II deadlines and state laws like Colorado HB 21-1110. Here is exactly what the scan does, what it checks, and what it cannot catch.

Step by step

When you submit a URL, four things happen in sequence, usually in under 30 seconds:

  1. We render your page in a real browser. Chromium loads the page the same way a visitor would, including any content injected by JavaScript. This catches accessibility issues that simple HTML parsers miss.
  2. Rule-based checks run first. Our Tier 1 scan uses axe-core, the industry-standard accessibility engine built into Chrome DevTools and Microsoft Accessibility Insights. It evaluates 90+ deterministic criteria in milliseconds.
  3. AI analysis adds context. Our Tier 2 scan evaluates judgment calls that rule-based engines cannot make — whether alt text is actually descriptive, whether reading level fits the audience, whether link purpose is clear without surrounding context.
  4. Results compile into a report card. A letter grade (A–F), findings categorized by the four WCAG principles, and a specific fix recommendation for every issue.
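
To make step 4 concrete, here is a minimal sketch of how raw scan results could be grouped under the four WCAG principles. The input mirrors the shape of axe-core results (each violation carries tags such as "wcag143" for success criterion 1.4.3, whose first digit names the principle); the grouping function itself is illustrative, not AccessGuard's actual code.

```typescript
// Illustrative sketch, not AccessGuard's real pipeline. Input objects mirror
// axe-core's violations: id, impact, WCAG tags, and a human-readable help string.
type AxeViolation = { id: string; impact: string; tags: string[]; help: string };

// The four WCAG principles, keyed by the first digit of the success criterion.
const PRINCIPLES: Record<string, string> = {
  "1": "Perceivable", "2": "Operable", "3": "Understandable", "4": "Robust",
};

// Group findings under the four principles for the report card.
function groupByPrinciple(violations: AxeViolation[]): Record<string, string[]> {
  const grouped: Record<string, string[]> = {};
  for (const v of violations) {
    // Find a tag like "wcag143" and read the criterion's first digit.
    const tag = v.tags.find((t) => /^wcag\d{3,4}$/.test(t));
    const principle = tag ? PRINCIPLES[tag[4]] ?? "Other" : "Other";
    (grouped[principle] ??= []).push(`${v.id}: ${v.help}`);
  }
  return grouped;
}

// Example: contrast (1.4.3) and missing alt text (1.1.1) both fall under Perceivable.
const report = groupByPrinciple([
  { id: "color-contrast", impact: "serious", tags: ["wcag2aa", "wcag143"], help: "Elements must meet minimum color contrast" },
  { id: "image-alt", impact: "critical", tags: ["wcag2a", "wcag111"], help: "Images must have alternate text" },
]);
```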

The two-tier approach

Tier 1 — Rule-based checks

Fast, deterministic, industry-standard. If a rule-based engine flags an issue, it is a real issue. These are the checks you can count on for baseline compliance:

  • Missing or empty alt text on images
  • Insufficient color contrast ratios (below 4.5:1 for normal text, 3:1 for large text)
  • Broken heading hierarchy — skipped levels, multiple H1s, orphan headings
  • Missing form labels or labels not programmatically associated
  • Generic link text (“click here”, “read more”) that loses meaning out of context
  • Missing page language attribute
  • Keyboard focus indicators that have been removed or suppressed
  • Invalid ARIA roles and attributes
  • Missing page title
  • Viewport scaling restrictions that prevent zoom
  • Auto-playing media without user controls
  • Missing skip-navigation link
  • Data tables without proper headers
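
The contrast check above is fully deterministic. A minimal sketch of the WCAG formula follows — simplified for illustration (a real engine must also handle alpha compositing and detect font size to choose the right threshold):

```typescript
// Relative luminance per the WCAG 2.1 definition: linearize each sRGB channel,
// then weight by the eye's sensitivity to red, green, and blue.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// 4.5:1 minimum for normal text, 3:1 for large text (WCAG 2.1 SC 1.4.3).
function passesContrast(fg: string, bg: string, largeText: boolean): boolean {
  return contrastRatio(fg, bg) >= (largeText ? 3 : 4.5);
}
```

For example, #777777 gray on a white background comes out just under 4.5:1 — it passes for large text but fails for normal text, which is exactly the kind of near-miss a scan catches that eyeballing does not.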

Tier 2 — AI-powered analysis

The nuance automation usually misses. Tier 2 reviews content that is technically valid but may still fail real users:

  • Alt text quality — is your alt text actually descriptive, or just image_01.jpg?
  • Content reading level — flagged against plain-language expectations, especially important for government-facing sites
  • Link purpose clarity in context — would a screen reader user know where a link goes from the link text alone?
  • Logical content structure — does the heading outline actually describe the content, or was it styled for appearance?
  • Plain-language suitability — writing aimed at a general audience, not subject-matter experts

Your report card explained

Every scan produces a letter grade from A to F. The grade reflects a severity-weighted count of findings across the four WCAG principles, so it captures both how many issues exist and how seriously they affect users.

Severity weighting

Not every issue is equal. We weight findings by how much they actually affect real users:

  • Critical (3×) — blocks major user groups entirely. Missing alt text on informational images, keyboard traps, form fields with no label.
  • Major (2×) — significantly hampers users. Low contrast ratios, broken heading hierarchy, missing page language.
  • Minor (1×) — affects some users in some contexts. Generic link text, missing skip-nav link on short pages.
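
The weighting above can be sketched as a small scoring function. The 3×/2×/1× weights come from the list; the letter-grade cutoffs below are illustrative assumptions, not AccessGuard's published thresholds:

```typescript
// Severity weights taken from the report-card description (3x / 2x / 1x).
type Severity = "critical" | "major" | "minor";
const WEIGHT: Record<Severity, number> = { critical: 3, major: 2, minor: 1 };

// Sum the severity-weighted findings into a single score.
function weightedScore(findings: Severity[]): number {
  return findings.reduce((sum, s) => sum + WEIGHT[s], 0);
}

// Hypothetical cutoffs mapping the weighted score to a letter grade.
function letterGrade(score: number): string {
  if (score === 0) return "A";
  if (score <= 4) return "B";
  if (score <= 9) return "C";
  if (score <= 19) return "D";
  return "F";
}
```

Under these assumed cutoffs, a clean scan grades A, while two critical, one major, and one minor finding (3 + 3 + 2 + 1 = 9) would land at C — the same count of four findings at minor severity would only reach B, which is the point of weighting.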

Fix recommendations

For every issue we find, the report includes the specific WCAG success criterion violated (e.g., 1.1.1 Non-text Content), the reference to ADA Title II regulations where applicable, and a concrete fix — not just “add alt text” but WHERE in your markup and HOW.
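
One way to picture a report entry: a record carrying the criterion, the regulation, the WHERE (a selector locating the element), and the HOW (the concrete change). Field names here are illustrative, not AccessGuard's actual schema:

```typescript
// Hypothetical shape of one report entry; the field names are illustrative.
interface FixRecommendation {
  criterion: string;      // WCAG success criterion, e.g. "1.1.1 Non-text Content"
  regulation?: string;    // ADA Title II reference, where applicable
  selector: string;       // WHERE: a selector locating the failing element
  fix: string;            // HOW: the concrete change to make
}

const example: FixRecommendation = {
  criterion: "1.1.1 Non-text Content",
  regulation: "28 CFR Part 35 (ADA Title II)",
  selector: "main img:nth-of-type(2)",
  fix: 'Add alt="Bar chart of 2024 permit applications by month" to the image.',
};
```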

What automated scanning cannot catch

Automated tools cover roughly 60–70% of WCAG success criteria. Being honest about that is important — no scanner, ours included, can certify full compliance on its own. These things require manual testing or human judgment:

  • Screen reader flow — how a VoiceOver or NVDA user actually experiences the page, which a static scan cannot simulate
  • Cognitive accessibility — whether the content is understandable to users with cognitive disabilities
  • Complex interactive widget behavior — custom dropdowns, modals, single-page app routing, drag-and-drop
  • Video and audio content quality — whether captions actually convey what is being said
  • Dynamic content changes — whether live regions announce updates appropriately
  • Touch target sizing on real devices — WCAG 2.1 recommends 44×44 CSS pixels (2.5.5 Target Size), but real-device testing is more reliable

If you need comprehensive coverage, pair AccessGuard Scanner with manual testing by accessibility professionals. Castle Rock Sky offers full audits as a consulting service.

Ready to see where your site stands?
