
A WCAG 2.1 AA accessibility audit evaluates whether a government website allows people with disabilities to perceive, navigate, and complete digital services.

Most public agency accessibility audits fail before they produce a single finding.

Not because the tools are wrong. Not because the people running them don't know what they're doing. They fail because they are scoped as technical exercises rather than governance tools… and those are fundamentally different things with fundamentally different outputs.

A technical accessibility audit tells you what is broken. A governance-focused accessibility audit tells you where you are exposed, what the exposure costs you if it surfaces in an enforcement context, what needs to be fixed first and why, and how to build the documentation that makes your remediation program defensible.

Public agencies need the second kind. Most are getting the first kind… or worse, they're running automated scans, generating a report full of color contrast flags and missing alt text counts, declaring the audit complete, and building remediation plans around findings that represent maybe 30 percent of their actual exposure.

This guide is about how to do it right. Eight steps, grounded in the operational reality of public sector digital services, structured to produce outputs that hold up when they need to.

 

1. Define Scope Based on Public Service Impact, Not Page Count

The first mistake most agencies make when approaching an accessibility audit is treating it like a web crawl. They point a tool at their domain, let it index every URL, and end up with a spreadsheet of thousands of findings across hundreds of pages — most of which represent low-risk cosmetic issues on pages that a small fraction of residents ever visit.

That is not a useful starting point. It is a paralysis engine.

A governance-focused audit scopes strategically, prioritizing the surfaces where accessibility failures create the most exposure — legally, operationally, and in terms of actual impact on residents trying to access public services.

Scope your audit around:

Global templates first. Header, footer, navigation, form patterns, modal windows, and any other component that is shared across your site. Template-level accessibility issues are the highest-leverage audit target because they replicate across every page that inherits from the template. A keyboard navigation failure in your global navigation is not one issue. It is an issue on every page of your site. Fix it at the template level and you resolve it everywhere simultaneously.

Core transactional workflows. Every workflow a resident must complete to access a government service — permit applications, business license renewals, tax payments, utility services, public records requests, inspection scheduling, event registrations. These are not content pages. They are interactive systems. And they are the surfaces that receive the most scrutiny in ADA Title II enforcement proceedings because they represent the core function of government digital services. An agency that has clean homepage contrast ratios but an inaccessible permit application has its priorities exactly backwards.

High-traffic public-facing pages. The pages residents actually visit — service landing pages, contact pages, department home pages, emergency information pages. Prioritize by traffic volume, not by how recently they were redesigned.

Public document sampling. A representative sample of your most-accessed PDFs and downloadable documents — meeting agendas, public notices, permit forms, annual reports. Documents are audited differently from web pages and require a separate evaluation framework, but they belong in the initial scope.

Third-party integrations. Every embedded tool a resident interacts with: payment portals, GIS mapping systems, scheduling platforms, chat widgets, public records databases. These are part of your compliance obligation whether you built them or not.

Scope discipline is what makes an audit actionable. An unfocused crawl of your entire domain produces a report no one can prioritize. A strategically scoped audit produces a findings document that tells you exactly where to start and why.
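The scoping priorities above can be reduced to a simple ordering rule: audit category first, traffic second. A minimal sketch in Python — the category names, precedence values, and visit counts are illustrative assumptions, not data from any real agency:

```python
# Precedence reflects the scoping logic above: template-level issues replicate
# site-wide, so they are audited before individual pages or vendor tools.
CATEGORY_PRECEDENCE = {
    "global-template": 0,      # shared header, footer, nav, form patterns
    "core-workflow": 1,        # permits, payments, records requests
    "high-traffic-page": 2,    # service landing pages, emergency info
    "document-sample": 3,      # most-accessed PDFs and downloads
    "vendor-integration": 4,   # payment portals, GIS, scheduling, chat
}

def audit_order(targets):
    """Sort audit targets by category precedence, then by traffic (descending)."""
    return sorted(
        targets,
        key=lambda t: (CATEGORY_PRECEDENCE[t["category"]], -t["weekly_visits"]),
    )

targets = [
    {"name": "Parks events calendar", "category": "high-traffic-page", "weekly_visits": 1200},
    {"name": "Permit application workflow", "category": "core-workflow", "weekly_visits": 500},
    {"name": "Global navigation template", "category": "global-template", "weekly_visits": 9000},
    {"name": "Payment portal (vendor)", "category": "vendor-integration", "weekly_visits": 2400},
]

for t in audit_order(targets):
    print(t["name"])
```

The output puts the global navigation template first and the vendor portal last — the same sequencing a strategically scoped audit report should produce.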

 

2. Combine Automated and Manual Testing — and Understand What Each Actually Does

Automated accessibility testing tools are useful. They are also fundamentally limited in ways that matter enormously for public agencies evaluating their Title II exposure.

Here is the honest accounting: automated tools reliably detect a specific category of accessibility issues — things that can be identified through code analysis without human judgment. Missing alternative text attributes. Color contrast ratios below threshold. Duplicate element IDs. Improper heading hierarchy. ARIA attribute misuse. These are real issues worth finding.
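Two of those checks — missing alt attributes and duplicate element IDs — can be illustrated in a few lines using Python's standard-library `html.parser`. This is a deliberately minimal sketch of what "identifiable through code analysis" means; a real scanner such as axe-core, WAVE, or Pa11y evaluates hundreds of rules against a fully rendered DOM:

```python
from html.parser import HTMLParser

class BasicA11yScan(HTMLParser):
    """Toy illustration of automated-detectable checks:
    images without alt attributes and duplicate element IDs."""
    def __init__(self):
        super().__init__()
        self.findings = []
        self.seen_ids = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # An <img> with no alt attribute is flaggable without human judgment.
        if tag == "img" and "alt" not in attrs:
            self.findings.append("img missing alt attribute")
        # Duplicate IDs break the programmatic associations AT relies on.
        element_id = attrs.get("id")
        if element_id:
            if element_id in self.seen_ids:
                self.findings.append(f"duplicate id: {element_id}")
            self.seen_ids.add(element_id)

scanner = BasicA11yScan()
scanner.feed('<img src="detour.png"><div id="main"></div><div id="main"></div>')
print(scanner.findings)
```

Note what the sketch cannot ask: whether the alt text, once present, is meaningful, or whether keyboard focus behaves sensibly — which is exactly the gap the next section addresses.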

What automated tools cannot detect is the category of issues most likely to generate ADA complaints and enforcement attention: the failures that only become visible when a real user with a disability actually tries to use the system.

What automated tools miss:

A permit application form may pass every automated check — all fields have label attributes, all error states have associated text, the markup is technically valid. And that same form may be completely unusable for a screen reader user because the error announcements fire in the wrong order, the dynamic validation messages are not announced to assistive technology, and the "submit" button becomes unreachable after a validation failure traps keyboard focus inside a specific field.

None of that is detectable by an automated scan. All of it is a service barrier. All of it is Title II exposure.

Manual testing must include:

  • Keyboard-only navigation through every core workflow — complete the entire transaction using only a keyboard. Tab through every field. Navigate every dropdown. Submit the form. Handle an error state. Receive a confirmation. If any step in that sequence fails, residents who cannot use a mouse are blocked from completing that transaction.
  • Screen reader validation using actual assistive technology — NVDA and JAWS on Windows, VoiceOver on macOS and iOS. Confirm that every form field is announced with its label when focused. Confirm that error messages identify which field failed and what the correct format is. Confirm that dynamic content updates — progress indicators, inline validation, confirmation messages — are announced when they occur.
  • Focus order validation — tab through the entire page in sequence and confirm the focus moves in a logical order that follows the visual and content hierarchy. Focus that jumps randomly or skips interactive elements is a keyboard navigation failure regardless of what the automated scan reported.
  • Error state testing — intentionally trigger every error state in every form. Submit with required fields empty. Enter incorrectly formatted data. Exceed character limits. For each error, confirm that the error is announced to assistive technology, that the announcement identifies which field failed, and that the guidance for correction is specific and actionable.
  • Modal and dialog behavior — open every modal window, overlay, and dialog on the site. Confirm that focus moves into the modal when it opens, that the content behind the modal is not accessible while the modal is open, that the escape key closes the modal, and that focus returns to the triggering element when the modal closes. Modal focus management failures are one of the most common keyboard navigation failure patterns in public sector web environments.
  • Session and timeout behavior — if your transactional workflows include session timeouts, confirm that users receive a warning before the session expires, that the warning is announced to assistive technology, and that the session can be extended using keyboard only.
  • Dynamic content updates — any element that updates without a full page reload needs to announce those updates to assistive technology. Progress indicators, form validation messages, search results, map updates. Confirm each one with a screen reader.

The practical implication: an audit that relies solely on automated scanning is structurally incomplete. It will miss the failures that matter most. And when those failures surface in a complaint, the existence of an automated scan report will not constitute a defensible compliance program.

 

3. Audit Through the WCAG Principles, Not Just an Error List

WCAG 2.1 AA is organized around four principles: Perceivable, Operable, Understandable, and Robust. Structuring your audit findings around these principles rather than a flat error list does something important — it reveals patterns in how your digital environment fails, which is the information you need to make remediation decisions that address root causes rather than symptoms.

Perceivable

Information must be presented in ways all users can perceive regardless of sensory ability. In public sector environments, perceivability failures concentrate in a predictable set of areas.

Images and non-text content without alternatives. Every image that conveys information needs meaningful alternative text. Not a filename. Not "image." A description of what the image communicates. A photo of a construction detour needs alt text that conveys the detour route. A chart showing budget allocations needs a text alternative that conveys the data. An infographic summarizing a public health initiative needs a text version of the information the infographic contains.

Scanned documents. This is the most pervasive perceivability failure in local government. A board agenda scanned to PDF is a flat image. A screen reader sees nothing. Every piece of information in that document — the meeting agenda, the public comment procedure, the voting items, the financial reports — is completely inaccessible to residents who are blind or have low vision. Agencies post thousands of these. Many have no idea this is a compliance failure because the documents look fine on screen.

Video and audio content without captions or transcripts. Council meeting recordings. Public safety announcements. Instructional videos for permit applications. If it contains audio, it needs accurate captions. If it contains audio-only content, it needs a transcript. "Auto-generated" captions from video platforms are not sufficient — they are frequently inaccurate and do not meet the accuracy threshold required for WCAG conformance.

Color as the sole means of conveying information. Required form fields marked only with a red asterisk and no text label. Error states indicated only by a red border with no text announcement. Charts that differentiate data series only through color. All of these fail for users with color vision deficiencies and for users accessing content on screens or in environments where color is not reliably distinguishable.
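Contrast itself is one of the few perceivability requirements that reduces to arithmetic. WCAG 2.1 defines relative luminance per sRGB channel and a contrast ratio between 1:1 and 21:1; AA requires at least 4.5:1 for normal text and 3:1 for large text. A direct implementation of the spec's formula — pure black on pure white yields the maximum 21:1:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance for an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        # Piecewise linearization defined by the WCAG 2.1 formula.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, per WCAG 2.1 (1.0 to 21.0)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white
```

Remember the scope of this check: it verifies that text is legible, not that color is avoided as the sole carrier of meaning — a red error border with a passing contrast ratio still fails if no text conveys the error.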

Operable

Users must be able to navigate and interact with all digital content using different input methods. Keyboard operability is the primary benchmark. If a user cannot complete an action using only a keyboard, that action fails this principle.

The permit application test. Open your agency's most-used online permit application. Put your mouse across the room. Navigate the entire application from start to submitted confirmation using only the Tab key, Shift-Tab, Enter, Space, and the arrow keys. Can you reach every field? Can you select every option? Can you navigate the date picker? Can you submit the form? Can you read and respond to error messages? Can you receive and read the confirmation?

Most agencies that try this for the first time discover failures they did not know existed. CAPTCHA systems with no audio alternative. Date pickers built with custom JavaScript that cannot be navigated via keyboard. Multi-step forms where a validation failure traps keyboard focus inside a specific component with no escape. These are not minor usability inconveniences. They are complete service barriers for residents who rely on keyboard navigation — which includes people with motor disabilities, people using assistive technology, and people who simply cannot use a mouse.

Skip navigation links. Every page should have a skip navigation link as the first focusable element — a link that allows keyboard users to bypass the global navigation and jump directly to the main content. Without it, a keyboard user navigating a 40-item navigation menu must tab through every item on every page before reaching the content. For screen reader users, this is not inconvenient. It is exhausting and often prohibitive.

Visible focus indicators. Every interactive element on the page — links, buttons, form fields, dropdowns — needs a visible focus indicator that shows keyboard users where they are. Many agency websites suppress default browser focus styles for aesthetic reasons and fail to replace them with custom styles. The result is an invisible cursor for keyboard users navigating the page.

Understandable

Content and interface behavior must be clear, predictable, and usable under constraint. This is the principle that most directly reflects the quality of the user experience for people with cognitive disabilities, people using assistive technology, and people navigating unfamiliar systems.

Form labels and error messaging. Every form field needs a visible, programmatically associated label — not a placeholder that disappears when the user starts typing, not a visual label positioned near the field without a programmatic association. When a form submission fails, the error message must identify which field failed, what the problem is, and specifically how to correct it. "Error" is not a useful error message. "Please enter your phone number in the format 555-555-5555" is.

Consistent navigation. Navigation that changes structure, order, or labeling across different pages creates orientation failures for screen reader users and cognitive load for all users. Navigation must be consistent. Patterns must be predictable.

The tax payment portal scenario. A resident navigates to their county's online tax payment portal. They enter their parcel number and proceed to the payment screen. They enter their credit card information and click submit. The page reloads with a red border around several fields. No text explanation appears. No announcement is made to the screen reader. The resident — who is blind — has no idea which fields failed, why they failed, or what the correct format is. They try again. The same thing happens. On the third attempt, they give up and call the office.

That is not a minor usability friction. That is a resident being denied access to a government service because of an inaccessible error handling implementation. And it is one of the most common failure patterns in public sector transactional workflows.

Robust

Content must be compatible with current and future assistive technologies. This principle is where governance intersects most directly with technical implementation.

Valid, semantic HTML. Assistive technologies interpret your site's markup to build a model of the page that users navigate. Invalid HTML, improper use of semantic elements, and incorrect ARIA implementation produce unreliable or misleading models. A screen reader user navigating by landmark regions on a page where landmark structure is incorrect will have a fundamentally different experience than what the visual design suggests.

ARIA used correctly. ARIA — Accessible Rich Internet Applications — is a set of attributes that allow developers to communicate roles, states, and properties to assistive technology for dynamic content that HTML alone cannot describe. Used correctly, ARIA dramatically improves the accessibility of complex interactive components. Used incorrectly — which is common — ARIA actively breaks accessibility by overriding correct semantic information with incorrect programmatic information. The first rule of ARIA is "don't use ARIA if you can use a native HTML element instead." Many accessibility failures in public sector web environments are the result of incorrect ARIA implementation, not the absence of accessibility effort.

Vendor update monitoring. Vendor tools fail the robustness principle more often than internal code — because they update independently of agency oversight. A payment portal that was WCAG 2.1 AA conformant at implementation may fail robustness tests after the vendor's next release. Without recurring testing of vendor integrations after major updates, these regressions are invisible until a resident encounters them.

 

4. Classify Findings by Operational Risk, Not Just Technical Severity

Traditional accessibility audits classify findings as Critical, Major, or Minor based on the severity of the technical violation. That classification is useful as a starting point. It is not sufficient for public agencies making remediation decisions under resource constraints.

An inaccessible decorative icon on a low-traffic blog post may be classified as "Minor" by technical severity standards. An inaccessible required field on a permit application that 500 residents submit per week may also be classified as "Minor" if the technical violation itself is minor. Treating these as equivalent is a resource allocation failure.

Public agencies need a risk classification layer that sits on top of technical severity and incorporates operational impact.

The operational risk factors that matter:

Service criticality: Does this issue exist in a workflow that residents must complete to access a government service? Permit applications, tax payments, utility services, public records requests. Issues in these workflows are high operational risk regardless of technical severity classification.

Template scope: Does this issue exist in a shared template or global component? An issue that affects every page on the site is categorically different from an issue on one page, even if the technical severity is identical.

Traffic volume: How many residents encounter this issue per week? A failure on your agency's most-visited service page affects more people than the same failure on a page with minimal traffic.

Discoverability: How easily would this issue be identified by a resident filing a complaint or by an enforcement body conducting a review? Issues in primary navigation, on the homepage, and in core transactional workflows are more discoverable than issues in obscure corners of the site.

Systemic versus isolated: Is this a pattern that appears across multiple pages and workflows, or is it isolated to a single instance? Systemic patterns indicate underlying structural or governance problems that individual page fixes will not resolve.

When you layer these operational factors onto technical severity classifications, the remediation priority order changes significantly — and the result is a program that actually reduces exposure rather than one that generates impressive fix counts while leaving the most dangerous issues unaddressed.
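One way to picture the two-axis model is a score that starts from technical severity and multiplies in the operational factors above. The weights here are illustrative assumptions, not a standard — each agency should calibrate its own — but the example reuses the comparison from earlier in this step: a "Minor" decorative icon versus a "Minor" required field on a permit application submitted 500 times a week.

```python
# Sketch: technical severity adjusted by operational-risk multipliers.
# All weights are illustrative assumptions.
SEVERITY_WEIGHT = {"critical": 3, "major": 2, "minor": 1}

def operational_risk_score(finding):
    score = SEVERITY_WEIGHT[finding["severity"]]
    if finding["in_core_workflow"]:
        score *= 3        # service-critical transactions carry the most exposure
    if finding["template_level"]:
        score *= 2        # replicates across every page inheriting the template
    score *= 1 + finding["weekly_encounters"] / 1000   # traffic volume
    if finding["systemic"]:
        score *= 1.5      # cross-page pattern signals a governance problem
    return score

decorative_icon = {"severity": "minor", "in_core_workflow": False,
                   "template_level": False, "weekly_encounters": 5, "systemic": False}
permit_field = {"severity": "minor", "in_core_workflow": True,
                "template_level": False, "weekly_encounters": 500, "systemic": False}

print(operational_risk_score(decorative_icon))  # → 1.005
print(operational_risk_score(permit_field))     # → 4.5
```

Both findings share the same "Minor" severity, but the permit field scores several times higher — which is the reordering the two-axis model exists to produce.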

 

5. Audit Your Document Library as a Separate Discipline

PDF and document accessibility requires a different evaluation approach than web page accessibility, and most agencies dramatically underestimate both the volume and the complexity of their document exposure.

Web pages are code. PDFs are documents — and their accessibility depends on structural properties that are entirely separate from the visual presentation. A PDF can look perfectly formatted on screen while being completely inaccessible to assistive technology.

Document audit evaluation includes:

  • Tag structure: Is the document tagged? An untagged PDF has no structural information for assistive technology to interpret. Scanned PDFs are always untagged because they are flat images — no OCR processing means no text layer, which means no accessibility.
  • Heading hierarchy: Does the document use tagged headings (H1, H2, H3) to establish structure? Without heading structure, screen reader users cannot navigate by section — they must listen to the entire document sequentially.
  • Reading order: Is the reading order defined correctly? Multi-column layouts, sidebars, and complex visual arrangements frequently produce incorrect reading orders when exported to PDF without remediation — meaning assistive technology reads content in a sequence that does not match the visual or logical order.
  • Alternative text for images and figures: Charts, graphs, photographs, and diagrams embedded in documents need text alternatives just as they do on web pages.
  • Table structure: Data tables need programmatic header associations so the relationship between cells and headers can be communicated to assistive technology.
  • Accessible form fields: Interactive PDF forms need programmatically labeled fields, not just visual labels positioned near the fields.
  • Document language: The language of the document must be specified in the document properties so assistive technology uses the correct voice and pronunciation profile.

Document inventory and risk classification:

The practical reality at most public agencies is that a complete remediation of every historical document in the library is not achievable in a single compliance program. The document audit needs to produce not just a findings list but a risk-classified inventory that distinguishes:

  • Active service documents: Forms residents must complete, public notices, meeting agendas, permit instructions, regulatory requirements. These are the highest priority and need immediate remediation attention.
  • High-traffic informational documents: Annual reports, budget documents, environmental studies that receive significant public access. Medium-high priority.
  • Archival content: Historical documents that are rarely accessed and do not relate to current services. Lower priority, addressable on a longer timeline.

This classification is what makes a document remediation program sustainable — it focuses resources on the documents where inaccessibility creates the most impact and the most legal exposure.
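The three tiers above translate directly into a classification rule over the document inventory. A minimal sketch — the field names and the 100-downloads-per-month threshold are illustrative assumptions, not a mandated cutoff:

```python
def classify_document(doc):
    """Assign a remediation-priority tier to an inventoried document.
    Rules and thresholds are illustrative assumptions."""
    if doc["active_service"]:            # forms, notices, agendas, permit instructions
        return "1-immediate"
    if doc["monthly_downloads"] >= 100:  # high-traffic informational documents
        return "2-medium-high"
    return "3-archival"                  # rarely accessed historical content

inventory = [
    {"title": "Permit application form", "active_service": True,  "monthly_downloads": 800},
    {"title": "2019 annual report",      "active_service": False, "monthly_downloads": 250},
    {"title": "1998 meeting minutes",    "active_service": False, "monthly_downloads": 2},
]

for doc in inventory:
    print(doc["title"], "->", classify_document(doc))
```

The point of automating the tiering is scale: an agency with thousands of PDFs cannot hand-sort them, but it can maintain an inventory with these two fields and regenerate the priority list as usage changes.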

 

6. Include Every Vendor Tool in Your Audit Scope

This is the step most agency audits skip. It is also one of the most significant sources of unmanaged Title II exposure.

Public agencies do not operate a single codebase. They operate a digital ecosystem — their own CMS, plus a payment processor, a permit portal built by a third-party vendor, a GIS mapping tool, a scheduling system, a chat widget, an online forms platform, and potentially dozens of other embedded tools that residents interact with when they access public services.

Every one of those tools is part of your compliance obligation.

Vendor audit evaluation includes:

  • Keyboard navigation testing of every resident-facing function in the tool — not just the landing page, but the actual transactional workflows the tool is used for.
  • Screen reader compatibility testing using at least two assistive technology combinations — for example NVDA with Chrome, JAWS with Edge, or VoiceOver with Safari.
  • VPAT review — request the vendor's current Voluntary Product Accessibility Template and evaluate it against WCAG 2.1 AA. A VPAT that says "supports" for every criterion is not meaningful. Look for honest conformance levels, identified gaps, and the date of the last evaluation.
  • Post-update testing — document when vendor tools were last tested and establish a process for retesting after major vendor updates.

Document the responsibility boundary for each vendor tool: What is the agency responsible for? What has the vendor committed to? What contractual accountability exists if the vendor's tool fails accessibility testing? These questions need answers before an enforcement inquiry surfaces them.

 

7. Produce Governance-Ready Deliverables, Not Just a Findings List

An audit that produces a spreadsheet of accessibility violations has produced an incomplete deliverable. A raw findings list tells you what is broken. It does not tell you what to do about it in what order, how to explain it to executive leadership, how to build a remediation program around it, or how to use it as the foundation for a governance documentation record.

A governance-ready audit deliverable includes:

Executive summary. A non-technical summary of the audit's key findings, risk classification, and recommended priorities — written for the city manager, the CIO, legal counsel, and leadership stakeholders who need to understand exposure without reading a technical report. This is the document that creates executive visibility and generates organizational will to resource the remediation program.

WCAG criterion mapping. Findings mapped to specific WCAG 2.1 AA success criteria so remediation decisions can be tied to the technical standard being addressed. This is the reference document for developers executing fixes and for documentation demonstrating that remediation was WCAG-targeted rather than arbitrary.

Risk-based categorization. Issues classified by both technical severity and operational risk — the two-axis model described in Step 4. This is what makes the findings actionable rather than paralyzing.

Affected templates and workflows. A clear identification of which issues exist at the template level versus the page level versus the workflow level, so remediation sequencing can leverage template-level fixes appropriately.

Remediation recommendations. For each finding, specific guidance on what the fix requires — not just identification of the problem. This is what allows remediation work to begin immediately rather than requiring a separate discovery process for each issue.

Estimated effort levels. A rough sizing of remediation effort for each finding category — not a precise estimate for every individual issue, but enough to inform resource allocation decisions and roadmap planning.

Prioritized remediation roadmap. The output that drives the actual work: a sequenced list of what to fix first, organized by operational risk and template scope, with enough specificity to assign ownership and timelines.

This complete package is not just an action plan. It is the first entry in the governance documentation record — dated evidence that your agency evaluated its accessibility posture, identified issues systematically, and established a prioritization framework. That record is foundational to defensible compliance.

 

8. Treat the Audit as a Baseline, Not a Finish Line

The single most important thing to understand about an accessibility audit is what it is not. It is not a certification. It is not a compliance badge. It is not something you complete and file.

An audit is a diagnostic. It establishes a baseline — an honest, documented picture of where your agency's accessibility posture stands at a specific point in time. What you do with that baseline is what determines whether the audit was worth conducting.

The transition from audit to governance requires:

Defined remediation allocation. The audit findings feed a remediation program with specific resource commitments — developer hours, external partner retainer, or both. Without allocation, the audit report sits on a drive while the issues it identified persist and new issues accumulate.

A monitoring cadence. The baseline the audit establishes will degrade without monitoring. New content, vendor updates, and template modifications will introduce new issues. Monthly automated scanning with quarterly manual QA is the minimum viable monitoring program for most public agencies. The audit establishes what you're monitoring against.

A documentation and logging program. Every remediation action taken in response to audit findings should be logged — dated, categorized, attributed, and archived. The audit report plus the remediation log is what transforms good-faith effort into demonstrable good-faith effort.
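"Dated, categorized, attributed, and archived" implies a small, consistent record schema more than it implies any particular tool. A sketch of one such log entry — the field names and finding ID format are illustrative assumptions, not a mandated schema:

```python
import json
from datetime import date

def log_remediation(log, finding_id, wcag_criterion, action, owner, when=None):
    """Append a dated, attributed remediation record to the program log."""
    entry = {
        "date": (when or date.today()).isoformat(),
        "finding_id": finding_id,            # ties the fix back to the audit report
        "wcag_criterion": wcag_criterion,    # e.g. "2.4.7 Focus Visible"
        "action": action,
        "owner": owner,
    }
    log.append(entry)
    return entry

log = []
log_remediation(
    log, "AUD-2024-017", "2.4.7 Focus Visible",
    "Restored visible focus styles in the global template",
    "Web team", when=date(2024, 6, 3),
)
print(json.dumps(log[-1], indent=2))  # archive as dated JSON records
```

Whether the log lives in a ticketing system, a spreadsheet, or flat files matters less than that every entry carries a date, a WCAG criterion, and an owner — that is what lets the log pair with the audit report as evidence of a functioning program.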

An executive reporting cadence. Audit findings and remediation progress should be reported to leadership on a defined schedule. This creates institutional accountability, sustains organizational will to resource the program, and generates the reporting record that demonstrates executive engagement to enforcement bodies.

Vendor review integration. The audit's vendor findings feed an ongoing vendor review process — VPAT review in procurement, post-update testing on a scheduled cadence, contractual accountability frameworks.

Without this transition, agencies conduct audit cycles without building sustainability. They find the same categories of issues each time because the structural conditions that produce them were never addressed. The audit fee is paid twice, three times, four times — and the compliance posture never fundamentally improves.

The audit is the beginning of the governance program. Not the program itself.

 

Structure First. Remediation Second. Sustainability Always.

A WCAG 2.1 AA audit conducted with governance as the objective produces something categorically more valuable than a technical audit conducted as a compliance exercise. It produces clarity about where exposure lives, confidence that prioritization decisions are defensible, documentation that establishes the baseline for a real compliance program, and a roadmap that actually reduces risk rather than generating activity.

What protects public agencies under ADA Title II is not the existence of an audit report. It is the existence of a governance-driven remediation and monitoring program that the audit report initiates.

If your agency has not conducted a governance-focused accessibility audit — or if your last audit was more than 18 months ago — your current compliance posture is unknown. And unknown is not defensible.

Request a Hounder ADA Risk Assessment
