A website accessibility self-audit is a structured process for identifying WCAG compliance issues using automated tools, keyboard testing, and screen reader validation without requiring specialized expertise.
You do not need to hire a consultant to know whether your government website has serious accessibility problems.
You need about 90 minutes, three free tools, and the willingness to put your mouse down.
That last part is important. The single most revealing accessibility test you can run on any government website costs nothing and requires no software. Navigate your most important service page using only your keyboard. Tab through every interactive element. Try to complete the form or transaction without touching your mouse. If you get stuck — if focus disappears, if you cannot open a dropdown, if you cannot submit — you have found an accessibility barrier that is preventing real residents from accessing real government services.
That test takes ten minutes. Most government websites fail it in under two.
This guide gives you a complete, structured, 90-minute self-audit protocol that any web manager, IT director, or communications coordinator can run without specialized training or paid tools. It will not replace a professional accessibility audit — we will be honest about exactly what it will and will not catch. But it will tell you where your highest-risk failures are, give you a documented starting point for your compliance program, and produce findings specific enough to drive remediation decisions.
If you run this protocol and the results are troubling — and for most government websites, they will be — you will at least know what you are dealing with. That is the beginning of a real compliance program.
What This Protocol Will and Will Not Catch
Before running the tests, understand what the 90-minute protocol is designed to find and what requires professional expertise to identify.
What this protocol catches well:
- Keyboard navigation failures — traps, missing focus indicators, broken tab order
- Color contrast failures on primary content areas
- Missing or incorrect alternative text on images
- Form label association failures
- Inaccessible error handling in form submissions
- Basic heading structure problems
- Missing skip navigation
- Page language not set
- Obvious ARIA implementation errors
These are the categories that generate the most ADA complaints against government agencies. They are also the categories where the free tools available to any web team perform reliably.
What this protocol does not catch reliably:
- Complex ARIA implementation errors in dynamic content
- Screen reader incompatibilities in interactive components that look correct but behave incorrectly with specific assistive technology combinations
- Reading order failures in complex layouts
- Cognitive accessibility issues
- Subtle focus management failures in multi-step workflows
- PDF and document accessibility
- Third-party embedded tool failures
- Mobile-specific accessibility issues
These require manual screen reader testing, professional expertise, and in some cases assistive technology user testing. A professional audit catches failures that this protocol misses — and for a formal ADA compliance program, a professional audit is necessary. But this protocol catches the failures that are most visible, most likely to generate complaints, and most directly preventable with moderate technical effort.
The Tools You Need — All Free
Tool 1: axe DevTools Browser Extension
The browser extension version of axe DevTools is free and available for Chrome and Firefox. It performs automated WCAG scanning on any page you have open in your browser, identifies violations mapped to specific WCAG success criteria, flags issues with high accuracy and low false positive rates, and provides developer-actionable guidance for each finding.
Install it from the Chrome Web Store or Firefox Add-ons. After installation, open it through your browser's developer tools panel (F12 in Chrome, then the axe DevTools tab).
Tool 2: WAVE Browser Extension
Developed by WebAIM at Utah State University, WAVE is a free browser extension that overlays accessibility information directly on the page you are viewing. Unlike axe, which outputs a list of violations in a panel, WAVE shows icons directly on the page indicating where errors, alerts, and structural elements are located. This makes it excellent for visually identifying where problems are in the context of the actual page layout.
Install from the Chrome Web Store or Firefox Add-ons. Activate it by clicking the WAVE icon in your browser toolbar.
Tool 3: NVDA Screen Reader (Windows only)
NVDA — NonVisual Desktop Access — is a free, open-source screen reader for Windows developed by NV Access. It is one of the two most widely used screen readers among blind users (alongside JAWS, which requires a license). Testing with NVDA tells you what a blind resident actually experiences when they navigate your website.
Download from nvaccess.org. Use it with Chrome or Firefox for the most representative results. When NVDA is running, it reads page content aloud and announces interactive elements as you navigate with keyboard commands.
If you are on a Mac, VoiceOver is built into macOS and requires no download. Activate it with Command + F5. Use it with Safari for the most representative Mac testing results.
The 90-Minute Protocol: How to Structure Your Time
The protocol is organized into five testing sessions. Work through them in order — automated scanning first to get an overview, then manual testing to catch what automation misses.
- Session 1 (15 minutes): Automated scanning with axe DevTools across five pages
- Session 2 (20 minutes): Visual evaluation with WAVE — structure, contrast, and images
- Session 3 (25 minutes): Keyboard-only navigation testing
- Session 4 (20 minutes): Screen reader testing with NVDA or VoiceOver
- Session 5 (10 minutes): Document spot check
Before you start, select your five test pages. Choose them strategically:
- Your homepage
- Your highest-traffic service page or application (permit portal, payment system, public records request)
- A page with a form (contact form, service request, application)
- A page with images, charts, or data visualizations
- A page with your navigation in its most complex state (a page deep in your site hierarchy where breadcrumbs and secondary navigation appear)
These five pages are a representative sample, not an exhaustive audit. They are chosen to expose the most common failure patterns across the most consequential surfaces.
Session 1: Automated Accessibility Scanning with axe DevTools (15 minutes — 3 minutes per page)
Open your first test page. Open the browser developer tools (F12). Click the axe DevTools tab. Click "Analyze."
What to do with the results:
The results panel shows Violations, Needs Review, and Best Practices. Focus on Violations first — these are confirmed WCAG failures with high confidence.
For each violation, axe shows:
- The WCAG rule violated
- The number of elements on the page that fail that rule
- A description of the issue
- Links to the failing elements (clicking a link highlights the element on the page)
- Guidance on how to fix it
Record every violation — the rule ID, the number of elements affected, and the page it was found on. Do not try to fix anything during the testing session. Document first, remediate after.
The violations that matter most in the axe output:
- image-alt — Images missing alt text. Every instance is a failure.
- label — Form inputs without associated labels. Every instance is a critical failure.
- color-contrast — Text failing minimum contrast ratios. Note which elements fail and the actual vs. required ratio.
- heading-order — Heading levels skipped or out of sequence.
- link-name — Links with no discernible text (empty links or icon-only links).
- html-has-lang — The page language is not specified.
- aria-required-attr — ARIA roles used without required attributes.
- button-name — Buttons with no accessible name.
After scanning all five pages, look at your violation list. If the same violations appear across all five pages — particularly color-contrast and heading-order — those are template-level issues that a single fix will resolve everywhere. Template-level failures are your highest priority because fixing them once fixes the problem on every page.
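If you record each page's violations as a set of rule IDs, the template-level check is a simple set intersection. A minimal Python sketch — the page paths and rule IDs below are illustrative, not real scan output:

```python
# Identify template-level violations: axe rule IDs that appear on every
# scanned page are almost certainly coming from a shared component.

def template_level_rules(scan_results):
    """Return the rule IDs present in every page's violation set."""
    if not scan_results:
        return set()
    pages = iter(scan_results.values())
    common = set(next(pages))  # copy the first page's set
    for violations in pages:
        common &= violations
    return common

scans = {
    "/": {"color-contrast", "heading-order", "image-alt"},
    "/permits": {"color-contrast", "heading-order", "label"},
    "/contact": {"color-contrast", "heading-order"},
}
print(sorted(template_level_rules(scans)))  # ['color-contrast', 'heading-order']
```

Anything in the intersection is worth checking against your shared header, footer, or navigation templates first.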
What axe will not catch:
Axe automation detects approximately 30 to 40 percent of WCAG failures. It is reliable for what it catches, but the majority of real-world accessibility barriers require manual testing to find. Sessions 3 and 4 are where those are identified.
Session 2: Visual Evaluation with WAVE (20 minutes — 4 minutes per page)
Open your first test page and activate the WAVE extension.
WAVE overlays your page with colored icons indicating:
- Red icons: Errors — confirmed accessibility failures
- Yellow icons: Alerts — potential issues requiring human judgment
- Green icons: Structural elements — headings, landmarks, lists
- Blue icons: ARIA attributes
- Contrast tab: Color contrast analysis across the page
Work through four specific checks on each page:
Check 1: Red error icons. Look at what is flagged. The most common government website errors in WAVE are missing form labels, empty links, missing image alt text, and empty buttons. Click each red icon to see the specific element and the error description.
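The label-association check behind WAVE's most common red error can be approximated in Python with the standard library's html.parser. This sketch only handles explicit `<label for>` and `aria-label` associations — wrapping `<label>` elements and `aria-labelledby`, which are also valid techniques, would need additional handling:

```python
from html.parser import HTMLParser

class LabelAuditor(HTMLParser):
    """Collect input ids and label 'for' targets to find unlabeled inputs."""
    def __init__(self):
        super().__init__()
        self.inputs = []          # (id or None, has aria-label)
        self.label_targets = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            self.inputs.append((a.get("id"), "aria-label" in a))
        elif tag == "label" and "for" in a:
            self.label_targets.add(a["for"])

def unlabeled_inputs(html):
    """Count inputs with neither a matching <label for> nor an aria-label."""
    auditor = LabelAuditor()
    auditor.feed(html)
    return sum(
        1 for input_id, has_aria in auditor.inputs
        if not has_aria and (input_id is None or input_id not in auditor.label_targets)
    )

page = """
<form>
  <label for="email">Email</label><input id="email" type="text">
  <input id="phone" type="text">
  <input type="text" aria-label="Fax number">
</form>
"""
print(unlabeled_inputs(page))  # 1  (the phone field has no label)
```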
Check 2: Heading structure. Click the "Structure" tab in the WAVE panel. This shows your page's heading hierarchy as a nested outline. Evaluate it for:
- Is there exactly one H1 on the page? (Should be the page title)
- Do the heading levels follow a logical hierarchy without skipping levels? (H1 → H2 → H3, not H1 → H3)
- Does the heading structure reflect the actual content organization?
A heading structure that looks like this is correct:
H1: Apply for a Building Permit
H2: Eligibility Requirements
H2: What You Will Need
H3: Required Documents
H3: Required Fees
H2: How to Apply
H2: After You Apply
A heading structure that looks like this is a failure:
H1: Apply for a Building Permit
H3: Eligibility Requirements
H2: What You Will Need
H4: Required Documents
H2: How to Apply
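If you export heading levels as a list of numbers, the two rules above — exactly one H1, no skipped levels — are easy to check mechanically. A Python sketch using the two outlines above:

```python
def heading_problems(levels):
    """Flag multiple/missing H1s and skipped levels in a heading sequence."""
    problems = []
    if levels.count(1) != 1:
        problems.append(f"expected exactly one H1, found {levels.count(1)}")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # e.g. H1 -> H3 skips H2
            problems.append(f"skip: H{prev} -> H{cur}")
    return problems

good = [1, 2, 2, 3, 3, 2, 2]   # the passing outline above
bad  = [1, 3, 2, 4, 2]         # the failing outline above
print(heading_problems(good))  # []
print(heading_problems(bad))   # ['skip: H1 -> H3', 'skip: H2 -> H4']
```

Note that moving back up the hierarchy (H3 back to H2) is fine; only downward skips are failures.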
Check 3: Page landmarks. Still in the Structure tab, check what landmark regions WAVE identifies. You should see at minimum: a banner region (site header), a navigation region (primary nav), a main region (primary content), and a contentinfo region (footer). If the main region is missing or if there are multiple navigation regions without distinguishing labels, flag it.
Check 4: Color contrast. Click the "Contrast" tab. WAVE highlights text elements that fail contrast requirements in red. Click any flagged element to see the actual contrast ratio and the required ratio. Note which elements fail and whether they are in shared components (navigation text, button text, link colors in the body) — template-level failures — or in individual content areas.
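The contrast math WAVE applies is defined by WCAG: convert each sRGB channel to linear light, compute relative luminance, then take the ratio of the lighter to the darker luminance, each offset by 0.05. A Python sketch you can use to spot-check proposed brand colors before they reach a page:

```python
def _linear(channel_8bit):
    """sRGB channel to linear light, per the WCAG relative luminance formula."""
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a #rrggbb color."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; order does not matter."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0
print(round(contrast_ratio("#909090", "#ffffff"), 2))  # 3.19 — fails the 4.5:1 body-text minimum
```

Normal-size text needs at least 4.5:1 under WCAG 2.1 AA; large text (roughly 24px, or 19px bold) needs 3:1.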
Session 3: Keyboard-Only Navigation Testing (25 minutes)
This is the most revealing session in the protocol. Put your mouse in a drawer. You will not use it again until Session 5.
Navigate using only these keys:
- Tab — move focus forward through interactive elements
- Shift + Tab — move focus backward
- Enter — activate links and buttons
- Space — activate checkboxes and buttons
- Arrow keys — navigate within components (dropdowns, radio groups, date pickers)
- Escape — close modals, dropdowns, dialogs
Test 1: Skip Navigation (2 minutes)
Load your homepage. Do not click anything. Press Tab once.
What happened? If the first thing that received focus was a visible "Skip to main content" link, your site has skip navigation implemented. Press Enter to activate it. Did focus jump to the main content area?
If pressing Tab once moved focus to the first item in your navigation menu — a logo, a top-level nav link, or something in your header — and there was no skip link visible, your site is missing skip navigation. A keyboard user navigating this site must tab through every navigation item on every page before reaching the content they came for. Document this failure.
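A rough static check for skip navigation is whether the first anchor in the DOM targets an in-page fragment. This Python heuristic is no substitute for the manual Tab test above — a skip link can exist in the DOM but be broken or never receive visible focus — but it can triage many pages quickly:

```python
from html.parser import HTMLParser

class FirstLinkFinder(HTMLParser):
    """Record the href of the first anchor in document order."""
    def __init__(self):
        super().__init__()
        self.first_href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a" and self.first_href is None:
            self.first_href = dict(attrs).get("href", "")

def has_skip_link(html):
    """Heuristic: does the first link on the page jump to an in-page target?"""
    finder = FirstLinkFinder()
    finder.feed(html)
    return bool(finder.first_href) and finder.first_href.startswith("#")

with_skip = '<body><a href="#main">Skip to main content</a><nav></nav></body>'
without = '<body><nav><a href="/">Home</a></nav></body>'
print(has_skip_link(with_skip), has_skip_link(without))  # True False
```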
Test 2: Navigation and Focus Visibility (5 minutes)
Starting from the top of your homepage, tab through the navigation. Watch for two things:
First: Is focus always visible? As you tab through the page, there should always be a clearly visible indicator showing which element currently has focus — typically an outline, a highlight, or a color change. If focus ever becomes invisible — if you cannot tell where on the page the keyboard cursor is — that is a critical failure. Note which element caused focus to disappear.
Second: Can you open and close dropdown navigation? If your site has dropdown menus, tab to a top-level nav item that has a dropdown. Can you open the dropdown with Enter or the down arrow key? Can you navigate the dropdown items with arrow keys? Can you close the dropdown with Escape? Can you Tab past the top-level item to the next one without the dropdown opening?
Test 3: Form Completion (10 minutes)
Navigate to one of your test pages with a form. Using keyboard only, attempt to complete the form from the first field to a successfully submitted confirmation.
Work through every field. Tab to each one. Confirm focus lands on the field. Confirm you can enter or select a value. Tab to the next field. Note any of the following:
- A field that does not receive focus when tabbed to
- A field that receives focus but does not announce its label (you will need NVDA for this check — do it in Session 4)
- A custom component — date picker, custom dropdown, file upload — that you cannot operate with keyboard
- A point in the form where focus disappears and you cannot find it
- A point where pressing Tab causes you to lose your position in the form
Intentionally submit the form with errors — leave required fields empty, enter an invalid email format. What happens? Does the page scroll or jump to an error area? Does focus move anywhere? Is there a visible list of errors?
Now try to correct the errors using keyboard only. Can you find the errored fields? Can you navigate to them from the error messages?
Test 4: Any Modal Dialogs or Popup Components (5 minutes)
If your site uses modal dialogs — cookie consent banners, confirmation dialogs, help panels, lightboxes — find one and test it.
When the modal opens: Does focus move into the modal automatically? Can you interact with all the modal's content using keyboard? Can you close the modal with Escape? When the modal closes: Does focus return to the element that triggered the modal?
If any of these behaviors are missing, the modal has focus management failures that make it unusable for keyboard and screen reader users.
Test 5: Tab Order Sanity Check (3 minutes)
On one of your test pages, tab through the entire page from top to bottom without activating anything. Just tab through. Does the focus order match the visual order of the page? Or does focus jump unexpectedly — moving to the footer before the main content, jumping into a sidebar and back, or skipping sections entirely?
Document any focus order failures — they are often symptoms of DOM order not matching visual layout, which is a common issue in complex layouts built primarily for visual appearance.
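One common cause of focus-order jumps is a positive tabindex value, which pulls an element out of natural DOM order ahead of everything else. A small Python scan for that anti-pattern — the sample markup is illustrative:

```python
from html.parser import HTMLParser

class TabindexAuditor(HTMLParser):
    """Flag elements with a positive tabindex, which overrides natural DOM order.
    tabindex="0" and tabindex="-1" are legitimate and are not flagged."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        value = dict(attrs).get("tabindex")
        if value is not None and value.lstrip("-").isdigit() and int(value) > 0:
            self.flagged.append((tag, int(value)))

page = '<input tabindex="3"><a href="/" tabindex="0">Home</a><button tabindex="1">Go</button>'
auditor = TabindexAuditor()
auditor.feed(page)
print(auditor.flagged)  # [('input', 3), ('button', 1)]
```

Any hit here predicts exactly the symptom the manual test surfaces: focus jumping to the flagged elements before everything else on the page.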
Session 4: Screen Reader Testing with NVDA or VoiceOver (20 minutes)
Screen reader testing tells you what your site sounds like to a blind user. You do not need to be an expert screen reader user to identify obvious failures.
Setting up NVDA:
After installing NVDA, start it. A dialog will appear — click OK to dismiss it. NVDA will begin announcing everything your browser is doing.
Key NVDA commands for this testing session:
- NVDA + F7 — opens the Elements List, showing all headings, links, and landmarks on the page
- H — jump to the next heading
- Shift + H — jump to the previous heading
- Tab — move to the next interactive element
- Enter — activate a link or button
- NVDA + Down Arrow — read from current position to the end of the page
- Control — stop reading
Setting up VoiceOver (Mac):
Activate with Command + F5. Use VO (Control + Option) as the modifier key.
- VO + U — opens the Rotor, similar to NVDA's Elements List
- VO + Right Arrow — move to the next element
- VO + Space — activate an element
- VO + F8 — VoiceOver Utility settings
Test 1: Heading Navigation (5 minutes)
Load your homepage. With NVDA running, press NVDA + F7 to open the Elements List and select the Headings tab. You will see all the headings on the page as a list.
Does the heading list reflect the actual content structure of the page? Are there headings listed that are not obvious sections of the page — template elements, hidden headings, navigational elements? Are there sections of the page that should have headings but do not appear in the list?
Close the Elements List and press H to navigate between headings. Does the page content make sense in the heading-to-heading navigation order?
Test 2: Form Field Announcements (8 minutes)
Navigate to your test form page. Tab through each form field. Listen to what NVDA announces when focus lands on each field.
A correctly implemented, accessible field announces: the field label, the field type, and any required status or format hint. For example: "First name, required, edit text."
An inaccessible field announces: "edit text" with no label, or reads a nearby text element that is not actually the label, or announces nothing useful.
Document every field that does not announce a meaningful label. These are the most critical form accessibility failures — residents using screen readers cannot fill in fields they cannot identify.
Now submit the form with errors. What does NVDA announce? Does it read an error summary? Does it announce which fields failed? If you hear nothing after submitting an empty form, the error handling is inaccessible.
Test 3: Link Context (4 minutes)
Press NVDA + F7 and select the Links tab. This shows all links on the page as a list — the same list a screen reader user uses to navigate a page by jumping between links.
Scan the list. Do the links make sense out of context? Do you see "click here," "read more," "learn more," or "here" appearing multiple times with no way to distinguish what each leads to? These are inaccessible links — a screen reader user navigating by links cannot tell them apart.
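A first pass over an exported link list can be automated. This Python sketch flags the generic phrases named above; the phrase set is an assumption you should extend with the patterns you actually find on your site:

```python
# Link texts that carry no meaning when read out of context in a links list.
GENERIC_LINK_TEXT = {"click here", "read more", "learn more", "here", "more"}

def generic_links(link_texts):
    """Return link texts that a screen reader user cannot tell apart."""
    return [t for t in link_texts if t.strip().lower().rstrip(".") in GENERIC_LINK_TEXT]

links = ["Apply for a building permit", "Click here", "2024 fee schedule (PDF)", "Read more"]
print(generic_links(links))  # ['Click here', 'Read more']
```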
Test 4: Image Announcements (3 minutes)
Tab to a few images on your test pages. What does NVDA announce?
For informational images: NVDA should read the alt text — a meaningful description of what the image communicates. For decorative images: NVDA should announce nothing — the image should be skipped.
If NVDA reads a filename ("image-2024-03-15.jpg"), the image is missing alt text. If NVDA reads "image" with no description, the alt attribute is empty when it should have content. If NVDA reads something clearly copied from a filename or autogenerated, the alt text is wrong.
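Filename-style alt text is easy to flag mechanically. A Python heuristic — the extension list is an assumption, and it will miss autogenerated alt text that happens to lack a file extension:

```python
import re

# Alt text ending in an image file extension is almost certainly a filename.
FILENAME_PATTERN = re.compile(r"\.(jpe?g|png|gif|webp|svg)$", re.IGNORECASE)

def looks_like_filename(alt_text):
    """Heuristic check for filename-style alt text."""
    return bool(FILENAME_PATTERN.search(alt_text.strip()))

print(looks_like_filename("image-2024-03-15.jpg"))      # True
print(looks_like_filename("Mayor cutting the ribbon"))  # False
```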
Session 5: Document Spot Check (10 minutes)
Download three to five PDFs from your website. Choose the highest-traffic ones — meeting agendas, permit forms, public notices.
Check 1: Text selection test (2 minutes)
Open each PDF in your PDF reader. Try to click and drag to select text. If you can select text, the document has a text layer — it is not a flat scan. If your cursor becomes a crosshair or you cannot select text, the document is a scanned image with no text layer and is completely inaccessible to screen readers.
Every scanned PDF in your document library is a complete accessibility failure.
Check 2: Quick axe scan of PDF content (3 minutes)
For PDFs that open in the browser (many do), activate axe DevTools while the PDF is displayed. This will not give a full picture of PDF accessibility, but it will flag obvious issues like missing document language and structural problems.
Check 3: Screen reader spot test (5 minutes)
With NVDA running, open a PDF and press NVDA + Down Arrow to read from the beginning. Does it read the document content in a logical order? Or does it jump between columns, read the header and footer repeatedly, or produce a stream of text that does not match the visual document order?
A well-tagged PDF reads smoothly from beginning to end in the correct order. An untagged or poorly tagged PDF produces garbled output.
Recording Your Findings
As you work through the protocol, record every finding in a structured format. This serves two purposes: it gives you the remediation list you need to start fixing issues, and it begins the documentation record that a compliance program requires.
For each finding, record:
- Date of testing
- Page or document where the issue was found
- The specific issue (be precise — "color contrast failure" is less useful than "body link text #909090 on a white background fails the 4.5:1 requirement at 3.2:1")
- The WCAG success criterion the issue violates
- Severity (critical, major, minor)
- Whether the issue appears on one page or across templates
A finding log with this level of specificity is the beginning of a defensible remediation record. It is not just a list of problems — it is the documented evidence that your agency assessed its compliance posture and identified what needs to be addressed.
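The finding record maps naturally onto a CSV log that can be shared with developers and kept as compliance evidence. A Python sketch — the field names mirror the list above, and the sample WCAG criterion is illustrative:

```python
import csv
import io
from dataclasses import dataclass, fields, astuple

@dataclass
class Finding:
    """One row of the finding log; fields match the record described above."""
    date: str
    page: str
    issue: str
    wcag_criterion: str
    severity: str   # critical / major / minor
    scope: str      # "single page" or "template"

def write_log(findings, stream):
    """Write findings as CSV with a header row."""
    writer = csv.writer(stream)
    writer.writerow([f.name for f in fields(Finding)])
    writer.writerows(astuple(f) for f in findings)

log = [Finding("2024-06-01", "/permits", "Unlabeled email input",
               "1.3.1", "critical", "single page")]
out = io.StringIO()
write_log(log, out)
print(out.getvalue())
```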
What to Do With Your Results
After running the full protocol, you will have findings across five categories: automated scan violations, visual evaluation findings, keyboard navigation failures, screen reader failures, and document accessibility failures.
Prioritize by impact and scope:
Template-level failures — issues that appear on every page because they are in shared components — are always first priority. A single developer fix eliminates the failure everywhere simultaneously.
Transactional workflow failures — keyboard traps, inaccessible error handling, unlabeled form fields in service applications — are second priority regardless of how many pages are affected. These directly prevent residents from completing government service transactions.
High-traffic page failures — issues on your most-visited pages that do not appear in shared components — are third priority.
Low-traffic and archival content — issues on pages that receive minimal traffic and involve no transactional interaction — are lowest priority for initial remediation.
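The four tiers above can be encoded directly as a sort key, so a finding list exported from your log comes back in remediation order. A Python sketch — the scope labels are assumptions chosen to match the tier names:

```python
# Priority tiers from the section above; lower number = fix first.
PRIORITY = {
    "template": 0,        # shared-component failures
    "transactional": 1,   # workflow and form barriers
    "high-traffic": 2,    # page-specific issues on busy pages
    "archival": 3,        # low-traffic, non-transactional content
}

def prioritize(findings):
    """Order findings by the remediation tiers described above."""
    return sorted(findings, key=lambda f: PRIORITY[f["scope"]])

findings = [
    {"issue": "empty link in footer archive", "scope": "archival"},
    {"issue": "unlabeled payment field", "scope": "transactional"},
    {"issue": "nav contrast failure", "scope": "template"},
]
print([f["scope"] for f in prioritize(findings)])  # ['template', 'transactional', 'archival']
```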
Open a remediation log:
Before a single fix is made, open a remediation log and create an entry for each finding. This is not administrative overhead — it is the documentation that transforms testing activity into compliance evidence. A finding that was identified, logged, assigned, remediated, and validated with a timestamp is part of your defensible compliance record. A finding that was fixed without being documented is indistinguishable from a finding that was never addressed.
Repeat the protocol quarterly:
The 90-minute protocol is most valuable as a recurring activity, not a one-time event. Run it quarterly on the same five pages plus any new templates or features added since the last test. This creates a monitoring timeline that shows whether your compliance posture is improving, stable, or regressing — and that timeline is itself a piece of the compliance documentation record.
What the Self-Audit Does Not Replace
This is the honest part. The 90-minute protocol is a meaningful starting point. It is not a compliance program and it is not a professional audit.
Here is what it does not replace:
Comprehensive coverage. The protocol tests five pages. A professional audit tests a representative sample of every page template, every content type, every user flow, and every embedded vendor tool — typically 30 to 50 pages minimum for a medium-complexity government site.
Expert screen reader evaluation. The screen reader tests in Session 4 catch obvious failures. A professional accessibility auditor testing with multiple screen reader and browser combinations — NVDA/Chrome, NVDA/Firefox, JAWS/Chrome, JAWS/Edge, VoiceOver/Safari on macOS and iOS — catches subtle failures that require deep assistive technology expertise to identify.
PDF and document audit. The document spot check gives you a directional read on document accessibility. A complete document audit requires Adobe Acrobat Pro, the PAC accessibility checker, and the expertise to evaluate tag structure, reading order, and form field implementation across your full document library.
Vendor tool testing. The protocol does not cover embedded third-party tools. Payment gateways, permit portals, GIS tools, scheduling systems — all of these require specific testing outside the scope of a 90-minute self-audit.
The baseline documentation a compliance program requires. A self-audit you ran and documented is better than no audit. It is not equivalent to a third-party accessibility audit with a full conformance report that can be produced in response to an enforcement inquiry.
At Hounder, our ADA compliance assessments start where this protocol ends. We take the directional findings a self-audit produces and build on them with a full WCAG 2.1 AA audit that covers every template, every transactional workflow, every document category, and every vendor integration — producing the audit report, risk-based prioritization framework, and remediation roadmap that a defensible compliance program is built from.
If you run this protocol and the results are concerning — and they will be for most government websites — the self-audit has done its job. It has shown you what the problem is. The next step is building the program that addresses it.
Related:
Get a FREE Accessibility Audit from the Hounder Team
FAQ: DIY Government Website Accessibility Testing
What free tools are most reliable for government website accessibility testing?
The three most reliable free tools for government web team accessibility testing are axe DevTools browser extension for automated WCAG scanning with low false positive rates, WAVE browser extension for visual evaluation of page structure and contrast issues, and NVDA screen reader for manual testing of what blind users actually experience. Together these three tools cover the major automated and manual testing methodologies available to professional accessibility practitioners. The primary limitation is that they require human interpretation and cannot replicate the full range of assistive technology combinations a professional audit evaluates.
How long does a DIY accessibility test take for a government website?
The 90-minute protocol in this guide covers five representative pages with automated scanning, visual evaluation, keyboard navigation testing, screen reader testing, and document spot checking. A thorough self-audit of a medium-complexity government website covering 10 to 15 pages typically takes 3 to 4 hours. The time investment scales with site complexity and the number of unique page templates, not just the number of pages. Running the same protocol on a regular quarterly basis takes less time as testers become familiar with their site's failure patterns.
What is the most important keyboard accessibility test to run on a government website?
The single most revealing keyboard test is attempting to complete your highest-traffic service transaction — a permit application, a payment, a public records request — using keyboard only from start to submitted confirmation. If any step in that transaction cannot be completed without a mouse, you have identified a critical accessibility barrier that is preventing residents with motor disabilities and keyboard-dependent users from accessing a core government service. This test costs nothing, requires no special tools, and reveals the most consequential failures a government website can have.
Can a government agency use the results of a self-audit for ADA compliance purposes?
A self-audit using free tools produces documented evidence that the agency assessed its compliance posture — which is a meaningful component of a good faith compliance record. It does not constitute the professional accessibility audit that enforcement bodies expect to see as a foundation of a compliance program. For ADA Title II compliance documentation purposes, a third-party professional audit that produces a conformance report mapped to WCAG 2.1 AA success criteria is the appropriate baseline. The self-audit is the appropriate starting point for understanding current compliance posture and prioritizing before commissioning a professional audit.
How often should government web teams run self-accessibility checks?
The 90-minute protocol should be run quarterly on primary templates and high-traffic service pages. Additionally, any new page template, new component, new form, or new vendor tool integration should be tested using keyboard navigation and axe DevTools before it goes live. Monthly automated scans using a tool like Siteimprove or recurring axe scans can supplement the quarterly manual testing with continuous automated monitoring. The quarterly manual protocol combined with monthly automated scanning provides the ongoing monitoring cadence that a governance-grade compliance program requires.
What should I do if the self-audit reveals significant accessibility failures?
Significant failures found through a self-audit are the starting point for a remediation program, not a cause for alarm. The appropriate next steps are: document all findings in a remediation log with dates, WCAG criteria references, and severity classifications; prioritize remediation starting with template-level failures and transactional workflow barriers; open a monitoring cadence so new issues are caught going forward; and commission a professional accessibility audit that establishes the comprehensive baseline a formal compliance program requires. Agencies with documented evidence that they identified their accessibility issues and initiated a structured remediation program are in a materially better compliance position than agencies that have done nothing.