Revolutionizing Accessibility: GitHub's Continuous AI Approach
For years, GitHub faced a common yet critical challenge: effectively managing accessibility feedback. Unlike typical product issues, accessibility concerns are pervasive, often cutting across multiple teams and systems. A single report from a screen reader user might touch navigation, authentication, and settings, making traditional siloed feedback processes ineffective. This led to scattered reports, unresolved bugs, and the frustration of users whose issues lingered in a mythical "phase two" that rarely materialized.
Recognizing the need for a fundamental shift, GitHub embarked on a journey to centralize feedback, create standardized templates, and clear a significant backlog. Only after establishing this robust foundation did the question arise: How could AI transform this process further? The answer lies in an innovative internal workflow, powered by GitHub Actions, GitHub Copilot, and GitHub Models, designed to continuously transform every piece of user feedback into a tracked, prioritized, and actionable issue. This approach ensures that AI enhances human judgment, streamlining repetitive tasks and allowing experts to focus on delivering inclusive software.
Continuous AI: A Living System for Inclusion
GitHub's "Continuous AI for accessibility" is more than just a tool; it's a living methodology that integrates automation, artificial intelligence, and human expertise to embed inclusion directly into the fabric of software development. This philosophy underpins GitHub's commitment to the 2025 Global Accessibility Awareness Day (GAAD) pledge, aiming to strengthen accessibility across the open-source ecosystem by effectively routing and translating user feedback into meaningful platform improvements.
The core realization was that the most impactful breakthroughs stem from listening to real people, yet listening at scale presents significant challenges. To overcome this, GitHub built a feedback workflow that operates as a dynamic engine rather than a static ticketing system. Leveraging its own products, GitHub clarifies, structures, and tracks user and customer feedback, converting it into implementation-ready solutions.
Before diving into technological solutions, GitHub adopted a people-first design approach, identifying key personas the system needed to serve:
- Issue submitters: Community managers, support agents, and sales representatives who need guidance to report issues effectively, even without deep accessibility expertise.
- Accessibility and service teams: Engineers and designers requiring structured, actionable data—such as reproducible steps, WCAG mapping, and severity scores—to efficiently resolve issues.
- Program and product managers: Leadership needing clear visibility into pain points, trends, and progress to make strategic resource allocation decisions.
This foundational understanding allowed GitHub to design a system that treats feedback as data flowing through a well-defined pipeline, capable of evolving with their needs.
Automating the Accessibility Feedback Pipeline
GitHub constructed its new architecture around an event-driven pattern, where each step triggers a GitHub Action to orchestrate subsequent actions, ensuring consistent handling of feedback regardless of its origin. The system was built manually in mid-2024, but a similar pipeline could now be assembled significantly faster with tools like Agentic Workflows, which generate GitHub Actions from natural-language descriptions.
The workflow responds to key events: issue creation initiates GitHub Copilot analysis via the GitHub Models API, status changes trigger team hand-offs, and issue resolution prompts follow-up with the original submitter. The automation covers the common path, but humans can manually trigger or re-run any Action, maintaining oversight and flexibility.
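The event-to-action routing described above can be sketched as a simple dispatch table. The handler names and event keys below are illustrative assumptions, not GitHub's internal identifiers:

```python
# Minimal sketch of the event-driven routing: each event type maps to one
# orchestration step, and unrecognized events are left for manual handling.

def analyze_with_copilot(issue):
    """Placeholder for the GitHub Models API call made on issue creation."""
    return {"issue": issue["number"], "action": "triage"}

def hand_off_to_team(issue):
    """Placeholder for routing a status change to the owning team."""
    return {"issue": issue["number"], "action": "hand-off"}

def follow_up_with_submitter(issue):
    """Placeholder for closing the loop with the original reporter."""
    return {"issue": issue["number"], "action": "follow-up"}

# Each event type triggers exactly one step of the pipeline.
EVENT_HANDLERS = {
    "issues.opened": analyze_with_copilot,
    "issues.status_changed": hand_off_to_team,
    "issues.closed": follow_up_with_submitter,
}

def route_event(event_type, issue):
    handler = EVENT_HANDLERS.get(event_type)
    if handler is None:
        return None  # Events outside the common path fall back to humans.
    return handler(issue)
```

Because humans can also re-run any Action, a real implementation would expose these handlers behind a manual `workflow_dispatch`-style trigger as well.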
The Seven-Step Feedback Workflow:
1. Intake: Feedback flows from various sources like the GitHub accessibility discussion board (which accounts for 90% of reports), support tickets, social media, and email. All feedback is acknowledged within five business days. For actionable items, a team member manually creates a tracking issue using a custom accessibility feedback template, which captures essential context. This creation event triggers a GitHub Action to engage GitHub Copilot and add the issue to a centralized project board.
2. Copilot Analysis: A GitHub Action calls the GitHub Models API to analyze the newly created issue.
3. Submitter Review: The initial submitter reviews Copilot's analysis, confirming its accuracy or making adjustments.
4. Accessibility Team Review: The specialized accessibility team conducts a deeper review and strategizes solutions.
5. Link Audits: Relevant audits or external resources are linked for context and compliance.
6. Close Loop: Once addressed, the issue is formally closed, and the original user or customer is informed.
7. Improvement: Feedback on the system's performance, including Copilot's analysis, informs continuous updates and refinements.
This continuous flow ensures visibility, structure, and actionability at every stage of the feedback lifecycle.
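The seven steps form a strictly ordered lifecycle, which can be modeled as a tiny state machine. The step identifiers below paraphrase the workflow above and are not GitHub's internal status names:

```python
# The feedback lifecycle as an ordered pipeline: each issue advances through
# these stages in sequence, from intake to continuous improvement.

STEPS = [
    "intake",
    "copilot_analysis",
    "submitter_review",
    "accessibility_team_review",
    "link_audits",
    "close_loop",
    "improvement",
]

def next_step(current):
    """Return the stage that follows `current`, or None at the end."""
    i = STEPS.index(current)
    return STEPS[i + 1] if i + 1 < len(STEPS) else None
```

Keeping the stage order in one place makes it trivial for an Action to update an issue's project-board status whenever a step completes.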
GitHub Copilot's Intelligent Accessibility Triage
At the heart of this automated system is GitHub Copilot's intelligent analysis. When a tracking issue is created, a GitHub Action workflow programmatically calls the GitHub Models API to analyze the report. GitHub made a strategic choice to use stored prompts (custom instructions) instead of model fine-tuning. This allows any team member to update the AI's behavior via a simple pull request, eliminating the need for complex retraining pipelines or specialized machine learning knowledge. When accessibility standards evolve, the team updates markdown and instruction files, and the AI's behavior adapts with the next run.
GitHub Copilot is configured with custom instructions developed by their accessibility subject matter experts. These instructions serve two critical roles:
- Triage Analysis: Classifying issues by WCAG violation, severity (sev1-sev4), and affected user group.
- Accessibility Coaching: Guiding teams in writing and reviewing accessible code.
The instruction files refer to GitHub's accessibility policies, component library, and internal documentation, providing Copilot with a comprehensive understanding of how to interpret and apply WCAG success criteria.
The automation unfolds in two key steps:
- First Action: Upon issue creation, Copilot analyzes the report, automatically populating approximately 80% of the issue's metadata. This includes over 40 data points such as issue type, user segment, original source, affected components, and a summary of the user's experience. Copilot then posts a comment on the issue containing a problem summary, suggested WCAG criteria, severity level, impacted user groups, recommended team assignment, and a checklist for verification.
- Second Action: This subsequent Action parses Copilot's comment, applies labels based on the assigned severity, updates the issue's status on the project board, and assigns it to the submitter for review.
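The second Action's parsing step can be sketched as extracting the severity line from Copilot's comment and converting it into a label. The comment format and label naming below are illustrative assumptions, not GitHub's actual schema:

```python
import re

# Sketch of parsing Copilot's triage comment: find the severity line and
# map it to a repository label the Action can apply.

SEVERITY_RE = re.compile(r"Severity:\s*(sev[1-4])", re.IGNORECASE)

def severity_label(comment_body):
    """Return a label like 'severity:sev2', or None if no severity is found."""
    match = SEVERITY_RE.search(comment_body)
    if match is None:
        return None
    return f"severity:{match.group(1).lower()}"
```

Returning `None` on a parse failure matters here: it gives the Action a clean signal to route the issue to a human rather than apply a wrong label.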
Crucially, if Copilot's analysis is inaccurate, anyone can flag it by opening an issue describing the discrepancy, directly feeding into GitHub's continuous improvement process for the AI.
Human Oversight and Iterative Accessibility Enhancements
The workflow emphasizes human oversight and collaboration. After Copilot's automated analysis, the "submitter review" phase (step 3) lets the human submitter verify the AI's findings. This human-in-the-loop approach ensures accuracy and allows for manual corrections, or for flags that feed Copilot's continuous improvement process. The subsequent steps—Accessibility Team Review, Link Audits, and Close Loop—further integrate human expertise, ensuring that complex problems are addressed by specialists and that users receive timely, effective resolutions.
This dynamic system represents a significant shift for GitHub. By leveraging AI to handle the repetitive and data-intensive aspects of feedback management, they've transformed a chaotic, often stagnant process into a continuous, proactive engine for inclusion. This means that every piece of accessibility feedback is now reliably tracked, prioritized, and acted upon, moving beyond promises of "phase two" to deliver immediate, tangible improvements for all users. The ultimate goal is not to replace human judgment but to empower it, freeing up valuable time and expertise to focus on strategic fixes and foster a truly accessible software experience.
Original source
https://github.blog/ai-and-ml/github-copilot/continuous-ai-for-accessibility-how-github-transforms-feedback-into-inclusion/