February 2026 Comments on HHS Health Sector AI RFI
February 23, 2026
Thomas Keane, MD, MBA
Assistant Secretary for Technology Policy, National Coordinator for Health Information Technology
Mary E. Switzer Building
330 C Street SW
Washington, DC 20201
Filed via regulations.gov
RE: HHS Health Sector AI RFI
Dear Dr. Keane:
The American Foundation for the Blind (AFB) is a national nonprofit that creates equal opportunities and expands possibilities for people who are blind, have low vision, and are deafblind through advocacy, thought leadership, and strategic partnerships.
In 2025, AFB conducted a nationwide survey of 1,735 adults, of whom 1,070 have a disability or medical condition, investigating their experiences with, hopes for, and perceived risks of Artificial Intelligence (AI). The full results of the study will be released in March 2026, but many of the findings pertinent to healthcare are described below. We aim to answer Question 10: What challenges within health care do patients and caregivers wish to see addressed by the adoption and use of AI in clinical care? Equally, what concerns do patients and caregivers have related to the adoption and use of AI in clinical care?
Critically, AI used in healthcare must be developed in a way that avoids unfair treatment of people with disabilities. In our AI research, survey participants with disabilities were almost three times as likely to report a healthcare denial within the past two years as participants without disabilities (27.7% vs. 10.8%). The most common type of denial was health insurance denial. Although the research was not able to verify the actual cause of the denials, 47% of disabled and 35% of nondisabled participants stated that they suspected AI was involved in the denial.
The disproportionate rate of denials for disabled people means that there is a need for systemic safeguards ensuring that AI used in decision-making does not unfairly deny care to people with disabilities. There is both an existing legal obligation to ensure that technology used in healthcare does not discriminate on the basis of disability and a looming trust problem if patients come to assume that any denial is attributable to AI. It is concerning that there is often limited or no transparency about when AI is being used, so patients may not fully understand a denial decision. In addition, training data and decision parameters must be inclusive of people with disabilities with a wide range of health conditions and care needs. HHS has an oversight and enforcement role to play in ensuring that these systems are deployed responsibly. Actively addressing disproportionate healthcare denials and other instances of bias will also help ensure that patients trust the healthcare system to deliver the care they need in a timely manner.
Participants were asked about the use of AI for mental health therapy. AI therapy could reduce some barriers to care (such as cost or provider availability), but it also presents significant risks of harm to people with disabilities if it is not appropriately trained. Although relatively few participants in our study had used AI for therapy, there were several reports of harm. Some of these reports were relatively minor, such as chatbots offering surface-level or mismatched responses. However, four participants reported more significant harms, such as clinically inappropriate advice (e.g., suggesting dieting to a person with an eating disorder), validation of psychotic delusions or suicidal thinking, and exacerbation of crisis situations. One participant described a distressing experience in which the chatbot suggested that she consider the death-positivity movement after she vented about a chronic health condition. Beyond these especially concerning cases, only 29% of all participants who used AI for therapy thought that AI could be more helpful than human therapy. There is a need to ensure that AI used in care, including mental healthcare, is trained on appropriate clinical decision-making, that it demonstrates appropriate disability awareness, and that guardrails are in place to prevent it from facilitating or encouraging self-harm.
From an accessibility perspective, AI presents real potential to expand access to information and communication for people with disabilities. However, given the importance of accurate communication in a highly specialized healthcare setting, great care should be taken when deploying AI to resolve communication barriers. Among our study participants who used AI-generated captions, only 4% rated the captions as extremely accurate; similarly, only 7.5% of those who used AI to describe images or the visual environment rated it as extremely accurate. In a healthcare setting, mistakes can be dangerous. One participant described a situation that made her realize that she could not rely on AI in situations requiring a high degree of accuracy:
“[I was] using AI to read package instructions on a tube of topical medication and when I asked what the directions for use were specifically, AI told me to ‘chew 4 tablets 2 times per day.’ Luckily this was an obvious error, but for now I will NOT be using AI for these tasks [and] will always confirm with a human.”
AI may one day be accurate enough to provide communication assistance in lieu of a human interpreter, captioner, or reading assistant, but it is vitally important that vendors of such tools be held to high quality standards.
Another area where our findings bear significantly on the healthcare system is concern about privacy and data protection. Although patient information is legally required to be protected, individuals are broadly concerned about whether AI systems can actually protect their sensitive information. Participants were presented with several scenarios in which they were asked whether they would prefer to rely on a human or on AI for assistance. In each case, participants were significantly more likely to prefer a human over AI if their sensitive information was involved. In addition, participants believed that AI is somewhat less private than working with a human: 18% stated that AI is much less private than humans, and 37% stated that AI is slightly less private than humans. Another 29% stated that AI and humans are equally private. Only 16% stated that AI is somewhat or much more private than humans. Participants also generally reported either preferring privacy over the efficiency and independence AI can provide, or valuing both equally. They expressed a desire for transparency, control over their data, consent, and stronger guardrails to protect privacy and data security.
Altogether, the findings suggest that disabled and nondisabled people alike want AI to be thoughtfully and carefully designed and deployed. There are certain cases in which AI can increase efficiency for providers and improve independence for individuals and their families. However, there is already growing evidence that AI developers and deployers must ensure that AI is accurate, does not disadvantage certain groups, and is trained on inclusive data. Personal data must be safely stored, deleted as soon as practicable, or processed on-device where appropriate, and rigorous data privacy protections should be expected in health use cases. Users should be informed when AI is being used and given an opportunity to control how their data is stored or shared. HHS should oversee the development and deployment of AI to ensure that healthcare use cases are clinically appropriate and that no group of people faces disproportionate denials of care. HHS should also invest in research to improve the data and model validation tools and practices used by health AI companies. By developing and requiring the use of guardrails protecting people with disabilities, HHS can ensure that the onus does not fall solely on deployers of AI to understand its effects on patients.
Thank you for the opportunity to respond to this RFI. If you have any questions about this issue, please contact Sarah Malaier, smalaier@afb.org.
Sincerely,

Stephanie Enyart
Chief Public Policy and Research Officer