Dear AccessWorld Readers,
As I use a screen reader to interact with my technology each day, I have noticed something about how it interacts with apps and software that I thought I would expand upon here. There seem to be two distinct ways that screen readers gain information about an interface; I have come to think of these as "Pull" and "Push." In an interface that uses the "Pull" method, your screen reader pulls information directly from the interface and interprets it for you. A chief example is the web: your screen reader pulls information from a website's code and then presents it to you. In the "Push" method, the screen reader cannot interpret the interface itself; instead, it is deliberately told specific information to report to you as you navigate. Google's suite of applications seems to use this method: if you do not specifically turn on screen reader mode, your screen reader will not be able to interact with a document or spreadsheet. With accessibility activated, as you interact, your screen reader is fed a stream of content to read. This is most noticeable when typing; you might notice that the character echo built into apps such as Docs and Sheets does not add the higher pitch when you enter capital letters, seemingly because it is not possible to include this information when telling the screen reader what to say.
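For readers curious about how "Push" works under the hood on the web, the usual mechanism is an ARIA live region: the page writes text into a specially marked element, and the screen reader announces whatever appears there. The sketch below is my own illustration of the general technique, not the actual markup Google uses; the element name and helper function are hypothetical.

```html
<!-- A live region. Screen readers announce any text that scripts place
     inside it, even though the user never focuses or navigates to it. -->
<div id="announcer" aria-live="polite"></div>

<script>
  // Hypothetical helper: "push" a message to the screen reader by
  // writing it into the live region.
  function announce(message) {
    document.getElementById("announcer").textContent = message;
  }

  // For example, after the user types a character in a custom editor:
  announce("capital A");
</script>
```

Note that the only thing that travels through a live region is plain text, which is exactly the limitation described above: a cue like a pitch change for capital letters has no way to ride along, so the app can only spell it out in words, if at all.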
This has interesting implications for accessibility. When a screen reader is pulling information directly, it has some creative license, so to speak, in what it tells you. If you visit a page using NVDA, JAWS, and VoiceOver, not to mention different web browsers, you will find that the site is presented slightly differently in each. This also gives you more options in how content is presented. For example, if a website has accessibility issues, I can go into the View menu in the Firefox web browser and turn off the style sheet to gain access to elements that are normally hidden or inaccessible. That said, even when a screen reader is pulling information, the site or application needs to be built with the protocols the screen reader uses to access elements, or it will either be entirely inaccessible or present confusing and/or incorrect information.
When information is pushed directly to a screen reader, the developer can generally customize exactly how their application is experienced by a user who is blind or has low vision. This can mean better consistency across platforms, but, though I am not a programmer myself, it seems it would require a good deal of extra effort on the developer's part to specify the exact result of every interaction the user makes. Following accessible design standards from the beginning seems much more efficient. In addition, when a screen reader is directly fed information, there is no room for interpretation; the user cannot change how the screen reader presents the information, as in my style sheet example above.
Overall, I can see benefits to both methods. When a screen reader pulls information directly, this generally means that accessible design standards were used in creating the app, site, or desktop software, though often the interface is only partially accessible or is interpreted strangely by the screen reader. Pushing information directly allows a more consistent presentation, but I assume it requires much more work on the developer's part and does not provide the flexibility afforded when pulling information directly.
I would like to give a shout-out to our awesome authors, who work hard to bring you useful and interesting content each month; this publication is what it is today because of their diligent efforts. I would also like to thank you for being an AccessWorld reader. Knowing that you read and find value in our content is a powerful motivator in our work.
AccessWorld Editor in Chief
This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.