Application • 55:27
The Universal Access capabilities of Mac OS X are being enhanced with a spoken interface that provides a new way—through speech, audible cues, and keyboard navigation—to access the Macintosh. In this session, we provide details on these spoken interface extensions as well as a thorough overview of the Accessibility API. This session is a must for developers who are interested in making their applications Section 508-compliant.
Speakers: Eric Seymour, Rick Fabrick, Aaron Haney
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
[Transcript missing]
how important the keyboard is. I guess that's an understatement. And so it deserves a little bit of discussion. When you're using VoiceOver, you're going to spend a lot of time using the keyboard. And you're going to spend a lot of time using the control and option keys, which are used for activation.
In other words, if you want to issue a command on your computer that VoiceOver will interpret, you hold the Control and Option keys down and then do something. Then there are keys designated for orientation: tell me where I am in my environment, what's on the screen, how many apps are running, right? And then there's also navigation: I want to move around, getting to every possible thing that might be on the screen. Now, a couple of things about this.
Number one is, well, several things. So the first is there are lots of keyboard combinations to produce commands in a screen reader. This one is no exception. To help improve the learning experience for somebody who's coming to grips with this product to get to know it, we provide two main features.
The first is the smart menu, which is audible. You basically, using spoken interface, you're moving around and you go, "I have no idea what to do at this point. What do I do?" So you hit one memorable key combination, control option F5, and you're presented with an audible menu and it will be context sensitive. It will have only the things in it that you could do at that given point in time.
So if you're in a text field, there are lots of things you can do in a text field because you can select text, you know, you can--it's a more complicated element. But if you're on top of a button, there might be only one or two things you can do. You can press it, you might be able to read its title, things like that. So the smart menu helps out a lot. And then, of course, there's help.
Always the same consistent way to get help. You also might notice that we're rooting our modifier keys off the same keys. Other screen reader products, perhaps because of lower integration into the operating system itself, root themselves off several keys around the keyboard. So, a couple more things to mention. First is, we realize that the keyboard is overloaded.
If you design an app that has lots of keyboard combinations defined, you know that already even before spoken interface comes along, there are occasions where keyboard combinations which Apple chooses sometimes conflict with ones that ship in your application. Spoken interface again adds to that. So we're designing some techniques to allow the user to work around that effectively. To specify when am I going to issue a voiceover command, when am I going to issue a command that might want to go directly to your application.
So we're solving that problem. And the other big piece of feedback that we've gotten is screen reader software is not terribly well designed typically for notebook keyboards. Because of the amount of keys that you want to use, sometimes they take advantage of the number pad or some of the keys that are hard to reach on a notebook keyboard.
And so we're also designing this for a notebook keyboard. We're using it as the least common denominator, so that we don't come up with an environment that, once you go mobile, is really difficult to use. Okay. So that's the keyboard.
Now, to talk about a screen reader, we also have to talk about navigation because we need to provide a way to get around the environment that even the mouse and the keyboard can't get to. So when we navigate, especially sighted users when they navigate, all of us take advantage of two main things, the keyboard cursor and the mouse cursor. And-- So, they're both very tangible. If we're sighted, we can see them. If we're not relying on our sight, spoken interface, voiceover provides audible feedback as to where those things are if we want it.
But there's more than that because as sighted users, we take advantage of our eyes, right? We have the ability to glance, look around our screen. And if your only feedback is sound and all you're listening to is where's the mouse or where's the cursor, you don't have the ability to look at the time or to read an iChat that just came in while you happen to be typing an email message.
And we want to solve that problem. So, as part of VoiceOver, we're introducing a new cursor, which is the VoiceOver cursor. The VoiceOver cursor is separate and distinct from the other two, and its job is to allow you to get to everything on the screen. So here, I've got a picture of the VoiceOver cursor moving over a user interface, and it's moving from button to button.
Now, the voiceover cursor, of course, isn't just visual or it wouldn't be of much use. Could you turn on sound for the presentation machine, please? Let's try it again. Okay. So when the voiceover cursor moves over things, it also tells you where it is. Colors button. Colors button. Save as draft button.
Text. Now, here's a case where the VoiceOver cursor moved to an item which you can't get to by any other means, using keyboard navigation or the mouse, because it's just static text, just a blob of pixels sitting on the screen somewhere; it's otherwise unreachable. The VoiceOver cursor's goal is to get to everything. So in this case, it got to what appears to me, based on my audible feedback, to be a prompt in Mail that tells me what might be coming next. To text field.
And now I'm on top of a text field. So before, it told me it was text—told me it was a static thing, I couldn't edit it, but it's text. And then it told me it was a text field, which means it's editable. Now, what this does is it lets me build a mental picture of my user interface and navigate to every aspect of my interface, not just the things that full keyboard navigation can get me to. So it helps me learn and discover. Now, this concept gets a little heady, so the best way to talk about it is to introduce Rick Fabrick on stage to give you a demonstration of VoiceOver. So, Rick? Thank you, Eric.
Good morning, everybody. As Eric mentioned, my name is Rick Fabrick, and I'm an engineer on the spoken interface team, the VoiceOver team. And for the next 10 or 15 minutes, I will be going over the general usage of the spoken interface so that you can use it as a tool to test the level of accessibility in your applications. Now, before I start, there's two things that I want you to know. The first is that throughout the demo, I will not be using the mouse. In fact, I'm going to unplug it. Here's the mouse. Everything I'm going to be doing will be through the keyboard only.
The second thing is that I am going to be using a lot of voiceover commands. So to save time, I'm not going to be telling you what the actual keystrokes are, but Eric went over a couple of them, and you can find out what the rest of them are through documentation. Alright, so orientation. One of the features Eric mentioned that screen readers need to provide is an easy way for users to get oriented with the state of their system, and I'm going to go over some of that right now.
Now, you may be able to see the screen and notice that it's blank, and that's intentional. A great percentage of the Spoken Interface users are going to be blind, so I want to give you a better sense of how they're going to be using your applications with the Spoken Interface.
What I'm using here is a test tool that we call the Screen Curtain. And all it does is it dims the screen so that you won't get any more information than your users will. That gives you a better chance of finding holes in your accessibility. All right. So let's try this.
As an example, let's say a user who's blind just sits down in front of their computer. They can't see the screen, and they need to find out what's going on, where they are on the system. So with a few voiceover commands, they can get some useful information. So let's try that.
With one command, we get this. Text edit is active. Five running applications. So text edit is active, and there are five running applications. With another command. Active window. Accessibility development. The active window is accessibility development. And with another command, we get this. Text editor has keyboard focus. A text editor has keyboard focus. So with three commands, I found out that I'm editing a text edit document called accessibility development.
Alright, so I've gotten to the point where it's really difficult to follow along if you're sighted. So what I'm going to do is turn off the screen curtain, so if you can see the screen, it'll be easier for you to follow along. Disable screen curtain. Alright. Now, as Eric mentioned before, we have the VoiceOver cursor, so I'm going to bring that up and make it easier for you to find out where I am. Show viewfinder.
Alright, so if you can see the screen, you can tell that I am focused on the first word in this document. Now I can move the voiceover cursor either character by character, line by line, word by word, sentence by sentence, paragraph by paragraph. So what I'm going to show you here is moving word by word.
Accessibility defines the techniques that— Okay. That space— Now character by character. —space, T-H-A-T. Now what I want you to notice is when I move over to the letter M, the pitch of the voice will increase. And what that does is it lets you know that that letter is capitalized. So you don't need to be able to see the character to know that you have a capital letter. Control Option Locked. Control Option Lock Enabled. Ignore that. Control Option Lock Disabled. Space, M-A-C. Okay. Let me do a couple more letters. Space, O-S. Okay.
Accessibility. A feature that you may find useful as you're going along and you're getting used to the voices is to be able to speed up the rate of speech. Once you get used to it, they start to feel like they're too slow. So let me—with another command, you can speed up the rate of speech. Faster. Faster. Faster. Faster. Faster. Faster. And let me give you an example of what that would sound like.
Accessibility defines the techniques that Mac OS X applications use to make their user interf— Okay. Yeah. So that may be unintelligible to most of us, but users who are used to screen readers typically do have the rate of speech that high. Now, another feature I want to show you is being able to read the entire document. So with one command, you can just have it start reading everything.
Alright. So now let's say the user wants to add some information to this document. So what I'm going to do is move the insertion point to the bottom, and I'm going to use the standard arrow keys. Now, there are two things I want you to notice as I'm doing this.
As the insertion point is moving from line to line, VoiceOver will be speaking out the text on that line. Now I'm not going to be waiting for it to finish. The second I know that I'm not where I want to be, I want to be able to just continue on. So I'm going to be interrupting VoiceOver through this and in a couple of places in the rest of the demo.
The second thing is that the VoiceOver cursor is going to still be at the top of the document. So it's important to make the distinction that that is separate from the insertion point, as Eric mentioned before. So let me move down. Applications used to make their user interface. No or no. I think maybe I'll decrease the rate of speech so that we can understand what it's saying. Available to an external assistive app to allow apps to new protocol. Okay. New line.
So now I'm just going to start entering text. A-P-P-L-E space. Now, as you notice, as I type each key, VoiceOver tells you what letter is being typed. That can be useful, but if you're just typing along, it can be too much information. So what I'm going to do is turn on a feature that we call Word Echo: instead of hearing each letter, you hear each word.
Enabling Word Echo Mode. And let me put some more information in here. Computer is at San. Francisco. Period. All right. So now let's say the user really wants that name, San Francisco, to stand out. So let's say I'm going to make it bold. To do that, I'm going to go up to the menu bar, and I'm going to use the standard full keyboard navigation key command, Control-F2. Menu bar onto Apple with submenu.
What's new here is that you get the audible feedback. Now I can arrow around the menus. Text edit with file with submenu. And what's new in Tiger, if I know that there is a menu, In this case, let's say format. I can just start typing format and it will jump me directly to that menu.
Format with submenu. Okay, so let me go into the menu. Format menu. Font with submenu. I know this is a submenu, so I'm going to go into it. Font menu. Bold, Command B. Bold, Command B. So I'm going to select that. Bold. I should have selected text first, so let me do that. San Francisco. There we go. Highlighted.
Bold. And apparently that isn't standing out quite enough, so I'm going to need to add some color to make it more lively. So I'm going to bring up this color panel with a standard key combination, command shift C. Show colors. I can move keyboard focus to that panel with the standard key combination Ctrl+F6. VoiceOver will announce the window that you move to and the item that you move on to. And now I'm going to move the VoiceOver cursor to the crayon color picker.
Color wheel, color sliders, color palettes, image palettes, crayons, button. And select it. Press crayons, button. All right, and up opens a box of crayons. Now, well, the color panel is actually in my way. I can't see my text that I want to change, so I can ask VoiceOver to move the window. Now, keep in mind this is without the mouse. Moving window. Step right. Step right. Step right. Step right. Okay. And now I can move down and select the color that I want to change the text to.
Stop moving window. Maroon. Strawberry. Carnation. Strawberry. Press strawberry. Okay, I like strawberry. All right, so there we go. That is reading text and editing text. So now let's say the user needs to check their email. What I'm going to do is move to the Mail application, and I'm going to use the Dock to do that.
I'm going to use the standard key combination, Control-F3. Finder, running application. Again, what's new here is the audible feedback. I'm going to arrow around to mail. System preferences, Safari, terminal, text edit, mail, running application. And select it with the space bar. Mail application, window, drafts, three drafts. All right, so VoiceOver announces the application and the name of the window.
So I can navigate around here among the controls, whether they're disabled or not. Reply, disabled button. Delete, disabled button. And that's important, because even though I can't interact with that button right now, I need to know where it is, so that when I do need to interact with it, I can go to it. I can move down to static text like Eric showed. In, Three, Button. Three messages, Text.
So now the user needs to check their email. So what I'm going to do is I need to go into this table, select an email, move out, down, and into the body of the mail. Now keep in mind, for those who can see, all that requires is I move the mouse, click on the list, glance down, and start reading. I want to read the next email. I move the mouse a little, click, glance down, and read. This is what the default voiceover behavior would require the user to do. Control Option Lock Enabled. Control Option Lock Disabled.
Table. Into table, on row one, status column. Select. Selected. Focusing keyboard on table. Out of table. Splitter. Scroll area. Into scroll area. Onto text entry area. Text entry area. And now I can ask the spoken interface to read that email. Eric, I'm going to need three months off. Okay. Rick.
So this is a good example of where making your applications accessible is a lot more than just providing strings for the UI. You're going to need to identify areas in your application where it's much more difficult for somebody using a spoken interface to do something very simple like check your email.
And Eric's going to tell you how to do this when he comes back on. Basically, in this case, Mail has a situation where two UI elements are linked together in some way, by some meaning that your application defines. But you can't get from one to the other directly through VoiceOver.
So VoiceOver and the Accessibility API provide you a way to just stipulate: link these two elements together. All right. And the last thing that I want to show you is this: we provide a list of elements so that you can easily go to one place or another on the screen. So let's say, for instance, you know that you have an item on the screen, in this case, "Get Mail," and you want to go directly to it.
You can bring up a list with a VoiceOver command. Building item list. Item list menu. 46 items. So there are 46 items in this window, and all I need to do is start typing some portion of that item's name.
[Transcript missing]
Okay, that's pretty much all of the features that I wanted to show you today, but there are many more, so I urge you, please, use VoiceOver to test the level of accessibility in your applications so that it's as easy to use for everybody and not just those who can see. And with that, I'd like to ask Eric Seymour to come back on. Thank you very much.
OK, so of all the things that Rick showed, the most important thing that he showed for many of you is the screen curtain. And that is turn that screen curtain on and try to figure out if you can use your application. That's probably the most effective testing tool you can do.
Because if you can't use your application (and you know it, you coded it), then a user who's not relying on the visual display won't be able to use it either. So, screen curtain: very important. VoiceOver itself is effectively a testing tool. So let's keep going. We've given you an overview of VoiceOver, Apple's new spoken interface. Now I'm going to talk about Accessibility API basics.
So to do that, I'm going to frame this within the context of VoiceOver. So applications are out there running on the system. Some of those applications are built by Apple. Many of them, hopefully more of them, are built by you guys. And those applications-- VoiceOver is this process that's also sitting out there on the system, and its job is to listen to your applications and sometimes request things of your application and produce feedback for a user.
VoiceOver does this by sitting on top of a set of spoken widgets. Just as Carbon and Cocoa have widgets that are very graphically based and mouse-based, and sometimes keyboard-based, these widgets are founded on spoken feedback and keyboard input. And these widgets sit on top of the Accessibility API: notably, attributes of things in your interface, actions, and notifications.
Now, what this means is it's really important that your applications give accessibility clients like VoiceOver the information they need to make the end user successful using your application. So, that's what we're going to talk about here. And then there's a whole other session tomorrow which goes into much greater detail, Session 424. I'll plug it several times today, don't worry.
So, one of the questions that comes up immediately is, "Well, we write our application in Cocoa, or we write it in Carbon, and how do you know the difference, and how do you make heads or tails of our view hierarchy?" You know, they're two completely different programming models, for the most part.
And the beauty of this is the Accessibility APIs abstract that away for us, they provide you a way to speak to us or any other accessibility client in a common language. So, everything in your application is a UI element. And literally, when I say everything, I mean everything. So, Windows are UI elements.
Things inside Windows are UI elements: buttons, text fields, pop-ups, what have you. Things within things in your window are UI elements. So you could have a scroll area which has children, and its children might have children. And so there is ancestry—there's a concept of ancestry to your UI elements. In fact, your application is a UI element. And so what does this really mean for us? Well, let's take a closer look at the three main pieces of accessibility: attributes, actions, and notifications.
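The UI element hierarchy just described can be pictured with a toy model. This is plain Python and purely illustrative, not the real API: in the actual C client API an element is an AXUIElementRef and attributes are read with functions such as AXUIElementCopyAttributeValue, but the class below only sketches the concept of a uniform element tree with ancestry.

```python
# Toy model of the accessibility UI element hierarchy (illustrative only).
class UIElement:
    def __init__(self, role, parent=None, **attributes):
        self.attributes = {"AXRole": role, "AXChildren": [], **attributes}
        self.parent = parent
        if parent is not None:
            parent.attributes["AXChildren"].append(self)

    def attribute(self, name):
        return self.attributes.get(name)

# Everything is a UI element: the app, its windows, and things inside windows.
app = UIElement("AXApplication", AXTitle="TextEdit")
window = UIElement("AXWindow", parent=app, AXTitle="Accessibility Development")
scroll = UIElement("AXScrollArea", parent=window)
text = UIElement("AXTextArea", parent=scroll)

# An accessibility client walks the ancestry the same way whether the app
# was written in Carbon or Cocoa; the API abstracts that difference away.
def ancestry(element):
    chain = []
    while element is not None:
        chain.append(element.attribute("AXRole"))
        element = element.parent
    return chain

print(ancestry(text))  # ['AXTextArea', 'AXScrollArea', 'AXWindow', 'AXApplication']
```

The point of the sketch is the uniformity: a client like VoiceOver never needs to know your view hierarchy's native types, only the element tree you expose.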
[Transcript missing]
Finally, there are notifications. Notifications are really important to an accessibility client. They're also probably the piece you're going to deal with the least, but they're really, really critical. When I say deal with them the least: notifications are, again, knitted into the frameworks and critical for the operation of an accessibility client.
You're going to get some of this behavior for free. A lot of the notifications that we depend on are application-provided, which means even if you're a fairly custom application, at the application level we're going to get all the notifications that we need. This is a more advanced topic, and it's one where I really, really want you to go to Session 424 tomorrow at 10:45, I think, in the same room. That's notifications. So, the basics of accessibility: attributes, actions, notifications; standard behavior out of the box, typically, with Carbon and Cocoa.
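The notification flow can be pictured with a small sketch. This toy Python model stands in for the real mechanism (an accessibility client actually registers with the C functions AXObserverCreate and AXObserverAddNotification); the class and callback here are illustrative only, showing the app-posts, client-reacts shape of the design.

```python
# Toy sketch of accessibility notifications (illustrative, not the real
# AXObserver C API): the frameworks post on the app's behalf, and a client
# such as VoiceOver reacts with spoken feedback.
class NotificationCenter:
    def __init__(self):
        self.observers = {}

    def add_observer(self, name, callback):
        self.observers.setdefault(name, []).append(callback)

    def post(self, name, element):
        for callback in self.observers.get(name, []):
            callback(name, element)

spoken = []
center = NotificationCenter()
# The client listens for focus changes...
center.add_observer("AXFocusedUIElementChanged",
                    lambda name, el: spoken.append(f"{el} has keyboard focus"))
# ...and the frameworks post when focus moves, often without app-specific code.
center.post("AXFocusedUIElementChanged", "text editor")
print(spoken)  # ['text editor has keyboard focus']
```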
So now we come to how do you make your application a really great application? And of course, I'll add some flash to it. How do you make the five-star application, an application that to a user who's not relying on the graphical user interface, how do you make that something that that user can appreciate and really enjoy and make that user want to use a Macintosh and want to purchase your application? So number one is all the basics need to work. So all the things that I just talked about. And so if you were to build a basic application, again, using some of our standard tools, the basics are going to work nine times out of ten.
There might be a few details to handle, depending on the way you've designed your application. But at a bare minimum, you want to be able to get to everything using VoiceOver: the VoiceOver cursor needs to move to all the controllable or readable things in your application, and your application needs to support the basic Accessibility API attributes.
Then, this is where only you can help provide the user with information about your application. There is a concept of descriptions. The best way to show this is really to go right to an example. The basics of descriptions are documented: kAXDescription is a new attribute that's defined in the accessibility frameworks, and there are some rules associated with it: no punctuation, lowercase.
I'll save that for your future perusal, but the example is really where the rubber meets the road. So, if I'm in Finder and I've got a toolbar, there might be an item on that toolbar that has no label, no title. So, if I have vision, I can look at that and discern that that symbolic left arrow probably means go back, go to the previous thing, common metaphor on a computer. But this is all that can be gleaned by an accessibility client like VoiceOver. Button. Button. Let me do that one more time.
Button. Okay. Not very useful. Close your eyes and listen to that. You're like, okay, which one? I don't know which one. I don't know what to do. What do I do next? Right? So, what you really need to do -- Back to previous container. Button. -- is add a little context.
And only you know that context. The Accessibility API can't make that up on the fly. So we ask that you add it in certain places: in particular, for items in your user interface which don't have a title representation, or which have some sort of symbolic representation, we ask that you provide a little bit more context so that VoiceOver can echo that information.
Here's another example. List. Okay, we know that this is a list, but that doesn't tell us very much. As a user, what we really want to hear is: Sidebar list. Okay. Sidebar. Well, that's meaningful to me, because I'm a Finder user and I know the Finder has a sidebar, and that's where I put my stuff; that's how I get to things. It has context.
Just to note, there is a way to create good and bad descriptions when you're labeling some of your user interface. A way to create a bad description is to go overboard, to add too much information. So, it turns out that the Accessibility APIs know about every widget in the interface, generally speaking, especially if you're using standard widgets. And those have attributes that are role descriptions which can tell spoken interface the word "button" or the word "list" or the word "menu item" or what have you. We already know that information. And so don't add that information to your description.
It'll be redundant. Only add the piece that identifies or differentiates the actual functionality of that particular widget. So, here's an example of adding too much information. Back to previous container. Button, button. If you add the word "button" to the end, we're going to echo it twice, because we have to assume that your description was clean and contained only what it needed to. Here's another bad example: "Sidebar list list." Okay, so: not too much in the descriptions. So that's descriptions. That's for labeling an element that has a symbolic representation and doesn't have a visual title.
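The do's and don'ts of descriptions can be condensed into a tiny sketch. This is illustrative Python, not the real API (the actual attribute constant is kAXDescriptionAttribute, and the role description like "button" comes from the frameworks); it just shows why appending the role to your own description produces redundant speech.

```python
# Toy sketch of how a screen reader composes speech: your description plus
# the framework-supplied role description (illustrative only).
def spoken_label(description, role_description):
    parts = [p for p in (description, role_description) if p]
    return " ".join(parts)

# Good: the description carries only the differentiating context.
print(spoken_label("back to previous container", "button"))
# -> back to previous container button

# Bad: repeating the role in your description gets echoed twice.
print(spoken_label("back to previous container button", "button"))
# -> back to previous container button button
```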
Now, there's another type of element which is sort of similar to this, another type of attribute. Sometimes in your user interface there are things that have labels. You label them: you drag them out of Interface Builder, you set up your little static text to be a prompt.
But the problem is, sometimes that prompt is above, sometimes it's below, sometimes it's to the left, sometimes it's to the right. Sometimes it itself is a symbol. Now, as a sighted person, I could look at that and figure out through the context and layout and all the visual cues what that means, but if I don't have that information, I don't know what it's representing.
And so there's a way to associate two elements so that one element can serve as the title of another. This is done through the title UI element attribute, and we'll show you a way to do this later. This is really, really important. If you've got things that serve as prompts, it's more than nice, it's somewhat of a required feature for a VoiceOver user: when I land on that text field, look at that prompt to tell me what that text field's name is. All right.
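The association can be sketched in a few lines. This is a toy Python model, not the real API (the shipping attribute is kAXTitleUIElementAttribute; the Element class and announce function below are purely illustrative): a static text prompt serves as the title of an otherwise unlabeled text field.

```python
# Toy sketch of the title-UI-element association (illustrative only).
class Element:
    def __init__(self, role, value=None, title_ui_element=None):
        self.role = role
        self.value = value
        self.title_ui_element = title_ui_element

def announce(element):
    # When the cursor lands on the field, the client follows the association
    # to the prompt to find the field's name.
    if element.title_ui_element is not None:
        return f"{element.title_ui_element.value} {element.role}"
    return element.role

prompt = Element("static text", value="Name:")
field = Element("text field", title_ui_element=prompt)
print(announce(field))  # Name: text field
```

Without the association, the field would be announced as a bare "text field", no matter where the prompt sits visually.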
And then finally we get to link UI elements. So in the demo that Rick gave, he showed mail. And mail, if you're users of mail, of Apple's mail client, there's a table and there are a bunch of messages, and lots of times if you're like me, you've got 3,000 or 4,000 messages in there, and then you've got your text.
And what do we do every day when we read mail? We glance up and down, right? We glance at this selection, we select it, we glance down, we read, we glance up. And so those two things are really tightly coupled, but from a UI element perspective, there's no way for us to know that they're tightly coupled. There's no way for voiceover to figure that out on its own. You've got to give us that little hint.
And so there's this concept of a link UI element attribute, the ability for you to just, in your interface, say, "I want this thing to link to this thing." And then voiceover can make that a one keystroke thing for an end user. The user can then move the voiceover cursor from their mail item, or from the subject line, down to their mail message, read it, link back, select another, and they can just move back and forth, effectively glancing back and forth using the voiceover cursor.
This is perhaps one of the most important things you could do to your application to make it usable. So, if you're sighted and you notice yourself glancing between two major pieces of functionality, and it's more than just going from one button to another, you might need a link UI element.
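The idea can be sketched as follows. This is a toy Python model, not the real API (the shipping attribute is kAXLinkedUIElementsAttribute, new in Tiger); it just shows the back-and-forth "glancing" that a declared link makes possible between Mail's message table and message body.

```python
# Toy sketch of linked UI elements (illustrative only): the application
# declares the link; the client merely follows it with one keystroke.
class Element:
    def __init__(self, name):
        self.name = name
        self.linked = []

    def link(self, other):
        # Links are declared by the application, in both directions.
        self.linked.append(other)
        other.linked.append(self)

table = Element("message table")
body = Element("message body")
table.link(body)

# Moving the VoiceOver cursor along the link, back and forth.
cursor = table
cursor = cursor.linked[0]
print(cursor.name)  # message body
cursor = cursor.linked[0]
print(cursor.name)  # message table
```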
The fifth thing really has nothing to do with the accessibility APIs, but it's unbelievably important, and that's full keyboard navigation. So like Rick did for his demonstration, unplug your mouse and see if you can drive your application. And if you can, then you're successful. But if you can't, that makes it that much harder for a user who has limited or no use of a mouse to use your application.
Otherwise, it requires using the VoiceOver cursor to get to some of those elements. The best possible environment is one where every aspect of your application can be accomplished with full keyboard navigation. Now, one question that always comes up is: well, I have this drag gesture, and that's how I do things. What do I do about that? The best example I like to use for that is the Finder.
In the Finder, we typically copy files by dragging them from one place to another. But in the Finder, you can also effectively cut and paste files using the menu commands, and that serves the same purpose: select a file, copy from the menu, move to a new location, paste. It accomplishes the same task, and it's totally doable from the keyboard. So that's a perfectly acceptable alternative, and I would encourage you to provide something like it for the parts of your application that only have drag gestures.
So we get to the end of this list, we do the basics, and then we provide a few things to add context, and then we make sure that we've got full keyboard navigation. And so we have success. So one of the questions that I get, I get internally and the few times that I've had exposure externally to folks who are accessorizing their applications is, "How hard is this going to be for us to do?" And it looks really easy when you present it like this, but how hard is this in reality? And so the best way to answer that is a real simple answer: it depends.
It depends on what your application is. The real answer is going to come when you go to Session 424 tomorrow, but my answer is always: if you have a fairly modernly written Cocoa application or an HIView-based Carbon application, you're going to be in pretty good shape. The frameworks have had several releases of iteration, and things are going to happen automatically for you.
But if you have a lot of custom views in Cocoa, or you even have your own type of view, you've done some very, very custom thing, or you've got some legacy code, either Carbon or Cocoa, and you haven't taken the opportunity yet to bring it up to date to current techniques for accomplishing the same thing, now might be the time to do that, because that's going to be a bigger challenge.
So adding accessibility to your applications is going to be a bigger challenge in that case. But getting into a new market of users might be a catalyst for bringing some of your older code up to the approaches that are used today.
[Transcript missing]
And then finally, like I've already said, the most important tool is VoiceOver and the screen curtain. Use this to test your application. This is where the rubber meets the road. If you don't do it first, somebody will. A school teacher will, an employer will, a government agency will.
It's really, really important that this new interface that's part of Aqua, if you will, is something that somebody's going to look at your application with. And you want to make sure that your application behaves and works well using this interface. So with that, I'd like to invite Aaron Haney on stage to give you a demonstration of some of these tools.
Thanks, Eric. So what I'm going to go over here is just a quick example. What I have is a small Hello World style app, and I'm going to take a look at it with the accessibility tools just to show you the procedure for checking out your application and seeing where it stands in terms of accessibility.
So just to start off with, let's just build and run the app. And you actually saw this in the screenshot when Eric was going through the slides a moment ago. All it is is a little Hello World application with a place for the user to enter their name and a nice OK button to click.
So, the first thing you want to do is take a look at this application using Accessibility Verifier. And as Eric mentioned, there's a new folder inside Developer Applications Utilities called Accessibility Tools. And that's where you'll find Accessibility Verifier. And when you launch it, you get a blank window like this with an application pop-up. And from that application pop-up, you want to select—I apologize for the small font—you want to select your application as it's running. This is runtime checking.
[Transcript missing]
There are several tests you can run. Click the Choose Test button, and each one has an explanation. You can turn them on and off; they're all on by default, and we recommend that you run them all. When you click Verify, it performs the tests. Once it's done, you get a list of warnings.
It's very comprehensive, so you may get a lot of warnings. So what we have here is a filter to look at just the critical ones. And you'll notice that some of these are highlighted like links. So when you click on it, it actually jumps to the point in your hierarchy where the error exists.
I'm going to focus on this one where it says, "Missing AXTitle with no AXDescription attribute." And what that refers to, when I click on it, is a button inside the "Accessibility Demo" window. It says there's no description and no title. Well, now that I know where it is, I'd like to go take a look at it in more detail, so I'm going to launch Accessibility Inspector. And let's bump up the font just a little bit so that people can see it.
This is a lot like Pixie: it just follows wherever you've got the mouse. So let's see, there's only one button in my window, so I know which one it is, but you can use this to look at it quickly. And you can see in the list of attributes down there that there's no description and no title.
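If the element in question is a custom control rather than a standard Cocoa button, the same fix can also be made in code through the NSAccessibility informal protocol. A minimal sketch, assuming a hypothetical custom image-button subclass (the class name is an illustration, not from the session):

```objc
#import <Cocoa/Cocoa.h>

// Hypothetical custom image button that supplies an AXDescription
// so VoiceOver has something to speak for it.
@interface MyImageButton : NSButton
@end

@implementation MyImageButton

- (id)accessibilityAttributeValue:(NSString *)attribute {
    if ([attribute isEqualToString:NSAccessibilityDescriptionAttribute]) {
        // Note: no "button" suffix -- the role is reported separately,
        // so including it would be spoken as "OK button button".
        return @"OK";
    }
    return [super accessibilityAttributeValue:attribute];
}

@end
```

Standard NSButton instances with a title don't need this; it's only custom or image-only controls that end up with neither a title nor a description.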
This means that an accessibility application isn't going to know what to say or how to describe this UI element when it comes to it. Now, just to quickly demo a few other features of Accessibility Inspector, there is a hotkey to lock the view, just like in Pixie.
When it turns red like that, now I can move the mouse around and it's not following it. Unfortunately, there's a hotkey conflict, but if you just click somewhere, it goes away. And it also adds this extra palette when you've got it in locked view. You can take a look in more detail at any of the elements.
There's also a small menu that lets you go to the window container. And let me turn on the highlight. So now you can see that whatever is currently focused in Accessibility Inspector is highlighted in red. And I can navigate down to one of the buttons. Let me just turn off the lock real quick and go back to that button. Now you can see it highlighted.
If there are any actions or any settable attributes, you can actually control them here inside UI Element Inspector, or as it's now called, Accessibility Inspector. This gives you a lot of power when it comes to debugging. So now I want to fix some of these errors that I found. So let's go back to Xcode and open the Nib in Interface Builder. And now I'll show you the accessibility features that have been added to Interface Builder.
So if you just go to the Get Info panel, you'll see that there's a new item here in the pop-up: Accessibility. And now, as Eric mentioned, you can link two elements together. Let's say the first name field: obviously, it's going to have the title "First Name." Now, you don't want to enter that same text in two different places. What you want to do is link them together, so that the edit field gets its name from this static text field. So you Control-click and drag.
Then select Title UI Element and click Connect. And now from then on, accessibility will know that this edit field is getting its name from the static text right next to it that I've linked up. Let's link the other one up. If you also want to have a linked UI element, such as the example in Mail where items in a list view link to items in the window below, you can also hook that up here.
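The same Title UI Element connection can be expressed programmatically for a custom view. A minimal sketch, assuming a hypothetical text-field subclass that keeps a reference to its adjacent label (the class and instance-variable names are assumptions):

```objc
#import <Cocoa/Cocoa.h>

// Hypothetical edit field that points accessibility clients at the
// static text element acting as its title (AXTitleUIElement), so the
// user-visible string lives in only one place.
@interface NamedTextField : NSTextField {
    NSTextField *titleLabel;  // the adjacent "First Name:" label
}
@end

@implementation NamedTextField

- (id)accessibilityAttributeValue:(NSString *)attribute {
    if ([attribute isEqualToString:NSAccessibilityTitleUIElementAttribute]) {
        // Return the label's accessibility element; clients such as
        // VoiceOver will use its value as this field's name.
        return NSAccessibilityUnignoredDescendant(titleLabel);
    }
    return [super accessibilityAttributeValue:attribute];
}

@end
```

If the static text later changes, the edit field's spoken name follows it automatically, which is exactly the behavior demonstrated next.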
And lastly, for this button, which is a fairly common occurrence where you have an image button, it's got no description and no title, let's type in a description for it. And as Eric mentioned, we don't want to include the word "button" in the description. It's just "OK." We don't want to say "OK button," because then we'll get "OK button button." Now I have to mention that this support in Interface Builder, unfortunately we found some last minute problems with it.
So the version that's on the DVD, we do not recommend that you use it. This is just to show you the direction that we're moving in. We're hoping to have those problems fixed as soon as possible, so we apologize for that. So to test it, I'm just going to use the Test Interface feature in Interface Builder.
So, for the last part of this, I'm going to turn on the spoken user interface, or VoiceOver as it's called now. "Spoken interface is inactive. Interface Builder, application, window, Accessibility Demo, First Name, edit text field." And now you can see it's picking up the name of the text edit field.
Let me just go down to the next one so you can hear it. "Last Name, edit text field." And from now on, if I go into Interface Builder and change the string in the static text field next to it, it'll automatically get picked up. "OK, button." And now the description that I've added has also been hooked up. And that's it. It's that easy. I've accessorized my app. And so now I'm going to turn it back over to Eric.
[Transcript missing]
All the things I wanted to cover. Hopefully you have a good grasp of what VoiceOver is all about and some basics of accessibility and how you can provide context in your application and we've shown you some tools. Now, there's a place you can get more information. There's documentation online, also on our website.
If you're curious, a good place to go is actually to dive down into where the header files for accessibility exist, because if you're going to accessorize your applications, this is the real deal. That's where the information is. A little bit more information: Human Interface Guidelines, Software Design Guidelines. And then, I can't stress enough, down here at the bottom: tomorrow, 10:45, in this room, Developing Accessible Applications. It's going to go in-depth into how to develop accessible applications.
So, a few people to contact. I'm going to invite Travis Brown on stage. Travis is a technology evangelist for Apple and has a lot to do with accessibility. Some of you may already know Mary Beth Janes, Assistive Technology Partnership Manager, and Mike Shebanek, who is Product Marketing Manager for, among other things, VoiceOver. So, with that, I'd like to ask Travis to come up on stage. I'm going to have Travis moderate some questions.