Mac • 57:59
Understanding the flow of user events is an essential skill for every Cocoa developer. Learn how the responder chain routes user input events through your Cocoa application and use that knowledge to insert detours into the event path and monitor events effectively. These practices are sure to make your Mac application more interactive and responsive than ever.
Speaker: Raleigh Ledet
Transcript
Welcome to User Events in Cocoa. My name is Raleigh Ledet; I'm a Cocoa Software Engineer. We've got a lot to talk about today, and it's going to run the gamut of some beginner stuff, some intermediate stuff and some more advanced stuff. We're going to talk about Event Routing in a Cocoa application, which means we're also going to spend a fair amount of time talking about the Responder Chain. And we're going to talk about the new Gesture API that we have, along with the new Multi-Touch API that we're introducing in Snow Leopard.
And we're going to finish this up by talking about how you can get into the Event Queue and watch what's going on, monitoring for events that are coming to your application and events that aren't even coming to your application -- that are actually going to other applications. Everything I'm going to be talking about here today is specific to the desktop, Mac OS X. If you're more interested in Multi-Touch Events on iPhone, please check out the Processing Multi-Touch Events on iPhone session on Thursday at Pacific Heights at 9 AM. You'll see this badge on occasion during the talk.
At various points in the talk we're going to cover areas where you can get into the Event Routing, change how things flow in Cocoa, and modify where events go; so this is a badge to let you know about some more advanced topics that you might want to really pay attention to.
So let's get started; Event Routing. When an event comes into your Cocoa application, let's say a Swipe Event, it comes into NSApplication, and NSApplication looks at the event and tries to figure out which window this event should go to. That event will get sent to the NSWindow class, and that class looks at the event and figures out, oh, this is a Swipe Event, so which view should I send this event to? It figures that out and it calls the swipeWithEvent method on that view. Let's take a closer look at this.
NSView inherits from NSResponder, and it's NSResponder that actually has the definition of swipeWithEvent and the default implementation. NSControl inherits from NSView, which of course inherits from NSResponder. So your classes are generally NSView subclasses or NSControl subclasses, and you automatically inherit the default implementations of NSResponder's various event methods.
Here's an example of some of the event methods that we have on NSResponder: for Pointer Events, Touchpad Events, Keyboard Events -- you know, mouseDown, mouseDragged, and for the example that we're following right now, swipeWithEvent. You'll notice the pattern we follow is that the event type is the method name, and we pass in the event for you so you can get further information in your method override.
Another thing that NSResponder does for you, which is really nice, is that it hangs on to something we call the nextResponder, which is just another NSResponder subclass, and this allows us to quite literally chain these together -- and we call this by the obvious name, the Responder Chain.
So what happens once we have this all set up in a Responder Chain is your swipe event comes in, and NSWindow is going to figure out which view to send it to first. Your view is a subclass of NSResponder, and if you don't override swipeWithEvent -- or if you do, and you call super's swipeWithEvent to let the default implementation handle it -- the default implementation is just going to pass it to the next responder for you. And this continues up the Responder Chain until there are no responders left and the swipe just falls off the end; this is what happens for most events.
keyDown is one of the special events where we do something slightly different: when we get to the end, if there's nobody else to respond to the keyDown, we'll issue a system beep. So if we put some subclasses on our Responder Chain, it might look something like this. You've got a button and a text field sitting on top of a view inside of a window, and that looks a lot like the View Hierarchy.
Well, that's not quite right as it turns out, because a window isn't a view. A window contains a content view, and in our example the content view is going to contain our view with our button and our text field. Now, the window here is a different color because, while it doesn't inherit from NSView, it does inherit from NSResponder. So when our Swipe Event comes in, if none of the views in the Responder Chain down to the content view handle it, it'll get forwarded automatically to NSWindow. If you have an NSWindow subclass, you can override swipeWithEvent there and handle it.
NSWindow's default implementation just lets NSResponder's default implementation do the work, and that will forward it to the next responder, which is the NSWindowController. The NSWindowController is the last responder in this chain, so if you have a window controller attached and you have your own subclass, this is the last point where you can decide to override swipeWithEvent and handle it yourself if none of the other views have. The great thing about Cocoa is this is all automatic.
You go into Interface Builder, you set up your View Hierarchy layout, you add your Window Controller and assign it to the window, and the Responder Chain is set up for you automatically; this even works if you add views programmatically. You just call addSubview and the Responder Chain is set up for you automatically, so there's nothing you have to do; it just works. So that's the basics of the Responder Chain. It's the key thing that's used in Event Routing in a Cocoa application.
It's also used in a number of other places in Cocoa applications, for example Action Routing. When you go into Interface Builder and you drag an action, normally from something like a menu, to First Responder, that action uses Action Routing to figure out the correct object in your View Hierarchy to send it to. Action Routing is actually a little bit different in how it looks at the Responder Chain, because it'll pull in things like the document class if you're document-based, and it'll also look at your window delegate and your document delegate.
And the same goes for Automatic Menu Enabling and Error Presentation. They all use the Responder Chain slightly differently, but they're heavily relying on the Responder Chain. We're not going to have time to go into the other uses of the Responder Chain; we're going to focus on events today.
If you want more information, you can just do some Web searches; with these phrases, the top link will get you to Apple's documentation, and I'll also provide links at the end of the presentation directly to the documentation if you want the full path links.
[ Background noise ]
[Raleigh Ledet]
We're going to talk about two kinds of events, Keyboard Events and Pointer Events, and we're going to start off with Keyboard Events. Keyboard Events are probably one of the most complex event types that a Cocoa application routes. There's a lot that goes on here before the event even hits the Responder Chain. The first thing that happens when a Keyboard Event comes in is that Cocoa sends it to the right window, and the NSWindow class looks at it.
Before anything else, we ask if it's a Key Equivalent. If it's a Key Equivalent, we're going to perform the right Key Equivalent action for you: if they hit Command-A we're going to call Select All, or send that down the Action Routing system; if they do a Command-N we open a new document. These are your Key Equivalents.
If it's not a Key Equivalent we ask Is this a Key Interface Control? A Key Interface Control is something like Control-F5 which will actually move your keyboard focus up to the toolbar for you, or Control-F2 which will actually move your keyboard focus all the way up to the menu. So if it's one of these Key Interface Controls we're going to modify the key focus and it's going to be handled for you.
And just to point out, Tab isn't one of these special Key Interface Controls; it will actually go down further. If it's not a Key Interface Control, we then finally figure out which view to send it to in the Responder Chain to start the event routing, and this is the FirstResponder. The FirstResponder is the designated responder in the Responder Chain; when there's no other context in the event, we send it to this designated responder, the FirstResponder.
The FirstResponder generally has the blue ring around it. So, for example, up here the button for Add Rule is highlighted. On the other side, the list view's got the blue ring around it. On the bottom we have the blue ring around an item in the toolbar; it's the one that has key focus and is the FirstResponder in this case. And on the other one we have a text field; you see it has the blue ring around it.
So normally the user can easily identify the key focus by looking at what area has the blue focus ring around it; that's going to be the FirstResponder. But not every control that's FirstResponder has the blue focus ring around it. For example, in TextEdit, when you're typing away in the content area, we don't put a blue focus ring around the whole content area.
It's kind of obvious where the FirstResponder is in that case. So your view is the FirstResponder and keyDown gets called on your view. You can override keyDown and do your own custom implementation of the keyDown method, but what we suggest you do is simply call interpretKeyEvents and pass in an array with the event, and this will go to Key Bindings.
And the reason you want to let it go through Key Bindings, and let Cocoa interpret the event for you, is that we will call back a whole bunch of other methods depending on what the key is. For example, if they press Command-Left Arrow we want to move to the beginning of the line, Command-Right Arrow to the end of the line, move up or move down in the document -- all these different ways of moving.
We'll even recognize Emacs commands, so if you press Control-F we'll move forward for you. So let Key Bindings do the hard work of looking at the Keyboard Event and figuring out what the user intended to do, and you'll get a consistent look and feel and user interaction across applications on the system. If it's one of these Key Binding events, we will send the appropriate action message and the event is handled.
If it's not a Key Binding event, Key Bindings will then call back into your view with insertText and give you back the string. So you don't have to worry about the virtual key codes, you don't have to deal with any of that, you don't have to deal with the Unicode stuff; you have an NSString, so you can easily insert that NSString wherever you need to.
This also goes through the various input methods, so it deals with all the proper things -- perhaps a Chinese character keyboard, or various cases where multiple key codes come in. Let Cocoa do the hard work for you; let Cocoa do the heavy lifting and let Key Bindings do the work for you with interpretKeyEvents.
Like I said, Keyboard Events are one of the more complicated ones -- there was a lot there -- so let's just do a quick example to go over it. Let's say the user presses A on the keyboard. The first thing that happens is Cocoa asks, is this a Key Equivalent? It's not; it's just the letter A. So then we ask, is this a Key Interface Control? No, it's not. So we figure out who's the FirstResponder in the window and we call keyDown, and this continues to go up the Responder Chain. But the FirstResponder happens to be your view, and you do the appropriate thing and just call interpretKeyEvents, so you let Key Bindings take over.
Key Bindings looks at the key and asks, is this one of our Key Binding actions? No, it's not; it's just the letter A. So we finally call back on your view with insertText. You get back a string, all you have to do is the appropriate thing, and the letter A is put into your text field. So there's a lot that goes on for something that seems really simple, but this handles all your Key Equivalents, your Key Interface Controls and all these Key Binding actions -- so let Cocoa do the heavy lifting for you. Something like the sketch below is all your view needs.
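Here's a minimal sketch of that pattern; the view class and its _contentString storage are hypothetical, but keyDown, interpretKeyEvents and insertText are the methods being discussed:

    // In a hypothetical custom NSView subclass: hand raw key events to Key Bindings.
    - (void)keyDown:(NSEvent *)event
    {
        // Let Cocoa interpret the event instead of decoding key codes ourselves.
        [self interpretKeyEvents:[NSArray arrayWithObject:event]];
    }

    // Key Bindings calls back here when the event turns out to be ordinary text.
    - (void)insertText:(id)insertString
    {
        // No virtual key codes, no Unicode assembly; we just get an NSString.
        [_contentString appendString:insertString];   // _contentString: hypothetical NSMutableString ivar
        [self setNeedsDisplay:YES];
    }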
[ Background noise ]
[Raleigh Ledet]
So, Pointer Events. Pointer Events are all the events associated with the cursor, and we use the cursor as the context to decide where in the View Hierarchy we're going to send the event to start it off in the Responder Chain. This takes care of your mouseDowns and your mouseDraggeds, and it actually includes the new Gestures and the new Multi-Touch Events; these are all dependent upon where the cursor location is. So let's look at a quick demo of Pointer Events.
[ Background noise ]
[Raleigh Ledet]
So I have a little TargetGallery application here, and you can click on the targets and they get a nice little green dot, and if you miss we get a system beep. We'll go ahead and switch into Edit mode, and I can make this easier by just grabbing one of these and, you know, making it move really slowly, or I can make it more difficult by making it go way too fast.
Now we'll go back into Shooting mode and, you know, I can try and hit the fast one. I'm not that good; I'll stick with the slow one right there, that works a lot better. So, a fair number of interesting things have happened right there, and we're going to go over some of that.
[ Background noise ]
[Raleigh Ledet]
So, we have mouseDowns occurring, and if we look back at our example, the mouseDown is occurring over a button. The window is going to get the event, and the window's going to try to route it to the appropriate view. The way it does that is it looks at the View Hierarchy, and we turn that around and do Hit Testing. NSWindow will ask its content view to hitTest, and by default NSView looks at all of its subviews and does the appropriate hit testing on each one of them, and those subviews ask their subviews, and this continues on down until finally we get to NSButton.
NSButton doesn't have any subviews, and NSButton wants to be the appropriate view for this event, so it returns self. NSWindow will then send the mouseDown event to that button, and it will go up the Responder Chain from there. But one of the great things about this is that this is one of those detour points I was talking about earlier: in your custom NSView class you can override hitTest and change where in the Responder Chain Cocoa is going to send the event.
So this is a great place for you to interject yourself, and we actually do that in the sample, so we're going to show you that. Here's that TargetGallery demo: what we have is a custom Target View that contains all of our little targets. Each target is actually another little subview, and all they do is draw a little target -- but they're a custom view, and they override the mouseDown event so that when you click in them, they draw a little circle in their view where the mouseDown occurred.
But the Target View overrides hitTest, and if it's in Editing mode it returns self, and that stops the hit testing from ever getting to any one of those target views; the mouseDown and all the mouseDraggeds will then go to the Target View, and this is how we're able to actually move things around.
And if you notice in this example, when we look at the mouseDown, if we're not editing we do the system beep. If we are editing, we just let the mouse event continue up the Responder Chain as normal by calling super's mouseDown to get to the default implementation from NSResponder. The hitTest override itself can be quite small, something like the sketch below.
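Based on that description, the Target View's override might look like this minimal sketch; the _editing flag is a hypothetical stand-in for however the sample tracks its mode:

    // In the hypothetical container view holding all the targets.
    - (NSView *)hitTest:(NSPoint)aPoint
    {
        // aPoint arrives in the superview's coordinate system.
        if (_editing) {
            NSPoint local = [self convertPoint:aPoint fromView:[self superview]];
            if (NSPointInRect(local, [self bounds])) {
                return self;   // claim the event; the little target subviews never see it
            }
        }
        return [super hitTest:aPoint];   // normal hit testing down into the subviews
    }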
And what ends up happening in the sample code -- I suggest you check it out -- is that the controller implements mouseDown and is responsible for doing all of the dragging of the subviews. I did this in the sample code to illustrate how events flow through the Responder Chain and where they end up, so you should really check that out. Let's talk a little bit about Mouse Events in particular.
We're going to talk about a field called subtype, we're going to talk about some best practices, and we're going to talk about when to use the deltaX and deltaY values versus locationInWindow. It turns out there's more than one type of device that can generate Mouse Events.
Mice, obviously, but also tablets and trackpads, and you can easily determine which from your various Mouse Events, mouseDown or mouseDragged, if you just ask the event for its subtype. If it's from a mouse, you know it's from a mouse; there's no additional information in the event other than what you normally think of, your locationInWindow and your deltaX and deltaY values. But if it's from a tablet, there might be additional information.
Tablets, for example, can tell what pressure is applied to the tip of the device, what the tilt of the device is on the tablet, or its rotation. If your application can take advantage of this extra information, you can find out whether it's there by looking at the subtype. I also suggest, if you can take advantage of this information, that you look at tabletProximity and tabletPoint.
tabletProximity is an event that will tell you more about exactly what that device supports. Some devices support tilt, some don't, so you can get more information on what type of device is being used and what it supports. I'm not going to go into everything about tablets, but you can do a search for the Cocoa Event-Handling Guide: Handling Tablet Events and get more information there.
You can also find out if the Mouse Event is coming from a Touch device, a trackpad. There's no additional information in the event to get, but sometimes it's nice to know that it's coming from a Touch so you can use that to make some decisions; checking the subtype looks something like the sketch below.
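As a hedged sketch, checking the subtype from inside a mouse handler might look like this; the logging is just illustrative:

    - (void)mouseDown:(NSEvent *)event
    {
        switch ([event subtype]) {
            case NSTabletPointEventSubtype:
                // Tablet data is present: pressure, tilt and rotation are meaningful.
                NSLog(@"tablet pressure: %f", [event pressure]);
                break;
            case NSTouchEventSubtype:
                // This mouse event was generated from a trackpad touch.
                break;
            case NSMouseEventSubtype:
            default:
                // A plain mouse: just location and deltas.
                break;
        }
        [super mouseDown:event];
    }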
[ Background noise ]
[Raleigh Ledet]
So, some mouse best practices. Tracking areas: tracking areas are the way you should go about doing rollovers, or finding out when the mouse enters and exits different areas of your application. You could try to look at all the mouse-moved events; we generally don't issue mouse moves, for performance, but if you want them you can go into NSWindow and turn them on. They won't work exactly like you might think, though: mouse moves are actually sent to the FirstResponder, so particularly if the mouse moves out of your view, you might not get them the way you would expect, your rollovers won't work, and it's harder to implement.
And it actually causes more of a performance problem, because you're looking at every single mouse move. With tracking areas it's nice and easy: you set them up and you will be notified when the mouse enters your tracking area and when it leaves, something like the sketch below. If you want more information about tracking areas, you can search for Using Tracking-Area Objects.
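Here's a minimal sketch of setting one up; a real view would also refresh the area in updateTrackingAreas when its geometry changes:

    // In a hypothetical custom view, once it has a window and a size.
    - (void)viewDidMoveToWindow
    {
        [super viewDidMoveToWindow];
        NSTrackingArea *area = [[NSTrackingArea alloc]
            initWithRect:[self bounds]
                 options:(NSTrackingMouseEnteredAndExited | NSTrackingActiveInKeyWindow)
                   owner:self
                userInfo:nil];
        [self addTrackingArea:area];
        [area release];   // manual retain/release, per the era
    }

    - (void)mouseEntered:(NSEvent *)event { /* begin the rollover */ }
    - (void)mouseExited:(NSEvent *)event  { /* end the rollover */ }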
Please use the three-method tracking approach. What's the three-method tracking approach? It's when you implement mouseDown, mouseDragged and mouseUp in your view. The other way of doing this is, in the mouseDown, to call nextEventMatchingMask and look for the events yourself in your own tracking loop. That will actually cause problems, and it will be one of the recurring mantras later in the talk: if something goes into a tracking loop, other things stop working.
If you use the three-method approach, you don't run into that problem. And Cocoa does something really nice for you here: when the mouseDown occurs and it does its hit testing, it locks on to whatever view was returned from hit testing, so all your mouseDraggeds, until the mouseUp occurs, will continue to be sent to that view even if the mouse is not in that view.
So if your mouseDown occurs in your button, you will continue to get mouseDragged events. If the cursor moves out of your bounds, you'll still get mouseDragged events, but you can easily test the location against your bounds and un-highlight, for example. When the mouseUp occurs, you can test that against your bounds and decide whether it's appropriate to send the action. Put together, the three methods look something like the sketch below.
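A hedged sketch of that three-method pattern, for a hypothetical button-like NSControl subclass (the _highlighted ivar is made up for illustration):

    - (void)mouseDown:(NSEvent *)event
    {
        _highlighted = YES;               // hypothetical BOOL ivar
        [self setNeedsDisplay:YES];
    }

    - (void)mouseDragged:(NSEvent *)event
    {
        // Cocoa keeps sending drags here even after the cursor leaves our bounds.
        NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
        _highlighted = NSPointInRect(p, [self bounds]);
        [self setNeedsDisplay:YES];
    }

    - (void)mouseUp:(NSEvent *)event
    {
        NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
        if (NSPointInRect(p, [self bounds])) {
            [self sendAction:[self action] to:[self target]];   // released inside: fire
        }
        _highlighted = NO;
        [self setNeedsDisplay:YES];
    }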
While you're doing your mouse tracking, you might want to consider looking at drag thresholds. A lot of people don't do this. What a drag threshold is: when a mouseDragged event comes in, don't start your dragging action right away; wait to find out whether the user has moved the mouse more than a few pixels.
And the reason for this is my elderly next-door neighbors. They got their computer and they constantly called me over because they were having trouble clicking on different things. When they would go and click with the mouse, they were using their whole hand, and the mouse would move a few pixels. Sometimes this would cause a drag instead of just a click, and so nothing would happen at all in various applications, and this was frustrating for them.
So you might want to look at drag thresholds as a way of making sure that you do what the user intends to do and not what they accidentally did. Finally, deltaX and deltaY versus locationInWindow. Why would you want to use one over the other? Well, let's take a look at what happens when you get it wrong: we drag this object with the mouse, and the object no longer lines up with our cursor.
Now, that doesn't make for a very good user experience. Generally you'll want to use locationInWindow; this is the default thing you should be looking at. When your mouseDragged or mouseDown event occurs, you get the locationInWindow from the event, and in your view you convert that location, fromView nil, which converts from the window's coordinate system to your view's coordinate system. Now you have a point in your local coordinate space, and it's really easy to do things like test whether that point is within your view's bounds.
And you really want to use locationInWindow when you need to align something with the cursor; this is the best way to make sure that the cursor and whatever you're dragging stay in sync. It's also the best and easiest way to get the coordinates localized into your view.
deltaX and deltaY, on the other hand, are actually in screen pixels, not in the points of your view, so with device resolution independence it's a little bit harder to get that into the localized location of the cursor. What deltaX and deltaY actually are, for those of you who don't know, is the amount of screen pixels the mouse moved since the last mouse event.
And it doesn't always correspond to cursor movement, because what can happen is that as the user drags, the cursor can get pinned up against the edge of the screen while the user continues to move the mouse in the same direction. The deltaX and deltaY will change, but the locationInWindow will stay the same because the cursor is pinned against the screen. This turns out to be really useful if you're doing something like panning. Or, for example, you might have a modeling application and you want to rotate the model: the user clicks on the model, they start dragging the mouse, and you're rotating the object.
Well, even though the cursor hits up against the edge of the screen, the user might still want to continue rotating that object in the same direction. They can continue to move their mouse, and you can look at deltaX and deltaY and continue your rotation, something like the sketch below. So this is best used in situations like panning or model rotation, where the cursor location is really irrelevant other than, perhaps, starting the action.
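For example, a hypothetical model-rotation view might accumulate the deltas like this rough sketch; the rotation ivars and the scale factor are made up:

    - (void)mouseDragged:(NSEvent *)event
    {
        // Cursor position is irrelevant here; keep rotating even when it pins at the screen edge.
        _rotationY += [event deltaX] * 0.5;   // hypothetical degrees-per-pixel scale
        _rotationX += [event deltaY] * 0.5;
        [self setNeedsDisplay:YES];
    }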
Alright, let's move on to Gestures, some of the new stuff. Gestures are another Pointer Event. And it turns out that these Magnify, Rotate and Swipe gestures have been there since 10.5.2. We're publishing them for the first time in Snow Leopard, but they work backward compatibly to 10.5.2, so that's kind of nice.
We use the cursor location to determine where in the Responder Chain to start sending the Magnify, Rotate or Swipe event. So these are some touches on the trackpad; this is what a Magnify gesture looks like, something like that. And this is the NSResponder method you need to override if you want to handle the Magnify event.
magnifyWithEvent: you ask the event for the magnification and you get the change in magnification. A Rotate looks something like this -- these are the touch points on the trackpad -- and you can override rotateWithEvent. If you ask the event for the rotation, you'll actually get the delta rotation since the last Rotate event, so if you want to know the complete rotation for the whole gesture, you'll need to combine all of the deltas together. And then you have Swipe, where the user uses 3 fingers and swipes; it looks something like this.
There are two methods on the event you need to look at, deltaX and deltaY, and you can use them to determine whether they're swiping up, down, left or right. If they swipe left, the deltaX is 1, and if they swipe right, the deltaX will be negative 1; your deltaY will be 0 in that case. Swiping up gives you 1 for deltaY and down gives you negative 1, with deltaX being 0 then. All three overrides might look something like the sketch below.
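Put together, a hedged sketch of the three overrides; the _zoom and _angle ivars and the navigateForward: method are hypothetical:

    - (void)magnifyWithEvent:(NSEvent *)event
    {
        _zoom *= 1.0 + [event magnification];   // magnification is a delta around 0
        [self setNeedsDisplay:YES];
    }

    - (void)rotateWithEvent:(NSEvent *)event
    {
        _angle += [event rotation];   // a delta in degrees; sum them for the whole gesture
        [self setNeedsDisplay:YES];
    }

    - (void)swipeWithEvent:(NSEvent *)event
    {
        if ([event deltaX] != 0) {
            // deltaX is 1 for a left swipe, -1 for a right swipe
            [self navigateForward:([event deltaX] < 0)];   // hypothetical navigation method
        } else {
            [super swipeWithEvent:event];   // vertical swipes: let them flow up the chain
        }
    }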
So that's Gestures, really quick. And then we have the new Multi-Touch API; this is brand new for Snow Leopard, and we now allow you to get the individual touches and not just the higher-level gesture, which is going to be really great in some applications. But the thing I need to get across here is that a trackpad on the desktop is not a touch screen on the iPhone. There are going to be some fundamental differences here, namely that you have an indirect input method with the trackpad versus a direct input method with the phone.
With the phone, you have a touch screen and the user wants to directly manipulate what's underneath their finger. Since you're not touching the screen on the desktop, we need some other way of determining indirectly where the user is trying to do their Multi-Touch and what object on the screen they're trying to manipulate, and the way we do that, again, is with the cursor.
So Multi-Touch Events are another Pointer Event, and we use the cursor location to determine where to send them. We do similar hit testing to what we did for the mouse. The difference is that once hit testing is complete, we go up the View Hierarchy until we find a view that is accepting touches, and then we use that view as the FirstResponder in the Responder Chain.
And similar to what I discussed with the three-method approach for dragging, once a touch has come down on the trackpad and we've done hit testing, all touches will continue to go to that view -- even as the user adds more touches, even if the cursor has moved -- until all the touches have been released. Once all the touches have been released from the trackpad and a new touch comes in, we do hit testing again and lock on to another view. So let's give a quick demo of that.
[ Background noise ]
[Raleigh Ledet]
So this is my hello-world of Multi-Touch applications. From what I've gathered, the very first thing you need to write as the hello-world of Multi-Touch is a LightTable, so there's my LightTable. I've drawn some images on here, and you can do the normal thing of grabbing a drag handle with the mouse, moving that around and changing sizes, and we can double-click and change the size inside the mask -- and, well, I can't grab my drag handle anymore without repositioning everything.
Well, this is one of the great places that Multi-Touch comes in handy. This LightTable is also Multi-Touch aware, so I'm going to start using 2 fingers on the trackpad: without grabbing a drag handle I can just resize this exactly how I want, and we can double-click to get out of editing.
We can move the whole thing around, and it's great because I can pin one side and stretch it, or move just the top or just the bottom, whatever I need to do there, and we'll bring that around, say, right there. And this application also looks at Gestures, so I can use 3 fingers and swipe in the tools, and I can adjust the frame thickness, perhaps, and we'll crank up the corner radius and make this more like a circle there; that's kind of nice.
And so, again, I'm using Multi-Touch so I don't have to worry about grabbing the little bitty drag handles; as long as my cursor's over the image, for this particular application, I know what I'm going to be adjusting, so I can adjust this one over here. And so this is an example of using Multi-Touch.
[ Background noise ]
[Raleigh Ledet]
So Multi-Touch on the desktop is very similar to Multi-Touch on the iPhone as far as the API goes, but there are some subtle differences. We now have an NSTouch object, similar to UITouch, and we have phases on the touch, which are also similar to the phases on UITouch. For example, when a touch first comes down onto the trackpad -- you see a little dot -- that's a touch coming down.
The touch moves into the NSTouchPhaseBegan phase. As the user starts moving the touch on the trackpad, the touch moves into NSTouchPhaseMoved. Perhaps they leave it stationary while another touch comes in and out; while the touch is stationary and there are other touch things going on, the touch moves into NSTouchPhaseStationary, and it might move back and forth between Stationary and Moved. Eventually your touch is going to get to NSTouchPhaseEnded, when the user finally lifts their finger off of the trackpad.
And at any point in time the touches might actually be cancelled by the system -- for example, because you've clicked out of the application. Even though you were tracking touches in one window, your application's lost focus, so we cancel the touches. We also have an identity. For those of you who are familiar with the iPhone, you'll notice this is something we don't have there.
On the iPhone, touches are mutated in place, but the way events work on the desktop is that they're immutable, so all the touches inside the event must be immutable as well. So identity is the way you can track a touch as it moves over time. For example, touch A comes down and starts moving, and then touch B comes in; you need to know which one is touch A and which one is touch B, and identity is how you determine that. You need to be sure to use isEqual.
It's an object that supports the NSCopying protocol, so you can stick these in a dictionary: you can use the touch's identity as your key, and when future events come in, you can take the identity out of the event's set of touches and use it as a key into your dictionary. Or, if you want to do it yourself, just use isEqual to compare two touches to find out whether they're the same touch over time. A sketch of the dictionary approach is below.
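A minimal sketch of the dictionary approach; _touchStartPoints is a hypothetical NSMutableDictionary ivar:

    - (void)touchesBeganWithEvent:(NSEvent *)event
    {
        for (NSTouch *touch in [event touchesMatchingPhase:NSTouchPhaseBegan inView:self]) {
            // identity conforms to NSCopying, so it works directly as a dictionary key
            [_touchStartPoints setObject:[NSValue valueWithPoint:touch.normalizedPosition]
                                  forKey:touch.identity];
        }
    }

    - (void)touchesMovedWithEvent:(NSEvent *)event
    {
        for (NSTouch *touch in [event touchesMatchingPhase:NSTouchPhaseMoved inView:self]) {
            NSValue *start = [_touchStartPoints objectForKey:touch.identity];
            if (start) {
                // same finger over time: compare touch.normalizedPosition to [start pointValue]
            }
        }
    }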
We also have an isResting property; this is different from Stationary. The new trackpads don't have a button -- like in the image up there, there's no external physical button on the trackpad -- and what we allow you to do is rest your thumb on the trackpad as you move the cursor around and then, perhaps, use your thumb to click. It might look something like this: you touch and you're moving around, you bring your thumb down, you click and you do a drag somewhere in there, and you notice the little gray touch was wiggling a little bit.
Resting doesn't mean stationary; it generally means that it can be ignored, and so it might move around a little bit. It might be the thumb that we're ignoring, and it could be in this resting state. If it moves too far, or it moves up higher into the active region, it might turn into an active touch, and then isResting will return NO.
Since this isn't a touch screen, we can't give you locations in your view or on the screen where the touch is occurring, so we give you a normalized position between 0 and 1 so you can find out where the touch is on the trackpad; you have to, again, indirectly manipulate things. We also give you the device size of the trackpad, in case you want to know the actual physical dimensions of how far they moved.
The device size is in points, 72 points per inch. So if you take the normalized position and simply multiply it by the device size, that will get you the physical distance from the lower-left corner of the trackpad to where the touch is, as in the sketch below. There's also a device property, and this is mainly there in case there are multiple touch devices on your computer. In an event, the collection of touches will only be associated with one device at a time, so if the user is entering touches on multiple devices, you'll get separate events, each with its own collection of touches just for that device; this is one way to tell the difference.
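As a rough sketch, inside any of the touch handlers:

    NSTouch *touch = [[event touchesMatchingPhase:NSTouchPhaseTouching inView:self] anyObject];
    if (touch) {
        NSSize deviceSize = touch.deviceSize;   // trackpad dimensions in points, 72 per inch
        CGFloat x = touch.normalizedPosition.x * deviceSize.width;
        CGFloat y = touch.normalizedPosition.y * deviceSize.height;
        // (x, y) is now the physical offset, in points, from the trackpad's lower-left corner
    }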
An interesting thing about this is that if touches are coming from multiple devices, the identity will actually be unique across devices. Multi-Touch is completely opt-in on the desktop: on NSView, you have to call setAcceptsTouchEvents with YES. If there aren't any views in a window that accept touches, touches won't even be routed to that window -- they won't even come in -- and that's a performance optimization, so you must opt in to touches if you want them.
Once one view accepts touches, touches will be sent to that window. You can also decide whether you want resting touches. By default we don't send you resting touches; generally resting touches can be ignored, and when they're ignored we don't even include them in the set. If you're not accepting resting touches, they're not even included in the set of touches.
If a touch does move from a resting state to an active state and you're ignoring resting touches, we will fake that out to be a Touches Began at the point it becomes active. If it transitions from active to inactive, we'll actually go ahead and issue a Touches Ended if you're ignoring resting touches. Generally, leave setWantsRestingTouches at its default value.
Generally the user's not intending to do anything with that touch, but if you have some special application and you want resting touches, you can look at the isResting property and make up your own mind. Another quick point with resting touches: when they are ignored, they are not included in the hit testing either. So earlier, when I described the first touch that comes down and we do the hit testing and lock on -- if you're ignoring resting touches, they're not included in that hit testing until they become active.
These are the responder methods you need to implement if you want to accept touches, and when you implement them, please be sure to implement them all: touchesBeganWithEvent, touchesMovedWithEvent, touchesEndedWithEvent and touchesCancelledWithEvent. For those of you familiar with the iPhone, you'll notice this looks slightly different; this is to keep consistent with the other event methods we have on NSResponder -- it's the event type followed by an event parameter. I mentioned to implement them all, and the reason is that perhaps you're implementing just touchesBeganWithEvent and there's another view higher up in the Responder Chain that's also touch-aware.
If you're eating up all the touchesBegans and you don't implement touchesEnded, for example, and that other view suddenly gets some touchesEndeds, it might get confused because it never got the touchesBegans. So if you implement touchesBegan, please implement all of these methods, especially touchesCancelled -- if a cancel happens, you need to be able to back out your state. A skeleton might look like the sketch below.
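A hedged skeleton; the opt-in call in the initializer, the shared updateWithTouchEvent: helper and the _tracking ivar are how I'd sketch it, not necessarily how the sample does it:

    - (id)initWithFrame:(NSRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            [self setAcceptsTouchEvents:YES];   // Multi-Touch is strictly opt-in on the desktop
        }
        return self;
    }

    - (void)touchesBeganWithEvent:(NSEvent *)event { [self updateWithTouchEvent:event]; }
    - (void)touchesMovedWithEvent:(NSEvent *)event { [self updateWithTouchEvent:event]; }
    - (void)touchesEndedWithEvent:(NSEvent *)event { [self updateWithTouchEvent:event]; }

    - (void)touchesCancelledWithEvent:(NSEvent *)event
    {
        // The system cancelled (e.g. the app lost focus): back out any in-progress state.
        _tracking = NO;   // hypothetical state
        [self setNeedsDisplay:YES];
    }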
The iPhone's touchesBegan passes you a set along with the event; we don't provide you that set, because we want to keep consistent with all of our method names on the desktop. On NSEvent, the way you get the set of touches is to ask touchesMatchingPhase:inView:, passing your own view as the view.
So in touchesBegan, if you want the touches that began, you can just say NSTouchPhaseBegan and you will get all the touches that began in the touchesBeganWithEvent -- though you might also want to ask for the moved and the stationary ones. Well, this turns out to be a bit field, so in the matching phase you can say give me NSTouchPhaseMoved or'ed with NSTouchPhaseStationary, and you will get all the stationary touches and all the moved touches; that might come in handy. But what we think you're going to want most of the time is all the touches that are touching: the ones that just began, the moved and the stationary. If they've ended, I don't care.
If they've been cancelled, I don't care about those either. And we have predefined values for you: you can just say NSTouchPhaseTouching, and that is equivalent to NSTouchPhaseBegan or'ed with NSTouchPhaseMoved or'ed with NSTouchPhaseStationary. Another one that might come in handy is NSTouchPhaseAny: if you just want all of the touches and you want to inspect the properties -- inspect the phase, inspect the isResting states -- and make up your own mind, you can ask for NSTouchPhaseAny.
So in the sample code we were tracking touches, and this is what the touchesBeganWithEvent looks like. The first thing we do is find out what touches are currently touching the device. I know at least one touch began -- perhaps two began, perhaps this is the second or third or fourth touch -- so we get the set of touches that are currently touching and find out how many are currently touching the trackpad.
If it's 2, we want to go ahead and set up for two-touch tracking, and then we'll do our two-touch tracking, after a threshold, in the touchesMovedWithEvent. If it's more than 2 touches -- a third or a fourth touch -- then I'm just going to get out of Dodge and cancel tracking, because I only want to track 2 touches. And if it's less than 2 touches, because it's only the very first touch that just came down, I'm just going to ignore it and not do anything; a reconstruction of that logic is sketched below.
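Here's a hedged reconstruction of that logic from the description; the _initialTouches ivar and cancelTracking method are hypothetical stand-ins for the sample's:

    - (void)touchesBeganWithEvent:(NSEvent *)event
    {
        NSSet *touching = [event touchesMatchingPhase:NSTouchPhaseTouching inView:self];
        if ([touching count] == 2) {
            [_initialTouches release];
            _initialTouches = [touching copy];   // set up; real tracking starts after a
                                                 // movement threshold in touchesMovedWithEvent
        } else if ([touching count] > 2) {
            [self cancelTracking];               // third or fourth touch: get out of Dodge
        }
        // exactly one touch: ignore it, we only track two
    }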
Of course, I don't want to set up any tracking for just one touch. Which brings up a couple of interesting points, some best practices for Multi-Touch. Another interesting difference between the iPhone and the desktop is that the trackpad can be issuing Mouse Events and Touch Events at the same time, because you're moving the mouse cursor.
So you'll need to take that into consideration when you're looking at each individual touch and when you want to track touches. At the same time, at a higher level, we might have interpreted a gesture, so while we're sending out Touch Events we might also send out a gesture, or we might just be sending out individual Touch Events.
So you might need to selectively decide when you want to ignore the mouse, when you want to ignore certain gestures, or when you want to ignore the touches. This is where looking at the subtype of a Mouse Event, as we did earlier, and seeing that it's coming from a touch device comes in really handy: if you know you want to selectively ignore Mouse Events while you're tracking something else, that's a great way of doing it.
[ Background noise ]
[Raleigh Ledet]
Another thing to consider is tracking thresholds. Even though the user has one finger down and is doing something with that finger, when they bring a second finger down they might just have accidentally brushed the trackpad. You don't necessarily want to go into two-touch tracking and all of a sudden magnify something when that's not what the user wanted or intended. So just like Mouse Events, you should consider using tracking thresholds in your Multi-Touch tracking.
Since we have Gesture Events going on, captured at a higher level, you'll want to consider forwarding them to the next responder when you're not specifically handling them. Let them flow through the Responder Chain, because often -- in Safari, for example -- it's more like the Window Controller that's going to handle the Swipe Gesture or some of these higher-level gestures, to go backward and forward in the view.
In the LightTable example, when I pulled up the tools, that was actually done at the controller level, at the Window Controller, and not in the view. The view had no concept that these tools existed. So I forwarded the Gesture Events I wasn't handling up the Responder Chain; they got to the Window Controller, and I showed or hid the tools as appropriate, depending on what type of Swipe Event it was. The shape of that is sketched below.
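In sketch form, that division of labor looks something like this; toggleToolDrawer is a hypothetical name:

    // In the view: don't override swipeWithEvent at all (or call super for swipes you
    // don't handle), so the event flows up the Responder Chain.

    // In the NSWindowController subclass, which is an NSResponder on that chain:
    - (void)swipeWithEvent:(NSEvent *)event
    {
        if ([event deltaX] != 0) {
            [self toggleToolDrawer];   // horizontal swipe shows or hides the tools
        }
    }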
So you need to take that into consideration as well. We have these individual Multi-Touches, and we have these Gestures, the higher-level thing. When do you use Gestures and when do you use the Multi-Touch Events? Well, it comes down to a question of what you're really trying to do. If possible, let Apple do the hard work for you and recognize the event at the higher level.
So if you're doing a magnify and that's all you're really doing, because you don't care about the exact positions -- for example, in the LightTable application I could pin one side and just expand the other side out, so I needed to look at the individual touches to determine that; but if it's some other kind of control, or the image is fixed and you just want to magnify it in place, or you're magnifying your whole view -- then let Apple do the high-level magnify recognition for you. And the same goes for Rotate and Swipe.
And as it turns out, if you were to actually look at the individual touches associated with a gesture and try to do your own magnify, then because of resting touches, touches getting cancelled, and various other little aspects, you might not get a magnify with 2 touches; you might only actually see 1 touch, and various things like that.
So don't worry about the complication that's going on there; let Apple do the hard work for you when it's appropriate. When there are occasions where you need to know the exact individual touches, because it makes sense to pin something on one side versus the other, then go ahead and use touchesBegan, Moved, Ended and Cancelled. So that's Multi-Touch. It's a rather small API; there's not that much there.
It's really fun to play around with. I suggest that you download the LightTable application and play with it yourself -- and also make your own LightTable application; I want to see a flurry of them out there on the Web, that would be really awesome. Now let's talk a little bit about Event Big Brother: monitoring events that are going on in your app and elsewhere in the system.
The first thing you need to know is a couple of funnel points. We talked earlier about how, when an external event such as a keyboard or mouse event comes in, it goes to NSApplication. Where it actually goes is through NSApplication's sendEvent method. So you can have your own custom NSApplication class and override sendEvent; you can get the event, look at it, determine what type it is, route it to a different window, just let it fall on the floor, or call super's sendEvent and let NSApplication continue processing the event as normal. Or you can let the application route that event down to NSWindow and have your own custom NSWindow that overrides sendEvent; the NSApplication version is sketched below.
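A minimal sketch of that funnel point; the class name and the filtering condition are just illustrative:

    @interface MyApplication : NSApplication   // hypothetical; set as the app's principal class
    @end

    @implementation MyApplication

    - (void)sendEvent:(NSEvent *)event
    {
        if ([event type] == NSKeyDown) {
            // Inspect the event here: reroute it, swallow it by returning early, or...
        }
        [super sendEvent:event];   // ...let normal routing continue
    }

    @end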
NSWindow uses the same method, so once you know the event is going to the right window, you can look at it at the window level and route it by inspection to the right view, to the right place -- or let it fall on the floor, or call super and let it continue on its merry way. And that's fine and dandy, but it's kind of a pain sometimes to create your own NSApplication subclass and then go into Interface Builder and set the principal class and deal with all that. We have something new in Snow Leopard which is really nice, called Event Monitors.
We have local monitors and global monitors. Local monitors are a way that you can easily monitor an event that is already targeted at your application, so you won't need to override sendEvent on NSApplication; you can install an Event Monitor at the appropriate place inside your class, and it works out really nicely. And then we have global Event Monitors, which are a way of monitoring events that aren't specifically targeted at your application: if an event is targeted at another application, you'll be able to see it with a global monitor.
If you have both a local and a global monitor, at no time will you get the same event coming through both, because it's either targeted at your app or it's not -- and that's how you decide whether you need a local monitor or a global monitor. So let's look at this in a little more detail.
I want to point out, again, that the local monitor and the global monitor are new API in Snow Leopard, and this is what the API looks like for a local monitor. It's a class method on NSEvent, so you just call addLocalMonitorForEventsMatchingMask and pass it a block as a handler.
You'll also notice that this block returns an NSEvent. So with local event monitoring, you can not only monitor the events coming through the system, but before NSApplication routes them you can change the event, put in a different event, or just return nil and end processing right there, which comes in really handy. You might want to be careful if you have multiple Event Monitors all watching for the same event in your application.
If one of the Event Monitors returns nil, the event won't get passed to the other Event Monitors. And here's one of those places where you don't get events that are in a tracking loop: if there's a mouse tracking loop going on via nextEventMatchingMask, that bypasses the normal flow of events through Cocoa, and you won't get them; you can't monitor them.
And finally, when you add a local monitor you get back a monitor object. You need to hold on to this object, because you'll need it to remove the monitor with the remove API, but you don't need to retain it; Cocoa is going to handle the memory for you. Adding one looks something like the sketch below.
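For example, a hedged sketch that watches key downs and swallows one of them; _keyMonitor is a hypothetical ivar:

    _keyMonitor = [NSEvent addLocalMonitorForEventsMatchingMask:NSKeyDownMask
                                                        handler:^NSEvent * (NSEvent *event) {
        if ([[event charactersIgnoringModifiers] isEqualToString:@"q"]) {
            return nil;      // returning nil ends processing; the app never sees this event
        }
        return event;        // or return a substitute event here
    }];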
This is the global monitor; it is also a class method, addGlobalMonitorForEventsMatchingMask. Events that are not targeted at your application will come in on your global monitor. For those of you familiar with the event monitor target in the Carbon API, where you can add a handler to the monitor target: this is our Cocoa equivalent.
And it has the same limitations, in that you can observe only -- you can't modify the events -- and you generally don't get Keyboard Events this way, unless you have access for assistive devices turned on in the Accessibility preferences or you're a trusted accessibility application. And I want to point out that with both the global monitors and the local monitors you can't get Keyboard Events in secure fields, so you can't sniff passwords this way.
And as we pointed out, you can't modify the event, so there is nothing to return. Similar to the local Event Monitor, you get back an Event Monitor object; just hold on to it. You don't need to retain it, but you will need to remove it. And this is how we remove Event Monitors -- just a line like the snippet below.
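The removal itself is one line, as in this sketch (continuing the hypothetical _keyMonitor example from above):

    [NSEvent removeMonitor:_keyMonitor];   // explicit; not even Garbage Collection will do this for you
    _keyMonitor = nil;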
Cocoa is going to manage the memory for you; we take care of the run loop and all the appropriate things that need to be done with your Event Monitor. You just need to tell us when to get rid of it, so you pass it back to us and tell us to remove the Event Monitor. And you must do this explicitly -- none of this is going to get done for you automatically, not even with Garbage Collection. So let's see a quick demo of Event Monitoring.
[ Background noise ]
[Raleigh Ledet]
So I have an application here that looks at the tablet, and it can tell when you bring the device near the tablet; this is the Proximity Event I was talking about earlier. And, you know, tablets are kind of neat: you can find out when the user is using the opposite end of the pen and know that they actually intend to erase.
That's one of the really neat things, but generally Proximity Events are something you want to send to all the tablet-aware objects in your application -- and this view over here, if I come in, I can get things there, but the other one's not getting it; this isn't what we want. So what we're going to do is use Event Monitors to track this.
So we're going to quit that and come into Xcode. Here's our tabletProximity event method override. We only care if it's an entering proximity; if the device is leaving, we clear the image. We look at the pointing device type on entry, and if it's a pen pointing device, we'll show the pen.
If it's an eraser pointing device -- we have a pen pointing device and an eraser pointing device -- we'll show the eraser image. And tablets actually have a cursor pointing device, or a puck or a mouse, other names you might hear, and we'll show the mouse image for that. We've already set up a couple of menus in this application to toggle a local monitor on and off and a global monitor on and off, so we'll go ahead and fill in the implementations.
Since it's a toggle, we have an ivar for the local Event Monitor. If we already have one, we want to turn it off, so we remove the monitor and set our ivar to nil. Otherwise we need to add one, so we'll do that real quick: on NSEvent, we call addLocalMonitorForEventsMatchingMask with NSTabletProximityMask, and then we write the handler.
You've probably been to some of the sessions on blocks. So we'll open up our block, close our block, then close the method and add our semicolon. When our block gets called, we already have a tabletProximity method that's going to do all the right things with the images for us, so all we need to do is call tabletProximity on self and pass in the event. Now we need to return the event, because this is a local Event Monitor.
We don't need to modify the event; we're going to let it go through the normal processing and through all the local monitors. Now we'll set up our global Event Monitor, because when we're in the background we also want to know what the current state of the device on the tablet is, so we're always in sync.
It does the exact same thing about removing the monitor if it already exists, but if we need to create a new one, this time we call addGlobalMonitorForEventsMatchingMask on NSEvent, again looking for the proximity event, NSTabletProximityMask, and we set up our handler.
Open our block, close the whole thing. And, again, we'll do the same thing here: we just pass the event on to our handler as before, and we don't need to return anything this time -- it's a global Event Monitor, so we can't modify the event in any way. Put together, the two toggles look something like the sketch below.
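A hedged reconstruction of the two toggles as written in the demo; the ivars and action names are hypothetical, but tabletProximity is the NSResponder method the view already overrides:

    - (IBAction)toggleLocalMonitor:(id)sender
    {
        if (_localMonitor) {
            [NSEvent removeMonitor:_localMonitor];
            _localMonitor = nil;
        } else {
            _localMonitor = [NSEvent addLocalMonitorForEventsMatchingMask:NSTabletProximityMask
                                                                  handler:^NSEvent * (NSEvent *event) {
                [self tabletProximity:event];   // reuse the existing image-updating override
                return event;                   // pass it along unmodified
            }];
        }
    }

    - (IBAction)toggleGlobalMonitor:(id)sender
    {
        if (_globalMonitor) {
            [NSEvent removeMonitor:_globalMonitor];
            _globalMonitor = nil;
        } else {
            _globalMonitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSTabletProximityMask
                                                                    handler:^(NSEvent *event) {
                [self tabletProximity:event];   // observe only; nothing to return
            }];
        }
    }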
So we'll go ahead and make sure that builds -- it does, great -- and here's our application again. This time when we come in with the tablet, it's still doing the same thing, because we have it set on a menu toggle, so we'll go ahead and turn on the local event monitoring.
Now that we have the local event monitoring, you see that both windows are getting updated. Inside each window is that same view we just modified, so each view adds a local Event Monitor; they both get their callback and they're both running the same code, so they each display the appropriate image. Now, if we're in the background -- say we go to the Finder -- you notice it doesn't get any changes.
So let's go ahead and turn on our background toggle; we'll turn on the global event monitoring, there we go. So we have our local event monitoring, and now if we bring the Finder to the foreground, we're not the target application anymore; the global Event Monitor is getting called for us, and this application can maintain the appropriate state even though it's in the background. That's one of my favorite uses of Event Monitors. [Applause] Thank you.
[Raleigh Ledet]
So, a word of caution about Event Monitors: there can be a potential performance impact here. If you install a global Event Monitor for all the mouse moves, we'll send you all the mouse moves, and that could potentially be a lot of information coming through. If you set up a global Event Monitor with a mask that matches every event, you'll get all sorts of stuff, and, you know, this can have an impact on the system, so you really need to be careful. When you use a global Event Monitor, make sure it's the appropriate thing for what you want to do and that there isn't a better way to do it.
While the tablet is a great example, I want to give you one more, briefly. Say you're doing your own custom cell tracking in a list, which brings up another window, and you want to know when the user clicks outside of this window, because you just want to have that window go away.
That works great normally, if you're not doing a tracking loop, as long as the mouseDown occurs within your windows. But if the mouseDown occurs outside of your application's windows, it's going to go to some other application; even though that application's in the background, the mouseDown goes to it and brings that application to the foreground.
Well, if you install a global Event Monitor and look for just the mouseDown, you'll be notified when that occurs outside of your application; you can close your window, let your application go into the background, turn off your Event Monitor, and it's a nice way of solving the problem without too much overhead.
And you have the handler block right there in the Event Monitor, so it's nice and localized right where you need it, and you don't have some custom subclass of NSApplication just to do this one little thing -- it keeps your code nice and clean. The whole thing might look like the sketch below.
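Put together, it might look like this sketch; _transientWindow and _clickMonitor are hypothetical ivars:

    _clickMonitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask
                                                           handler:^(NSEvent *event) {
        // A mouseDown landed in some other application: dismiss and clean up.
        [_transientWindow orderOut:nil];
        [NSEvent removeMonitor:_clickMonitor];
        _clickMonitor = nil;
    }];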
[ Background noise ]
[Raleigh Ledet]
So we've looked at a whole bunch of different stuff. We've looked at Event Routing; we've looked at how you can come in and change the way events are routed through the system; and we've looked at Multi-Touch and Gestures. We have the TargetGallery and LightTable demos that I showed; their code is out there, so you can get them from the attendee website, download them, play with them, check them out. And here are the links I promised earlier for all the different search phrases I showed.
I'll keep this slide up for a few seconds for anybody who wants to attempt to copy down that long link -- and here are some more. And as always, please go ahead and take a look at the Cocoa Release Notes; you can get to those from the documentation right out of Xcode. Lots of good information in there.