App Frameworks • iOS • 55:22
This session will help you take advantage of the Multi-Touch features available in iOS. You will learn practical information about Multi-Touch APIs, touch routing, gesture recognizers, as well as guidelines for interoperating with the system and other apps, and pointers for how to create complex real-world user interfaces which make great use of Multi-Touch.
Speaker: Ken Kocienda
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript was generated using Whisper, and it may contain transcription errors.
Welcome, this is Making the Most of Multi-Touch on iOS, Session 118. And I'm Ken Kocienda. And again, welcome, thanks for coming. So this talk is, as the title suggests, about iOS Multi-Touch, the wonderful direct manipulation system we have on iOS devices. It's a natural and fun interaction model.
You get wonderful visual effects like this. Users really feel good about interacting with apps when they behave like this, like things they're familiar with from the real world. Keyboards, of course, are also familiar from the real world, as in great apps like GarageBand. And then you can have fun with apps like Photo Booth. So those are great examples of using multi-touch in Apple's apps. But of course, why you're here today, and why we're having this conference, is that we're all interested in your apps. So the talk is about how to make the best use of multi-touch in your apps. And hopefully, I'll give you some ideas to help you direct your effort, get the most out of your effort, the most out of your development time. And to do that, I've got four big ideas that I'd like to go over during the session.
First, just a brief introduction, a kind of overview of multi-touch strategies from a high level. And then we'll get into some touch system concepts so that you understand what happens when touches come down on the screen, and the whole lifetime of a touch while the user is contacting the screen.
And then thirdly, some touch system tasks: how you can get in there and customize that whole process to do what you want in your app. And then finally, how to interact with the rest of iOS and other apps running on the system, of course with special attention to touch and touch APIs and frameworks, et cetera. Now, a quick word: we've got all of our sessions from last year's WWDC on iTunes U. If you're particularly interested in this material, and I guess maybe you are, at least to some degree, since you're here, there are two really interesting sessions that you can go back and look at from last year if you didn't see them: Gesture Recognition and Advanced Gesture Recognition. Now, this is not a repeat of those sessions. It's new material, but there's quite a bit of overlap. And so if you like this talk, you'll love those. How about that? So go back and take a look at those. There's some good information in there, too. But today, we've got those four big ideas. So let's get started.
Multi-touch strategies. I like to think that there are four general approaches to handling multi-touch on iOS. And the first is you can ignore it altogether. Simple apps, particularly on the phone, but also on the iPad. If you've got an information app, like Stocks, you're just ignoring multiple touches.
It's really just a single-touch interaction. Of course, you get some of the advantages, and there's plenty of material in the talk that you can take advantage of if you've still got just a single-touch application. But again, the concept is you can just completely ignore multiple touches throughout your app.
Another approach is that you can handle multiple touches on the screen independently. Here's an app that I love, Bloom HD, a wonderful music-making app. The idea is you touch down on the screen, and each of those touches is independent. It's not like a gesture or anything like that. Of course, the touches do interact with each other, again, to make some wonderful music. But really, the touches are independent from one another in terms of handling them.
Then a third strategy is having multiple touches interact with each other. Here we've got one finger on the keyboard while another goes in and moves that pitch control. At some level in your program, those touches should know about each other.
One touch holds a key down to play its pitch while the other goes and modifies that pitch. So again, at some level, these touches are really cooperating together to give a single effect to the user. So that's a third strategy. And then fourthly, a very simple example is having a gesture, right? Two touches cooperating for a single effect. And of course, a great example of that is just pinch to zoom. So you can handle multiple touches as a gesture.
And so those are the four basic strategies. And I think what's probably pretty common, too, is that you wind up with a mix and match. I mean, even if you have, say, a multi-touch game, entering into the game isn't multi-touch. You're just pressing a button. It's single touch. So this kind of mix-and-match strategy will happen throughout your app. Now, I like to think when I'm designing an app that I know, at least on some level, maybe even just subconsciously, which of these strategies is being employed at a particular time, and when I might be transitioning to something else. So even if you're just prototyping a new app, you might want to think about these things at a high level. Are we going to be doing multiple touches? How might multi-touch help you at a particular spot in the app, or not? You might ignore multiple touches, 'cause that just might be better. So again, thinking about it at a high level, I think it's a good idea to know which one of these strategies you're using at a particular time.
So again, that's a pretty high-level introduction to multi-touch. Now, how do you make that work? You've decided on one of those strategies. How do you actually go ahead and implement it? That's what this next section begins to talk about: touch system concepts.
And so if you're new to iOS development, this is the cast of characters. These are the classes that you need to know about in order to make the best use of multi-touch, including a couple that you might not think about, like UIApplication, UIWindow, UIViewController, and so forth. We've got really wonderful documentation and really great sample code. Again, if you're new to iOS development, these are the classes that you should know about. You should be able to describe, maybe even just to yourself, how these classes contribute to the whole multi-touch system in order to make the best use of it.
And of course I'll be talking a lot more about these classes as the session goes on. So let's begin doing that and talk about touch processing, kind of a simple example of touch processing. So I wrote a demo app which I'll be using a few times during the session, kind of a simple app which puts some shapes up on the screen, and you can just drag them around, direct manipulation, kind of manipulate them with some gestures as well.
And so the first thing that I'd like to do with this sample app is to look at some just very, very basic single-touch processing using these four touch handlers. So let's do a demo. So now, here I am in Xcode, and so those shapes that you saw on the screen are implemented with this shape class. Very, very simple class.
It's just a UIView subclass that, you'll see, just does some pretty simple drawing to implement different shapes. Okay, so you'll also see, if I scroll through here, that there's no touch handling. And so I'd like to add that in. And so now you'll see that I've got some very, very simple basic implementations of touchesBegan, touchesMoved, touchesEnded, and touchesCancelled.
And you'll see that touchesMoved is sort of where the action happens, where the direct manipulation happens. I'm just going and getting the point that corresponds to a touch, and I'm using it to move the shape around, changing the center point of the shape using the data that's coming through from the touch. So let's build this and have a look on the iPad. Okay, so I've just got a simple shape on the screen, and if I go and touch it, you'll see that I'm tracking my touch. So if I touch with lots of fingers, you'll see where it is that I'm doing that. But now you see I'm just going with one touch, and I'm moving around, and the shape is giving little updates on which of those handlers is getting called at a particular time. You'll see touchesBegan; if I can hold my finger steady and then move it at all, it changes to touchesMoved. Right: it began, moved, and ended. And of course, if I do another one, that yellow one doesn't change. I'm just getting the touches coming through in the shape that I'm touching. Very, very simple, right? But of course, this is the first example. So let's go back and talk a little bit about what's happening to have those handlers be called. What's the life cycle of those touches, such that it actually winds up resulting in a callback in your code?
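The touch handlers being described might look something like this. (A sketch, not the actual demo code; the Shape class name and the details are my assumptions.)

```objc
// Shape.m — a plain UIView subclass dragged around by a single touch.
#import <UIKit/UIKit.h>

@interface Shape : UIView
@end

@implementation Shape

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // A touch landed on this view; nothing to set up for a simple drag.
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Ask the touch for its location in the superview's coordinates
    // and recenter the shape there — direct manipulation.
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // The finger lifted; the drag is complete.
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // The system interrupted us (alert, incoming call, gesture took over).
}

@end
```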
So there's a timeline for a touch. When a touch comes down on the screen, we have this cycle. You can think about it as a series of steps. We already saw a series of steps, but let's go in and talk about one important step which happens before you wind up getting your touch callbacks. And that's this: finding the hit-test view. This happens at the very moment that a touch comes down on the display. The first thing that happens is the whole OS goes and finds which view is under your finger. This is special work for UIApplication. It's not something you need to worry about; it's something we need to worry about as the OS provider, as the framework provider. So what UIApplication does is it goes and finds the deepest view in the entire view hierarchy in the application that's under your touch. The very, very deepest view.
Okay? It finds that view. And it's all about view containment, right? It has nothing to do with first responder. If you're coming from Mac development, you might think that event handling is tied up with the responder chain and first responder. That's not what happens with touches. It's all about view containment. It's all about taking that touch and finding out which view is underneath your finger.
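Conceptually, that drill-down is what UIView's default hit testing does. A simplified sketch of the idea (this is a model of the behavior, not the actual UIKit implementation):

```objc
// Simplified model of -[UIView hitTest:withEvent:]: a depth-first search
// for the deepest view whose bounds contain the point.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (self.hidden || !self.userInteractionEnabled || self.alpha < 0.01) {
        return nil;   // views like this are skipped entirely
    }
    if (![self pointInside:point withEvent:event]) {
        return nil;   // the point isn't inside this view's bounds
    }
    // Front-to-back: later subviews draw on top, so check them first.
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint converted = [self convertPoint:point toView:subview];
        UIView *hit = [subview hitTest:converted withEvent:event];
        if (hit) {
            return hit;   // a deeper view claimed the touch
        }
    }
    return self;   // no subview contains it; this view is the hit-test view
}
```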
Okay? And once this happens, once this determination is made, that touch and that view are linked for the remainder of the lifetime of that touch on the screen. If you move that touch around, the same hit-test view and that touch remain linked together. Now, why is that important? Well, because now we start delivering events. And where do the events go? The events go to that hit-test view. Okay? So once that touch comes down, the hit-test view is determined, and now we start delivering events. Event delivery is all about UIApplication's and UIWindow's sendEvent:.
And the hit-test view winds up receiving the event as part of this built-in mechanism that, again, is part of the framework, part of what we provide to you. And of course, as we saw in the code, as the label updated in the shape: touchesBegan, right? Because that's the stage that we're at. The touch just came down.
Okay, so now subsequently the touch moves. Well, guess what? We don't have to go through that first step again. There is no rerunning of that hit-test view determination, because the touch and the view are linked for the whole life of the touch. So this process just runs again, only a different method gets called: touchesMoved gets called instead.
And then afterwards, when your touch finally lifts, again, it's simple. The event just continues to get delivered to the same place that it has been all along: touchesEnded. Now, if we back up a step, something else can happen: touchesCancelled. If you're on, say, an iPhone, an iPad, or an iPod touch, any iOS device, and you've got a touch down on the screen and, say, you get an alert, well, your application is going to get touchesCancelled. And you should really have an implementation of touchesCancelled. I'll talk more about that later. But you really, really should think through what it means when the rest of the system interrupts your touches.
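One way to think through an implementation of touchesCancelled: snapshot whatever state the touch is mutating when it begins, and restore it on cancellation. (A hedged sketch; the _originalCenter ivar is my invention, not from the demo.)

```objc
// Sketch: remember where a drag started so touchesCancelled can undo
// a half-finished move instead of leaving the view stranded.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    _originalCenter = self.center;   // snapshot before we start moving
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // An alert, incoming call, or recognized gesture interrupted the drag.
    // Snap back rather than leaving the shape half-moved.
    self.center = _originalCenter;
}
```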
Okay, so that's it. It really is pretty simple once you get these two concepts, right? It's a two-step process: finding the hit-test view right when the touch starts, and then event delivery happens from there. So now, what about processing a gesture? Like a pinch gesture, a zoom gesture, a long press gesture, any of the gestures that are built into UIKit, or any that you've implemented yourself?
Now, you might have a question: touch handlers or gestures, which one are you supposed to use? I mean, I just did this direct manipulation example, but there's also a pan gesture recognizer. So which one should you use? Well, there are certain situations where gesture recognizers, I think, are clearly better. An example is when you're trying to deliver a standard gesture that users have come to expect from iOS apps, like pinch to zoom. You should definitely use the pinch gesture recognizer to implement pinch, or just take advantage of it in UIScrollView, if you're using that.
Custom gestures are another case. If you've got, say, a little text editing app, and you want to implement a custom gesture, a little rub-out gesture to delete some text, that might also be a really good idea to implement as a gesture recognizer rather than just touch handlers. Because a gesture recognizer lets you encapsulate the behavior of that movement in such a way that you deal with it all in one place. You don't have to sprinkle that code around in touch handlers and all of the views which may want to implement that gesture.
So that's two sort of reasons why gesture recognizers might be better. A reason why touch handlers really might be better, even in sophisticated cases, is if you're porting software from another platform. Let's say you've got a drawing app that you're bringing over to iOS from someplace else. And you probably don't have gesture recognizers or anything like them in that existing code. And so maybe the quickest and easiest way to get your code up and running would be to just kind of hook it up to those touch handlers like I just showed in the demo.
Again, it's just the quickest and easiest way. You might do something different for 2.0, sort of a rethink. But for getting up and running, touch handlers might really be the simplest and easiest way for you to go. So is it really six of one, half a dozen of the other with touch handlers and gesture recognizers? Well, in some respects it really does come down to a matter of personal preference, which style you like better. Maybe you've already got existing code, again, like that porting example. But I hope, in some other examples I've got coming up, I'll show you that there really are some subtle points which will help you decide which one might be better, depending on the details of the behavior that you want to deliver. So what about processing a gesture, stepping through it in the same way as processing a touch? I've got a demo for that. So, gesture process-- whoops-- gesture processing demo. And let's see.
Okay. So now, what I'd like to do is I'd like to add a pinch to zoom gesture. So I've got one of those shapes on the screen. And what I'd like to do is now put two fingers down into it and pinch to zoom to change the size of the shape. Really pretty simple. So in the init method for the shape, again, I'm in the shape class.
So I just go and allocate and initialize a pinch gesture recognizer, set a target and action, and add it to self. So I've got a pinch handler. And in many ways, a pinch handler is very similar to touch handlers. I've got these state callbacks coming in. And you'll see, again, just like in the touchesMoved example from before, the real action is going on in UIGestureRecognizerStateChanged, where I take the state of the pinch and just go and set a transform on the view.
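The setup and handler being described might look roughly like this (a sketch of 2011-era, pre-ARC code; the handlePinch: selector name is my assumption):

```objc
// In the shape's initializer: create the recognizer with target-action
// and attach it to the view itself.
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc]
            initWithTarget:self action:@selector(handlePinch:)];
        [self addGestureRecognizer:pinch];
        [pinch release];   // manual retain/release, pre-ARC
    }
    return self;
}

// The action method is driven by the recognizer's state machine.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state == UIGestureRecognizerStateChanged) {
        // Scale by the pinch amount, then reset the scale so each
        // callback applies only the incremental change.
        self.transform = CGAffineTransformScale(self.transform,
                                                pinch.scale, pinch.scale);
        pinch.scale = 1.0;
    }
}
```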
Okay, so now, just like before, this is really unchanged. I can move this around, but now I can also go in and touch and pinch the view to zoom it, and of course I can still move it afterwards. Okay? So that's really pretty simple. And again, if I have a second one, the first one is unaffected. It just doesn't get any callbacks or anything like that. Okay, so now if I go back to the code, you'll see that what I did, of course, was add the pinch to the shape itself. But wouldn't it be kind of cool if I could just go out into the empty area where there isn't a shape and pinch to zoom all of the shapes at the same time? So let's take a look at that. Instead of putting the pinch gesture recognizer in the shape, I'm going to put the gesture recognizer in a view controller class, because there is a view which contains those couple of shapes, and there's a view controller associated with that view. So I can put the gesture recognizer on that view controller, assign it to the view controller's view, and get this higher-level effect. So you'll see that I'm creating a pinch gesture recognizer in the view controller's viewDidLoad. I've got that content view associated with the view controller, which again is what contains all of those shapes, and that's where I add the pinch gesture recognizer. And you'll see, like before, it winds up being very similar in UIGestureRecognizerStateChanged. The only thing is, now I'm stepping through all of the shapes that are there and transforming each one in turn.
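That view-controller-level version might be sketched like this (the contentView property and handler name are my assumptions, not the actual demo code):

```objc
// In the view controller: one recognizer on the container view
// scales every shape it contains.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePinch:)];
    [self.contentView addGestureRecognizer:pinch];
    [pinch release];   // pre-ARC memory management
}

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state == UIGestureRecognizerStateChanged) {
        // Step through every shape and apply the incremental scale.
        for (UIView *shape in self.contentView.subviews) {
            shape.transform = CGAffineTransformScale(shape.transform,
                                                     pinch.scale, pinch.scale);
        }
        pinch.scale = 1.0;   // reset so the next callback is incremental
    }
}
```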
So let's take a look at that. So now you'll see if I go and just pinch out an empty area, I've got that sort of status update happening in the top of the view. And if I add a couple more shapes, you'll see that I can also do this. Now there's a kind of a really interesting interaction which I'd like to show you now, which is I can drag this view around and then land a finger afterwards, long afterwards, long after I've moved the view, and the pinch gesture recognizer for the view controller will kick in. So again, let's look at that again. I can move this around. I've just got one touch on the screen. I can land a second touch later and start pinching.
Pretty interesting. And what that means, of course, is that somehow those touch handlers in the shape (touchesBegan, moved, ended) and the pinch gesture recognizer callbacks are cooperating with each other. One of them starts, but then the other one can take over. So that's an interesting interaction, and I think it's worth going over. So let's talk about that.
Okay, so now, really focusing in on that last case of processing the gesture. So now when touches come down and you've got gesture recognizers in the mix, you've created some and you've added some. So the touches come down, right? When that second touch came down, not only did I find the hit-test view for that second touch, but I gathered up all the gesture recognizers which are associated with the touches. And how does that happen? Well, again, this is special work for UIApplication. This gesture-gathering task starts with the hit-test view for each touch.
And starting from that hit-test view, right, so we drilled down to the deepest view in the hierarchy to find the hit-test view, this gesture gathering is now sort of a bubbling back up. So starting from that view, it goes and sees: have any gesture recognizers been added to the hit-test view? Well, what about the hit-test view's superview? What about the superview's superview? All the way up.
And they're added in order, so that the deepest ones get added to the list first, and the ones which were added to views higher in the view hierarchy get added later. Okay? So again, this kind of initial determination step goes on: finding the hit-test view, gathering up all the gesture recognizers. And of course, that ordering, deepest view first, is the priority of the gestures. Okay, so now, just like before, event delivery happens. But the difference is that event delivery is two-tracked. We saw that. Touches and gestures are both in the mix. Well, how does that happen? Gestures get tested for recognition, and the hit-test view receives events.
Okay? Views get touchesBegan if no gesture has been recognized. So in the case of a pinch, you can't pinch until your fingers start moving together. And that's really what happens here. I had that one touch down, and so the view got touchesBegan and started moving around.
But now, if the touches move, event delivery happens again. Of course, we don't do that first step, just like before. We've already gathered the gesture recognizers and done that hit-test view determination. Event delivery is still two-tracked. But now let's say, yes, you did begin to move your fingers together, and the gesture recognizes this.
Then, as we saw, the view gets touchesCancelled. So that just continues to happen. As you are moving around, all those gestures keep trying to recognize until they fail to recognize. That pinch gesture requires two fingers, so it just keeps going until such time as it recognizes, or of course the touches lift. But in this case, those two fingers did cause the gesture to be recognized, and touchesCancelled got sent. Now the gesture runs its handler, runs that pinch handler, and of course the scaling takes place just as we saw.
Okay, so now if the touches move again after this initial gesture recognition, event delivery is just single-tracked, right? Touches are no longer in the mix. The hit-test view will no longer receive any events. Only the gesture will. The gesture runs its handler, and as we saw, you can scale all the views. And when the touches lift, again, it's the same from that point on.
So we start out with this two-track event delivery process, and if a gesture recognizes, we go over to a single track. And at that point where the changeover happens from being two-tracked to one-tracked, your views will get touchesCancelled. The hit-test view will get touchesCancelled.
Okay, so going back to that idea of, well, does it really matter whether you use gestures or touches? It really does matter, depending on the kind of behavior that you want your application to exhibit. Touch handlers and gestures can really work well together if you understand how event delivery happens and how they interact with each other.
So that's touch system concepts. And so next, on to touch system tasks. How do we get in there and change some of these processes and procedures to customize them a little bit? There are four topics that I'd like to look at: implementing direct manipulation, a little bit more about that; picking an event handler, using the responder chain and some details of gesture recognizers; changing the event flow in interesting ways; and then finally, some notes about subclassing. So, about implementing direct manipulation first. Of course, we've seen this, right? You implement these touch handlers in your view, and you can use those to respond to touches appropriately. Gesture recognizers don't use these. If you've got a pan gesture recognizer, or as we saw in the example, a pinch gesture recognizer, you have to implement code that looks like this: a gesture recognizer callback which you set up using target-action. When you allocate and initialize your gesture recognizer, you set the target and you set an action method, and that action method will look like this. And you'll get callbacks following the gesture recognizer state machine,
state by state, doing things as appropriate. Okay? So we'll take a look at another example of that in a second, but first I'd like to talk about picking an event handler. Now, touches do not go to first responder, as we talked about before. Again, this is a bit of a change if you have experience developing on the Mac. Touches don't go to first responder. But you can still use the responder chain if you wish,
and have a higher-level object than the view that got touched handle the event, if you want. All right, so this is the example that we've looked at so far. It's really pretty simple. The hit-test view was the one that was responding to touches, because that's where the touch handlers were written. That's where they were implemented.
But you can take advantage, again, of the whole responder chain. You can use a superview, you can use the window, you can use the application. Now I think, as a general best-practices sort of thing, keeping your touch handling down as close to the hit-test view as possible is probably what you want most of the time.
But of course, these options are available to you. And if we take a look at the example that we have, we've also got a view controller in the mix. And view controllers do participate. They are members of the responder chain. So you can put touch handlers on view controllers.
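A sketch of what a view-controller-level touch handler might look like. (Assumptions: the class name is mine, and the touched view must not handle the touches itself, so that UIResponder's default behavior forwards them up the chain to the controller.)

```objc
// UIViewController is in the responder chain, so touches that the
// hit-test view doesn't handle bubble up to it.
@implementation ShapesViewController

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // The touch remembers which view it landed in, so the controller
    // can still manipulate that specific shape.
    UIView *shape = touch.view;
    shape.center = [touch locationInView:shape.superview];
}

@end
```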
Okay, what about gestures and the responder chain? Well, gestures don't use the responder chain at all. At all. Again, gesture recognizers, which ones recognize, it's all about that view containment and that gesture gathering process related to which hit test view was determined, and then gathering up gesture recognizers from there. So it's all about view containment, not the responder chain.
And of course, gestures are attached to views, right? They're attached to individual views along the view hierarchy, in that drill-down-and-come-back-up process. And since gestures use target-action, you can specify which handler will respond and which object will implement that handler. So you can implement what I think is a pretty interesting pattern, a kind of gesture controller pattern. You might even have a whole new object.
And that object's only job is to be the target of gesture recognizers. This is not something that I have in the demos, but it is something that I've done. We've used the term interaction assistant quite a bit for this idea of a class whose whole job is to implement a bunch of gesture recognizer handlers, centralizing that gesture-handling code in one place. It's an interesting idea; maybe you want to think about it. So now let's take a look at a demo of implementing direct manipulation a little bit more and picking an event handler.
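The interaction-assistant pattern might be sketched like this (all names are my assumptions; the point is just a plain NSObject acting as the target for several recognizers):

```objc
// A plain NSObject whose only job is to be the target of gesture
// recognizers, centralizing gesture handling in one place.
@interface ShapeInteractionAssistant : NSObject
- (void)attachToView:(UIView *)view;
@end

@implementation ShapeInteractionAssistant

- (void)attachToView:(UIView *)view {
    [view addGestureRecognizer:[[[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePan:)] autorelease]];
    [view addGestureRecognizer:[[[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePinch:)] autorelease]];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // Centralize all drag logic here instead of in each view.
}

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    // And all scaling logic here.
}

@end
```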
So what I've done is in the ViewController, you'll see that I still have the pinch gesture recognizer handler up in the view controller, but now I've taken the touches, the touch handling, and moved it up to the view controller. And if we go and take a look at the shape, you'll see that there is no touch handling there at all. Here's where it was before. The touch handling is gone from the shape completely. And this is kind of interesting, because now you can actually implement a view which doesn't have any sort of even a little sort of controller-like behavior. It just really is a visual representation.
You're passing the behavior along to a higher-level object. That's really, I think, a good reason for doing this: the view is just about visual representation. So let's take a look at this. You'll see that it winds up behaving just like before, only I changed the little status message there with the little 'VC', so you can see that the view controller is the one responding. But from the user's perspective, it behaves just like it did before. It's just a different organization for you; it might make more sense for your program to implement touch handling in a higher-level object. So now, back to the code. Even though I pitched you wonderfully on removing event handling from your low-level, leaf-level view objects, I'd like to go back and show you that you can do the same kind of thing as you do with touch handlers, but instead use a pan gesture recognizer.
Okay, so now in my initWithFrame: method, back in the shape object, for each of those individual shapes I've gone and added a pan gesture recognizer. Very, very simple. And you'll see that the pan event handler is again very similar to what was in the touch handlers: just going and manipulating the center point of the shape in relation to where the touch is. Okay? So now I go and pan around, and you'll see, well, it behaves just like it did before. And you'll see that if I touch outside here, I'm still pinching. But now, if I start panning, landing that second finger will no longer pinch. Right? So that behavior has gone away. Now, that might be what you want, but I can't do that landing-one-finger thing: pan, or move the shape around, and then land that second finger and pinch. Well, why is that? It's because once one gesture recognizer is recognized, it prevents all of the others from being recognized. It's first past the post, a winner-take-all for gesture recognizers. Again, if you remember, gestures are gathered, and multiple gestures are added as you go up the view hierarchy.
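The pan handler being described might look something like this (a sketch; the handler name is my assumption):

```objc
// Pan handler attached to each shape in -initWithFrame:, mirroring the
// earlier touchesMoved: logic with a recognizer instead.
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    if (pan.state == UIGestureRecognizerStateChanged) {
        // translationInView: accumulates over the gesture, so apply it
        // and reset to zero, just like scale in the pinch handler.
        CGPoint t = [pan translationInView:self.superview];
        self.center = CGPointMake(self.center.x + t.x, self.center.y + t.y);
        [pan setTranslation:CGPointZero inView:self.superview];
    }
}
```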
So that pinch gesture recognizer, if you remember, was on the view controller's view. It's on a higher-level view. The pan gesture recognizer is on that deeper view, so it wins. And so landing that second finger later won't cause the pinch gesture recognizer to recognize, whereas it did before, because earlier all we were using in the shapes were touch handlers. So that's, again, a subtle point, but this might help you get the behavior that you want, one way or the other. Again, try to figure out how touches and gesture recognizers relate, and then how multiple gesture recognizers relate to each other. Okay? I've got one more example, which is that, of course, I can just go and add the pan gesture recognizer up in the view controller.
So if I add a shape, I've got the pinch gesture recognizer now up on the view controller to do that pinch to scale all of the shapes. But now for each individual shape, I can go and add a pan gesture recognizer to the shape. So the shape view is still getting the gesture recognizer on it. But what is this illustrating? Again, it's kind of illustrating this notion that I'm bringing the event handling up to a higher level object. because even though I'm adding the gesture recognizer to the shape object, to that leaf level view object, right, the handler for the gesture is going to be up in the view controller.
So again, if I go over to the shape, there's really nothing interesting in the shape at all. If you just looked at this shape code, you wouldn't see anything that would lead you to believe that you can actually act on the shape directly. But of course you can. So now I've got a shape, with the pan gesture now being recognized up in the view controller. Again, I can do the pinch to scale the shapes, right? And again, it's that same behavior as before: one of them wins and prevents the other one from recognizing later.
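This exclusive, winner-take-all behavior is the default, but it's worth knowing that UIGestureRecognizerDelegate offers an opt-out if you do want the pan and pinch to run together. (Not shown in the session's demo; you'd also set the recognizer's delegate, e.g. `pan.delegate = self;`.)

```objc
// Opt in to simultaneous recognition so a pinch can still begin
// while a pan is already in progress.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:
        (UIGestureRecognizer *)otherGestureRecognizer {
    return YES;   // allow both; return NO for the default exclusive behavior
}
```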
So, some interesting details and options about direct manipulation. It seems like a simple idea, but it turns out there are quite a number of ways to implement it, all giving very similar behavior with some subtle differences, depending on what you want, how you set it up, and where you put the event handler.
Now, the next step: changing event flow. At a high level, changing event flow is about changing which view becomes the hit-test view. That matters, of course, because that's the view that gets the touch handlers, and it's also the starting point for the gesture-gathering process, which in turn determines which gestures will get recognized. So what can we do here? This is the point I'm talking about: that initial touch down, which we saw before in the timeline, where we find the hit-test view before any events get delivered.
All right, a quick side note on changing event flow. There is public API, which you all have access to, on UIApplication and UIWindow: sendEvent:. You can override this to get a look at every single event that gets delivered to your application, and in turn to every window. Now, in older versions of iOS, before gesture recognizers arrived in 3.2, this was really the only way to get that behavior of landing a first touch, landing a subsequent touch, and having a gesture take over. You had to take this very high-level view of how events were flowing into your application. But you don't need to do that anymore.
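For completeness, a minimal pass-through override might look like this; the UIWindow subclass name is hypothetical, and the essential part is the call to super, without which nothing in your app receives touches:

```objc
#import <UIKit/UIKit.h>

// Hypothetical window subclass that peeks at every event.
@interface EventLoggingWindow : UIWindow
@end

@implementation EventLoggingWindow

- (void)sendEvent:(UIEvent *)event
{
    NSLog(@"event: %@", event);   // inspect only; don't swallow events
    [super sendEvent:event];      // keep normal delivery intact
}

@end
```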
Really, overriding sendEvent: is not recommended. Instead, understand how the examples I'm showing here work, and if possible use those mechanisms, the way touch handlers and gesture recognizers interact, to get the behavior you want, rather than drinking from the fire hose and trying to get everything right in sendEvent:. So really think again if you believe this is a good idea for you.
Okay, but even using the standard sendEvent: that we provide in UIApplication and UIWindow, you still have some pretty interesting options. One of them is turning off events for a view. Say you have a view and you do not want it to become the hit-test view. What options do you have? The simplest, perhaps, is to just remove it from the view hierarchy with removeFromSuperview. If it's not in the view hierarchy, it won't become the hit-test view, and it won't be the target for any touch events. Alternatively, you can leave the view in the view hierarchy and use other UIView API: set userInteractionEnabled to NO. The view will still be there, and you can still move it around programmatically, but users can't land their touches on it. It will not become the hit-test view.
You can also set the view to hidden, which keeps it in the view hierarchy. That might actually be an interesting option if, say, that pinch gesture recognizer should scale even invisible views: you make the view hidden, you can still iterate over it and change its transform, and when you unhide it again, it comes back already scaled. All of that still works; the user just won't see it, and it won't become the hit-test view. Or you can set the opaque property to NO and set the alpha to zero.
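Putting those options together, any one of the following lines (on a hypothetical shapeView) keeps that view from becoming the hit-test view:

```objc
// Four ways to keep a view out of hit testing -- any one is enough:
[shapeView removeFromSuperview];         // 1. take it out of the hierarchy
shapeView.userInteractionEnabled = NO;   // 2. leave it in place, ignore touches
shapeView.hidden = YES;                  // 3. invisible, but still animatable
shapeView.alpha = 0.0;                   // 4. fully transparent
```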
Okay, so if any one of those things is true, a view will not become the hit-test view, and events won't flow to it. They will flow someplace else, which may be exactly what you want. Well, what about turning off touches for your entire app? You can do that too. There's API on UIApplication: you get a pointer to the shared application and call beginIgnoringInteractionEvents. You do this, you go and run some code, and nothing will happen in response to touches. Well, why would you want to do that?
I think there are some good situations for it. Let's say you have a game, with a startup screen, and the user presses a begin-the-game button. You don't really want anything else to happen; you're in control of the whole process of transitioning from the startup screen into the game. Perhaps it's even a multiplayer game, and you want to synchronize the beginning of the game, and you don't want anything to get in the way during that two- or three-second period. You really want to be in control; you don't want any other event handlers firing. In situations like that, this makes sense. But when you're done with that process, you have to call endIgnoringInteractionEvents, and it's really important to balance out these calls. I've had many bugs where suddenly I couldn't touch anything in the app, because the code that begins ignoring is separated from the code that ends it, and it can get a little complicated to make sure you get it right.
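One way to keep the calls balanced, assuming the quiet period is driven by an animation, is to end in the animation's completion block, so every begin is matched by exactly one end:

```objc
// Ignore all touches for the duration of a transition animation.
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];

[UIView animateWithDuration:2.0
                 animations:^{
                     startupView.alpha = 0.0;   // assumed transition
                 }
                 completion:^(BOOL finished) {
                     // Balanced here, no matter how the animation ends.
                     [[UIApplication sharedApplication] endIgnoringInteractionEvents];
                 }];
```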
But again, that's a way to turn off events for your entire application. Okay, now: touches during animations. If a view is animating, can you touch it? This is an interesting issue, and there's not enough time to go into all the details; the behavior has actually changed across different versions of iOS. I will tell you that in iOS 5, animating views will still technically become the hit-test view, but they won't get touches delivered to them; the touches get eaten. It's a subtle point, and even doing hit testing can be interesting, depending on what kind of animation you're doing. I'll also say that hitting a moving target as it goes across the screen isn't really something you should usually expect users to do, so this is a bit of a special case. If this is something you're really interested in, come to the lab and find me or a UIKit engineer, and we can talk over the finer points.
Okay, so now what if you want to direct event delivery to a specific subview? There are two interesting UIView API calls: hitTest:withEvent: and pointInside:withEvent:. Now, hitTest:withEvent: is what gets called from the very top of your view hierarchy, drilling down through all of your views, trying to find that hit-test view. UIApplication and then UIWindow call hitTest:withEvent: on your views.
So you can override this if, let's say, you've got a more complicated version of my demo program, where you have to select a view, touch it to get grab handles on it, before you can drag it around. You want a more complicated set of tests to be done before you let a view become the hit-test view. Now, think back a couple of slides to the ways you can change whether a view becomes the hit-test view: removing it from its superview, checking hidden, checking its alpha, checking whether user interaction is enabled. That is what hitTest:withEvent: does by default. This is why a view with any of those properties set won't become the hit-test view: the default implementation tests those very things.
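A hedged sketch of that kind of override, where isSelected is an assumed property on this hypothetical shape view, not something UIKit provides:

```objc
// Only let this shape become the hit-test view once it's been selected.
// (isSelected is an assumed property of this hypothetical view class.)
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    UIView *hit = [super hitTest:point withEvent:event];
    if (hit == self && !self.isSelected) {
        return nil;   // act like we're not here; touches fall through
    }
    return hit;
}
```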
Okay? So you can write a custom version that adds a little extra algorithmic smarts to hit testing, if that seems appropriate. Now the second one, pointInside:withEvent:. A really good example of why you'd want to use it is a very simple geometry test. It's pretty common in iOS apps to have a small circular button with a little italic i in it, an info button. It's really small, and it can be really hard to actually land a touch in it, even when you intend to.
So if you want to make that button small and unobtrusive, but still easy for the user to interact with, you can implement a custom view, implement drawRect: to draw that little circle, and override pointInside:withEvent: to change the geometry, to make the button geometrically bigger. Not visually, just with respect to hit testing. A very simple little bit of code like that can make small views really easy to interact with.
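For example, a custom info-button view might grow its tappable area by ten points on each side (an arbitrary amount chosen for this sketch), without changing how it draws at all:

```objc
// Accept touches in a rectangle 10 points larger than the visible
// bounds on every side; drawing is unaffected.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    CGRect touchRect = CGRectInset(self.bounds, -10.0, -10.0);
    return CGRectContainsPoint(touchRect, point);
}
```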
Okay, so that's changing event flow. Now, a few notes on getting subclassing right. If you subclass a UIKit responder class, typically a UIView, and you're going to implement one touch handler, say touchesBegan, you really should go through and implement all the rest: touchesMoved, touchesEnded, and touchesCancelled.
If you implement one, implement them all. There are some finer points here, and you can wind up with some pretty difficult-to-diagnose bugs if you don't; if you do, you're safe. Also, don't draw in touch handlers. We're trying to keep 60 frames per second when responding to events, so don't do expensive things like drawing in your touch handlers. You might be tempted to, especially if you're coming from the Mac and still thinking in terms of drawRect to update your screen representation. Come to the labs and we'll talk through why that's not such a good idea, and the cheaper ways to get done what you want to get done.
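A skeleton of that advice, implementing all four handlers even when some of them only reset state; the tracking property here is an assumption of the sketch, bookkeeping you'd declare on your own view class:

```objc
// Implement all four handlers, even if some just reset state.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.tracking = YES;      // assumed bookkeeping property
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Update your model here; no drawing, nothing expensive.
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.tracking = NO;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.tracking = NO;       // never leave the view half-tracked
}
```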
Also, don't forward events yourself. If you're interested in using the responder chain to propagate touch events, say you've got a touchesBegan implementation that doesn't handle the event and you want to pass it up to a higher-level object, do not call nextResponder yourself. Instead, just call super, and it will do the right thing: the event will propagate up the responder chain, just like we saw earlier. Now, what if you're interested in subclassing UIView or UIControl? The question is, you want to implement a widget that behaves like a control.
It has that control concept attached to it, if you will: it's manipulating something else, manipulating a value, changing a number; maybe it's a knob in an application that changes a value. So which do you subclass, UIView or UIControl? I have to say it's really a personal preference; I've done both quite a bit. I will say that UIControl, if you choose it, gives you some common extras that are really useful.
Things like target-action are just built into UIControl, and you get that for free. You can just set up a target and action and it all works; we've done the work for you. I think this is really advisable if you've got something you're going to reuse a lot in different places in your application. Maybe you'll put two knobs next to each other, or a whole bank of knobs, and each one is probably going to be hooked up to different code that responds when the user interacts with it.
UIControl makes that easy. And for anything that acts like a button, you get that touch-up-inside behavior for free, where if you touch down in the control, drag outside, and then lift up, the control won't fire; it won't call its action method. You just sign up for it using control events. So those are a few reasons why you might choose UIControl instead of UIView.
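A sketch of that kind of knob as a UIControl subclass (the class name, the value property, and the drag-to-value mapping are all assumptions of this sketch): target-action dispatch comes for free; the control only has to announce that its value changed.

```objc
#import <UIKit/UIKit.h>

// Hypothetical knob control; UIControl supplies target-action for us.
@interface KnobControl : UIControl
@property (nonatomic) float value;
@end

@implementation KnobControl

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint p    = [touch locationInView:self];
    CGPoint prev = [touch previousLocationInView:self];

    // Assumed mapping: vertical drags adjust the value.
    self.value += (prev.y - p.y) / self.bounds.size.height;

    // Everyone registered via addTarget:action:forControlEvents:
    // gets called; UIControl does the dispatch.
    [self sendActionsForControlEvents:UIControlEventValueChanged];
}

@end
```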
Well, what about subclassing an existing UIKit control? Generally, this is not recommended, though I'd make an exception for UIButton, because there are custom buttons, and implementing one implies you'll have to implement the drawRect method yourself. Otherwise, really check out the delegates and notifications on the existing UIKit controls, and make sure the custom behavior you want isn't already available through a delegate or a notification.
Now, subclassing UIGestureRecognizer, making your own. First, again, make sure you look at the UIKit-provided classes. There are a number of properties on these classes, for number of fingers, tap counts on the tap recognizer, and some other interesting things. Make sure you can't get what you want just by using an existing class and setting a property. If you decide you do want to subclass, there's a header that will help you: UIGestureRecognizerSubclass.h. Check it out; there are some interesting methods in there for you to override.
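A skeleton of such a subclass, hedged: the recognizer name is hypothetical, and the actual gesture-matching logic is the part you'd fill in. The key mechanics are importing the subclass header (which makes the state property writable) and moving through the state machine in the touch handlers:

```objc
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>  // makes self.state writable

// Hypothetical discrete recognizer: one finger, recognized on lift.
@interface OneFingerGestureRecognizer : UIGestureRecognizer
@end

@implementation OneFingerGestureRecognizer

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([touches count] != 1) {
        self.state = UIGestureRecognizerStateFailed;   // wrong finger count
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Track the touch here, and fail as soon as the motion can't
    // possibly match your gesture. (That logic is the assumption.)
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.state = UIGestureRecognizerStateRecognized;   // discrete gesture fires
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.state = UIGestureRecognizerStateCancelled;
}

@end
```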
And the last bit of advice is to keep gestures simple. I don't think you want some five-finger gesture where the user needs to move up and then to the side, but more up than to the side. The more complicated a gesture is, the more difficulty users have performing it. And as we saw in the example, touch handlers are still firing while the gesture is being evaluated, so users might wind up frustrated, doing something they didn't intend while trying to trigger the higher-level gesture. So keep gestures simple and easy to do, if at all possible.
And finally, interacting with the rest of iOS. I think one of the best things about iOS devices is that when the user is running your app, the device becomes your app. If you've got a musical instrument app, the iPad becomes a musical instrument. It's really great; the whole rest of the system melts away while your app is running. And of course, you want to deliver that great experience to your users; you want users to enjoy your apps and to love them. But even while you're thinking about providing that experience, having the device become your app, you still have to work and play well with others.
Other code, other facilities are running on the system. Like what? Well, on an iPhone, it's a phone: you might get a phone call at any time. Or an alert might fire, maybe from a push notification you've set up. The user can press the lock button at any time. Or while the multitasking switcher bar is up, your app is still drawing, but events are getting routed elsewhere, to the switcher. And there are multitasking gestures, swiping side to side to change between apps. So there's a bunch of other things going on that the user might do, and they may lead the system to cancel your touches. What do you need to do to be a good citizen here? At least in terms of this talk: implement touchesCancelled, and handle the cancelled state in your gesture recognizer callbacks. You should really do this. Don't neglect to think it through. What happens if you've got a game on the phone, the user is touching the screen, and a phone call comes in? What happens?
All right, so I think there are three general strategies for handling cancelled touches. First, cancelling is just like ending. That's really the simplest: whatever you do in touchesEnded, factor it out into a shared method and have both touchesEnded and touchesCancelled call it. I think that's perfectly reasonable. Second, you can leave your application in a kind of provisional state, where you try to make it so the user can pick up right where they left off. In that more sophisticated version of my app, if a shape was selected, you might want to leave it selected so that when the user comes back, they can continue from there. Third, another interesting idea: you can implement undo. Think of touchesCancelled as "forget that touch ever happened" and put the application back the way it was. I've got a simple example of that. So I'm back in the shape class, and where before all I was doing was putting up the touches-ended label, now I've gone into touchesCancelled, and all I've added is a little animation. The reference point I set the shape back to was saved in touchesBegan.
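The undo strategy from the demo might be sketched like this, where referencePoint is an assumed CGPoint property on the shape view and the animation duration is arbitrary:

```objc
// Remember where the shape started...
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.referencePoint = self.center;   // assumed property on this view
}

// ...and if the system cancels the touch (phone call, alert, lock),
// animate back as if the touch never happened.
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [UIView animateWithDuration:0.25 animations:^{
        self.center = self.referencePoint;
    }];
}
```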
A really simple example, almost the simplest possible undo. OK, so now I start interacting with my shape, dragging it up there. Now I press a button to schedule an alert to run. So I'm moving the shape, and the alert fires: I get touchesCancelled, and I move the shape back to where it began. The simplest little way of implementing undo. And I think to the user, if that happened, it would seem natural enough. Now, I also talked to some of the UIKit engineers in preparation for this talk, and we don't see any reason why you couldn't even use an NSUndoManager in a more sophisticated application, and use that undo manager to undo right here. So you can take this quite a few steps further and implement real undo in touchesCancelled. If you really want to do that, come to the labs; I'd be interested to talk it over with you. Okay, so the strategies for cancelled touches: it's like ending, or a provisional state, or undo. And that's interacting with the rest of the OS. Those are the four big ideas I have for you today, and I hope they'll help you make the most of your effort. Thank you very much.