App Frameworks • iOS • 55:22
This session will help you take advantage of the Multi-Touch features available in iOS. You will learn practical information about Multi-Touch APIs, touch routing, gesture recognizers, as well as guidelines for interoperating with the system and other apps, and pointers for how to create complex real-world user interfaces which make great use of Multi-Touch.
Speaker: Ken Kocienda
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Welcome. This is Making the Most of Multi-Touch on iOS, Session 118, and I'm Ken Kocienda. And again, welcome. Thanks for coming. So this talk is, as the title suggests, about iOS Multi-Touch, the wonderful direct manipulation system we have on iOS devices. It's a natural and fun interaction model.
You get wonderful visual effects like this. Users really feel good about interacting with apps when they behave like this, like things they're familiar with from the real world. Keyboards, of course, are also familiar from the real world, and there are great apps like GarageBand. And then you can have fun with apps like Photo Booth, right? So those are great examples of using Multi-Touch in Apple's apps. But of course, why you're here today, and why we're having this conference, is that we're all interested in your apps.
So the talk is about how to make the best use of Multi-Touch in your apps. And hopefully, I'll give you some ideas to help you direct your effort, get the most out of your effort, the most out of your development time. And to do that, I've got four big ideas that I'd like to go over during the session.
First, just a brief introduction, kind of an overview of Multi-Touch strategies from a high level. And then we'll get into some touch system concepts, so that you understand what happens when touches come down on the screen, and the whole lifetime of a touch while the user is contacting the screen.
And then thirdly, some touch system tasks: how you can get in there and customize that whole process to do what you want in your app. And then finally, how to interact with the rest of iOS and other apps running on the system, of course with special attention to touch and touch APIs and frameworks.
So now, a quick word: we've got all of our sessions from last year's WWDC on iTunes U. If you're particularly interested in this material, and I guess you are at least to some degree since you're here, there are two really, really interesting sessions that you can go back and look at from last year if you didn't see them.
Those are the gesture recognition and advanced gesture recognition sessions. Now, this is not a repeat of those sessions; it's new material, but there's quite a bit of overlap. And so, if you like this talk, you'll love those. How about that? So go back and take a look; there's a lot of good information in there. But today, we've got those four big ideas. So let's get started.
Multi-Touch strategies. I like to think that there are four general approaches to handling Multi-Touch on iOS. The first is that you can ignore it altogether. Simple apps, particularly on the phone, but also on the iPad: if you've got an information app like Stocks, you're just ignoring multiple touches. It's really just a single-touch interaction. You still get some of the advantages, and there's plenty of material in this talk that you can take advantage of even if you've got just a single-touch application. But again, the concept is that you can completely ignore multiple touches throughout your app.
Another approach is that you can handle multiple touches on the screen independently. Here's an app that I love, Bloom HD, a wonderful music-making app. The idea is that you touch down on the screen, and each of those touches is independent; it's not like a gesture or anything like that. Of course, the touches do interact with each other to make some wonderful music, but in terms of handling them, the touches are independent from one another.
Then there's a third strategy: having multiple touches interact with each other. Here we've got one finger on the keyboard and another going in and moving that pitch control. At some level in your program, those touches should know about each other.
[Transcript missing]
And so those are the four basic strategies. And I think what's pretty common, too, is that you wind up with a mix and match. Even if you have, say, a Multi-Touch game, entering into the game isn't Multi-Touch, right? You're just pressing a button; it's single touch.
So this kind of mix-and-match strategy will happen throughout your app. Now, when I'm designing an app, I like to know, at least at some level, maybe even just subconsciously, which of these strategies is being employed at a particular time, and when I might be transitioning to something else.
So, you know, even maybe if you're prototyping a new app, you might even want to think about these things at a high level. Are we going to be doing multiple touch? How might Multi-Touch help you at a particular spot in the app or not? You might ignore multiple touches, because that just might be better. So again, kind of thinking about it at a high level, I think it's a good idea to know which one of these strategies you're using at a particular time.
So again, that's pretty high-level introduction about Multi-Touch. So now to kind of get into, well, how do you make that work? Now you've decided on one of those strategies, how do you actually go ahead and implement? And so that's what this section begins to talk about, touch system concepts.
And if you're new to iOS development, this is the cast of characters. These are the classes that you need to know about in order to make the best use of Multi-Touch, including a couple you might not think about, like UIApplication, UIWindow, UIViewController, and so forth.
So we've got really wonderful documentation, really great sample code. Again, if you're new to iOS development, these are the classes that you should know about. You should be able to describe, maybe even just to yourself, how these classes contribute to the whole Multi-Touch system in order to make the best use of it.
And of course I'll be talking a lot more about these classes as the session goes on. So let's begin doing that and talk about touch processing, with a simple example. I wrote a demo app which I'll be using a few times during the session, a simple app which puts some shapes up on the screen; you can just drag them around, direct manipulation, and manipulate them with some gestures as well.
And the first thing that I'd like to do with this sample app is to look at some very basic single-touch processing using these four touch handlers. So let's do a demo. Here I am in Xcode, and those shapes that you saw on the screen are implemented with this shape class. It's a very simple class, just a UIView subclass that does some pretty simple drawing to implement the different shapes.
[Transcript missing]
And you'll see that in touchesMoved, which is where the action happens, where the direct manipulation happens, I'm just getting the point that corresponds to a touch and using it to move the shape around, changing the center point of the shape with the data coming through from the touch.
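In case it's useful, here's a minimal sketch of what a shape class like that might look like; the demo's actual source isn't in the transcript, so the class name and the comments are hypothetical reconstructions:

```objc
#import <UIKit/UIKit.h>

// A draggable shape: a plain UIView subclass implementing all four
// touch handlers (implement one, implement them all).
@interface Shape : UIView
@end

@implementation Shape

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // e.g. update a status label: the touch began.
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Direct manipulation: recenter the shape under the finger.
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // e.g. update the status label: the touch ended.
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Always implement this one too; more on why later in the session.
}

@end
```

So let's build this and have a look on the iPad. Okay, so I've just got a simple shape on the screen, and if I go and touch it, you'll see that I'm tracking my touch.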
So if I touch with lots of fingers, you'll see where it is that I'm touching. But now you see I'm just going with one touch, and I'm moving around, and the shape is giving little updates on which of those handlers is getting called at a particular time.
You'll see that the touch has begun. If I hold my finger steady... and if I move it at all, it changes: the touch has moved. It began, moved, and ended. And of course, if I do another one, that yellow one doesn't change; I'm just getting the touches coming through. Okay.
And I see the shape that I'm touching. Of course, this is very, very simple, but it is the first example. So let's go back and talk a little bit about what's happening to have those handlers be called. What's the life cycle of those touches, such that it actually winds up resulting in a callback in your code? Well, there's a timeline for a touch.
When a touch comes down on the screen, we have this cycle; you can think about it as a series of steps. We already saw a series of steps, but let's go in and talk about one important step which happens before you get your touch callbacks, and that's finding the hit test view.
So, this happens at the very moment that a touch comes down on the display. The first thing that happens is the whole OS goes and finds which view is under your finger. All right, this is special work for UIApplication. It's not something you need to worry about; it's something we need to worry about as the OS provider, as the framework provider. So what UIApplication does is go and find the deepest view in the entire view hierarchy in the application that's under your touch. The very, very deepest view.
Okay, it finds that view. And it's all about view containment, right? It has nothing to do with first responder. If you're coming from Mac development, you might think that event handling is tied up with the responder chain and the first responder. That's not what happens with touches. It's all about view containment: taking that touch and finding out which view is underneath your finger.
Okay, and once this happens, once this determination is made, that touch and that view are linked for the remainder of the lifetime of that touch on the screen. If you move that touch around, the same hit test view and that touch remain linked together.
Now why is that important? Well, because now we start delivering events, and where do the events go? The events go to that hit test view. Okay, so once that touch comes down, the hit test view is determined, and now we start delivering events. Event delivery is all about UIApplication and UIWindow sendEvent:.
And the hit test view winds up receiving the event as part of this built-in machinery that, again, is part of the framework, part of what we provide to you. And of course, as we saw in the code, the label in the shape updated to touchesBegan, because that's the stage that we're at: the touch just came down.
Okay, so now subsequently the touch moves. Well, guess what? We don't have to go through that first step again. There is no re-running of that hit test view determination, because the touch and the view are linked, again, for the whole life of the touch. So this process just runs again; only a different method gets called. touchesMoved gets called instead.
And then afterwards, when your touch finally lifts, again, it's simple. The events just continue to get delivered to the same place they have been all along: touchesEnded. Now, back up a step: if something else happens, there's touchesCancelled. If you're on, say, an iPhone, or an iPad, or an iPod touch, any iOS device, and you've got a touch down on the screen and, say, you get an alert, well, your application is going to get touchesCancelled. And you should really have an implementation of touchesCancelled. I'll talk more about that later, but you really, really should think through what it means when the rest of the system interrupts your touches.
Okay, so that's it. It really is pretty simple once you get these two concepts; it's a two-step process: finding the hit test view right when the touch starts, and then event delivery happening from there. So now, what about processing a gesture? A pinch gesture, a zoom gesture, a long-press gesture, any of the gestures that are built into UIKit, or any that you've implemented yourself. You might have a question: "Touch handlers or gestures, which one are you supposed to use?" I just did this direct manipulation example, but there's also a pan gesture recognizer. So which one should you use? There are certain situations where gesture recognizers, I think, are clearly better.
And this is an example: when you're trying to deliver a standard gesture that users have come to expect from iOS apps, like pinch to zoom, you should definitely use the pinch gesture recognizer to implement it, or just take advantage of it in UIScrollView, if you're using that.
A custom gesture may also come along: if you've got maybe a little text editing app, and you want to implement a custom gesture, a little rub-out gesture to delete some text, that might also be a really good idea to implement as a gesture recognizer rather than just touch handlers.
Because, again, gesture recognizers let you encapsulate the behavior of that movement in such a way that you deal with it all in one place. You don't have to sprinkle that code around in touch handlers in all of the views which may want to implement that gesture.
So those are two reasons why gesture recognizers might be better. A reason why touch handlers really might be better, even in sophisticated cases, is if you're porting software from another platform. Let's say you've got a drawing app that you're bringing over to iOS from someplace else; you probably don't have gesture recognizers or anything like them in that existing code.
And so maybe the quickest and easiest way to get your code up and running would be to just hook it up to those touch handlers like I just showed in the demo. It's just the quickest and easiest way. You might do something different for 2.0, do a rethink, but for getting up and running, touch handlers might really be the simplest and easiest way for you to go.
So is it really six of one, half a dozen of the other with touch handlers and gesture recognizers? Well, in some respects it really does come down to a matter of personal preference, which style you like better. Maybe you've already got existing code, again, like that porting example.
But I hope, in some of the examples I've got coming up, I'll show you that there really are some subtle points which will help you decide which one might be better, depending on the details of the behavior that you want to deliver. So, what about processing a gesture, stepping through it in the same way as processing a touch? I've got a demo for that. So, the gesture processing demo; let's see.
[Transcript missing]
So I just go and allocate and initialize a pinch gesture recognizer and add it to self. And then I set a target, so I've got a pinch handler, and in many ways a pinch handler is very similar to the touch handlers.
I've got these state callbacks coming in, and you'll see, again just like in the touchesMoved example from before, that the real action is going on in UIGestureRecognizerStateChanged, where I take the state of the pinch and just set a transform on the view.
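Here's a rough sketch of what that setup and callback might look like; the demo's source isn't in the transcript, so the handler name is a hypothetical stand-in:

```objc
// In the shape view's setup: create the recognizer, point its
// target-action at self, and attach it to the view.
UIPinchGestureRecognizer *pinch =
    [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                              action:@selector(handlePinch:)];
[self addGestureRecognizer:pinch];

// The action method, called as the recognizer's state machine advances.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    if (pinch.state == UIGestureRecognizerStateChanged) {
        // Set a transform on the view from the pinch scale.
        self.transform = CGAffineTransformMakeScale(pinch.scale, pinch.scale);
    }
}
```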
Okay, so now, just like before, this is really unchanged. I can move this around, but now I can also touch and pinch the view to zoom it, and of course I can still move it afterwards. Okay? So that's really pretty simple. And again, if I have a second one, the first one is unaffected; it just doesn't get any callbacks or anything like that.
Okay, so now if I go back to the code, you'll see that what I did, of course, was add the pinch to the shape itself. But wouldn't it be kind of cool if I could go out into the empty area where there isn't a shape and pinch to zoom all of the shapes at the same time? So let's take a look at that.
So instead of putting the pinch gesture recognizer on the shape, I'm going to put the gesture recognizer in a view controller class. There's a view which contains those couple of shapes I've had, and there's a view controller associated with that view. So I can put the gesture recognizer on that view controller, assign it to the view controller's view, and get this higher-level effect.
Right, so you'll see that I'm creating a pinch gesture recognizer in the view controller's viewDidLoad. I've got that content view associated with the view controller, which again is what contains all of those shapes, and that's where I add the pinch gesture recognizer. And you'll see it winds up being very similar in UIGestureRecognizerStateChanged; the only difference is that now I'm stepping through all of the shapes that are there and transforming each one in turn.
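A sketch of that view controller version, again with hypothetical names (contentView standing in for the view that contains the shapes):

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinch =
        [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handlePinch:)];
    // Attach the recognizer to the containing view, not to any one shape.
    [self.contentView addGestureRecognizer:pinch];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    if (pinch.state == UIGestureRecognizerStateChanged) {
        // Step through all of the shapes and transform each one in turn.
        for (UIView *shape in self.contentView.subviews) {
            shape.transform = CGAffineTransformMakeScale(pinch.scale,
                                                         pinch.scale);
        }
    }
}
```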
So let's take a look at that. So now you'll see if I go and just pinch out an empty area, I've got that sort of status update happening in the top of the view. And if I add a couple more shapes, you'll see that I can also do this.
Now, there's a really interesting interaction which I'd like to show you: I can drag this view around and then land a finger afterwards, long after I've started moving the view, and the pinch gesture recognizer for the view controller will kick in. So let's look at that again. I can move this around; I've just got one touch on the screen. I can land a second touch later and start pinching.
It's pretty interesting. And what that means, of course, is that those touch handlers in the shape (touchesBegan, moved, ended) and the pinch gesture recognizer callbacks are somehow cooperating with each other. One of them starts, but then the other one can take over. So that's an interesting interaction, and I think it's worth going over. So let's talk about that.
Okay, so now let's really focus in on that last case of processing the gesture. When touches come down and you've got gesture recognizers in the mix (you've created some and you've added some), then when that second touch came down, not only did I find the hit test view for that second touch, but I also gathered up all the gesture recognizers which are associated with the touches. And how does that happen? Well, again, this is special work for UIApplication. This gesture-gathering task starts with the hit test view for each touch.
And starting from that hit test view (so we drilled down to the deepest view in the hierarchy to find the hit test view), this gesture gathering is now a bubbling back up. Starting from that view, it checks: have any gesture recognizers been added to the hit test view? What about the hit test view's superview? And that view's superview? All the way up.
And they're added in order, so that the deepest ones get added to the list first, and the ones attached to views higher in the view hierarchy get added later. So again, this gathering, this initial determination step, goes on: finding the hit test view, gathering up all the gesture recognizers.
[Transcript missing]
Just like before, event delivery happens. But the difference is that event delivery is two-tracked. We saw that: touches and gestures are both in the mix. Well, how does that happen? Gestures get tested for recognition, and the hit test view receives events.
Views get touchesBegan if no gesture has been recognized. In the case of a pinch, you can't pinch until your fingers start moving together. So that's really what happens here: I had that one touch down, and so the view got touchesBegan and started moving around.
But now the touches move, and event delivery happens again. Of course, we don't do that first step; just like before, we've already gathered the gesture recognizers and done the hit test view determination. Event delivery is still two-tracked. But now, let's say you did begin to move your fingers together, and the gesture recognizes.
Then, as we saw, the view gets touchesCancelled. So this just continues to happen: as you're moving around, all those gestures keep testing until they recognize or fail to recognize. That pinch gesture requires two fingers; it just keeps going until such time as it recognizes, or of course the touches lift. But in this case, those two fingers did cause the gesture to be recognized, and touchesCancelled got sent. Now the gesture runs its handler, runs that pinch handler, and of course the scaling takes place just as we saw.
Okay, so now the touches move again, after this initial gesture recognition. Now event delivery is just single-tracked: touches are no longer in the mix. The hit test view will no longer receive any events; only the gesture will. The gesture runs its handler, and as we saw, you can scale all the views. And then the touches lift.
It's the same from that point. So we start out with this two-track event delivery process, and if a gesture recognizes, we go over to a single track. And at the point where that changeover happens from two-tracked to one-tracked, your views get touchesCancelled. The hit test view gets touchesCancelled.
So, going back to that idea of whether it really matters if you use gestures or touches: it really does matter, depending on the kind of behavior that you want your application to exhibit. Touch handlers and gestures can really work well together if you understand how event delivery happens and how they interact with each other.
So that's touch system concepts. Next, on to touch system tasks: how do we get in there and change some of these processes and procedures to customize them a little bit? There are four topics that I'd like to look at: implementing direct manipulation, a little bit more about that; picking an event handler, using the responder chain and some details of gesture recognizers; changing the event flow in interesting ways; and finally, some notes about subclassing. So, implementing direct manipulation first. Of course, we've seen this: you implement these touch handlers in your view, and you use those to respond to touches appropriately.
Gesture recognizers don't use these. They don't use them. If you've got a pan gesture recognizer, or as we saw in the example, a pinch gesture recognizer, you implement code that looks like this: a gesture recognizer callback, which you set up using target-action. When you allocate and initialize your gesture recognizer, you set the target and you set an action method. You'll get callbacks driven by the gesture recognizer state machine, state by state, doing things as appropriate.
Okay? We'll take a look at another example of that in a second, but first I'd like to talk about picking an event handler. Now, touches do not go to the first responder, as we talked about before. This is a bit of a change if you have experience developing on the Mac. But you can still use the responder chain if you wish, and have a higher-level object than the view that got touched handle the event.
[Transcript missing]
But you can take advantage, again, of the whole responder chain: you can use a superview, you can use the window, you can use the application. Now, as a general best-practices sort of thing, keeping your touch handling down as close to the hit test view as possible is probably what you want most of the time.
But of course these options are available to you. And if we take a look at the example we have, we've also got a view controller in the mix. View controllers do participate; they are members of the responder chain, so you can put touch handlers on view controllers.
What about gestures and the responder chain? Well, gestures don't use the responder chain at all. At all. Which gesture recognizers get a chance to recognize is all about view containment and that gesture-gathering process: which hit test view was determined, and then gathering up gesture recognizers from there. So it's all about view containment, not the responder chain.
And of course, gestures are attached to views: in that drill-down-and-bubble-back-up process, the gesture recognizers are attached to individual views along the view hierarchy. And since gestures use target-action, you can specify which handler will respond and which object will implement that handler. So you can implement what I think is a pretty interesting pattern, a kind of gesture controller pattern: you might even have a whole new object whose only job is to be the target of gesture recognizers. This is not something that I have in the demos, but it is something that I've done. We've used the term "interaction assistant" quite a bit for this idea: a class that does nothing but implement a bunch of gesture recognizer handlers, centralizing that gesture-recognition handling in one place. It's an interesting idea; maybe you want to think about it.
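A sketch of that pattern, with entirely hypothetical names, might look like this:

```objc
// An object whose only job is to be the target of gesture recognizers,
// centralizing a screen's gesture handling in one place.
@interface InteractionAssistant : NSObject
- (void)handlePan:(UIPanGestureRecognizer *)pan;
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch;
@end

// Wiring: the recognizers are still attached to views, because view
// containment is what drives gathering, but their target is the assistant.
InteractionAssistant *assistant = [[InteractionAssistant alloc] init];
UIPanGestureRecognizer *pan =
    [[UIPanGestureRecognizer alloc] initWithTarget:assistant
                                            action:@selector(handlePan:)];
[shapeView addGestureRecognizer:pan];
```

So now let's take a look at a demo of implementing direct manipulation a little bit more and picking an event handler.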
So what I've done is, in the view controller, you'll see that I still have the pinch gesture recognizer handler up in the view controller, but now I've taken the touch handling and moved it up to the view controller as well. And if we go and take a look at the shape, you'll see that there is no touch handling there at all. Here's where it was before; the touch handling is gone from the shape completely. And this is kind of interesting, because now you can actually implement a view which doesn't have even a little controller-like behavior. It really is just a visual representation.
You're passing the behavior along to a higher-level object. That, I think, is a really good reason for doing this: the view is just about visual representation. So let's take a look at this. You'll see that it winds up behaving just like before, only I changed the little status message there with the little "VC."
You can see that the view controller is the one responding, but from the user's perspective, it behaves just like it did before. It's just a different organization for you; it might make more sense for your program to implement touch handling in a higher-level object.
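A sketch of that reorganization, with hypothetical names since the transcript doesn't include the code: the view controller, as a member of the responder chain, implements the touch handlers, and the shape view implements none, so the touches bubble up to it.

```objc
// In the view controller. The hit test is unchanged: touch.view is
// still the shape that was touched; only the handling has moved up.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    UIView *shape = touch.view;
    shape.center = [touch locationInView:self.view];
}
// touchesBegan, touchesEnded, and touchesCancelled implemented likewise.
```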
So now back to the code. What I'd like to do now is restore that behavior. Even though I pitched you wonderfully on removing event handling from your low-level, leaf-level view objects, I'd like to go back and show you that you can do the same thing as you do with touch handlers, but instead use a pan gesture recognizer.
[Transcript missing]
So that pinch gesture recognizer, if you remember, was on the view controller's view; it's on a higher-level view. The pan gesture recognizer is on that deeper view. So it wins, and landing that second finger later won't cause the pinch gesture recognizer to recognize. Whereas it did before, because earlier all we were using in the shapes were touch handlers.
So again, that's a subtle point, but it might help you to get the behavior that you want, one way or the other. You might want it either way; the point is to figure out how touches and gesture recognizers relate, and then how multiple gesture recognizers relate to each other. Okay? I've got one more example.
[Transcript missing]
So if I add a shape: I've still got the pinch gesture recognizer up on the view controller, to do that pinch to scale all of the shapes. But now, for each individual shape, I can add a pan gesture recognizer to the shape, so the shape view is still getting the gesture recognizer attached to it.
But what is this illustrating? Again, it's illustrating this notion that I'm bringing the event handling up to a higher-level object, because even though I'm adding the gesture recognizer to the shape object, to that leaf-level view object, the handler for the gesture is going to be up in the view controller.
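Roughly, and with hypothetical names since the demo code isn't in the transcript, the idea is:

```objc
// When creating each shape: the recognizer is attached to the shape
// view, but its target is the view controller.
UIPanGestureRecognizer *pan =
    [[UIPanGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handlePan:)];
[shape addGestureRecognizer:pan];

// The handler, up in the view controller. pan.view is the shape.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    if (pan.state == UIGestureRecognizerStateChanged) {
        CGPoint t = [pan translationInView:self.view];
        pan.view.center = CGPointMake(pan.view.center.x + t.x,
                                      pan.view.center.y + t.y);
        // Reset so each callback reports an incremental translation.
        [pan setTranslation:CGPointZero inView:self.view];
    }
}
```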
So again, if I go over to the shape, there's really nothing interesting in the shape at all. If you just looked at this shape code, you wouldn't see anything that would lead you to believe that you can actually act on the shape directly. But of course you can.
So now I've got a shape, and the pan gesture is being handled up in the view controller. Again, I can do the pinch to scale the shapes. And it's that same behavior as before: one of them wins, and it prevents the other one from recognizing later.
So, kind of some interesting details, some interesting options about direct manipulation. I mean, it kind of seems like a simple idea, but it does turn out that there are quite a number of options for implementing it. Kind of getting very, very similar behavior with some subtle differences, again, depending on what you want and how you set it up and pick an event handler.
So now, moving on to the next topic: changing event flow. The high-level idea is that changing event flow is about changing which view becomes the hit test view. That matters, of course, because that's the view whose touch handlers get called, and it's also the starting point for the gesture-gathering process, which matters quite a bit for which gestures will get recognized. So what can we do here? Well, this is the point that I'm talking about: that initial touch down, as we saw in the timeline, finding that hit test view before any events get delivered.
So, a quick side note on changing event flow. There is public API, which you have access to, on UIApplication and UIWindow: sendEvent:. You can override this to get a look at every single event which gets delivered to your application and then, in turn, gets delivered to every window.
Now, in older versions of iOS, before OS 3.2, before we had gesture recognizers, this was really the only way to get that behavior of landing a first touch, landing a subsequent touch, and having a gesture take over. You had to take this very, very high-level view of how events were flowing into your application. But you don't need to do that anymore.
Really, overriding sendEvent: is not recommended. You should instead understand how the examples I'm showing here work, and if possible use those mechanisms, the way that touch handlers and gesture recognizers interact, to get the behavior that you want, rather than drinking from the fire hose and trying to get everything right with sendEvent:. So really, really think again if you think this is a good idea for you.
Once you're using the standard sendEvent: which we provide to you in UIApplication and UIWindow, you still have some pretty interesting options. One of them is turning off events for a view: you have a view, and you do not want it to become the hit test view.
What options do you have? The simplest, perhaps, is just to remove it from the view hierarchy. If it is not in the view hierarchy, it will not become the hit test view; it will not be the target of touch events. Alternatively, you can leave the view in the view hierarchy and use other UIView API: set userInteractionEnabled to NO. The view will still be there, and you can still move it around programmatically, but users cannot land their touches on it. It will not become the hit test view.
You can also set the view to hidden, which keeps it in the view hierarchy. That might actually be an interesting option: with that pinch gesture recognizer, maybe you even want to scale invisible views, so that when you make the view unhidden again, it comes back scaled. That might be a good way to do it.
Just make it hidden, and you can still iterate over it and change its transform. That will all still work, but of course the user won't see it, and it won't become the hit test view. Or you can set the opaque property to NO and set the alpha to zero.
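In code, any one of these does it; a quick sketch:

```objc
[view removeFromSuperview];        // not in the hierarchy at all
view.userInteractionEnabled = NO;  // still visible and programmatically
                                   // movable, but never the hit test view
view.hidden = YES;                 // still in the hierarchy; you can still
                                   // iterate over it and set its transform
view.opaque = NO;
view.alpha = 0.0;                  // fully transparent views are skipped
```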
Okay, so if any one of these things is true, a view will not become the hit test view, and events won't flow to it. They will flow someplace else, which may be what you want. Well, what about turning off touches for your entire app? You can do that too. There's API on UIApplication: you get a pointer to the shared application and call beginIgnoringInteractionEvents.
So you do this, and then you run some code, and nothing will happen. Well, why would you want to do this? I think there are some good, interesting situations. Let's say you have a game, and you've got the startup screen for your game, and the user presses a begin-the-game button, and you don't really want anything else to happen.
You're in control of the whole process of transitioning from your startup screen to the game. Perhaps it's even a multiplayer game, and you want to synchronize the beginning of the game. You don't want anything to get in the way during that maybe two or three second period.
You just really, really want to be in control; you don't want any other event handlers firing, anything like that. In situations like that, this makes sense. But when you're done with that process, you have to then call endIgnoringInteractionEvents. And it's really important to balance out these calls. I've had many, many bugs where it was, oh, I can't touch anything in the app now, because sometimes the code that begins is separated from the code that ends, and it can get a little complicated to make sure that you get it right.
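A sketch of the balanced pattern; startGameWithCompletion: is a hypothetical stand-in for your own transition code:

```objc
// Turn off touch delivery app-wide for the duration of the transition...
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];
[self startGameWithCompletion:^{
    // ...and always balance the call when the transition finishes.
    [[UIApplication sharedApplication] endIgnoringInteractionEvents];
}];
```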
But again, this is a way you can turn off events for your entire application. Okay, so now, touches during animations. What if a view is animating? Can you touch it? This is an interesting issue, and there isn't enough time to go into all the details; this has actually changed in different versions of iOS. I will tell you that for iOS 5, animating views will not become the hit test view. Or rather, in a sense they will: they become the hit test view, but they won't get touches delivered to them. The touches get eaten.
It's a subtle point, and even doing the hit testing can be interesting, depending on what kind of animation you're doing. In some ways this isn't something you should usually expect users to do, hitting a moving target as it goes across the screen; maybe it's just a special case where you'd really want this to happen. If you do, and if this is something you're really interested in, come to the lab and find me or find a UIKit engineer, and we can talk over the finer points.
Okay, so now, what if you want to direct event delivery to a specific subview? There are two UIView API calls which are interesting: one is hitTest:withEvent: and one is pointInside:withEvent:. Now, hitTest:withEvent: is what gets called from the start, from the very top of your view hierarchy, drilling down through all of your views, trying to find that hit test view. This is the method which gets called: UIApplication and then UIWindow call hitTest:withEvent: on your views.
So you can override this if, let's say, you've got a more complicated version of my demo program, where you might have to select a view, touch a view to get some grab handles on it, before you can drag it around. You want a different, more complicated set of tests to be done before you let a view become the hit test view.
Now, if you think back a couple of slides to the ways you can change whether a view becomes the hit test view (removing it from the superview, checking hidden, checking its alpha, checking whether user interaction is enabled): this is what hitTest:withEvent: does by default. This is why, if a view has any of those properties set, it won't become the hit test view; the default implementation of hitTest:withEvent: tests those very things.
So you can write a custom version which adds a little extra algorithmic smarts to hitTest:withEvent:, if that seems appropriate to you. Now the second one, pointInside:withEvent:. I think a really good example of why you'd want to use pointInside:withEvent: is a very simple geometry test. In a lot of applications, it's pretty common on iOS to have a pretty small circular button with a little italic "i" in it, a little info button.
And it's really small. It can be really hard to actually land a touch in it, even when you intend to. So if you want to make that button small and unobtrusive, but still make it easy for the user to interact with, you can implement a custom view that draws that little circle, and override pointInside:withEvent: to change the geometry, to make it geometrically bigger. Not visually, but just with respect to hit testing. A very simple little bit of code like that can make small views really easy to interact with.
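For example, here's a sketch of a small info-button view whose tappable area is grown beyond its bounds; the 20-point inset is an arbitrary choice:

```objc
// In the custom button view: expand the hit area, not the drawing.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    CGRect touchableBounds = CGRectInset(self.bounds, -20.0, -20.0);
    return CGRectContainsPoint(touchableBounds, point);
}
```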
Okay, so that's changing event flow. Now, a few notes on getting subclassing right. If you subclass a UIKit responder class, typically a UIView, and you're going to implement one touch handler, say touchesBegan, you really should go through and implement all the rest: touchesMoved, touchesEnded, and touchesCancelled. If you implement one, implement them all. There are some finer points here, and I'll tell you, you can wind up with some pretty difficult-to-diagnose bugs if you don't do that. If you do, you're all safe. You're all good.
Also, don't draw in touch handlers. We're trying to keep 60 frames per second when responding to events, event frames per second, if you will. So don't do expensive things like drawing in touch handlers. You might be tempted to.
Maybe if you're coming from the Mac, you're still thinking in terms of drawRect: to update your screen representation. Come to the labs and we'll talk through why that's not such a good idea; there are cheaper ways to get done what you want to get done.
Also, don't forward events yourself. If you're interested in using the responder chain to propagate touch events, and maybe you've got a touchesBegan implementation that doesn't handle the event and you want to pass it up to a higher-level object, do not message nextResponder directly. Instead, just call super, and it'll do the right thing; it'll propagate up the responder chain just like we saw earlier.
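For instance, a sketch (the handledTouches: test is hypothetical):

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (![self handledTouches:touches]) {
        // Let UIResponder propagate the event up the responder chain.
        [super touchesBegan:touches withEvent:event];
    }
}
```

So now, what if you're interested in subclassing UIView or UIControl? The question is that you want to implement a widget that sort of behaves like a UIControl.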
It has that control concept attached to it, if you will: it's somehow manipulating something else, manipulating a value, changing a number, maybe a knob in an application that changes a value. So which do you do? Do you subclass UIView or UIControl? I have to say it's really a personal preference. I've done both quite a bit.
Now, I will say that UIControl, if you choose it, does give you some common extras which are really useful. Things like target-action are just built into UIControl, and you get that for free. You can just go and set a target and action, and it all just works. We've done the work for you; you don't have to worry about setting that up.
And of course, I think this is really advisable if you've got something that you're going to be reusing a lot in different places in your application. You're going to kind of put two knobs next to each other, maybe a whole bank of knobs next to each other. They're probably going to be hooked up to then different code to actually respond when a user interacts with them.
UIControl makes that easy. Also, for anything that acts like a button, you get that touch-up-inside behavior, where if you touch down in the control and then drag out and then lift up, the control won't fire, won't call its action method. You get that for free with UIControl; you can just sign up for it using control events. That's a few reasons why you might choose UIControl instead of UIView.
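As a sketch of the kind of thing UIControl buys you, here's a hypothetical knob; the tracking math is arbitrary:

```objc
@interface KnobControl : UIControl
@property (nonatomic) CGFloat value;
@end

@implementation KnobControl

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Turn vertical finger movement into a value change.
    CGFloat delta = [touch locationInView:self].y -
                    [touch previousLocationInView:self].y;
    self.value -= delta * 0.01;
    // Built-in target-action does the rest.
    [self sendActionsForControlEvents:UIControlEventValueChanged];
    return YES;
}

@end

// A client hooks it up with target-action, for free:
[knob addTarget:self
         action:@selector(knobChanged:)
forControlEvents:UIControlEventValueChanged];
```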
What about subclassing an existing UIKit control? Generally, this is not recommended. I'll make an exception for UIButton, because there are custom buttons, and that implies you'll have to implement the drawRect: method on a custom button. Otherwise, really check out the delegates and notifications that are on the existing UIKit controls, and make sure that any custom behavior you want isn't already available through a delegate or a notification.
Now, subclassing UIGestureRecognizer, getting into making your own. Well, first, again, make sure that you look at the UIKit-provided classes. There are a number of properties on these classes for things like the number of fingers; the tap recognizer has tap counts, and there are some other interesting properties. Make sure that you can't get what you want by just using an existing class and setting a property. If you decide that you do want to subclass, this header will help you: UIGestureRecognizerSubclass. Check it out; there are some interesting methods there for you to override.
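A sketch of the shell of a custom recognizer, with the actual motion test left as a hypothetical:

```objc
// Subclasses import this header; it exposes the readwrite state
// property and the methods intended for overriding.
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface RubOutGestureRecognizer : UIGestureRecognizer
@end

@implementation RubOutGestureRecognizer

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    BOOL sawBackAndForthMotion = NO; // hypothetical: analyze the touches here
    if (sawBackAndForthMotion) {
        self.state = UIGestureRecognizerStateRecognized;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // The touches lifted without the motion we wanted, so fail.
    self.state = UIGestureRecognizerStateFailed;
}

@end
```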
And the last bit of advice is to really keep gestures simple. I don't think you want a five-finger, move-up-and-then-to-the-side-but-more-up-than-to-the-side gesture. Users often have difficulty with these; the more complicated a gesture is, the more difficult it is to perform.
And as we saw in the example, touch handlers are still firing, so users might wind up getting frustrated by doing something they didn't intend as they were trying to trigger the higher-level gesture. So really try to keep gestures simple if possible, and easy to do.
And finally, interacting with the rest of iOS. I think one of the best things about iOS and iOS devices is that when the user is running your app, it's like the device becomes your app. If you've got a music application, a musical instrument, on an iPad, the iPad becomes a musical instrument. It's really great. It's like the whole rest of the system melts away while your app is running.
And of course, you want to deliver that great experience to your users. You want users to enjoy your apps and to love them. But, you know, even while, you know, you're kind of thinking about providing this great experience, having the device become your app, you do still have to work and play well with others. Other code, other, you know, facilities are running on the system.
So like what? Well, on an iPhone, it's a phone, right? You might get a phone call at any time. Or an alert might fire, maybe from push notifications you've set up. The user can interact with the device and press the lock button at the same time.
Or your app might be running while the multitasking switcher bar is up. You're still drawing, but events are getting routed elsewhere in order to interact with the multitasking switcher. And there are also multitasking gestures, the side-to-side swipes to change between apps. So there's a bunch of other things going on which users might do, which may lead the system to cancel your touches. All right, so what do you need to do to be a good multitasking citizen? At least in terms of this talk: implement touchesCancelled, and also the corresponding gesture recognizer cancelled-state callback. You should really, really do this.
Don't miss it; don't neglect to think it through. What happens if you've got a game on the phone, and the user is touching the screen, and a phone call comes in? What happens? I think there are three general strategies for handling cancelled touches. First, you can treat cancelling just like ending.
That's really the simplest thing to do: whatever you do when touches end, factor it out into a shared function, and have both touchesEnded and touchesCancelled call it. I think that's perfectly reasonable. Second, you can leave your application in a kind of provisional state, where you try to make it so that the user can pick up right where they left off.
Again, in that more sophisticated version of my app, if a shape was selected, you might want to leave it selected, so that when the user comes back, they can pick up right where they left off. Or, another interesting idea: you can implement undo. Think about touchesCancelled as meaning: forget that touch ever happened, and put the application back the way it was. So I've got a simple example of that.
So now, a very simple touchesCancelled example. I'm back in the shape class, and you'll see that before, all I was doing was putting up the touchesEnded label. But now I've gone over to touchesCancelled, and all I've done is add a little animation; the reference point that I set the shape back to was just saved in touchesBegan.
A really pretty simple example, almost the simplest possible undo. Okay, so I start interacting with my shape, and I drag it up there; now I press a button to schedule an alert to run. So now I'm moving the thing, and the alert fires, and all that happens is I get touchesCancelled and I move the shape back to where it began.
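That handler boils down to something like this sketch; the referencePoint ivar and the duration are hypothetical:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Remember where the shape started, in case we need to undo.
    referencePoint = self.center;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // The system interrupted the touch: forget it ever happened and
    // animate the shape back to where it began.
    [UIView animateWithDuration:0.3 animations:^{
        self.center = referencePoint;
    }];
}
```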
It's the simplest little way of implementing undo, but I think to the user, if that happened, it would seem natural enough. Now, I did talk to some of the UIKit engineers in preparation for this talk, and we don't see any reason why you can't even use an NSUndoManager for a more sophisticated application,
and use that undo manager to undo right there. So you can really take this quite a few steps further and implement undo in touchesCancelled. If you really want to do that, come to the labs; I'd be interested to talk it over with you. Okay, so: strategies for cancelling touches. It's like ending, or a provisional state, or undo.
[Transcript missing]