
WWDC17 • Session 219

Modern User Interaction on iOS

App Frameworks • iOS • 36:17

Touch user interactions are fundamental to the user experience on iOS. Learn how to master the power of UIKit's gesture recognizer system in your application. Find out how to integrate with the new Drag and Drop features and the system gestures. Get some great tips for debugging your custom built interactions.

Speakers: Dominik Wagner, Michael Turner, Glen Low

Unlisted on Apple Developer site

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Good afternoon, everyone, and welcome to Modern User Interaction on iOS. My name is Dominik Wagner. I'm an engineer on the UIKit frameworks team, and I am going to talk today about mastering UIKit's gesture recognizer system. I will be joined by my colleagues, Glen and Michael, later on.

So what are we going to talk about? We are going to talk about multi-touch. Really just the touch interface this time. We have other great ways of interacting with the system, but multi-touch is the subject of this talk. I will talk you through the UIGestureRecognizer system in depth, showing you how to interact with it and how to customize it to your liking. Glen will come up and talk to you about the new APIs for system gesture interaction in iOS 11. And finally Mike will come up and tell you how to play nice with the new interactions for drag and drop.

So this talk works best if you already know how touch is handled in UIKit. However, if you're new to the system, let's walk over the two building blocks for this. We have UITouch. A UITouch is a representation of a finger touching the screen. It moves from began, through moved, to cancelled or ended, and represents one interaction with the screen. And on top of that we have UIGestureRecognizer, a more powerful abstraction. A UIGestureRecognizer can be configured with a target and action and put on a UIView.
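
As a rough sketch of that target-and-action pattern, assuming a hypothetical view controller and widget view (the names are made up for illustration, not from the talk):

import UIKit

class WidgetViewController: UIViewController {
    // Stand-in for the AssistiveTouch-style widget used as the example in this talk.
    let widgetView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(widgetView)
        // Configure the recognizer with a target and action and put it on a view.
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        widgetView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        // Only inspect a recognizer's state inside its action method.
        if recognizer.state == .ended {
            print("widget tapped")
        }
    }
}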

And we have gesture recognizers for all sorts of common interactions: tap, pan, and pinch. And we have the gesture recognizer system actually coordinating between those. And that's what I will be focusing on. And I will do this using a simple example, like the AssistiveTouch widget. You have a widget that recognizes a tap, and you can pan it around.

And maybe you even do regular responder-based touch handling on it to pass through some touches. So this example is very simple, and purposefully so. I don't want you to think about this example in particular. I want you to think about your gesture setups. So think about where the secrets of [inaudible] I'm explaining fit into your setups and how you can improve yours. So let's go through this example. We have a view, and we have the responder-based touch handling implemented on the view. And also we begin with one tap gesture recognizer.

So when a touch goes down on this view, it begins in its state began. And the first thing that happens is the gesture recognizer system delivers it to all gesture recognizers interested in that touch. That means the tap gesture recognizer in our case. And the tap gesture recognizer takes the touch and keeps its state in its initial state, which is possible.

And by receiving the touch, it will be marked as being included in this interaction. And after the gesture recognizers have their go, we move on and deliver that touch, in that state, to the touch handling methods. And when we lift up again, the touch moves its state to ended and will be delivered again, first to the gesture recognizer system.

And the tap gesture recognizer in this case will recognize by setting its state to ended. This is how your gesture recognizers signal that they recognized: they move their state. And the gesture recognizer system, in turn, will now see this state change and send out the action for the tap gesture recognizer.

And after that, we continue onward with sending the UITouch to the touch handling methods, the responder-based ones. And as you noticed, we send it onward as cancelled and not as ended. So why do we do that? That is because gesture recognizers gate the regular touch handling methods, and you have ways to influence that. There are three properties on your gesture recognizer that you can set, and the first two are delaysTouchesEnded and cancelsTouchesInView.

Both of them are set to true by default and lead to the behavior we just saw: the touch is delivered as cancelled if a gesture recognizer recognizes. And there's a third one, delaysTouchesBegan. And if you set that to true, then the regular responder-based handling will not even see the touch if your gesture recognizes.
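
A minimal sketch of where those three properties live, with the defaults spelled out (the view controller and action are hypothetical):

import UIKit

class PassThroughExampleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        // Defaults: a recognizing gesture cancels touches already delivered to the
        // view, and briefly delays delivery of touch-ended events.
        pan.cancelsTouchesInView = true
        pan.delaysTouchesEnded = true
        // Opt-in: hold back touchesBegan until recognition resolves. If the gesture
        // recognizes, the responder-based handling never sees the touch at all.
        pan.delaysTouchesBegan = true
        view.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) { /* ... */ }
}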

So the takeaway here is that the gesture recognizer system comes first in touch delivery. That is one of the many reasons why you should prefer implementing gesture recognizers, or using our system gesture recognizers, over implementing the touch handling methods. Only if you need to should you implement them.

So let's expand on our example and add another recognizer to make it a little bit more interesting. Let's add a pan gesture recognizer. And again, we follow a touch sequence of a touch going down. It will be delivered as began to all the gesture recognizers interested in it. And the gesture recognizers start out possible and are marked by the system as part of this interaction because they received a touch. And then we deliver this touch again to the touch handling methods as began.

Now we move the touch. And if we move the touch, this is delivered to the gesture recognizers, and the pan gesture recognizer, in this case, will recognize because we moved far enough. It will do so by moving its state to began. However, we didn't move enough for the tap gesture recognizer to actually fail; we didn't exceed the allowable movement. So the tap gesture recognizer on its own would stay in the state possible.

However, in this case, it still will fail. That is because of another aspect of the gesture recognizer system: in its default configuration, only one gesture recognizer will recognize and win, and it will exclude all the others. And this is what happened here. Note that the began action actually will be sent by the gesture recognizer system, and the touch will already be sent to the touch handling methods as cancelled, because the pan gesture recognizer cancels touches in view. So even though it began, it will be cancelled, which is an interesting thing to look at, because your finger is still on the screen and the gesture recognizers see it as such, but your responder-based handling doesn't. So that's something to think about.

And then if you lift the touch, it will move its state to ended. And this ended will be delivered to the gesture recognizer system first. The pan gesture recognizer will move its state to ended and mark itself for its action to be sent. And then you might think that the tap gesture recognizer will also see the ended, because you usually see all the touch sequences in your handling methods. However, because it failed, you don't.

And that's very important if you do your own [inaudible] gesture recognizer. So whenever your gesture recognizer fails, you need to reset all your state and make sure that you don't leave behind old references to touches that you didn't receive an ended or cancelled for. The ideal place for this is reset() in your custom gesture recognizer subclasses. And then finally the gesture recognizer system sends out the ended pan gesture recognizer action.

So what you just saw was exclusion. This is something that happens whenever the state changes, and you have a way to influence that. In your UIGestureRecognizerDelegate you have the method gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:), and you can opt in to not being excluded. Note that this is the only thing you can do.

If any of the participants wants to recognize simultaneously, they will. You have no way to prevent that. And on UIGestureRecognizer itself you have the inverse logic, with canPrevent(_:) and canBePrevented(by:), which you can override in your subclasses. So you can do this for gesture recognizers that you know you want to recognize simultaneously with.
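
Before walking through it, here is a minimal sketch of the delegate side of that opt-in (the delegate class is hypothetical; returning true from either participant is enough):

import UIKit

class WidgetGestureDelegate: NSObject, UIGestureRecognizerDelegate {
    // Opt in to simultaneous recognition so this recognizer is not excluded.
    // You can only opt in here; you cannot force exclusion if the other side opts in.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }
}

So let's see how this looks.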

So again we have our setup. Our touch goes down and begins. And nothing changed in the beginning. It moves, again, far enough for the pan recognizer to recognize but not far enough for the tap recognizer to fail on its own. And at this point in time, the gesture recognizer system will perform exclusion because the state changed; the involved gesture recognizers have changed their state. So we ask, and this is where the delegate callback gets called, and you return true. And if you return true, the tap gesture recognizer will be allowed to continue, and the pan gesture recognizer's action will be sent.

And if you lift the finger now, the pan gesture recognizer will move to ended and finish, and the tap gesture recognizer will also move to ended and finish. So it's important to know that the gesture recognizers change their state before the system actually reconciles them. That means, at this point, shouldRecognizeSimultaneouslyWith will be asked again, because we have a state change, and you still need to return true in this delegate method if you want both to fire.

And now that both have recognized, there is another thing that's interesting. If you have simultaneous recognition of gesture recognizers, when the actions are sent, you can't rely on a specific order. So the pan and the tap gesture recognizers will both fire because they recognized simultaneously, but you have no guarantee in which order that will happen.

So that's all well and good, but now we got a little bit of a pan and the tap, and maybe that's not what we want. If we want the pan recognizer to actually wait until the tap recognizer has exceeded its allowable movement, then we have another way to do that, and that way is called failure requirements.

And failure requirements you can set up between two gesture recognizers. For example, with require(toFail:), statically, and this is the preferred one. What that means is that one recognizer requires another one to fail before it sends its action. It still traverses its states, but it waits to send its action.

And if you need something more dynamic, you can do that in the UIGestureRecognizerDelegate with the two methods gestureRecognizer(_:shouldRequireFailureOf:) and gestureRecognizer(_:shouldBeRequiredToFailBy:). And note that those go both ways. You need to make sure your implementations agree in their answers here, because depending on your more complex gesture setup, it might be called one way or the other way first. So the answers need to be consistent; otherwise it might lead to subtle bugs.

And you have the same in your UIGestureRecognizer subclasses, where you can override shouldRequireFailure(of:) or shouldBeRequiredToFail(by:). Again, those directions work as a general mechanism. Let's also look into how this works. So we have set up our static failure requirement between the pan recognizer and the tap recognizer.
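
In code, that setup might look roughly like this, using a hypothetical helper class as the delegate; in practice you would pick either the static or the dynamic form, not both:

import UIKit

class FailureRequirementExample: NSObject, UIGestureRecognizerDelegate {
    func attach(to view: UIView) {
        let tap = UITapGestureRecognizer(target: self, action: #selector(tapped(_:)))
        let pan = UIPanGestureRecognizer(target: self, action: #selector(panned(_:)))
        tap.delegate = self
        pan.delegate = self
        view.addGestureRecognizer(tap)
        view.addGestureRecognizer(pan)

        // Static failure requirement (the preferred form): the pan still moves
        // through its states but only sends its actions once the tap has failed.
        pan.require(toFail: tap)
    }

    // Dynamic equivalents. These can be asked in either direction, so the two
    // answers must agree with each other.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRequireFailureOf otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return gestureRecognizer is UIPanGestureRecognizer
            && otherGestureRecognizer is UITapGestureRecognizer
    }

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldBeRequiredToFailBy otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return gestureRecognizer is UITapGestureRecognizer
            && otherGestureRecognizer is UIPanGestureRecognizer
    }

    @objc private func tapped(_ recognizer: UITapGestureRecognizer) {}
    @objc private func panned(_ recognizer: UIPanGestureRecognizer) {}
}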

And again, our touch goes down. The states start out as possible. Then we move the touch. The move will be delivered to both gesture recognizers, and the pan recognizer will begin while the tap recognizer stays possible. However, because of the existing failure requirement, the pan recognizer will not send its action at this point in time.

Going on, moving a little bit further, but still not far enough to fail the tap gesture recognizer, the pan recognizer will change its state to changed. And the tap recognizer will stay possible, but the failure requirement is still not fulfilled. So that means still no action.

And it is at this point in time, if you move further away, that the tap recognizer would fail, and by failing would allow the pan recognizer to send its action. And there's an important message in there. If the pan recognizer sends its action at this point in time, it will send its action in its state began.

Although it already is in state changed, because we make sure the clients see a consistent picture of this. But the important part: you should never inspect a gesture recognizer's state outside of its action method and think you know exactly what the state is, because only in the action method is it well defined. But now let's go even further and just lift the touch. The touch then moves into state ended and will be delivered. The pan recognizer moves its state to ended and recognizes. The tap gesture recognizer moves its state to ended and recognizes.

But still no action. So we had a complete set of state changes in the pan gesture recognizer, but it never sent out its action, because the tap recognizer did recognize and therefore the failure requirement was never fulfilled. And the tap gesture recognizer will finally send its action. So that is failure requirements. Failure requirements really just prevent the action send of a gesture recognizer, and you should be aware of that.

Let's look at this picture for a moment. This is a regular settings screen. Think about how many gesture recognizers are ready to drop into action on this screen. Just think of a number. Yeah, it is 163. That sounds like quite a lot, because it is, but our gesture recognizer system is built for that, and the way we deal with this is we narrow that down. For example, if a touch goes down on this switch here, we narrow down the interested gesture recognizers for you.

And in this case that leaves about seven. And seven is really a good number to deal with. So how do we do this? In general, we do hit testing as soon as a touch goes down. And this hit testing is done by walking the view hierarchy, starting from the window and asking the hit testing methods if a touch is inside of a view. And when we find the deepest view, we keep that and assign it to the touch.

And this is a short overview of this. So you have hitTest(_:with:) and point(inside:with:). These are your override points. Note that if you override these to extend or contract your hit testing area, you should override both of them so that they agree. Also note that the event at this point is not yet fully formed, because hit testing is the first thing we do.

That means that if you inspect the touches of the event, they might not be there, because we really deal with one thing after another and then add the touches to the event. So that might be surprising to you, but that's how it is with hit testing. So the only things you can ask the event about are, kind of, its existence in the callback and its type.
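
A small sketch of those override points, assuming you want to grow a view's tappable area by a fixed margin. Only point(inside:with:) is overridden here; the default hitTest(_:with:) consults it, so the two stay in agreement:

import UIKit

class GenerousHitAreaView: UIView {
    // Hypothetical margin by which the hit area extends beyond the bounds.
    var hitMargin: CGFloat = 20

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Don't rely on event.allTouches here; during hit testing the event
        // is not fully formed yet.
        return bounds.insetBy(dx: -hitMargin, dy: -hitMargin).contains(point)
    }
}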

There are more properties that influence hit testing. For example, on UIView there is isUserInteractionEnabled. This is one of the most important ones. This is usually true if you didn't change it, but, for example, for an image view it is false by default. So you need to set it to true, otherwise you don't have any hit testing on that one.

And there are alpha and isHidden, which we use to actually guard hit testing, so if something is invisible we don't hit test it; but in your custom subclasses you really need to also adhere to that. And then there's isMultipleTouchEnabled, which for historical reasons defaults to false but doesn't affect the gesture recognizer system.

So if you ever get into a situation where you have working gesture [inaudible] and everything, but your responder-based touch handling code only ever saw one touch, then probably one view in your hierarchy still had this set to false. You just need to set it to true.

And then, of course, there's UIViewAnimationOptions, where by default we opt you out of user interaction during animations. And that's not because we don't want you to hit test during animations; it's more because you really need to do additional work if you enable this. You need to override hitTest(_:with:) and point(inside:with:) for this to work well.

And we talked about this in the past: if presentation layer versus model layer tells you something, then you're probably good. If not, please really watch that talk. And also, last year we added UIViewPropertyAnimator, and it actually helps you with hit testing during animations. However, there is a property, isManualHitTestingEnabled, and you have to set it to true to do custom hit testing in more complex scenarios.
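
A small sketch of that opt-in on UIViewPropertyAnimator (the animated view and offsets are made up for the example):

import UIKit

func slideDown(_ movingView: UIView) {
    let animator = UIViewPropertyAnimator(duration: 2.0, curve: .easeInOut) {
        movingView.center.y += 300
    }
    animator.isUserInteractionEnabled = true
    // Opt in to doing hit testing yourself during the animation, for example by
    // overriding hitTest(_:with:) to consult the presentation layer.
    animator.isManualHitTestingEnabled = true
    animator.startAnimation()
}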

So let's go back to our hit testing in the view hierarchy. Now that we have hit tested, we walk back up the view hierarchy and collect all the gesture recognizers on those views. This is our base set of gesture recognizers we consider for this interaction, but there is more you can customize on this. So you have a callback.

gestureRecognizer(_:shouldReceive:). If you return false from that, you can actually exclude a gesture recognizer from participating at all, so it will never see that touch. It's also a great breakpoint for you to see if your gesture recognizer gets a chance to actually play in the interaction.
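
A small sketch of excluding a recognizer via that callback; the rule here is made up and simply keeps the recognizer out of touches on the right half of its view:

import UIKit

class LeftHalfOnlyGestureDelegate: NSObject, UIGestureRecognizerDelegate {
    // Returning false keeps the recognizer from ever seeing this touch.
    // This is also a convenient place for a breakpoint.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        guard let view = gestureRecognizer.view else { return false }
        return touch.location(in: view).x < view.bounds.midX
    }
}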

And there's a point a little later on, gestureRecognizerShouldBegin(_:), which also makes a great breakpoint for debugging. This is asked whenever a gesture recognizer really tries to begin, and you can say no, false in this case. And if you do so, it will fail; it will fulfill its failure requirements by failing. It's also a good point to look into.

There's the property isEnabled on UIGestureRecognizer. You can set that to false temporarily if you know your state and want to do this statically; that is very helpful. And there's also that trick we've talked about before: when you set it to false while it is recognizing, it will fail. And if you set it immediately back to true, your gesture recognizer will respond to the next touch sequence directly.

And there's allowedTouchTypes, which defaults to all the touch types, that is, direct, indirect, and stylus. You can narrow it down to just the stylus or just direct touches. But if you really want a gesture to recognize across different touch types, you also have to set requiresExclusiveTouchType to false. Only then can you really do a pinch between a pencil and a direct touch; otherwise you can't.
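
A sketch of narrowing a recognizer to particular touch types; the recognizers here have no targets just to keep the example short:

import UIKit

// Restrict a pan to Apple Pencil input only. The allowed types are NSNumbers
// wrapping UITouch.TouchType raw values in this API.
let pencilPan = UIPanGestureRecognizer(target: nil, action: nil)
pencilPan.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.stylus.rawValue)]

// Allow one recognizer to combine different touch types, for example a pinch
// between a finger and the pencil, by turning exclusivity off.
let mixedPinch = UIPinchGestureRecognizer(target: nil, action: nil)
mixedPinch.requiresExclusiveTouchType = false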

Speaking of properties, we have a new property for you in iOS 11, and that's name. It sounds simple, but it's really helpful, because we encourage you to actually use our gesture recognizers and just configure them. So you can add an additional name, so when debugging you don't have to look up the action method to figure out which one it is. But only use that for debugging; we will not guarantee that the system won't mess with it.
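
For example (the label itself is made up; use it only as a debugging aid):

import UIKit

let dismissTap = UITapGestureRecognizer(target: nil, action: nil)
dismissTap.name = "widget-dismiss-tap"   // shows up when inspecting recognizers in the debugger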

And speaking about debugging, we also have the breakpoint opportunities in gestureRecognizer(_:shouldReceive:) to actually see if your setup works. There's touchesBegan [inaudible], the responder-based touch handling method, that helps you look into the state of the gesture recognizer system at a known point, as you've seen the order in which we do this.

Now you can look into that and actually know what the values mean. And you can look at the gesture recognizers of a touch, for example, or you can look at the touches for some gesture recognizer by asking the event, or you can also collect the gesture recognizers through the view [inaudible] debugging session.

Some words on custom UIGestureRecognizers. You should always begin late and fail fast. That means if you begin too early, for example in touchesBegan, you don't give any other gesture recognizer the chance to not be excluded. This is bad. So you should begin as late as you can while still preserving the intent of your gesture recognizer.

And you also should fail fast, so you don't hold up failure requirements of other gesture recognizers, which is very important. And whenever you have seen a touch, you should always move your state to failed, even if you never moved out of possible, because you were marked as being part of this interaction, and otherwise you stall the system. Also, be serious about ignoring touches. Ignore them properly, not just by not handling them.

You should also call ignore(_:for:) on your custom gesture recognizer subclass; otherwise your touches will be delayed although you weren't even interested in them. And also, don't forget touchesCancelled, as always. And this becomes more important this year, because we cancel touches more often with drag and drop.
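
Putting that advice together, here is a hypothetical custom recognizer sketch: it begins late, fails fast, resets its state, ignores touches it doesn't want, and handles cancellation. The 100-point threshold is arbitrary:

import UIKit
import UIKit.UIGestureRecognizerSubclass   // exposes the state setter and touch overrides to subclasses

class LongDragGestureRecognizer: UIGestureRecognizer {
    private var initialLocation: CGPoint?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        guard initialLocation == nil, let touch = touches.first else {
            // Be explicit about touches you are not interested in, so they
            // are not delayed on your behalf.
            touches.forEach { ignore($0, for: event) }
            return
        }
        initialLocation = touch.location(in: view)
        // Don't begin here; beginning too early excludes other recognizers.
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        guard let start = initialLocation, let touch = touches.first else { return }
        let location = touch.location(in: view)
        let distance = hypot(location.x - start.x, location.y - start.y)
        if state == .possible, distance > 100 {
            state = .began            // begin late, once the intent is clear
        } else if state == .began || state == .changed {
            state = .changed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        // Fail fast if we never began, so we don't hold up failure requirements.
        state = (state == .possible) ? .failed : .ended
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        state = (state == .possible) ? .failed : .cancelled
    }

    override func reset() {
        // Clear touch references; a failed recognizer won't get the remaining
        // touchesEnded or touchesCancelled callbacks for this interaction.
        initialLocation = nil
        super.reset()
    }
}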

So your takeaway from here should be: revisit your setups. Look into your gesture recognizer setups and try to make sense of the information you've just heard and put it together. Revisit your exclusion and failure requirements. Do you use one or the other, or maybe simultaneous gesture recognizers, to achieve the same effect? And also, are your gesture recognizers on the right views? Are they deep enough? The deeper you move them, the less interference there is with other gesture recognizers. The higher you move them, the more general you can make them. So these are things you should think about. And with that, I want to hand over to Glen for system gesture interaction. Thank you.

[ Clapping ]

Thanks, Dom. My name is Glen Low. I'm a software engineer with UIKit. I'm here to tell you about some brand new API to improve the way you interact with the system gesture recognizers. But first, a little entertainment. After a hard day's coding, this software engineer needs to play.

So I fire up my demo bots and move the robot around by dragging up and down on the virtual controls. You see it on the upper right of the screen. The gray circles. And it looks like this. My little robot has to change the bad robots into good robots. As you see I'm doing quite well.

I think I missed one. I need to go down to get that one. No, oops, what happened? I got that new cover sheet instead? That's not quite the user experience we're looking for here, is it? So what exactly went wrong? Well, we currently have a few special gestures we call system gestures. You swipe up from the bottom, you get multitasking and the dock. Swiping in from the side brings in Slide Over. Swiping in from the top brings down the cover sheet.

Now, these special system gesture recognizers fight with your app's own gesture recognizers and responders to try to get to the touches first. So what could we do? Well, we do make an exception for gesture recognizers. Your tap, pinch, rotate, and long press gesture recognizers can [inaudible] touches at the same time as a system gesture recognizer, although sometimes they will get cancelled if it's obvious later on that it's actually a system gesture. That's why just tapping on a button at the bottom won't accidentally bring up the dock; that's based on a tap gesture recognizer.

On the other hand, we have pan and swipe recognizers as well as your old-school responders. Well, these will get the touches only after the system gesture recognizers are done with them. So when your users swipe from the bottom, do they really want to bring up the dock, or do they want to pan your view? Well, we had to guess whether to defer that system gesture.

For example, if your app happened to hide the status bar, we would defer bringing up the dock but it was hard to guess right all the time. And really who knows best what your user wants? For iOS 11, you can now tell us when to defer and where to defer.

No more guesses. You are now responsible for deciding whether to defer system gestures or not. This means if your app hides the status bar, we don't automatically defer the dock anymore. What does this look like? Instead, you need to use this new API we call screen edges deferring system gestures.

This API lets you declare for which edges you want us to defer system gestures in favor of your own gesture recognizers. In your view controllers, override preferredScreenEdgesDeferringSystemGestures to return the screen edges for deferring. Then, whenever you want to return something different, call setNeedsUpdateOfScreenEdgesDeferringSystemGestures.
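
In code, the override might look roughly like this, in a hypothetical full-screen view controller (the deferBottomEdge flag is made up for the example):

import UIKit

class GameViewController: UIViewController {
    var deferBottomEdge = true {
        // Tell the system to re-query the preference whenever it changes.
        didSet { setNeedsUpdateOfScreenEdgesDeferringSystemGestures() }
    }

    // Which edges should deliver touches to your recognizers first. The system
    // may still ignore or modify this preference.
    override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
        return deferBottomEdge ? [.bottom] : []
    }
}

Let's have a look at what this deferral actually achieves.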

Now when the user starts off at the bottom, your gesture recognizers and responders get the touches immediately. At the same time, we show a tongue [phonetic], or a [inaudible], so the user can see that they can do more. Swiping up from the bottom a second time then brings up the dock.

One more method if you're writing your own container view controller. Most of the time this [inaudible] for a container view controller is actually in the child view controller. So you'll want to return the right one in your override of childViewControllerForScreenEdgesDeferringSystemGestures, which, I think, is the new candidate for the longest method name in iOS.
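
A sketch of that container override; note that in current Swift SDKs the property is spelled childForScreenEdgesDeferringSystemGestures, while Objective-C keeps the longer name mentioned in the talk:

import UIKit

class GameContainerViewController: UIViewController {
    // Hypothetical reference to the currently visible child.
    var contentViewController: UIViewController?

    // Let the visible child decide which edges to defer.
    override var childForScreenEdgesDeferringSystemGestures: UIViewController? {
        return contentViewController
    }
}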

Now that you have all this power at your fingertips, when should you use it? The short answer is: don't. Well, why am I here, right? The long answer, though: think long and hard about when to use it. And why is that? Well, number one, your user is familiar with swiping up to get the dock and other system gestures, so using this API actually breaks that intuition; it breaks the expectation of what happens. Also, if you're using tap, pinch, rotate, and long press gesture recognizers, as we talked about before, you're going to get these touches first anyway without even touching this API.

Third, we reserve the right to ignore or modify the suggested edges for deferring, so don't rely on the system doing exactly what you asked it to do. And finally, this API is for when you really expect the user's full attention, interacting with the whole screen for long periods. For example, if you're writing a game or a drawing app, ask yourself: is your app really immersive enough to use this API? So on that note, I'd like to hand you over to my colleague, Mike Turner, who will talk to you about how to work with drag and drop. Mikey?

[ Clapping ]

Thanks, Glen. So as you've seen, with iOS 11 we have a lot of great new features, and near the top of that list is drag and drop. And with drag and drop, users have an opportunity to interact with your application in exciting new ways. But there are some things you'll need to know about how drag and drop interacts with some of the existing gesture recognizers in your application. So I'd like to start with a brief example showing you how you can make a view implement drag, and then what happens to your existing gesture recognizers when you do so. So in iOS 11, we added a new class called UIDragInteraction.

UIDragInteraction is really super simple to use. It's very familiar, similar to the pattern of adding a gesture recognizer or a subview to a view: you just initialize one of these with a delegate of your choice and you call addInteraction(_:) on a view. And that's pretty much all you've got to do.

You've got to implement one delegate API and you have drag. Really simple.
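
A minimal sketch of that setup for an image view, assuming a hypothetical helper object as the delegate:

import UIKit

class DraggableImageController: NSObject, UIDragInteractionDelegate {
    func enableDrag(on imageView: UIImageView) {
        imageView.isUserInteractionEnabled = true
        // Same pattern as a gesture recognizer: create it with a delegate, add it to a view.
        let drag = UIDragInteraction(delegate: self)
        imageView.addInteraction(drag)
    }

    // The one required delegate method: provide the items for the drag.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = (interaction.view as? UIImageView)?.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}

So let's look at a quick demonstration of what a view looks like when you add a drag interaction to it. Here we're going to long press on the view and then move a little bit.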

So when you long press, the view lifts up off the screen. When you move, it kind of tears out of your application. So let's see how the drag interaction accomplishes this behavior in your app. When we added the drag interaction, it got added to a new interactions array on UIView.

And the drag interaction, in turn, creates some gesture recognizers on its behalf. Attaches them to your view and becomes the delegate of those gesture recognizers. And it creates gesture recognizers for initiating the drag, managing relationships with other gesture recognizers, and adding additional items to an existing drag in the application.

So to make this example a little bit more concrete, let's add our own UILongPressGestureRecognizer to the view, and we'll wire that up to an action that will pop up an activity view controller for this image. So here, when we get the gesture recognizer state began, we're going to present an activity view controller, and when we get the gesture recognizer state cancelled, we're going to dismiss that activity view controller.
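
A sketch of that long press action, assuming a hypothetical view controller that shares a fixed image; the cancelled branch is what matters once a drag interaction is alongside:

import UIKit

class ShareOnLongPressViewController: UIViewController {
    let image = UIImage()            // stands in for the dragged image in the example
    weak var shareSheet: UIActivityViewController?

    override func viewDidLoad() {
        super.viewDidLoad()
        let longPress = UILongPressGestureRecognizer(target: self,
                                                     action: #selector(handleLongPress(_:)))
        view.addGestureRecognizer(longPress)
    }

    @objc func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
        switch recognizer.state {
        case .began:
            let activity = UIActivityViewController(activityItems: [image],
                                                    applicationActivities: nil)
            shareSheet = activity
            present(activity, animated: true)
        case .cancelled:
            // A beginning drag cancels the touch, so take the share sheet back down.
            shareSheet?.dismiss(animated: true)
        default:
            break
        }
    }
}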

So let's look at that same example again, where we long press on the view and move, now that we have our own UILongPressGestureRecognizer alongside. So here we long press and move. And I didn't see any activity view controller there. That's because your long press gesture recognizers are now delayed when they're alongside a UIDragInteraction. So let's adjust our user interaction a little bit here to see if we can get the activity view controller to show up. This time we'll long press, we'll hold for just a little bit, and then we'll move.

So here we long press, hold, and then move. We saw the activity view controller briefly, and then it was dismissed. And that's because a beginning drag will cancel the touch you used to begin that drag in your application. So we'll send touchesCancelled for that touch to the app. The app will then send it to its gesture recognizers and responders. The gesture recognizer will then send the cancelled state to its action, and thus dismiss that activity view controller.

So in a compact trait environment, adaptivity will take that presentation of the activity view controller and turn it into an action sheet. So let's see what happens in that case. We'll long press, hold for a bit, and then move in a compact trait environment. Here we long press, hold, and move. No activity view controller again. So let's see what's happening here.

Your long presses are being delayed until the touch ends in this case because in a compact trait environment, most of the presentations turn into modal presentation styles or action sheets that would cover the impending drag and that's not a great user experience. So let's look at how we can adjust our user interaction here to get that activity view controller. So we'll long press and lift. So we long press and then lift the touch. And there you have it. Your action sheet is up with the activity view controller.

So UIDragInteraction also has some built-in ability to add additional items to a drag, and as I said, it creates another gesture recognizer to perform this. So here's a quick demonstration: we long press and move, and then we can tap again on the view to add additional items.

And it handles that tap for you, but you do have a way of influencing this. First, this is an optional behavior. The delegate method itemsForAddingTo on UIDragInteractionDelegate is optional. If you don't implement it, you won't get the adding behavior, but if you do implement it, you can conditionally return zero items from this API to continue processing the touch as you normally would in your app.
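
A sketch of that optional method, extending the hypothetical DraggableImageController from the earlier example; the condition for returning zero items is made up:

import UIKit

extension DraggableImageController {
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForAddingTo session: UIDragSession,
                         withTouchAt point: CGPoint) -> [UIDragItem] {
        guard let image = (interaction.view as? UIImageView)?.image,
              point.y < 100 else {
            // Returning an empty array lets the touch be processed normally
            // instead of adding an item to the in-flight drag.
            return []
        }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}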

So it's really quite simple to adapt your application when you're using UIDragInteraction. It's pretty much hands-off, but you need a few basic concepts to get you the rest of the way. And the first thing to do is just examine your existing actions. Do they make sense alongside UIDragInteraction? In some cases, UIDragInteraction might replace some of the existing functionality you had from a UILongPressGestureRecognizer.

And you need to be careful presenting modal UI when you have a long press -- excuse me, when you have a UIDragInteraction. That's because in a standard trait environment, it's going to be delayed and it might end up presenting over top of the drag, and you need to take the appropriate action here.

And then you need to handle the cancelled state in your UIGestureRecognizers. As you saw, when the drag begins, we send touchesCancelled. So if you're not handling the cancelled state in your gesture recognizer, you won't have the opportunity to take the appropriate action; in our example that was dismissing the activity view controller. And you need to remember one really important but super cool part of UIDragInteraction, and that's that your app is fully interactive during a drag. So you might have some interactions that you wouldn't expect. So you've been warned on that.

So Dom came up here and told us about how you can best leverage the gesture recognizer system to get the setup that you want, and how you can use failure requirements and exclusion to influence how gesture recognizers fire with each other. And he made the important point that gesture recognizers come first: when we send touches through the application, the gesture recognizers can block your responder-based touch handling. So you should always attempt to use gesture recognizers when at all possible to get the best results.

And Glen told us how we can use the new system gesture deferring APIs to give preference to the gesture recognizers within our application when swiping from the edges. And you should use these very sparingly, you know, in full-screen situations, things where your users are going to expect interaction in your app and not the system behavior. So, very sparingly. And finally, we had a brief talk about how you can adapt your app to use UIDragInteraction, and it's really quite simple. So I hope you add UIDragInteraction to most of your views.

And for more information, you can see the video in replay on the following website. We have some great past sessions that go in-depth on handling events in UIKit. If you haven't seen them, I would highly recommend those. And we have some other sessions related to drag and drop and animations that you should check out. Have a great WWDC.

[ Applause ]