App Frameworks • iOS • 37:10
Dynamic interfaces create fluid and rich interactions. Hear about new dynamic behaviors that have been added in UIKit and how to take advantage of them in your applications. Gain a practical understanding of how to integrate dynamics with Auto Layout. Learn about enhancements to UIKit visual effects.
Speakers: Michael Turner, David Duncan
Transcript
[Applause]
[Michael Turner]
Welcome to the final day of WWDC, and 'What's New in UIKit Dynamics and Visual Effects.' My name is Michael Turner, joined by my colleague David Duncan. We are both on the UIKit team here at Apple. So before we get started today, I just want to recommend some great sessions. Since this is an introductory talk, we have some great sessions from years past covering UIKit Dynamics and visual effects.
So today we are going to start with a brief overview of the dynamic animation system, and we'll do that with a basic example, and then we are going to dive right into what's new this year with UIKit Dynamics. Then David's going to come up and talk to us about visual effects, and how you can utilize those in your application. And finally, we are going to talk a little bit about best practices using UIKit Dynamics and auto layout in your application.
So when we talk about UIKit Dynamics we are talking about a 2D, physics-inspired animation and interaction system. This has a very composable and declarative API for exposing high level animations in your app. We are not talking about a replacement for Core Animation or UIView animations, rather another tool to help you create great custom effects in your app. So let's look at an example.
So here we have a basic sliding view and the user can pan the view, but if you let go it falls back down as if under the influence of gravity. Now, it doesn't fall through the bottom of the phone. Rather, it stops on the bottom edge, bounces a bit, and then comes to rest. So let's look at how we can create this basic example. First you need to start by determining a great reference view. Here, we've chosen the view controller's view that contains our sliding view.
And once we have a reference view, we then need to create a dynamic animator and associate that reference view. The dynamic animator will hold the overall context for our animations, and its main job is to keep track of behaviors and dynamic items. So for our sliding example, we have a sliding behavior.
And one of the great things about UIDynamicBehavior is that higher-level behaviors can be composed of more primitive ones. So our sliding behavior is nothing more than a composition of gravity, collision, and attachment. And later on, we will show you how we used UIAttachmentBehavior and the new things we've added there to make this even simpler than it would have been in the past.
So once we have our sliding behavior, we now need a dynamic item. Here we've chosen the sliding view, just a UIView, which automatically conforms to the dynamic item protocol, so it's a great option. So we take our dynamic item, associate it with the sliding behavior, and add the sliding behavior to the animator. Now the animator will automatically determine when the system is at rest or in motion, so this is all you need to create this great effect.
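As a rough sketch of that setup in Swift -- the view sizes and class names here are illustrative rather than the session's exact code, and the attachment that constrains the sliding is covered later:

    import UIKit

    class SlideViewController: UIViewController {
        let slidingView = UIView()
        var animator: UIDynamicAnimator!

        override func viewDidLoad() {
            super.viewDidLoad()
            slidingView.frame = CGRect(x: 40, y: 300, width: 240, height: 400)
            slidingView.backgroundColor = .lightGray
            view.addSubview(slidingView)

            // The view controller's view is the reference view for the animator.
            animator = UIDynamicAnimator(referenceView: view)

            // A composite behavior: gravity pulls the sliding view down, and a
            // collision stops it at the reference view's bottom edge so it
            // bounces a bit and comes to rest.
            let gravity = UIGravityBehavior(items: [slidingView])
            let collision = UICollisionBehavior(items: [slidingView])
            collision.translatesReferenceBoundsIntoBoundary = true

            let slide = UIDynamicBehavior()
            slide.addChildBehavior(gravity)
            slide.addChildBehavior(collision)

            // Adding the composite behavior to the animator starts the simulation.
            animator.addBehavior(slide)
        }
    }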
So now that we have seen a basic example, here is what's new this year in UIKit Dynamics. We have support for non-rectangular collision bounds on UIDynamicItem. We have a brand new UIDynamicItemGroup that allows multiple items to behave like one item in the engine, and we have a brand new behavior that models vector force fields. We have some enhancements to UIDynamicItemBehavior, as well as UISnapBehavior, we'll see some great additions to UIAttachmentBehavior, and we will finish up with some new ways to debug your dynamic animations.
So, in iOS 9, we added UIDynamicItemCollisionBoundsType, which offers new ways to specify the collision bounds for your dynamic item. By default your collision bounds will be rectangular, matching what's returned from the bounds accessor on the dynamic item protocol. Now you can specify an ellipse type, which will be derived from the bounds width and bounds height on the protocol. And finally, you can specify a UIBezierPath to use for the collision bounds of your dynamic item.
Now, to accomplish this, we have taken the existing dynamic item protocol and extended it with two optional properties. If you don't implement either of these optional properties you will receive rectangular collision bounds, as you have in the past. If you implement the first, collisionBoundsType, and return the ellipse type, we will derive an ellipse from the bounds width and height. If you implement the first and return the path type, we will then call you back for the second, at which point you will need to provide a UIBezierPath to use for the collision bounds.
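A minimal sketch of opting in from a UIView subclass might look like this -- the triangle is just an illustration, and since UIView already conforms to UIDynamicItem, only the two new properties are overridden:

    class TriangleView: UIView {
        override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
            return .path
        }

        override var collisionBoundingPath: UIBezierPath {
            // Convex, non-self-intersecting, with (0, 0) at the item's center.
            // If the winding comes out wrong for your shape, reverse the points.
            let path = UIBezierPath()
            path.move(to: CGPoint(x: 0, y: -bounds.height / 2))
            path.addLine(to: CGPoint(x: -bounds.width / 2, y: bounds.height / 2))
            path.addLine(to: CGPoint(x: bounds.width / 2, y: bounds.height / 2))
            path.close()
            return path
        }
    }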
So if we were to model a collision between items with different collision bounds, and let's add a few more here for good measure, it might look something like this, where this collision looks a lot more realistic than it would have in the past, had all the items had rectangular collision bounds.
Now, there are a few restrictions when using a path for collision bounds. In particular, the BezierPath must be convex, counter-clockwise wound, and non-self-intersecting. These are pretty basic if you think about it, nothing too fancy there. We also need to keep in mind that the point (0, 0) in the BezierPath will represent the dynamic item's center point when the item is on screen.
So that's what's new in collision bounds. Let's talk about dynamic item groups. This is a basic way to take multiple items and make them behave as one item in the underlying engine. The group preserves the relative positions and the individual collision bounds of each item. For this reason, you should not associate the items with behaviors individually. Instead, associate the items with the group, and the group with any behaviors you add to the animator, and this will impose those behaviors on the items as a whole.
And a group cannot be added to other groups; this is a one-level abstraction. So this can be a great way to create concave or other complex geometry that isn't possible with a single dynamic item's bounds path, and still have it influenced by behaviors as a whole. So, let's return to that sliding example for a moment.
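Here's a small sketch of what that association looks like -- the two views are placeholders, and the animator is assumed to exist already, as in the earlier example:

    // Two views that should move and collide as a single rigid unit.
    let body = UIView(frame: CGRect(x: 100, y: 100, width: 80, height: 80))
    let tab  = UIView(frame: CGRect(x: 180, y: 110, width: 30, height: 20))

    let group = UIDynamicItemGroup(items: [body, tab])

    // Behaviors are associated with the group, not with the individual views.
    let gravity = UIGravityBehavior(items: [group])
    let collision = UICollisionBehavior(items: [group])
    collision.translatesReferenceBoundsIntoBoundary = true

    animator.addBehavior(gravity)
    animator.addBehavior(collision)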
So instead of just panning and having it fall back down, let's say we want it to bounce slightly when the user taps, maybe to indicate that it can be panned. And to do this, we just need to add a brief force to the sliding view. So let's take a look at that force.
So we can model the force as a vector at the item's center, where the length of the vector corresponds to the magnitude of the force, and the vector points up to indicate the direction of the force; in this case we are trying to move the view up, so it points up.
And to apply the force, we're going to use UIPushBehavior -- where UIPushBehavior, if you recall, has two distinct modes. It has a continuous mode that represents a constant force over time, and an instantaneous mode which represents a brief force at an instant in time, otherwise known as an impulse.
So for this interaction, we just want a brief force so it bounces and comes back to rest, so we're going to use the instantaneous mode. So we do that; we get a brief force. But what is causing the view to come back down? We are putting a force on it that causes it to move up but it's falling back down.
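A small sketch of that impulse, assuming the same slidingView and animator as before:

    // A brief upward impulse on the sliding view when the user taps.
    let push = UIPushBehavior(items: [slidingView], mode: .instantaneous)
    push.pushDirection = CGVector(dx: 0, dy: -1)   // up, in UIKit coordinates
    push.magnitude = 0.8                           // tune to taste
    animator.addBehavior(push)

    // An instantaneous push deactivates itself after it fires; to nudge the
    // view again on a later tap, set push.active = true.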
What's causing it is our composite behavior, which was composed of a gravity, a collision, and an attachment behavior. Gravity is causing it to move back down, and then it bounces off the collision behavior. But let's look a little bit more at gravity and how it's affecting our item. If we look at the vertical motion over time of the sliding view, starting with the instant that we apply the impulse force, you will notice that the force is applied at one point and then the item moves up and then arcs back down.
This is because gravity is affecting it at all positions and at all times in our diagram here. If we add in those forces from gravity, it might look something like this, where the force is applied at all positions and all times. So this is tricky to model. Maybe we could try to use UIPushBehavior, but it would get pretty complex, pretty quickly. So we really need to think of gravity as more of a field.
And a field's quite simple, it's just a function that assigns a vector to each point within a given domain, where our domain, in this example, is the entire reference view. So we want gravity to affect our sliding view in the entire reference view. Pretty simple. So we have taken this idea of a field and we have extended it.
In iOS 9, we are introducing UIFieldBehavior. A UIFieldBehavior can be added to a region of your reference view; the field is evaluated at each point within that region, and any resulting forces are automatically applied by the dynamic animator to the items that have been associated with the field.
And if you are wondering, our existing UIGravityBehavior has been implemented as a field all along, and it's important to keep in mind that this is simplified physics. It's been well tuned for performance. I wouldn't use it for building interstellar space stations or anything like that. So let's look at the built-in field types that we offer.
We've got a rich variety here. We have linear and radial gravity, velocity and drag fields, a vortex field, we have got a great spring field that models Hooke's law, we've got electric and magnetic field types. And if these all don't quite meet your needs, we also offer a custom force evaluator, and we'll see this in a moment. But let's start with linear gravity here first.
So the first thing you will notice is it exists in a region, in a domain as we previously said, and it has a strength, we've used the default strength of one here, but it's also a directional force, and then we have used the familiar direction of gravity here, down, to show this example, but this can be directed anywhere, really. So let's look at how radial gravity differs from that of linear gravity.
And along with existing in a domain and having a strength, this has a position, and radial gravity can be modeled as a point mass at that position. If you recall, the gravitational force between two masses is inversely proportional to the square of the distance between them. That exponent on the distance is the falloff value of the field.
So as you get farther away from the position of the field, the force due to the field decreases. We also have a minimum radius property here, and this is just a way to specify how far from the position point an item must be in order to feel a resulting force due to this field.
So we also have a noise field, and the first thing you will notice about a noise field is, it's time varying, and you can adjust this using the animation speed with a default value of one, and zero would indicate a static field. You can also adjust the level of noise in the field using the smoothness property.
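As a sketch of configuring a couple of these, again assuming the slidingView and animator from earlier (the numbers are purely illustrative):

    // A radial gravity "point mass" near the bottom of the reference view.
    let radial = UIFieldBehavior.radialGravityField(position: CGPoint(x: 200, y: 600))
    radial.strength = 1.0
    radial.falloff = 2.0          // force decreases with the square of the distance
    radial.minimumRadius = 50.0   // how far from the position an item must be to feel the force

    // A time-varying noise field.
    let noise = UIFieldBehavior.noiseField(smoothness: 0.5, animationSpeed: 1.0)

    for field in [radial, noise] {
        field.addItem(slidingView)
        animator.addBehavior(field)
    }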
So let's look at a custom field evaluator. And this is really quite easy to use. You create a UIFieldBehavior and initialize it with a field evaluation block. We will then call your block with field samples containing the position, velocity, mass, charge, and time associated with each sample, and you can use those to determine any resulting forces.
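A minimal sketch along those lines, mapping the x position onto a sine curve as in the demo -- the divisor is just a made-up scale factor, and slidingView and animator are the ones from earlier:

    let waveField = UIFieldBehavior.field { field, position, velocity, mass, charge, deltaTime in
        // Vertical force proportional to a sine of the horizontal position.
        return CGVector(dx: 0, dy: sin(Double(position.x) / 50.0))
    }
    waveField.addItem(slidingView)
    animator.addBehavior(waveField)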
Here all we did was take the x position and map it to a sine wave. Pretty cool result. So these are some of the basic built-in fields and a basic overview of UIFieldBehavior. I would like to invite up David to show you a quick example of this in action. [Applause]
[David Duncan]
Hello, everybody. We are going to take you through an example of something that I'm certain you have all seen before. As I'm sure you have all used FaceTime at some point or another, we have an example of building a very similar UI for managing where your face goes on the screen. As you can see as I go through here, the square moves very nicely as I move it around the screen, and if I pull it a little away from the corner it bounces back.
If I pull farther, it has that nice ease-in curve as we are going. And if I throw it down, it kind of bounces off very playfully all around the edges. And you will note when I threw it down, it didn't just kind of go straight to where it was supposed to go, it actually had a little physics, it bounced off the side of the screen and came back into position.
Now, I can do a little thing and trigger a debugging view of what those force fields look like. [Applause] In this case, you can see that we have four spring fields running around here, and we have got an easy way to explain what's going on: if we put this right on the edge, we know it is going to bounce back.
And if we kind of straddle two, then depending on where we straddle it will pick one side or the other. And if we go through the middle, it just picks whichever is closest. So let's see how we set this up, and how you can actually set this all up yourselves.
So the first thing that we have is this StickyCorners behavior, and as Mike mentioned, it's built up of other behaviors to form this complex behavior that does everything we want. In this case, it has got a collision behavior, because we wouldn't want your face to rocket off the side of the screen. And we have a dynamic item behavior, that affects the properties of that face.
In this case, we reduce its density so it feels really light in the engine, but we increase its resistance to motion so that when it finds a place to settle it doesn't keep moving around in that location. And finally we disable rotation, because that wouldn't make any sense.
You don't want your face spinning around as it's going around the screen. Finally, we have these field behaviors, the four spring fields that map out the four corners, and we add those to the behavior too. Now whenever somebody adds this StickyCorners behavior, they get all of this behavior for free.
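A hedged sketch of how such a composite behavior might be assembled -- the real StickyCorners sample differs in detail, the corner field positions are omitted, and the numbers are illustrative:

    class StickyCornersBehavior: UIDynamicBehavior {
        private let item: UIDynamicItem
        private let collisionBehavior: UICollisionBehavior
        private let itemBehavior: UIDynamicItemBehavior
        let fieldBehaviors: [UIFieldBehavior]

        init(item: UIDynamicItem) {
            self.item = item

            // Keep the item on screen.
            let collision = UICollisionBehavior(items: [item])
            collision.translatesReferenceBoundsIntoBoundary = true
            collisionBehavior = collision

            // Light, non-rotating, and resistant to motion so it settles quickly.
            let properties = UIDynamicItemBehavior(items: [item])
            properties.density = 0.01
            properties.resistance = 10
            properties.allowsRotation = false
            itemBehavior = properties

            // One spring field per corner; positioning each field over its
            // corner (based on the reference view's bounds) is omitted here.
            fieldBehaviors = (0..<4).map { _ in
                let field = UIFieldBehavior.springField()
                field.addItem(item)
                return field
            }

            super.init()

            addChildBehavior(collision)
            addChildBehavior(properties)
            fieldBehaviors.forEach { addChildBehavior($0) }
        }

        // Turning this off pulls the item out of the child behaviors so the
        // springs don't fight the user's finger; turning it on adds it back.
        var isEnabled = true {
            didSet {
                guard isEnabled != oldValue else { return }
                if isEnabled {
                    collisionBehavior.addItem(item)
                    itemBehavior.addItem(item)
                    fieldBehaviors.forEach { $0.addItem(item) }
                } else {
                    collisionBehavior.removeItem(item)
                    itemBehavior.removeItem(item)
                    fieldBehaviors.forEach { $0.removeItem(item) }
                }
            }
        }
    }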
Next, over here in the view controller, we do all the usual stuff where we set up our view hierarchy, but then we also add a pan gesture recognizer so that the user can pick up the face and move it around the screen. The long press gesture recognizer we have here lets me toggle the debug interface on and off. We create our dynamic animator and add the StickyCorners behavior to it.
So how does that gesture recognizer work? Well, as usual, the gesture recognizer goes through its states. When it begins, we do some bookkeeping so that we can keep track of the item, but we also disable the sticky behavior, and I will show you how we do that in a second. Similarly, when it changes we just move the item around.
And when it cancels or ends, this is where we do something really special. We check the velocity that the pan gesture recognizer had when the user stopped interacting with it, and we use that to add velocity back into our dynamic item system. This is so that when the user throws that view around, it continues moving with the force of the user's action, rather than suddenly stopping and being taken over entirely by the field. And that's also why we disable and re-enable the behavior: we don't want the fields to be active while the user is moving the view around, otherwise it would slip out from underneath their finger.
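A sketch of that handler inside the view controller, assuming it keeps references to the animator, the face view, the sticky behavior, and the UIDynamicItemBehavior it exposes (the property names here are placeholders, not the sample's exact ones):

    @objc func pan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .began:
            // (Real code would also record the touch offset within the view.)
            // Turn the spring fields off so they don't fight the user's finger.
            stickyBehavior.isEnabled = false
        case .changed:
            faceView.center = gesture.location(in: view)
            animator.updateItem(usingCurrentState: faceView)
        case .ended, .cancelled:
            // Hand the gesture's velocity to the dynamic system so the view
            // keeps the momentum of the throw, then re-enable the fields.
            itemBehavior.addLinearVelocity(gesture.velocity(in: view), for: faceView)
            stickyBehavior.isEnabled = true
        default:
            break
        }
    }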
So we can go back over here briefly and see how the enabled works, and as you can see, it's really simple. When it's enabled, we add all of the items back into the behaviors, and when it's disabled, we take them out. It's really that easy to create a system like this, and you can have your own FaceTime-like behavior in your applications. And so to show you how to put that debug UI into your own applications, I am going to bring Mike back up on stage. [Applause]
[Michael Turner]
Thanks, David. So it was really quite cool, in David's example, to visualize those field lines and understand what was actually going on. It was pretty mysterious until he turned that on. Those lines were basically an overlay that shows the fields in your animator's reference view.
And specifically, this overlay can help you visualize fields, collision bounds, attachments, and whether or not a particular item is in motion or at rest. Now, you might be wondering: this is not going to be API, but it will be accessible in LLDB, and we are advertising it as an available debug feature on UIDynamicAnimator. And it's really simple to use. Just pause the debugger, find a reference to your dynamic animator, set debug enabled to true, and you will have this great overlay depicting all the physics.
[Applause] Now in addition to debug enabled and disabled, we are also offering debug interval. And this is a way that you can tune how often we update that debug overlay. So, by default, that will be updated on every animation frame, but if you have a lot of complex physics, it might be beneficial to change that to five, for example, to only update the overlay on every fifth frame.
And we're also allowing you to adjust the animation speed of the dynamic animator. This might be helpful for slowing things down to observe what's actually going on. It's important to keep in mind that slowing things down can affect the results of the simulation, so always verify correct behavior at 1x.
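Since none of this is public API, here is a hedged sketch of how you might flip these switches during development. The property names follow the session's wording, they are reached here through key-value coding, and they may change between releases; you can evaluate the same lines from LLDB while paused:

    // Development only: private debug hooks on UIDynamicAnimator.
    animator.setValue(true, forKey: "debugEnabled")        // show the overlay
    animator.setValue(5, forKey: "debugInterval")          // refresh the overlay less often (the session's example: every fifth frame)
    animator.setValue(0.2, forKey: "debugAnimationSpeed")  // slow the simulation down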
So, next, let's talk about UIDynamicItemBehavior. Now, if you recall, this is a way to alter the physical properties of your view or a dynamic item, and it can be applied to one or more dynamic items. And in David's example, he applied a lower density and a higher resistance to the FaceTime square to make it really stick to the field corners.
So a few more examples of the existing properties here. We have elasticity, friction, we saw density and resistance, we have angular resistance, and these are all great ways to adjust how your item feels in the animation engine. In iOS 9, we have added two additional properties: charge -- this affects the degree to which your item participates in our new electric and magnetic field types; and we also added an anchored property. This one is a little bit different. But what it does is it allows your item to participate in the dynamic system, and participate in collisions, but it will obtain no velocity of its own. So it really behaves more like a collision boundary.
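A sketch of a UIDynamicItemBehavior touching the properties mentioned here, including the two new iOS 9 additions -- the values are illustrative, and slidingView and animator are the ones from earlier:

    let properties = UIDynamicItemBehavior(items: [slidingView])
    properties.elasticity = 0.6          // bounciness in collisions
    properties.friction = 0.2            // resistance when sliding against other items
    properties.density = 0.5             // relative "weight" in the engine
    properties.resistance = 2.0          // damping of linear velocity
    properties.angularResistance = 2.0   // damping of rotation
    properties.allowsRotation = true

    // New in iOS 9:
    properties.charge = 1.0              // participation in electric/magnetic fields
    properties.isAnchored = false        // true makes the item act like a collision boundary

    animator.addBehavior(properties)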
So next, I would like to talk about UIAttachmentBehavior. This allows you to constrain two dynamic items such that they maintain a particular distance from each other. And you can configure this with damping and frequency to make it behave more like a spring as opposed to a rod. And this is a great attachment. You know, it's very useful, but it's really only one way to constrain two items with respect to each other.
So, in iOS 9, we have added some additional attachment types. The first of which is a limit attachment. This is quite similar to the distance attachment that we just described, however, instead of being constrained by what could be thought of as a rod or a spring, this one behaves more like a rope between the two items, where the only constraint is a maximum distance from each other. And you configure this similar to the distance attachment by specifying two points offset from each item's center. Very simple. Next, we have a fixed attachment.
And this one is a little bit different than the limit or the distance attachment. And you create this type of attachment by first specifying an anchor point. This anchor point is in the coordinate space of your reference view, with respect to each item's center. And this type of attachment offers no movement whatsoever between the two items. It's much like a welded rod between both items as opposed to a rod that allows them to spin on the ends.
And we have also added a pin attachment type. This one is similar to the fixed attachment where you start by specifying an anchor point between two items. But this type allows two items to rotate with respect to each other, about this anchor point. And this allows you to specify a rotatable range, which by default would be unbounded but we could bound it down to something smaller like so.
And finally we have added a sliding attachment. Now the sliding attachment is a little bit more complicated; we'll look at an example in just a second. But just like the fixed and the pin types, we first specify an attachment anchor point that's in the coordinate space of the reference view.
But unlike those types, we also need to specify an axis of translation. And this is where all relative movement between two items will be along the axis of translation. And this type prevents all relative rotation of the two items. So they are really fixed rotationally with respect to each other and they can only move along the axis of translation.
But just like the pin type, you can limit this translatable range. So if you do specify a translatable range, it needs to include the attachment anchor point, where the anchor point is defined as the point zero in the range. So if we set this system up with this type of attachment, we can have linear motion between the two items, like that. So that's pretty complicated. Let's look at a basic example. And to do that, I want to return to our sliding example one more time.
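Before the example, here is a hedged sketch of how these new attachment types are created in Swift -- itemA and itemB are placeholder dynamic items, the numbers are illustrative, and the exact parameter labels have shifted slightly between Swift versions:

    // Limit attachment: behaves like a rope; the only constraint is a maximum distance.
    let limit = UIAttachmentBehavior.limitAttachment(
        with: itemA, offsetFromCenter: UIOffset(horizontal: 0, vertical: 20),
        attachedTo: itemB, offsetFromCenter: UIOffset(horizontal: 0, vertical: -20))
    limit.length = 150

    // Fixed attachment: no relative movement at all, like a welded rod.
    let anchor = CGPoint(x: 200, y: 300)   // in the reference view's coordinate space
    let fixed = UIAttachmentBehavior.fixedAttachment(
        with: itemA, attachedTo: itemB, attachmentAnchor: anchor)

    // Pin attachment: the items can rotate relative to each other about the anchor.
    let pin = UIAttachmentBehavior.pinAttachment(
        with: itemA, attachedTo: itemB, attachmentAnchor: anchor)

    // Sliding attachment: relative movement only along one axis, no relative rotation,
    // optionally limited to a range that must include zero (the anchor point).
    let sliding = UIAttachmentBehavior.slidingAttachment(
        with: itemA, attachedTo: itemB, attachmentAnchor: anchor,
        axisOfTranslation: CGVector(dx: 0, dy: 1))
    sliding.attachmentRange = UIFloatRange(minimum: -100, maximum: 100)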
Now, I mentioned earlier that had we tried to make this sliding behavior in the past, we would have had to add a collision boundary on the bottom, on each side, and somewhere off screen above the top, to constrain the motion of the sliding view along the vertical axis. With the new UIAttachmentBehavior, we don't need to do that anymore; we can use a sliding attachment instead. So we limit the system to one collision, making the performance better, and the code actually quite a bit more readable.
So if we enable our debug view here, you can see the sliding attachment depicted by the straight line along the vertical axis. It expands and contracts as we slide the view, but there's also another attachment there, and that's a distance attachment that we use to attach to an anchor point that's manipulated by a pan gesture recognizer. So, unlike David's demo, this one's entirely within the dynamics system. We don't disable or enable anything. We just stay in dynamics. Pretty cool.
So, finally, let me give you a quick update on UISnapBehavior. If you recall, UISnapBehavior is a higher level behavior. And it can be used to move a view from one location to another with a snap-like effect. And SnapBehavior allows you to customize the damping of the snap, which can really adjust, you know, the snappiness of how it feels. In iOS 9, we've added the ability to customize the snap point after init time as well, which is pretty cool. So let's look at a quick example, here.
So if we try to pan the view here, with our debug overlay on, it will fall back to the original snap point. If we tap in another location, it will snap to the new point, and that's just adjusting the snap point on an existing snap behavior. Pretty cool.
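A small sketch of that, assuming a view controller that already owns animator and slidingView:

    var snap: UISnapBehavior!

    func setUpSnap() {
        snap = UISnapBehavior(item: slidingView, snapTo: view.center)
        snap.damping = 0.5          // lower values feel snappier
        animator.addBehavior(snap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        // New in iOS 9: the snap point can be changed after initialization.
        snap.snapPoint = gesture.location(in: view)
    }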
And you will also notice, with the debug overlay, that this is actually a composite behavior of its own. There are four attachments here, configured as springs, that snap the view to the new position. It's really quite cool. So that's what's new in UIKit Dynamics in iOS 9. I would like to turn it over to David to talk about visual effects. [Applause]
[David Duncan]
Good evening, everybody. So we are going to talk about using visual effects to add style to your application. So we are going to motivate this with just a simple image viewer application, where what we want to do is show the user some extra information about the photo they are currently looking at.
And so as you see right there, this image happens to have a little overlay that tells us what the file name is of the image. So we are going to walk through how we actually create that. And so, the first step is that you need to create a blur effect. And we have three different styles, the extra light, light, and dark styles.
And then you create a blur effect from those styles. And that's just how you do that right there. And finally, you create your visual effect view with that blur style. Then you just add whatever layout you need to do and you can get the blur that you see on screen there. The next step is we are going to add a vibrancy effect.
And what vibrancy does is it really makes something pop, makes it stand out when it's over a blur. And so what we are going to do is create that vibrancy effect from a blur effect. As mentioned, it's really intended to be overlaid on a blur, so we start with that blur effect to create the vibrancy effect.
We create our vibrancy effect, just like we did before with the blur. And then, in this case we're going to add it to the content view of the blur view. Now, it doesn't have to be directly added to the blur, but there should be a blur that you see behind that visual effect view. And finally, we add the label to the content view of the vibrancy view. And the reason we are adding these things to the content view of the visual effects view, is because that ensures that we get the correct effect for all the content that you are presenting.
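Putting the whole thing together, a sketch might look like this -- the image view, frames, and file name are placeholders:

    // Blur behind the label...
    let blurEffect = UIBlurEffect(style: .dark)
    let blurView = UIVisualEffectView(effect: blurEffect)
    blurView.frame = CGRect(x: 0, y: 0, width: 320, height: 60)
    imageView.addSubview(blurView)

    // ...and vibrancy on top of it, built from the same blur effect.
    let vibrancyEffect = UIVibrancyEffect(blurEffect: blurEffect)
    let vibrancyView = UIVisualEffectView(effect: vibrancyEffect)
    vibrancyView.frame = blurView.bounds
    blurView.contentView.addSubview(vibrancyView)

    // Content always goes into contentView so it picks up the effect correctly.
    let label = UILabel(frame: vibrancyView.bounds)
    label.text = "IMG_0042.JPG"
    label.textAlignment = .center
    vibrancyView.contentView.addSubview(label)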
And so, when you have done all of that, you get the lovely label on top of the blur as you see on the screen. So what's new in iOS 9 here? Well, the first thing we have done is we have made it really easy to cleanly animate the bounds of your view. So that you can show more information in that blur view to the user without having to do anything really complex. But, in addition, we have made it possible for you to animate the blur.
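A minimal sketch of that effect animation, assuming blurView is the UIVisualEffectView from the previous snippet; setting the effect to nil inside the animation block animates the blur away entirely:

    UIView.animate(withDuration: 0.5) {
        blurView.effect = UIBlurEffect(style: .dark)
    }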
And so now, if you have a night mode in your app, for example, you can do a really clean animation from day to night in your application and move the user along. Now, the next thing we are going to talk through, briefly, is how we actually get these effects to the screen. What does it do and why do you need to know? It's important because this all has impacts on both performance and correctness. So little baby Sophia here is going to walk us through adding a little overlay to her little UI here.
So the first thing to do is figure out: where are we capturing? Whenever we see a visual effect, we figure out what content we need to capture for it, and we move it offscreen. So we copy that little piece out. And now that it's offscreen, we can actually work with it. But why did we take it offscreen? Well, one reason is that we need to make sure that we get the correct effects, and in this case that we capture everything we need to blur for that blur effect. But we also often do these things offscreen so that we don't mess up the content that's already on screen while we are applying effects like this. So we apply the blur to it.
And finally we copy it back into the position where the effect view wanted it, and all of this is the definition of something you might have heard of before called an offscreen pass. It's whenever we take content, copy it into an offscreen buffer, do work, and then bring it back on screen.
So what are some other ways that we can get offscreen passes? Well, as you can see, we have got alpha: if you have a complex view hierarchy that needs alpha applied to it, we can't just apply the alpha to the individual views, because you wouldn't get the correct effect. Instead we need to take the entire complex hierarchy offscreen, render it, and then apply the alpha to the whole thing. Masking has a very similar reasoning behind it, in that we need to have all the pixels for the mask.
As we just mentioned, blur and vibrancy also go offscreen. But snapshotting, why is that up there, you might ask yourself? Well, first off, what is snapshotting? We have got these two UIView methods, snapshotView(afterScreenUpdates:) and drawHierarchy(in:afterScreenUpdates:), and the UIScreen method snapshotView(afterScreenUpdates:), and these all hand you back content from a snapshot.
Well, a snapshot is basically doing the same thing as an offscreen pass but giving you control over the final step of copying it back to screen. We take all content that you asked us to snapshot, render it offscreen, and then hand you back a view or pixel content representing that image.
But, again, what does this have to do with making sure that your effects are correct? Well, unfortunately, if you get a visual effect caught up in all of this, as you can see, Sophia has lost her blur. And that's what you will see on screen if a visual effect ends up getting caught in an offscreen pass that you didn't expect it to go into. So back to motivating this: I'm sure you have all been to the multitasking sessions this year, and if not, you should watch them on video afterwards.
But one of the key things that you need to have a great app participating in multitasking is good performance on screen, because now the performance of your app also affects the app running alongside it. And so, since we don't have any scrolling in this particular example, we decided that instead of keeping the blur rendering all the time, let's just take a snapshot of it. And so we took a snapshot of that particular visual effect view.
But then what happens is that the capture happens offscreen, and since you only snapshotted the visual effect view, there's nothing in that capture area. So the capture gives you back nothing, the blur has nothing to blur, and you get the broken effect you saw before.
So, now that we have seen how you can have your effect broken, what can we do to fix it? Well, first thing is, we have this handy method on visual effect view, called, what's wrong with this effect? [Laughter] [Applause] Just like with the dynamics debugging flags this isn't available in the SDK as such, but you can call it from in the debugger, just like this, and you will get back a string that looks kind of like this. In this case, we found that there was a mask view somewhere up in the hierarchy that was causing the visual effect to go offscreen, and thus not capture as much content as it needed to, to render properly.
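The session demonstrates this as a debugger invocation. A hedged sketch of what that might look like from LLDB while paused -- the selector spelling here is the commonly reported one, self.blurView stands in for however you reach your UIVisualEffectView, and since this is a private debugging helper it may change or disappear:

    (lldb) expr -l objc -O -- [self.blurView _whatsWrongWithThisEffect]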
So how can you fix this? The first way, if you are using either alpha or masking, is to rearrange your view hierarchy. What we have here is just some container, maybe the window, and a container view that contains a blur and further contents. Well, in this case the blur doesn't actually need to participate in the alpha or masking we have, so we just rearrange to have the blur as the first subview and the container as the second subview. The container and everything in it will then render on top of the blur, and we can apply alpha or masking to the container view without messing up our blur.
A second thing that we can do for masking is, instead of masking the container view, we can move that mask down onto the content that we actually need masking for. Now, as we mentioned before, masking will often take an offscreen pass, so you should be very careful about performance when doing a transformation like this.
Finally, with snapshotting, as we mentioned before, snapshotting is only going to capture what you tell it to. So, in this case, that content view that we are asking to snapshot has transparency in it. So we can see things behind it. But if we snapshot just that view, we aren't going to get that in the blur and it is going to look a little funny.
So if we move the snapshot up, all the way to the window is usually the easiest thing to do, but sometimes you might need to move it all the way up to the screen. So if you are going to snapshot blurs, you should make sure that you snapshot as far away from the blur content as possible so that you are sure you get everything you need for it. And so with that, let's switch over to some best practices with dynamics and AutoLayout.
So, the first thing is that you might have fairly complex view hierarchies working inside of dynamics, and what you want is for the outer view to participate in the dynamic system but not the inner views -- those are just going to be laid out like anything else. So you can use UIKit Dynamics for the outside view just by setting translatesAutoresizingMaskIntoConstraints to true. And, yes, this is the only slide at WWDC that says true.
And then you can use AutoLayout to just position everything else inside, just like you always have, or using the new syntax as shown on the slide. Another thing you can do is, often you will have items in the dynamic system but you might have labels for them that shouldn't participate, but they need to follow.
And so Lola here has her little tag that says what the file name is, and we just have this anchor that represents our AutoLayout constraints, and then when dynamics comes in and wants to move Lola around, the label moves with it, but the label does not end up interacting with dynamics.
Finally, you can work with dynamics by creating a custom dynamic item, just as Mike mentioned earlier. You just subclass NSObject or some other appropriate class, conform to the UIDynamicItem protocol, and provide the required properties: a bounds of some size (it can't be zero-sized, or the dynamic system is going to throw an exception), plus center and transform, and you use the values of those to construct Auto Layout constraints or otherwise change something outside of your system. And so to close out, we are going to show you a demo of how you can do just that.
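As a rough sketch of what such a custom item might look like -- purely illustrative, and different from the demo's actual class -- here's a non-view object that copies its dynamics-driven center into two layout constraints:

    // A non-view dynamic item that drives Auto Layout constraints.
    class ConstraintDrivenItem: NSObject, UIDynamicItem {
        private let xConstraint: NSLayoutConstraint
        private let yConstraint: NSLayoutConstraint

        init(xConstraint: NSLayoutConstraint, yConstraint: NSLayoutConstraint) {
            self.xConstraint = xConstraint
            self.yConstraint = yConstraint
            super.init()
        }

        // Must be non-zero, or the dynamic system throws an exception.
        var bounds: CGRect {
            return CGRect(x: 0, y: 0, width: 100, height: 100)
        }

        // As dynamics moves the item, copy the center into the constraints'
        // constants so the real view follows along via Auto Layout.
        var center: CGPoint = .zero {
            didSet {
                xConstraint.constant = center.x
                yConstraint.constant = center.y
            }
        }

        var transform: CGAffineTransform = .identity
    }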
So what we have got here is, again, a simple application that just shows a photo. But we want to be able to show the user the faces in the photo with some style. And so when we tap, our dynamic item system stretches out and animates in that blur.
And if you tap again, of course, it will move out with a nice little effect. But if you keep tapping, you can see that it responds very fluidly to the dynamic system and doesn't have a very set, rigid path. So it's very reactive to exactly what the user is doing.
So how do we get that to work? The first thing we have is this face layout guide, which is just a subclass of UILayoutGuide, and inside of it is a little bit of dynamics. We have this face layout guide dynamic item that, again, subclasses NSObject and conforms to UIDynamicItem, and it manages a constraint by setting the constant of that constraint to either the x or y value of the center point whenever that changes.
And then in here, when you set up the layout guide, it's got a center position, and it creates four additional dynamic items that represent the top, left, bottom, and right in that system. We assign them the constraints -- the constraints just work from the top left corner of the dynamic animator's reference view -- and we use sliding attachments to limit where those four dynamic items can go with respect to the center position.
And this keeps it from flying outside of the system or collapsing to too small a size. Now, in the view controller, the way we get this behavior is we have a gravity behavior along with the face layout guide, and we center them on top of each other. So when the gravity changes, the layout guide will move appropriately. And we get the blur effect from our storyboard, so that we don't have to constantly twiddle with it if we decide we want to change the style we are using.
And we use constraints to attach that blur view to the face layout guide. So that when that guide changes size, the blur view will also change with it. Now, in order to make it look like the blur view is cutting out everything but the faces, we actually have a little trick.
We make impostors from the original image: we cut out the regions that we want, apply a mask to them, and create additional UIImageViews to place on top of the one that's already there. And so it looks like the blur is just dodging the faces, but what is actually happening is you're seeing the views that were placed on top, which have the faces cut out on them.
Finally, we have this tap gesture recognizer that we set, and it will be the thing that causes our dynamic system to change. When we want to expose the faces, we just change the gravity. So normally the gravity is causing everything to be pulled into the center, but then when we flip it, it wants to push everything out like a repulsor.
And then we take advantage of the fact that we are always laying out subviews during this to actually trigger the blur animation in or out. And that's all it takes to build this really nice effect from something that Auto Layout and dynamics can both provide easily for you. And so we will go back to slides to finish out.
So in summary, we want you to use these technologies to really improve the user experience. So when you add a blur, you add it so that you can put additional information offset from your content. When you use dynamics, it's so that you can have a user interface that responds nicely and as the user expects to their input.
But you also want to consider performance when you are doing these things. Because if you have a lot of dynamic items on screen, that can really bog down the user interface, and you don't get the nice physics that you were expecting. So use it with caution. But also for visual effects, if you have a lot of them, you will end up with lots of offscreen passes and that can incur quite a cost as well.
So these are all the related sessions that we have got for you today. Unfortunately most of them have occurred before. There's only one that hasn't and that's occurring with us, called "Building Responsive and Efficient Apps with GCD." And we will be in the lab after this to take all of your questions and help you get the most out of what you have got for this year.
We have got various documentation, and sample code for the StickyCorners example that you saw earlier in the presentation should be up and available, and, of course, Curt Rothert will take your questions through email. And I'm glad you stuck with us on Friday. I hope you had a great WWDC and a safe trip home. [Applause]