iPhone • 1:04:21
Core Animation powers the dynamic user interfaces and visual effects seen on iPhone OS and Mac OS X. Come see Core Animation in action and learn about its layer-based architecture, advanced capabilities, and recommended practices. Find out how to use Core Animation in your application and go beyond the built-in animations provided with Cocoa and Cocoa Touch.
Speaker: John Harper
Transcript
This is session 303, Core Animation Techniques for iPhone and Mac. My name is John Harper. I'm one of the-- is this actually working? That's annoying. How do we do these things?
[ Laughter ]
This one.
Oh, the Back button. OK. [Laughter] Right, I got it. OK. Sorry about that. So, my name is John Harper. I'm a member of the Quartz Engineering Team. I work on Core Animation most of the time.
So we're going to talk about a number of things today. So let's see what they are. First of all, I'm going to talk a bit about the background of Core Animation and why you'd want to use it, what it is, what can it do for you, that kind of thing. Then the bulk of the session is going to be about different ways to use it, how to use little bits of code, just kind of practical things and tips hopefully.
And then finally we'll wrap up with a little bit about performance at the end. So let's get started. So, Core Animation is really a 2D compositing framework. Despite the name being Core Animation, the bulk of it is really layer compositing and then animation of those layers.
And layer compositing is really a very kind of well-used and well-known technology where you have a bunch of graphic elements and you composite them together somehow with a bunch of effects, and so you often see this in things like After Effects, Photoshop, Motion, all those kinds of graphic design apps.
And so when we came to look at how to design the next generation of user interface software, we really decided that, you know, we wanted to marry the kind of hardware compositing of the desktop compositor with the kind of nice extra level of expressibility of those graphic apps. So, that's kind of what Core Animation is, I guess. At least that's where it aims to be -- we have some constraints that those don't.
And so really the way it works out is that your user interface, instead of just being a bunch of drawing commands, is now a tree of layers, and the layers can have drawing commands applied to them. These layers can have positions and content and extra special effects and things.
And then once we have a layer tree, it becomes very natural that you know, we can now move things around and animate them because we have this nice kind of declarative structure. So, first I just want to show you a kind of a concrete example of what I'm talking about.
So let's hope that switches. So, we have this little demo of something you've probably seen before, which is the kind of iPhone lock screen. And obviously on the iPhone this is using Core Animation to composite, and this one here is as well. And you kind of see, you have this kind of nice UI, a lot of graphics.
And then when I start to kind of bring out what the layers are, you can kind of see that this is really, you know, not one element drawn at once. It's a lot of different pieces and then that kind of composites together as layers, with you know, different bits of text over other text, the 2 copies of the battery for reflection. A gradient to give you the rim.
And then at the bottom, you can see that there's this kind of slide-to-unlock thing and it has this nice pulsing text animation. And so, the way that is configured is really there are 2 layers there as well, and one of them is kind of a gradient, one of them is text.
And then when we move, you know, we animate one on top of the other and mask them together, and we get back to this animation you've probably seen before. And obviously, when you kind of put it all back together, we end up back where we started, with all the different layers composited. So that's really what I mean by layer compositing.
And so that's, you know, that's what CA is, it's a compositor and an animator. And so, where does it live in the system? So, you know, you've probably seen something like this diagram before, but basically we have your applications sitting on top of some kind of Cocoa UI framework, sitting on top of some more hardware-specific graphics APIs and sound APIs. And then that sits on top of the hardware and Core OS, the low-level kind of systems.
So the piece we're talking about today obviously is the Core Animation block, which sits, you know, below Cocoa and above the hardware, but often your applications will need to call into Core Animation directly to get the benefits when the UI frameworks don't quite do what you need. And obviously, this is just one OS, we now have two. So if we look at the iPhone version, this is very similar and we basically have the same kind of stack, with some pieces removed and other pieces added.
But they're very, very similar, so you can think of them in the same way. So as I kind of hinted, we believe that most of the time, we should be able to get away without using Core Animation directly and that kind of goes for lots of the graphic APIs.
Really, we have all these kind of high level frameworks and they're there for your benefit. You know, most of the time, you're dealing with UI widgets and TableViews and those kinds of things, so there's no real point, you know, going down to the low level APIs to try and reinvent all that stuff.
It just doesn't really make good use of your time. And so the really nice thing about those frameworks though is that they will be using Core Animation for you. So every time you create a view, you'll create a layer as well implicitly. At least that's always true on the iPhone.
I guess I should say you have to opt into that mode on the Mac but it's pretty easy to do so. And then the great thing about that is that when you do have layers behind all your views, then when you get-- you run up against the limits of the UI frameworks, you can really, you know, get-- kind of drop down very cleanly into CALayer and then start adding custom animations, custom kind of layer effects right into your existing view tree.
And I guess the one exception to this, these kind of rules is when you really have a-- an application that doesn't fit the standard UI paradigms. And if you had, think of something like Front Row on the Mac where really it doesn't have a lot of standard buttons, you probably might want to consider just using Core Animation as the entire composited UI. And that's kind of a gray area because if you think about something like the iPhone coverflow implementation, obviously it's a very kind of semi-immersive type UI in that you have these kind of album covers. But really, it's sitting inside the normal UIKit view tree.
And even when you flip over the covers, you see that we have the UIKit TableViews on the back. So you know the best use of these things is trying to marry them together in the best possible way to their strengths. So hopefully that's enough about why you should use this stuff.
So now we're going to move on to how to use it I guess. And we're going to start off very, very easily, very basically. So this idea here is we-- you know every layer has some position in space, so we're going to start by creating ourselves a layer object.
Then we'll give it a rectangle and we'll basically add it to a parent layer so that we have some way of being visible on the screen. So what you're seeing on the left here is the dotted outline which represents where this layer would be. Obviously, we have not actually done a thing to give it any content yet so we don't actually see anything. So, next thing we might want to do is give it a background color.
In this case, we're going to set it to red and we're using the UIKit kind of color creation syntax just because it's shorter. But you can see, we just-- all we had to do was set that property to be red and now we have the layer drawing itself on screen as a red rectangle.
Also, we could assign an image, and pretty much this happens the same way with the same amount of code, and we're actually loading an image off disk here as well. And then, you know, we can start to get fancy and just kind of do things like modify the corner radius so we have a round rect instead of a rect, or we could set the opacity to 50 percent just to kind of fade it out a bit.
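As a rough sketch of what that sequence looks like in code on the iPhone side ("parentLayer" and the image file name here are placeholders, not from the session's slides):

```objc
#import <QuartzCore/QuartzCore.h>

// Create a layer, give it a rectangle, and add it to a parent so it can
// appear on screen (it has no content yet, so nothing is drawn).
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(20.0, 20.0, 200.0, 150.0);
[parentLayer addSublayer:layer];

// Give it content: a solid red background...
layer.backgroundColor = [UIColor redColor].CGColor;

// ...or an image loaded off disk.
layer.contents = (id)[UIImage imageWithContentsOfFile:@"artwork.png"].CGImage;

// And a couple of the simple effects mentioned above.
layer.cornerRadius = 10.0;   // round rect instead of a rect
layer.opacity = 0.5;         // fade it out a bit
```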
And you kind of see that this is basically the bulk of our API. We have objects, you know, layers and animations, and you set properties on those objects, and somewhere in the background the compositor goes off and kind of updates the screen to do what you asked it to. OK, so a little bit deeper now. That was providing an image which we loaded off disk, but a lot of times that's not good enough, because your content obviously mostly depends on user behavior or, you know, whatever -- something that's not predictable.
So you really need to be able to draw into the layers as well, and we-- the basic way to do that is you have to give us a method which will do your drawing into some CGContext. And so there's 2 ways of doing that. You can either provide a subclass of the layer with a drawInContext method. This is kind of almost exactly the same as the UIView drawRect method.
Or what's often easier is to basically just give the layer a delegate object and then implement the delegate version of the same method. Either way, it ends up kind of doing the same thing in that you will have some method which can do your drawing for you. Now when you've given us that method, you then need to tell the layer that it needs to redraw, which is the setNeedsDisplay method.
And what happens then is that once you've told the layer "I need to update and I have a way of doing that", then at some point before the layer is next put on screen, next composited, we will call your method with some kind of rendering context in which you can, you know, issue your drawing commands such that they will end up on the screen at the right point, you know, as soon as possible.
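A minimal sketch of the subclass route (the delegate route is the same drawing code in a -drawLayer:inContext: method on a delegate object instead; the class name and drawing here are placeholders):

```objc
@interface MyDrawingLayer : CALayer
@end

@implementation MyDrawingLayer

// Called by Core Animation before the layer is next composited, once the
// layer has been marked as needing display.
- (void)drawInContext:(CGContextRef)ctx
{
    CGContextSetFillColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextFillRect(ctx, self.bounds);
}

@end

// Elsewhere, in your setup code: create the layer and mark it dirty so the
// method above runs.
MyDrawingLayer *layer = [MyDrawingLayer layer];
layer.frame = CGRectMake(0.0, 0.0, 100.0, 100.0);
[layer setNeedsDisplay];
```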
So let's talk a little about how to optimize this. So, you know, typically the first time you draw, you redraw the entire layer. The second time you draw, you may know that only this piece changed, so you only really need to draw that rectangle. So obviously, a good thing to optimize is not redrawing the entire layer.
So using the setNeedsDisplayInRect method instead of setNeedsDisplay is just a way to say, you know, I know I need to redraw but I also know which part of the screen. Now, we can take that a little further in that, you know, now we know which rectangle we're drawing.
When you get us to draw, you can also ask the CGContext which rectangle are you letting me draw into, because obviously if you'd say I only want to draw this region, then we will only let you do that because we don't want to let you, you know, waste memory bandwidth.
So that's a good thing if you have a lot of complex drawing state -- you may have to be generating images to do your drawing -- then you can kind of do your own culling up front and say, well, I know that this region is not in the clip rect, so therefore I don't have to generate any of the state to update that part of the screen.
And finally one of the things we keep harping on about is that if you have opaque content, you really must tell the layer that, because we can't work it out for you. And if you have opaque content in the layer, it really means a lot to us because we can, you know, avoid drawing everything beneath it. We can avoid compositing that object, and it makes the performance a lot better if you do that across the application.
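A sketch of those three tips together; the subclass, the noteRegionChanged: method, and the dirty-rect bookkeeping are hypothetical stand-ins, not from the session:

```objc
@interface PartialRedrawLayer : CALayer
- (void)noteRegionChanged:(CGRect)dirtyRect;
@end

@implementation PartialRedrawLayer

- (instancetype)init
{
    if ((self = [super init])) {
        // Our drawing covers every pixel of the bounds, so tell CA it can
        // skip compositing whatever is underneath.
        self.opaque = YES;
    }
    return self;
}

// Call this with just the region your model says has changed.
- (void)noteRegionChanged:(CGRect)dirtyRect
{
    [self setNeedsDisplayInRect:dirtyRect];
}

- (void)drawInContext:(CGContextRef)ctx
{
    // The context is clipped to what actually needs redrawing, so cull any
    // expensive state generation against it.
    CGRect clip = CGContextGetClipBoundingBox(ctx);
    if (CGRectIsEmpty(clip))
        return;
    // ...generate and draw only the content that intersects 'clip'...
}

@end
```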
So, moving back to images, we're going to hop around a bit, but back to images, we have this idea of often your UI would draw kind of 3 or 9-part images. I hope probably some of you know what that means. And what that really is, is that you know when you have an image, it represents some part of your UI like a button or something.
And then if you need to scale the button, you would not want the image to scale uniformly, you want only the middle to stretch, with the edge caps remaining static. And so, you know, you could do this in previous versions of CA by creating multiple layers or rendering the image using CG or, I don't know, I guess that's about it. But-- but now, we have a way of doing this kind of built in to the framework, and this works a lot better.
So what you can do is if you look at the top image on the left here, you can see we have this button artwork and we want to stretch it across the layer the way the bottom image looks. So what we're doing in the code example here is saying, "OK, I'm going to tell the framework that the center of this image, which is the part we actually will scale, starts 20 percent from the left hand side and extends for 60 percent of the width", which is this 0.2, 0.6 thing.
And obviously it's the full height of the image. And then when we actually scale and resize the layer, what happens is, you know, we resize only the center part, and you preserve the kind of the look of the button artwork. And the reason that's really nice is because, obviously now we know about this, we can animate it really well. We can, you know, stretch it as the layer is animating, we don't ever get any cracks or seams or, you know, little artifacts, and it just works out a lot better.
It's also higher performance than creating, you know, 9 layers to represent the same thing. So finally, there is one other property we'll mention briefly, which is you may also want to have, you know, one artwork image and then put it in multiple layers and pull different pieces out of it, 'cause that's another way to gain efficiency, and the way you can do that is by using this contentsRect property, which you can find out more about in the documentation.
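As a rough sketch of the contentsCenter stretch described a moment ago, using the 0.2/0.6 unit rectangle from the slide (the image name is a placeholder):

```objc
CALayer *button = [CALayer layer];
button.contents = (id)[UIImage imageNamed:@"button.png"].CGImage;

// Only the middle region stretches: it starts 20% in from the left, covers
// 60% of the width, and the full height. The caps on either side keep
// their size when the layer is resized or animated.
button.contentsCenter = CGRectMake(0.2, 0.0, 0.6, 1.0);

// Resizing the layer (or animating its bounds) now stretches just the
// center part of the artwork.
button.frame = CGRectMake(0.0, 0.0, 300.0, 44.0);
```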
So, one more slide about images. So, in the past, large images have been kind of a problem in that, you know, the GPU gives us some size limits, and when the images your apps gave us were larger than that, we basically just dropped them on the floor, which was kind of unfortunate. So what we're doing now in the current releases of both platforms is we'll actually do all that tiling work for you, so you can give us kind of 8K by 8K images if you have enough memory, and the framework will basically deal with the little problems of drawing that to the screen. The caveat here is this clause about if you have enough memory. And the problem obviously with 8K by 8K images is they're an awful lot of memory.
That's probably more memory than your iPhone actually has, so it's not really going to work, except for you know, certain cases. So for very large images like this, we still recommend that you use the TiledLayer mechanism. And the TiledLayer is really just another type of layer, but instead of asking you to draw the entire thing at once, it would really ask you to draw in lots of little tiles as they're needed, and that's great for memory usage because obviously if you're only looking at part of the image on screen, we don't need to ask you for the pieces that are off screen.
Similarly, we can do a lot of good stuff with multiresolution data so that, you know, if your image is, you know, 8K by 8K again and it's scaled down so you're only seeing it at screen size, so the entire thing fits, then you could have a resolution level which matches the screen size, and we could draw that a lot more efficiently. So that really means that we can ask you to provide data at like 50 percent and 25 percent and so on.
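A minimal sketch of that tiled-layer approach, assuming a hypothetical subclass that knows how to render pieces of the big image:

```objc
@interface HugeImageLayer : CATiledLayer
@end

@implementation HugeImageLayer

// CATiledLayer calls this once per tile it needs (possibly on background
// threads), with the context already clipped and scaled for that tile.
- (void)drawInContext:(CGContextRef)ctx
{
    CGRect tile = CGContextGetClipBoundingBox(ctx);
    // ...draw just the piece of the large image that covers 'tile'...
    (void)tile;
}

@end

// In your setup code: configure the tile size and how many downsampled
// levels you can supply, so zoomed-out views don't pull full-resolution data.
HugeImageLayer *layer = [HugeImageLayer layer];
layer.frame = CGRectMake(0.0, 0.0, 8192.0, 8192.0);
layer.tileSize = CGSizeMake(256.0, 256.0);
layer.levelsOfDetail = 3;   // e.g. 100%, 50%, and 25% versions
```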
And then finally, the one kind of thing you have to be aware of here is that, you know, if you use the tiled layers, your apps will look different, because the kind of tradeoff is that we ask you for the data after we know it's needed, and it may take you a little bit of time to actually provide the data, so there will be a brief kind of pause as your app provides the tiles, and the tiles kind of fade on as soon as they're provided. So one last performance tip here is for when you do have lots of large images, and we're talking really large, and you want to display them.
The problem-- you know, the bottleneck when you start using the tiled layer becomes that you will still be decoding images out of JPEG compression or whatever. So the trick there is, if you know you're going to be using tiles, then you should pre-tile your image files into like 256-square tiles or something, because that way you can load them off disk really fast and then everything will work a lot better. OK, so enough about images. So, the next piece of the talk is really about animating.
And so again we're going to start off kind of simple. And as you probably heard, most of the animations-- excuse me, you can do with Core Animation are implicit. And that really means that when you change a property, by default, most of these properties will animate in the background somehow. So in this case, we're going to change the opacity and we're going to set it to 50 percent.
And assuming that, you know, originally it was 100 percent, and then we're going to make-- get a-- just automatically, we'll get an animation to change the opacity of that object from 100 to 50 percent over a quarter of a second. And that's great. I mean a lot of the time you can get a lot of mileage out of just, you know, programming the properties of the layers and just watching things animate. But you often want to, you know, modify those implicit animations somehow.
So we give you a bunch of ways to do that based on this CATransaction kind of per-thread object. And so first, you can just disable them entirely by, you know, just calling the setDisableActions method. Then you can do other things like change the duration of the animations. Like I said, the default is a quarter of a second.
Or you can also change the timing curve, which is really the easing function, the acceleration of the animation. Here, we're using one of the built-in curves, but you can also kind of define your own using this kind of Bezier curve technique.
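A sketch of those transaction-level controls, wrapped in an explicit begin/commit so they only apply to the changes in between ("layer" is a placeholder):

```objc
[CATransaction begin];

// Either switch the implicit animations off entirely for these changes...
// [CATransaction setDisableActions:YES];

// ...or keep them, but change the duration and the timing curve.
[CATransaction setAnimationDuration:1.0];
[CATransaction setAnimationTimingFunction:
    [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];

layer.opacity = 0.5;   // animates over one second with the ease-in/ease-out curve

[CATransaction commit];
```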
So, that was a quick look at implicit animation. So often that's great, but it really only works for state changes. So you know, if you're changing a particular property from X to Y, then that works. But if you just want some property to kind of animate in the background, then there is no state change, so implicit animations aren't really the right thing. So in these cases, you really want to use kind of explicit programming of the animation model. So here's an example people I talk to often run into: they want to have an image kind of flipping through a bunch of frames, for example a spinner animation or, you know, whatever.
So, this code block is basically doing that. We're going to create a bunch of images, load them off disk probably, and say there are N frames in the animation. Then we create a keyframe animation, which is going to be the object which represents the spinner, so that's what we're creating here.
And obviously, since we are animating the images, we need to say the key path of this animation, the thing the animation targets, is the contents property of the layer. That's, you know, as we saw in a previous slide, that's where the image is stored. And so for the timing, we're going to say the animation is going to last for a second, and then it's going to loop repeatedly by setting the repeatCount to basically infinity.
And then the line in between, the calculation mode, is really only needed in this kind of case. And all that's saying is that, you know, because I have this flip-book and I have a lot of frames and they're going to be spinning pretty quickly, I don't want to bother trying to interpolate between images as the animation progresses, I just want to pick whichever one is closest, and that's what discrete interpolation means -- it means no interpolation. And then finally, you know, we have the images, so we can now set them as an array on the animation to provide the keyframe values.
So at this point, we have this animation object and it exactly describes the spinner that we want to see, so the next thing is we're going to add it to the layer. We add it under a key here just so we can remove it later if necessary. But once it's been added, it will just sit in the layer, spinning around, doing its thing, and just live there until you remove it.
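Put together, the spinner setup just walked through looks roughly like this (the image names, frame count, and "spinnerLayer" are placeholders):

```objc
// Load the flip-book frames off disk.
NSMutableArray *frames = [NSMutableArray array];
for (NSUInteger i = 0; i < 12; i++) {
    NSString *name = [NSString stringWithFormat:@"spinner-%lu.png", (unsigned long)i];
    [frames addObject:(id)[UIImage imageNamed:name].CGImage];
}

// The animation targets the layer's 'contents' property.
CAKeyframeAnimation *spin = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
spin.values = frames;
spin.duration = 1.0;
spin.repeatCount = HUGE_VALF;                 // loop forever
spin.calculationMode = kCAAnimationDiscrete;  // snap between frames, don't blend

// Add it under a key so it can be removed later if necessary.
[spinnerLayer addAnimation:spin forKey:@"contents"];
```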
So that's, you know, it's more code than the implicit animations, but you can see you get a lot of control here and you can really do whatever you want to some extent. So there are a bunch of other cases where you may want to do things like this.
One of the examples I can think of is if you had a cursor flashing, you know, a text cursor. One way you can do that is just by redrawing every second, but a better way would probably be to have a layer and, you know, ramp the opacity on a repeating loop, 'cause that way there will be no kind of costly redrawing happening.
And as I said, this is really just where there is no natural state transition. So, another kind of animation, the previous two slides, you know, implicit and explicit animations are both really dealing with properties of the layers. And more explicitly, they're dealing with properties that can be kind of numerically interpolated.
So we have something like opacity or color even, these little things that are represented as numbers, and therefore we can interpolate, you know, from 1 to 0 or whatever. But we also have properties that are, you know, nonnumeric and therefore not really interpolatable. So for these cases, we have something called the CATransition class, and the transition is really kind of a catch-all animator which will take the previous state of your layers and, you know, the current state, and do some kind of image-processing effect to kind of blend the two together.
So the default transition we use is a crossfade because that's, you know, the very basic one, but there are several other types as well. And if you're on the Mac, you can basically use Core Image to provide any kind of transition, so you can really write your own transitions to get some nice effects.
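A sketch of attaching a transition around a non-interpolable change ("layer" and the image being swapped in are placeholders):

```objc
// The default type is a cross-fade; several other built-in types exist,
// and on the Mac a Core Image transition filter can be supplied instead.
CATransition *transition = [CATransition animation];
transition.type = kCATransitionFade;
transition.duration = 0.5;
[layer addAnimation:transition forKey:nil];

// Changes made now are covered by the transition rather than interpolated.
layer.contents = (id)[UIImage imageNamed:@"next-page.png"].CGImage;
```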
But again, these are things you might want to disable at times and we're going to show you a different way of doing that here because often, you know, in some cases, you just want to disable an animation for a particular instance of a property. But in other cases, you may want to say, "I never want sublayers properties to cause an animation to happen because you know, it's-- it looks too heavyweight or whatever." So you can also use the subclassing delegation style coding here.
And in this case, we're going to subclass a layer and just add this actionForKey method, and if the key we're given -- which means, you know, this property has just been changed -- is the sublayers key, then the animation we return to kind of animate that state change is going to be nil, which means no animation in this case. If it's anything else, we just delegate to the super layer-- superclass, sorry. OK, yeah.
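A sketch of that subclass; the class name is a placeholder, and here it's the sublayers key being silenced:

```objc
@interface QuietLayer : CALayer
@end

@implementation QuietLayer

- (id<CAAction>)actionForKey:(NSString *)key
{
    // Never animate sublayer changes on this layer; everything else keeps
    // whatever implicit animation the superclass would provide.
    if ([key isEqualToString:@"sublayers"])
        return nil;
    return [super actionForKey:key];
}

@end
```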
So another thing you often want is to know when an animation has finished -- you know, you want to be able to trigger another animation or do something to your layer tree or run some other code, whatever. And so if you have an explicit animation like the spinner example we saw, then you can set a delegate property on the animation, and then your delegate will be messaged whenever it starts or stops animating.
But something we've added recently-- currently it's only available on Snow Leopard. It's a way to use kind of a block based approach which in some ways can be a lot nicer. So what we're doing here in this example is we're creating a transaction block which is CA's way of kind of wrapping up state changes just to make it-- so they don't leak out at all. And then we're going to set completion block property to a block of code because we now have this nice new block syntax where we can reference code as closure type objects.
So our block here is going to basically do this thing we'll come back to in a minute. But once we've set the block, we know that any animations we create in this transaction, this kind of bracketed statement, will trigger the block to run as soon as they complete. So in this case, we're obviously doing some kind of animate-this-layer-off-the-screen thing. So we're going to set its opacity to 0 and its position to somewhere way off on the right.
And what that will do obviously is trigger implicit animations. And once we've committed them and the animations have run, the code, this kind of engine will know that as soon as they've stopped, both of them have stopped, then the block would run once. In this case we're going to say, OK let's just remove that layer, now we know it's fully off screen. So the nice thing about this is, you know, it's a lot cleaner in coding style because you don't have to have another object, you don't have to deal with a lot of extra state and it just makes things a lot more localized.
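A sketch of that transaction (Snow Leopard only, since it uses blocks); "layer" and the off-screen position are placeholders:

```objc
[CATransaction begin];

// Runs once, after every animation created in this transaction has finished.
[CATransaction setCompletionBlock:^{
    [layer removeFromSuperlayer];
}];

// These property changes trigger the usual implicit animations.
layer.opacity = 0.0;
layer.position = CGPointMake(2000.0, layer.position.y);

[CATransaction commit];
```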
OK. So, another point about animations is that they only really animate on the render side, which means that the objects you see on your side, what we call the client-side objects, the Objective-C objects, really don't reflect animations that are happening at the current time. So for example if we had an opacity animation animating from 1 to 0.5, then if you query the opacity property of the layer, you will never see anything but 1 or 0.5, and that's because only the compositor is actually evaluating these animations when it's drawing the scene. We do have a way to query these values on the Objective-C side though, and the way you do that is to call the presentationLayer method on your layer, and that will actually create another copy of your layer with all the kind of properties applied.
And then any animations that are running on the layer, at the current time will get evaluated and their value is also pushed into the copy of the layer. So that means that you can then query any properties on the presentation layer and get back a good approximation of where the things actually are on screen. So you know, you can find out where is the object which is animating from A to B or whatever. And so a couple of places we've found this to be very useful, just to give you an idea why it is important.
Firstly, you know, when we create an animation, we simply want to animate the thing from where it is to where it's going to be, which means we have to look at the on-screen value of the property. You know, if the object is already animating, we don't want to start a new animation from where it used to be, we want to start from exactly where it is on screen at the current time.
So obviously we would set the fromValue of the animation to that presentation layer property. And secondly, the presentation layers really form a tree, because when you ask for the sublayers of a presentation layer, you don't get back the regular layers, you get back the presentation-side versions, the versions with all the animations applied.
So this means we can do things like hit testing across the tree as it exists on screen, just by asking the presentation layer to hit test on this point and then asking for the model layer back, so we actually kind of know whereabouts in our object tree we ended up. The model layer just goes back from the presentation to the original version, so this is kind of like the inverse.
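Both uses, sketched; "movingLayer", "targetPoint", "rootLayer", and "touchPoint" are placeholders:

```objc
// 1. Start a new animation from wherever the layer is on screen right now,
//    not from its old model value.
CALayer *onScreen = [movingLayer presentationLayer];
CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
move.fromValue = [NSValue valueWithCGPoint:onScreen.position];
move.toValue   = [NSValue valueWithCGPoint:targetPoint];
[movingLayer addAnimation:move forKey:@"position"];
movingLayer.position = targetPoint;   // update the model value too

// 2. Hit-test against the tree as it exists on screen, then map the result
//    back into the model tree.
CALayer *hit = [[rootLayer presentationLayer] hitTest:touchPoint];
CALayer *model = [hit modelLayer];
```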
So this thing is very useful. OK, so here's another new thing for the current releases. We now give you a way to use our animation system to animate your own properties in your own drawing. So we're going to run through a quick example and hopefully explain it that way. So basically-- if you have a layer subclass you can obviously define your own properties in the subclass. So here we're going to define this lineWidth property which is just a float.
And then in the implementation, we're going to declare it dynamic so that Core Animation will implement it; this is nothing new. And that really means that we don't have to bother writing any accessor methods, everything will just get taken care of. And then the new part is that if we mark the property as needing display by implementing this needsDisplayForKey method, what that is telling us is that the contents of the layer, the stuff you draw, is now kind of dependent on the lineWidth property. Because we now have this dependence relationship, it gives us the ability to say that when we see this lineWidth property animating, we know that your layer has to redraw with that effect taken into account, and so obviously your draw method will just do the normal thing and query the property. So let's look at an example of this.
This will be very quick. So, that's wrong but...I'm-- so we have this Bezier layer class, and you could see we-- we actually have the lineWidth property which is on the slide but we also have a bunch of other properties. We have 4 points because we're going to draw a Bezier curve and we need 4 control points, and we have a line width and a color so we're just going to draw a line along the curves.
So we look at the implementation file. We have the same kind of dynamic thing on the slide, and the first thing we're going to do is we're going to create a set of our keys just so we can refer to them more concisely in the future, and these are all the properties that we implement. Then we will implement some default values. This is all old stuff.
The default value is just the way to provide a kind of a default value for your class. You don't need to set them every time you create an instance. Then we'll implement this needsDisplayForKey method just using the Bezier keys set. We'll do this which is implementing implicit animations for our class. This looks like a lot of code but it's really not that bad.
But what this is doing is you know every time one of these properties is changing, we want to set up an implicit animation. And then finally, and the most interesting part is in our draw method, we're not going to really do anything, particularly interesting, we're just going to draw the spline with the parameters.
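The skeleton of that class looks roughly like this, trimmed down to just the lineWidth property from the slide (the drawing itself is elided):

```objc
@interface BezierLayer : CALayer
@property (nonatomic) CGFloat lineWidth;
@end

@implementation BezierLayer

// Let Core Animation synthesize the accessors.
@dynamic lineWidth;

// Declare that the layer's contents depend on this property, so when an
// animation of lineWidth is running, the layer gets asked to redraw.
+ (BOOL)needsDisplayForKey:(NSString *)key
{
    if ([key isEqualToString:@"lineWidth"])
        return YES;
    return [super needsDisplayForKey:key];
}

- (void)drawInContext:(CGContextRef)ctx
{
    // During an animation this is called on a copy with the in-flight value
    // already installed, so just read the property and draw normally.
    CGContextSetLineWidth(ctx, self.lineWidth);
    // ...build and stroke the spline here...
}

@end
```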
So when we run this, I have a little controller app around this class and you can see that we're drawing the spline, we have some extra handles which were drawn using CA compositing. But the-- all this spline stuff is drawn using a CG drawing. And so the interesting thing now is when I click the flip button, we can animate that in the background, this layer is being asked to draw as the animation runs and it just kind of works like a regular CA animation. We can, you know, animate multiple things at once, different speeds and it just kind of works naturally. So it's really what I wanted to show about that. Let's quit this. [Applause] OK.
So that is kind of useful, but to get the best out of it, you need to kind of know exactly, well not exactly but you know a few details about how it works so you know what to expect. Obviously normally when we run animations, we run them on a background thread. In this case because it's your code, we can't really do that so we're running the animation on the main thread off the timer once we see that the animations are happening.
And again, like the presentation layer, the way we're doing this is, before we draw into your layer, we basically create a copy of the layer, apply all the animations for the time we think we should be drawing for, and then call your draw method on the copy, so that when you get called, the animating values are kind of already installed and you don't have to do anything special.
So, you know, we can basically animate all the existing types we could animate before, you know, numbers, rects, points, colors, et cetera. And as I said earlier, it only works on, you know, formal property declarations. It doesn't work for KVC properties. And again, because we don't really know exactly when the frame sync will happen, we're basically guessing the animation times.
And it works out well -- you saw in the example that looked pretty smooth and things tracked pretty nicely -- but again this could be blocked by your main thread, for example, so, you know, there are some caveats there. Anyway, that's enough about animations. So one of the things we've added in the next release is a bunch of new classes to draw different types of content, and so we want to go over some of those and some of the old classes as well. So, first of all, one of the things we added is a way to draw your gradients natively.
So before, you would have to, you know, draw the gradient into the CGContext, which obviously takes a lot of memory, or create your own kind of gradient ramp image somehow. And now we just have a gradient layer, and you can set the gradient axis and the gradient colors and locations as we're doing in this code, and, you know, you end up with a gradient drawn in the layer.
Obviously all these properties can be animated so you can animate your colors, animate the-- the kind of the axis of the gradient and hopefully you should be able to see that, you know, the orange numbers on the right correspond to the orange numbers on the left. We have this, you know, black, blue, white gradient going on here.
Right now we only support axial gradients, but it is still kind of a nice, useful feature, especially if you want something like a full screen gradient background, because you really don't want to burn an image the size of the entire screen just to draw a gradient, so it's pretty short and sweet.
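A sketch of a gradient layer along those lines (the black/blue/white colors echo the slide; the parent layer and geometry are placeholders):

```objc
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = CGRectMake(0.0, 0.0, 320.0, 480.0);

// Color stops and their locations along the gradient.
gradient.colors = @[(id)[UIColor blackColor].CGColor,
                    (id)[UIColor blueColor].CGColor,
                    (id)[UIColor whiteColor].CGColor];
gradient.locations = @[@0.0, @0.6, @1.0];

// The axis of the (axial) gradient, in the unit coordinate space of the layer.
gradient.startPoint = CGPointMake(0.5, 0.0);
gradient.endPoint   = CGPointMake(0.5, 1.0);

[parentLayer addSublayer:gradient];

// All of these properties can be animated like any other layer property.
```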
One of the things we've had from the start is a way to draw text, but it's only on the Mac unfortunately right now. And so if you want to draw a simple string -- this probably isn't useful for very complex text uses, but for just drawing labels, it's pretty useful.
So you can create a text layer, give it a string and a font, and a font size, and any of its other properties.
You know, the string can actually be an NSString or NSAttributedString, you can have various font types. Then you can do things like change the justification or the centering or the wrapping, and you basically end up with the string drawn in the layer using Core Text and the regular Mac text APIs.
One of the new things about this in the current release is that you can now animate the properties. So for example I could animate the font size from 36 to 12. And the way that works is basically using the same kind of client-side drawing mechanism we were just talking about a couple of slides ago. So let's take a look at this in a little more detail.
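A sketch of the basic setup (Mac only, as noted above; the string, font, and geometry are placeholders):

```objc
CATextLayer *label = [CATextLayer layer];
label.frame = CGRectMake(0.0, 0.0, 400.0, 60.0);

// The string can be an NSString or an NSAttributedString.
label.string = @"Hello, Core Animation";

// The font can be a font name, a CGFontRef, a CTFontRef, or an NSFont.
CGFontRef helvetica = CGFontCreateWithFontName(CFSTR("Helvetica"));
label.font = helvetica;
CGFontRelease(helvetica);
label.fontSize = 36.0;
label.foregroundColor = CGColorGetConstantColor(kCGColorWhite);

label.alignmentMode = kCAAlignmentCenter;
label.wrapped = YES;

// On the current release these properties animate too, so something like
// label.fontSize = 12.0; would animate the size down implicitly.
```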
One of the things people often want to do with the text layer is somehow modify the way it draws. And so if you realize that the text layer is really just another layer with a drawInContext method, then you can really work out how to subclass this thing. So we're just going to create a very basic subclass here just to illustrate this. So we'll create a layer subclass called MyTextLayer and then we're going to implement a single method, which is just the draw method, and there we're really just calling the superclass.
But then we can start to add things before we call super, and obviously anything we do to the CGContext will be reflected in the text the superclass draws. So one of the things people often run into is that LCD text is not enabled for this layer. And the reason is really because it's drawing into a transparent bitmap, and LCD text does not work in those cases.
So the thing you have to do here if you want to get real LCD text is have a solid background color then fill a rectangle of it into your layer, so the first few lines, and then just tell the Core Graphics, you know, I want font smoothing which is what we call LCD text rendering for some reason.
And then again, we call super and then super now has the context setup with those, with the right property such that we will get text-- LCD text rendering. And if you can see the difference, you can see the color fringes rather than the grayscale fringes on the text.
So, one final thing which is often useful is that CG also supports shadows on everything it renders. So if you want to have shadowed text -- I don't know if you can even see it, yeah, maybe -- you just turn on CG shadowing, and then when super draws the text, it gets the nice shadow underneath it. And this is by far the most performant way to draw shadowed text on the platform, in a layer anyway.
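Putting the last few slides together, a sketch of such a subclass might look like this (the background color and shadow values are placeholders):

```objc
@interface MyTextLayer : CATextLayer
@end

@implementation MyTextLayer

- (void)drawInContext:(CGContextRef)ctx
{
    // LCD ("subpixel") smoothing needs an opaque backdrop, so fill one in
    // before asking for font smoothing.
    CGContextSetFillColorWithColor(ctx, CGColorGetConstantColor(kCGColorWhite));
    CGContextFillRect(ctx, self.bounds);
    CGContextSetShouldSmoothFonts(ctx, true);

    // Optional: a Core Graphics shadow drawn under whatever super renders.
    CGContextSetShadow(ctx, CGSizeMake(0.0, -2.0), 3.0);

    // Now let CATextLayer draw the string into the context we just set up.
    [super drawInContext:ctx];
}

@end
```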
OK. So enough with the text. So the next thing or the next class we want to look at is a way to draw Bezier Paths, Bezier splines. And so these are basically just some examples of what we're talking about. Really these are kind of the line based vector art where we have basically solid colors and fills and stroke lines and things.
And so you kind of see that all these things are basically defined by a path, a line. And the path is really made up of bunch of segments, you know, Bezier spline stuff. And Core Graphics has a lot of built-in support for this. You can create paths. You can render them. But until this point, Core Animation has really not dealt with them too much.
Obviously one of the really nice features about rendering paths rather than images is that you can scale them in and out and they stay perfectly crisp because you know the image information is described geometrically rather than as a raster image. So we now have this thing called the shape layer and let's look at the little example of that, excuse me. So in this case we're going to draw an ellipse.
So we will create a rectangle, create a path, add the ellipse to the path and now we have a CGPath which represents the ellipse shape. So once we have the path, we can create a shape layer. The shape layer will then have its path property set to the path we created and then we will tell it how to draw that path.
So in this case we want a white stroked line around the perimeter of the path and we want it to be 10 pixels wide, and we also want the path to be filled red. So that's what those 3 lines do. And finally we'll release the path because obviously we don't want to leak any memory.
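The ellipse walkthrough, as a code sketch (the rectangle is a placeholder):

```objc
// Build a path containing the ellipse.
CGRect rect = CGRectMake(0.0, 0.0, 200.0, 120.0);
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, rect);

// Hand the path to a shape layer and say how to draw it.
CAShapeLayer *shape = [CAShapeLayer layer];
shape.path = path;
shape.strokeColor = [UIColor whiteColor].CGColor;   // white stroked outline...
shape.lineWidth = 10.0;                             // ...10 pixels wide...
shape.fillColor = [UIColor redColor].CGColor;       // ...filled red

// The layer keeps its own reference to the path.
CGPathRelease(path);
```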
And you can see we've set up this path and now the layer, and it will just draw, and we can scale it in and out, and hopefully the anti-aliasing will stay crisp and everything like that. OK. So there are a number of possible uses for this. We're just going to touch on a few of them. For example if you had something like a drawing app or charting app, you could use this for arrows and various bits of vector art that you can animate around the screen and composite pretty fast.
You can also do things like the marching ants selection effect you often see. You have 2 shape layers, one on top of the other; give them both the same path and then make one of them dashed, and then, because you can animate these shape layer properties, you can animate the dash phase and the ants will kind of move around the perimeter of the path. You can also use this to draw text because, you know, most fonts describe their glyphs as splines.
And this is really, you know, it's kind of a judgment call whether you'd use this version of drawing text or the other version, because obviously if you have lots, you know, a thousand glyphs on the page, then using a layer for each one of them is going to get kind of heavyweight. But if you just have a few glyphs and you want to be able to animate them around, flip them in 3D, then this kind of drawing might work a lot better because, you know, they will be rasterized at the right resolution.
And one key tip for you is that if you are going to do this, then really you should try to reuse the path objects, because that will allow us to do some level of caching if the path objects are reused. For example if you have two 'e' glyphs from the same font, don't create the path twice, just create it once and use it in two layers.
And obviously there are a couple of other uses which you can probably work out. So one interesting point is you know given that we have this support for rendering paths then can we animate them because obviously animation is one of the things we also do. And the answer is yes, but morphing paths or morphing shapes is kind of a hard problem, so we really only handle a couple of cases well. And you really need to know about this. So the case is basically, you know, the 2 paths we're trying to blend together, must have the same number of subpaths. And each subpath must have the same number of points.
If that is true, we can basically do a pretty good job of, you know, blending the control points together and making sure the curves stay continuous and making it look pretty decent in most cases. The other case of course is where the path structure differs; then we really don't have any way to kind of register the correlation between the two paths, so we basically do something, but it often looks kind of weird. Because of this, you know, we don't let the path property implicitly animate; you have to basically enable this animation explicitly, because, you know, it may or may not do the right thing.
So let's see an example of this. So here we have this kind of mouth glyph and you can kind of see it has 2 states, and in the second state, you know, the control points have been basically pulled out a bit. But I hope that you can see from the graphic that, you know, there really is no change of structure here, it's really just kind of pulling the points around and changing the angles of the control splines. Something that doesn't work: if you imagine animating between 2 glyphs, then you can kind of see that because the A and the B have no real correlation, you know, something happens, but, you know, some of you may like it, I don't know.
So anyway, so lets-- we're going to show a demo of this a little later, but let's keep moving for now. So there are few other content layer types which we're not going to cover in any detail but just kind of let you know what they are. So obviously we deal with OpenGL content pretty well.
On the Mac there's an OpenGLLayer, and on the iPhone there's a kind of EAGLLayer, because EAGL is the iPhone OpenGL system. And so these kind of do the same thing in that they both let you put OpenGL content into your layer tree, but they are pretty different in the way they do that, so it's worth touching on that briefly. So basically, the OpenGL layer on the Mac is really designed for using kind of OpenGL in a UI kind of setting, so we try to tie it into the regular layer drawing model. And that means that setNeedsDisplay works as normal, and yeah.
On the EAGLLayer, sorry, on the iPhone rather, the EAGLLayer is really just trying to be the most lightweight way possible to get OpenGL content to the screen because we really thought the most common use of that would be for gaming. So it really has a different model where you basically have the standard OpenGL kind of draw and swap model, rather than getting any of this Objective-C stuff involved. Also on the Mac, we have ways to provide QuickTime content into your layer tree. So you can use the QTMovieLayer to display movies, that's kind of what QuickTime player uses.
You can also use the CaptureLayer to display camera content. And then finally, if you have Quartz Composer compositions, you can create a QCCompositionLayer and give it the composition and put that in your layer tree. This works out kind of nice because you know Quartz Composer is a great way to create kind of free-flowing graphic elements, and while the CA is more for structured compositing.
So you can kind of create the more free-flowing ones in QC, expose a bunch of their properties into the top level and then when you put the Quartz Composer composition into the layer, you can then animate those properties in the Quartz Composer patch, I think they call it, and everything kind of works nicely when the 2 systems play together.
OK, so enough of that content. Let's look at some other features of CA. Well, these things we're going to be talking about right now are really not, you know, individual layers or individual ways of drawing things, they're really ways to affect all different types of layers. So, again. OK. So firstly we want to talk about masking. And so we have this example here of a pink logo over green grass.
And you can see initially we've started off with the foreground element just composited over the background element as a sublayer. But what we can do is we can say, "OK, that's great, but we really want a masking operation here". So we're going to say instead of being a sublayer, it'll be a mask for the background.
And when we do that, you get a different effect. We get basically the background masked through the alpha channel of the foreground element, the logo. And so then, that's kind of nice. It works pretty orthogonally; every layer can have a mask.
All mask layers act basically like any other sublayer; the geometry is the same. They can animate. They can have movies in them if you want.
And the mask really takes the 2 versions of the backdrop, you know, the initial background and the background with the layer composited, and just kind of blends them together through the mask. But, well, we'll talk a little bit more about that later. But one key point here is that, you know, this mask operation is a more complex compositing operation than just compositing the source over the destination. And so it does often cost more in terms of required GPU performance. So, I mean, you might be able to get away with doing this a few times, but if you have a thousand layers with a thousand masks, you probably will get bogged down.
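A sketch of the difference between the two setups just described; the image names stand in for the grass and the logo:

```objc
CALayer *background = [CALayer layer];
background.frame = CGRectMake(0.0, 0.0, 320.0, 240.0);
background.contents = (id)[UIImage imageNamed:@"grass.png"].CGImage;

CALayer *foreground = [CALayer layer];
foreground.frame = background.bounds;
foreground.contents = (id)[UIImage imageNamed:@"logo.png"].CGImage;

// As a sublayer, the logo just composites over the grass:
// [background addSublayer:foreground];

// As the mask, the grass only shows through the logo's alpha channel.
// (A layer used as a mask must not also be in the layer tree.)
background.mask = foreground;
```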
But, luckily, we can often get around these limitations just by knowing our data. So in this case, or in the previous case, we know that the background is black and so that means we can kind of create a cheating masking operation by saying instead of masking, we just kind of composite the inverse of the mask over the top.
So, obviously, the reason we need to know the color is because we need to multiply the color into the inverse of the mask here and that's this foreground layer, and then just kind of composite that over the background. Because we're just using over compositing, this works a lot more quickly.
And so if you look at the iPhone UI they have a lot of instances of where this is used, it's a key trick that's been developed there-- well, I'm sure it wasn't developed there but. So, obviously, one case where this is very common is you often want to have the scroller where the top and bottom edges aren't kind of a sharp clip but they're actually kind of a feathered edge. And so, obviously, you can do that with a mask there.
But if you know the background, you can really just put 2 gradient layers at the top and bottom and blend from opaque to non-opaque, and that will again be a lot faster. So another Mac-only feature is, obviously, we have Core Image on the Mac, so we can provide a list of filters to any layer, and then the contents of the layer will be put through those filters when it's rendered, when it's composited. So in this case we're going to create a single CIFilter, the bloom filter, which is just kind of a glow effect.
We're going to tell it, you know, give me your default values and then we're going to give it a name which we'll come to in a bit. But then once you created a filter, we will just set the filter as one of the layer's filter objects in the filter's array.
And now every time the layer updates, it will be put through the filters. And, obviously, you can write your own CIFilters and define your own image processing effects this way. So the reason why you name the filter though is because now we have a name, we have a key path because we can refer to this filter as, you know, filters.something.input something which means, you know, now we have a way of actually referencing all the inputs of those filters. And since we can reference them, we can now set them through the regular layer set by these key path mechanisms.
And the reason we want to do that is because, firstly, it means that layer knows that the filter changed and, therefore, it will update correctly. But secondly, it means we can also get the benefits of the implicit animations here because, you know, the properties are being set by the layer and, therefore, can create the right animations. And, obviously, we can animate these explicitly as well because we now have the key path.
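A sketch of the bloom example and of driving its inputs through the layer's key paths (Mac only; CIBloom and its input keys are standard Core Image, and "layer" is a placeholder):

```objc
CIFilter *bloom = [CIFilter filterWithName:@"CIBloom"];
[bloom setDefaults];
[bloom setName:@"bloom"];              // the name is what gives us a key path
layer.filters = @[bloom];

// Set a filter input through the layer, so the layer knows it changed and
// can run its implicit animation.
[layer setValue:@5.0 forKeyPath:@"filters.bloom.inputRadius"];

// Or animate an input explicitly with the same key path.
CABasicAnimation *pulse =
    [CABasicAnimation animationWithKeyPath:@"filters.bloom.inputIntensity"];
pulse.fromValue = @0.0;
pulse.toValue = @1.0;
pulse.duration = 1.0;
[layer addAnimation:pulse forKey:@"bloomPulse"];
```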
But again, any of these kind of complex rendering methods are expensive so if you know they're both, the content and the filter are static, you know, are not changing, then you may be able to just, you know, render everything into the CGContext at once and have the filter applied once rather than every time the screen updates which, obviously, is going to be a lot cheaper. So another type of compositing effect are background filters.
So just in the same way we can filter the foreground, i.e., the content of the layer, we can also filter the background, and that's kind of what we're going to be talking about here. But, you know, firstly, we start off with the same composited two-image graph. And this time we will do something different.
We're going to create a crystallize filter, which is a kind of a stained-glass effect type thing. And then we're going to set the filter's defaults again, but this time we're going to add it as one of the background filters of the layer, not the foreground filters. And so when we do that, you can see that everything under the layer with the filter, everything under the pink thing, has now been filtered and crystallized.
We can also set the compositing filter which is a way to give us other blend modes so CI provides a bunch of different blend modes most of these standard ones. So in this case, we're going to choose the hard light filter. And then when we apply that, we get a different compositing effect. So now you can see that we're replacing the source of the compositing by something custom.
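And a sketch of those two properties (again Mac only; the filter choices match the slide's crystallize and hard-light examples, and "layer" is a placeholder):

```objc
// Filter everything that renders underneath this layer.
CIFilter *crystallize = [CIFilter filterWithName:@"CICrystallize"];
[crystallize setDefaults];
layer.backgroundFilters = @[crystallize];

// Replace the normal source-over compositing with a CI blend mode.
layer.compositingFilter = [CIFilter filterWithName:@"CIHardLightBlendMode"];
```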
And again, we can set and animate properties right on the filters here. But one final point is that, you know, these things also interact with the mask as we saw earlier. So if we apply a mask -- in this case we're going to take another copy of the Core Image-- sorry, the Core Animation logo layer and basically use its alpha channel as the mask, because that would basically match what we already have on screen. So when we apply that, you can see that now we're restricting the filtering and compositing operations to within the mask. So you can use this for a bunch of effects like blurring of objects. It works pretty usefully. OK, enough about filters.
Let's look a little bit to the-- I forgot, a drink of water. OK, so, obviously, as you've probably heard us say, CA is really not just a 2D framework, it's kind of 2-1/2D. And what we mean by that is we have lots of 2D elements, but they really live in a 3D space, so, you know, somewhere between 2D and 3D. So one way you deal with this is you can control the Z axis. And so in this example we have 2 layers, A and B.
We're going to set the zPositions to be negative 100 and plus 100, so obviously one is further back and one is further forward. And so the obvious question here is why do these things not look in perspective? 'Cause you'd assume that they're the same size, and they are. The answer is because we haven't set up any way to project the objects into the superlayer. And so the way CA works is that every step in the graph, the tree of elements, is always a 2D compositing operation.
So basically, every parent looks at its sublayers and it somehow projects them into its plane, its kind of postcard in space if you like. So what we can do here is basically set up some kind of projection, a foreshortening matrix. So we create a transform, the matrix, and then we're going to set the Z component of the perspective column to be minus 1 over EYE_Z, and this is just a kind of trick you have to learn. But what EYE_Z really means is the distance from the kind of eye point to the projection plane.
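A sketch of that projection setup; EYE_Z is the eye-to-projection-plane distance just described, and the value, "parentLayer", "layerA", and "layerB" are placeholders:

```objc
const CGFloat EYE_Z = 500.0;

CATransform3D projection = CATransform3DIdentity;
projection.m34 = -1.0 / EYE_Z;       // the perspective-column trick

// Apply it to the parent so its sublayers are projected with perspective.
parentLayer.sublayerTransform = projection;

// Now the zPosition values on the two sublayers read as real depth.
layerA.zPosition = -100.0;
layerB.zPosition =  100.0;
```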
So if you make that bigger, you get less of a perspective effect, and if you make it shorter you get these very weird kinds of fisheye things. And when we apply that to the layer -- sorry, when we apply that to the superlayer of these objects -- then, obviously, that projection matrix is now being used to project the two sublayers into the parent, giving you the correct foreshortening that you probably would have expected. And so really what this is doing, if any of you know OpenGL, is it's really trying to mimic the OpenGL kind of projection matrix, which is the way of defining the viewing system. And so similarly again, we can also start adding more kind of camera effects.
So if we want to rotate the viewpoint around or basically move the camera position away, we can add these kinds of rotation and translation portions to the matrix. And then when we apply that, we get, you know, another kind of view. So really this is kind of analogous to a camera in most 3D systems, but it's just expressed a little differently because it fits better with our 2D model, 2-1/2D model. So one other part about 2-1/2D is, as I said, everything gets flattened. So if you think about how you define a cube, then your cube would probably be 6 layers, 6 square layers, oriented with different rotations so they map to a cube.
And then probably what you want to do is rotate the cube as a whole. But, unfortunately, you can't, because, as I said, everything gets flattened, so there's no way to put a kind of common matrix through those things if you have more than one of them. So what we've done here is we've added basically a way to create 3D layer groups, I guess. And so that's what this transform layer class does.
And it really acts pretty similarly to a regular layer, except that only its geometry is considered. And explicitly, its geometry is used to basically construct a matrix in the normal way and apply that to its sublayers, and then the sublayers are basically just rendered as if they were any other sublayers of the transform layer's parent.
And so it's basically a way of kind of collapsing the graph but inserting new matrices. So back to our cube example: if you create a cube as 6 layers as children of the transform layer, you can then attach the transform layer into some other layer defining your 3D space and then rotate the transform layer, and all the sublayers will rotate correctly because, you know, the transform layer is really not a flattening operator. But, you know, because they are this special type of thing with no 2D space of their own, they really can't have an image, any color, any filters, because, you know, they never render, they have no real space in our 2D model. They're just a construct.
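A sketch of the cube idea; the face size and colors are placeholders, and "sceneLayer" is assumed to be a layer whose sublayerTransform already carries the perspective matrix from earlier:

```objc
CATransformLayer *cube = [CATransformLayer layer];
CGFloat half = 50.0;   // half the edge length of a 100x100 cube

// Four side faces: translate each out to the surface, then spin it around Y.
for (NSUInteger i = 0; i < 4; i++) {
    CALayer *face = [CALayer layer];
    face.bounds = CGRectMake(0.0, 0.0, 2.0 * half, 2.0 * half);
    face.backgroundColor = [UIColor colorWithHue:i / 4.0 saturation:1.0
                                      brightness:1.0 alpha:1.0].CGColor;
    CATransform3D t = CATransform3DMakeRotation(i * M_PI_2, 0.0, 1.0, 0.0);
    face.transform = CATransform3DTranslate(t, 0.0, 0.0, half);
    [cube addSublayer:face];
}
// (The top and bottom faces would be built the same way with X rotations.)

[sceneLayer addSublayer:cube];

// Rotating the transform layer rotates the whole group in 3D, because a
// transform layer doesn't flatten its sublayers into its own plane.
cube.transform = CATransform3DMakeRotation(M_PI_4, 0.0, 1.0, 0.0);
```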
So, the final point I want to make about 2-1/2D is, obviously, we have to deal somehow with layers that intersect. And so you can obviously see the example here of 2 layers A and B which are split down the middle because, you know, part of A is in front of B and part of B is in front of A. And so, traditionally with 3D, you would render this with a depth buffer, and, obviously, we can't do that because we have to deal with that 2D model again -- it's, you know, opacity and filters and all these kinds of things.
And so what we end up having to do here is basically take these 2 elements and somehow arrange it so that standard painter's model in a back to front rendering will just work. And so, you know, what we could have done here is basically cut B into 2 pieces and then render the back half of B, and then all of A and then the front-half of it-- of B again.
And the reason you need to know this is because you need to understand that if you set up things like this, it really does cause a bunch of extra work, so your apps will run slower.
And so if it all possible you should avoid doing these kind of things. For example if my A layer here had a filter applied and the filter is expensive, then now if we split A into 2 pieces then we may end up rendering the filter twice and the frames just once and you kind of see we have lots of these and things intersecting them we may end up with lots and lots of little cut fragments rather than just one.
One other important point here is that the depth sorting has a fairly simple view of the world of the layers. It really assumes that if the layers it sorts have sublayers, those sublayers live entirely within the bounds of the layers being sorted.
So I'll give you an example of what I mean. If things actually looked like this, where A and B are sublayers of the red and blue layers, then the depth sorter is not going to see that they extend past those bounds, and we may end up with an incorrect result.
So you just have to be aware of this because it may cause issues, and really the only way to avoid the problem is to either clip to the layer bounds or just not create this kind of geometry in the first place. OK, so, let's talk briefly about drop shadows, 'cause they're a fairly important kind of UI effect.
So you really have 3 ways, at least on the Mac, to draw drop shadows, and these are obviously 2D drop shadows, not 3D shadows. First, you can just ask the layer to draw its shadow for you, at least if you're on the Mac, and that works pretty well; it can shadow basically anything.
You can shadow a movie with an alpha channel, for example. But because of this generality it can really be expensive, because it has to assume that the layer is changing every frame, so for every frame we take the alpha channel of whatever you rendered, blur it, and use that to create the shadow.
So if at all possible, it's good to try and avoid using this. One way you can do that, as we saw with the text, is to just draw the shadow yourself using Core Graphics; then the shadow will be cached along with the object and you can composite it very cheaply.
The other way you can sometimes do this, if you know the geometry of your layer (for example, an opaque rectangle), is to just create another layer which represents the shadow. Often you can do that very cheaply: if you have a Gaussian blob image, a little round thing, you can stretch it to match the layer bounds using the 9-part image support we have now, so you end up with one layer which represents the shadow of the other layer.
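As a hedged Swift sketch of the first and third approaches (this is not code from the session; "blob" stands for a hypothetical pre-blurred shadow image, and the sizes and insets are placeholders):

```swift
import QuartzCore

// Approach 1: the automatic shadow (Mac). Simple and it shadows anything,
// but the blur may be recomputed every frame.
let card = CALayer()
card.bounds = CGRect(x: 0, y: 0, width: 200, height: 120)
card.shadowOpacity = 0.5
card.shadowRadius = 8
card.shadowOffset = CGSize(width: 0, height: 4)

// Approach 3: a dedicated shadow layer built from a prebaked blob image,
// stretched with 9-part (contentsCenter) scaling so only static, cached
// content gets composited.
func shadowLayer(matching layer: CALayer, blob: CGImage) -> CALayer {
    let shadow = CALayer()
    shadow.contents = blob
    // Keep the outer edges of the blob fixed and stretch only its middle.
    shadow.contentsCenter = CGRect(x: 0.4, y: 0.4, width: 0.2, height: 0.2)
    shadow.frame = layer.frame.insetBy(dx: -12, dy: -12).offsetBy(dx: 0, dy: 4)
    return shadow
}
```

The returned shadow layer would be added to the superlayer underneath the layer it shadows.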
And again, that's much cheaper to composite because everything is static. So that's really all I want to say about shadows. Another common UI effect, kind of like shadows, is reflections. In the past the only way we've had of doing reflections is basically to do everything yourself, and that really means you have to take everything you want to reflect, create a copy of it, flip it upside down somehow, and add some kind of darkening effect, either a gradient or whatever.
And obviously that's kind of a pain because, you know, if the things you're copying are moving or animating, you also have to copy the motion; you have to apply the animations to both sets. So now we're trying to give you a way to do that a lot more easily.
So we now have something called the ReplicatorLayer. The ReplicatorLayer is useful for more than this, but let's just talk about this one use. Really, the ReplicatorLayer is a way of replicating sublayers. So in this case we're going to create a ReplicatorLayer and set the sublayers of the replicator to be whatever we want to reflect.
In this example here we have an image for the logo and a bunch of shape layers for our text, and then we're going to say: I want 2 instances of the stuff I put in that ReplicatorLayer. So there's one instance for the normal version and one instance for the reflected version. And the next thing we're going to do is say to our replicator, OK,
every instance you create will be transformed somehow. The matrix we construct is pretty simple: we're going to flip the Y axis just to turn it upside down, and then we're going to offset on Y as well. The h minus 2y term is basically saying that the height of the element minus twice the frame offset lines these things up correctly; that's pretty easy to get with trial and error. And then when we set the instanceTransform on the ReplicatorLayer, basically what happens is that every replicated instance of the layer will have that transform concatenated into it.
So in this case we just have the one extra instance, so it gives us the thing we want, which is the upside-down reflection. And then finally we want to darken things, so as well as a geometric transform, the replicator also provides a color transform. The way it works is that every instance is multiplied by a color, starting at white, with some offset added for each successive instance.
So basically what we're going to do is set the instance RGB offsets to minus 3/4, so that when we subtract 3/4 from 1 for our first replicated instance, we end up with a color which is 25 percent of white, which is what we want to multiply the reflection by. So when that all goes and runs, we basically get this nice reflection.
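Put together, the setup just described might look something like this minimal Swift sketch (not the session's code). The content layer, sizes, and gap are placeholders, the signs assume UIKit-style geometry with y increasing downward, and the exact translation usually takes the same kind of trial and error mentioned above.

```swift
import QuartzCore

// `content` stands in for whatever we want to reflect (logo, text layers, etc.).
let content = CALayer()
content.frame = CGRect(x: 0, y: 0, width: 300, height: 100)

let replicator = CAReplicatorLayer()
replicator.frame = content.frame
replicator.addSublayer(content)

// One instance for the original, one for the reflection.
replicator.instanceCount = 2

// Flip the copy on Y and slide it below the original.
let h = content.bounds.height
let gap: CGFloat = 2
var t = CATransform3DMakeScale(1, -1, 1)
t = CATransform3DTranslate(t, 0, -(h + gap), 0)
replicator.instanceTransform = t

// Each successive instance has this offset added to its color, so the
// reflected copy ends up multiplied by roughly 25 percent of white.
replicator.instanceRedOffset = -0.75
replicator.instanceGreenOffset = -0.75
replicator.instanceBlueOffset = -0.75
```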
And obviously the good thing about this is that because all the replication happens at render and composite time, you can reflect movies and particles and basically anything in the tree, and all the animations will replicate as well, so things will just work as you'd expect. OK. So finally, and again this is Mac only right now, we also added a way to do particle effects, which is kind of finally here; it's not always the most useful thing, but it can look nice sometimes.
The particle system is basically a way to emit images and then have them composite together and animate. The emitter layer really just has an emission shape and then an array of emitter cells. Each cell then defines how its particles are emitted over time. So the cells have things like an image which gets drawn.
They have colors. They have direction vectors, velocity, acceleration, those kinds of properties. And one interesting thing is that cells can also have subcells, which means your particle system can emit particles and then those particles can also emit particles. So you can create these very nice, fairly organic-looking effects sometimes. And yeah, that's all I want to say about the particle system.
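Here's a small Swift sketch of that emitter idea (not the demo's code). The spark image is a hypothetical CGImage you'd supply, and all the numbers are arbitrary illustrative values.

```swift
import QuartzCore

func makeSparkEmitter(spark: CGImage, width: CGFloat) -> CAEmitterLayer {
    let emitter = CAEmitterLayer()
    emitter.emitterShape = .line              // emit along a horizontal line
    emitter.emitterPosition = CGPoint(x: width / 2, y: 0)
    emitter.emitterSize = CGSize(width: width, height: 1)

    // A subcell: each spark itself emits a faint trail of smaller particles.
    let trail = CAEmitterCell()
    trail.contents = spark
    trail.birthRate = 20
    trail.lifetime = 0.5
    trail.scale = 0.1
    trail.alphaSpeed = -2                     // fade out over the lifetime
    trail.velocity = 10
    trail.emissionRange = CGFloat.pi

    // The main cell: image, color, direction, velocity, acceleration.
    let cell = CAEmitterCell()
    cell.contents = spark
    cell.birthRate = 60
    cell.lifetime = 2
    cell.scale = 0.25
    cell.color = CGColor(red: 1, green: 0.8, blue: 0.3, alpha: 1)
    cell.velocity = 80
    cell.velocityRange = 40
    cell.emissionLongitude = -CGFloat.pi / 2  // upwards
    cell.emissionRange = CGFloat.pi / 6
    cell.yAcceleration = 50                   // gentle gravity
    cell.emitterCells = [trail]               // particles that emit particles

    emitter.emitterCells = [cell]
    return emitter
}
```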
So that's been a lot of talking; let's look at a couple of demos. OK, I'll show you this one. The first thing I want to show is an example of the shape rendering. So you can see here we have a bunch of different-- well, you can't see, but I'm going to tell you: we have a bunch of different shape layers, one per glyph, and we're rendering these paths in real time.
And the great thing is that because this is a shape layer rather than text, we can zoom this thing, and you can see that we're not doing anything other than setting a scale transform on the layer, and yet the rendering stays nice and crisp, and fairly responsive as well.
So we can animate this thing and it does what we would expect. We can also use a different font, obviously, so I can change from Helvetica to Times, which is slightly more ornate, and you can see that the renderer is still doing a pretty decent job of rasterizing the glyphs. And then for one last thing we can use a way more complex font.
You can see here that we basically have the same thing going on, but the thing to note is that these glyphs are really, really complex; there are lots and lots of path segments. So this is just a demonstration that this is working fairly efficiently.
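For reference, a rough Swift sketch of the idea behind this demo, one shape layer per glyph holding the glyph outline as a path, so that scaling the layer re-rasterizes the path instead of stretching a bitmap. The font name, point size, and colors are arbitrary choices, not the demo's values.

```swift
import QuartzCore
import CoreText

func glyphLayer(for character: Character, fontName: String) -> CAShapeLayer? {
    let font = CTFontCreateWithName(fontName as CFString, 72, nil)
    let codeUnits = Array(String(character).utf16)
    var glyphs = [CGGlyph](repeating: 0, count: codeUnits.count)
    guard CTFontGetGlyphsForCharacters(font, codeUnits, &glyphs, codeUnits.count),
          let path = CTFontCreatePathForGlyph(font, glyphs[0], nil) else {
        return nil
    }
    let layer = CAShapeLayer()
    layer.path = path                             // the glyph outline
    layer.fillColor = CGColor(gray: 0, alpha: 1)
    layer.bounds = path.boundingBox
    return layer
}

// Zooming is just a transform; the path is re-rendered crisply at the new scale.
if let g = glyphLayer(for: "S", fontName: "Times New Roman") {
    g.transform = CATransform3DMakeScale(4, 4, 1)
}
```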
OK, so that's about all of that. The other demo I have is a particle system effect. This is actually combining a bunch of the different pieces I've been talking about, so obviously we have some particles. We have this kind of text write-on effect where the particle emission shape is animated along the line of text.
We also have a mask layer, which is a shape where we're animating the shape to basically reveal the text, and we link those two animations up so you can see the text appearing as the particles move over it. In the background we have a gradient layer just to give us this kind of horizon effect. And then at the end we have some 3D and CI effects where we basically ramp the Z position of the text and then add a zoom blur to give us a kind of fake motion blur effect. OK. So that's basically done.
[ Pause ]
[ Applause ]
OK, so that's basically the end of the techniques, so we just have a few slides on performance, to give you a few ideas of what you should do and what you really shouldn't do. So firstly, this one really applies more to the Mac than the iPhone, because we have more ways to do this on the Mac: the first thing you should try to do is avoid as much offscreen rendering as possible.
And what I mean by that is these kinds of complex compositing effects where you can't just take an image and render it into the drawing destination we have. You have to do something more complex, like apply a bunch of filters, apply shadowing, apply masks. All these things involve extra passes on the GPU, and that's the kind of thing that, if you do it enough times, will start to kill your performance.
And I guess one of the most common cases where you run into this, probably without realizing it, is if you have a layer with a bunch of sublayers, a bunch of images, and then you change the opacity of that layer. To get the correct mathematical result we really have to render everything in that group offscreen and then apply the opacity to it as a single object when we render it back on screen, so that can be a big source of this.
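A minimal illustration of that group-opacity case, with hypothetical layers and colors standing in for image content (this is a sketch, not a prescribed fix):

```swift
import QuartzCore

let group = CALayer()
let a = CALayer()
a.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
a.backgroundColor = CGColor(red: 1, green: 0, blue: 0, alpha: 1)
let b = CALayer()
b.frame = CGRect(x: 50, y: 50, width: 100, height: 100)
b.backgroundColor = CGColor(red: 0, green: 0, blue: 1, alpha: 1)
group.addSublayer(a)
group.addSublayer(b)

// Mathematically correct group fade, but the whole subtree has to be
// rendered offscreen first so the 50 percent opacity applies to it as a
// single object.
group.opacity = 0.5

// Fading the sublayers individually instead avoids the offscreen pass,
// but note it gives a different result wherever the sublayers overlap.
a.opacity = 0.5
b.opacity = 0.5
```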
So the next rule, which we mentioned earlier and will mention multiple times, is to really try to minimize the amount of blending. This again is not so important on the Mac, although it is still an issue. But on the iPhone we have limited graphics bandwidth for blending, so if you want your application to run close to the right frame rate, which is 60 frames a second, you need to minimize the amount of translucent data that you have. And as I said earlier, one way to do this is just to make sure the layer knows that it's opaque if it is.
And sometimes the layers aren't opaque because they can't be opaque. As I mentioned, if you have a tree of views, those things are compositing together, so they probably have alpha channels and can't be marked opaque, because then you'd get the wrong compositing result. So one thing you can consider, if your frame rate still isn't fast enough, is to basically reduce that tree of views into a single view which you render as one using Core Graphics, Quartz.
The nice thing about that is that you do your compositing once into this bitmap, which then gets cached and composited as an opaque object, hopefully. So that removes work we'd otherwise have to do at the compositing frame rate. Again, there's an easy way to do this for layers, but if you have images it's a little more subtle: you need to make sure that the image ref, the CGImage object, doesn't have an alpha channel.
So you can make sure the image file you load to create that image doesn't have an alpha channel; for example, you could try using a JPEG instead of a PNG if that's acceptable. Or, if you create the image directly from memory, you can just mark it opaque by choosing one of the CGImage alpha types correctly.
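A hedged Swift sketch of those two opacity hints; the sizes, colors, and the particular alpha setting are example choices, not required values.

```swift
import QuartzCore
import CoreGraphics

// 1. Tell Core Animation the layer's content covers its bounds with no
//    transparency, so nothing behind it needs to be blended.
let background = CALayer()
background.isOpaque = true
background.backgroundColor = CGColor(gray: 1, alpha: 1)

// 2. When building a CGImage from memory, pick an alpha setting with no
//    alpha channel (e.g. noneSkipFirst) so the image itself is opaque.
let width = 256, height = 256
let context = CGContext(data: nil,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: 0,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
context?.setFillColor(CGColor(gray: 0.9, alpha: 1))
context?.fill(CGRect(x: 0, y: 0, width: width, height: height))
let opaqueImage = context?.makeImage()   // CGImage with no alpha channel
background.contents = opaqueImage
```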
And we do have ways for you to debug this. On the Mac you can set the environment variable CA_COLOR_OPAQUE and that will give you a hint; we'll see it in a second. On the iPhone you can't really set environment variables, but you can turn on the same option via Instruments and get the same effect.
So if we look at that, you can see this Instruments option, and when we turn it on, what we see is that all of our layers are tinted either red or green. Green means the content was opaque, which is good because we don't have to draw what's underneath it.
And the red is really the tip-off that there's something going on, because red means non-opaque, translucent. So anything that's red was blended over whatever was behind it, which means it takes more rendering work to do that. So the key here is to try and remove the red regions.
OK, so that's basically about it. I guess the things to take away from this are, firstly, that you don't need to use Core Animation directly: every time you use UIKit, or AppKit if you turn on the right option, you get the benefits of the CA hardware compositing basically for free.
So that's kind of great. And then when you do need to go beneath those levels, because you can't quite get the effect you need, you can just jump into CA. And then finally, again, I can't say this enough times: really try to minimize the amount of translucency you have in iPhone applications.
It is the key thing to getting a good frame rate. And so basically that's it. We'll be in the graphics lab this afternoon, I think from 12 'til, I don't know, midnight or 1 or 6 or something. And Allan is our evangelist, so you can email him with issues, or you can find us on these mailing lists, typically.