WWDC12 • Session 245

Advanced Tips and Tricks for High Resolution on OS X

Essentials • OS X • 1:01:55

Dive deeper into making your apps stunning for high resolution on OS X. Learn how to work with OpenGL surfaces and bitmaps, handle custom layer trees, set up notifications for resolution changes, and examine how to get great performance when laying out different types of content onscreen in a high resolution environment.

Speakers: Chris Dreessen, Patrick Heynen, Aki Inoue, Dan Schimpf

Unlisted on Apple Developer site

Downloads from Apple

HD Video (212.8 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

My name is Patrick Heynen and I'm here to talk to you about high resolution again, but this time in a new and trickier fashion. So what's the purpose of this talk? Well, any of you who have been involved in writing real products know that 80% of the task is easy, and it's that last 10 to 15% where a lot of the important stuff happens to make your product actually go out the door.

So what we're going to do in this session is try to go deeper into high resolution, pull back the covers a bit, talk a little bit about how some of that works, and how to take full advantage of all the new APIs to achieve full pixel precision. So that you can both work around any sort of subtle bugs you might have and achieve the highest quality you possibly can. Also leveraging advanced technologies under high resolution if you're using those, things like Core Animation and OpenGL.

And just how to get the best visual quality and performance for your application on these wonderful, new machines that you may have hopefully had a chance to look at and play with this week. All right. So what are we hoping that you will learn in this session? How to work with OpenGL contexts. We're going to cover some very specific things about OpenGL contexts that apply to usage in AppKit and high resolution displays.

Also, if your application takes advantage of Core Animation directly and you have custom Core Animation layer trees, there are some unique considerations, and we're going to tell you how to handle them properly. Also, drawing into off-screen bitmaps; you know, it's not all about on-screen drawing into windows. All the things that we made easy, you know, artwork and all the Quartz resolution-independent drawing, well, sometimes you have to do stuff yourself, right? You have to draw into pixels, into bitmaps, yourself.

We're going to cover some of the things you need to be aware of for that. And most importantly, once again, I want to remind you the thing that makes high resolution different on the Mac and the Mac different in general from iOS Retina is that we have a much more dynamic display environment.

And you can have multiple displays and you can have any one display going between low resolution and high resolution at any time, given the way the Mac works. It's a much more dynamic kind of environment. That's a strength, but it's also a situation that your software needs to really take into account and handle appropriately.

Then we're going to go into sort of an interesting deep dive, but an important one, because text is really the primary feature of Retina. If you've had a chance to look at these displays, it's all about the text. You just look at that and you're like, wow, this is almost like looking at a laser printout. Well, but there are some implications for some of the text and font technologies that we have in our system. And for those of you who spend a lot of time with text in your applications, you may want to be aware of these.

Also, just some final best practices throughout all these different sections about how to achieve quality and performance under high resolution. So we've got a lot of material here. Hope you had your coffee. As a brief backgrounder, before we dive into the individual sections, I just want to go over what the technology we're talking about is, if you haven't been to the introduction talk. We have new high resolution display modes for Retina displays. And the consequence of this is that screens and windows each have a two-to-one pixel-per-point density ratio.

And the frameworks are providing automatic scaling to your applications to ensure a consistent coordinate system between 1x and 2x operation. It means your software sees the same thing going on for screen, view, and window coordinate systems across both standard resolution and high resolution, which is great for compatibility, but it doesn't mean you're abstracted from the pixels.

So sometimes you need to know about the pixels, and that's what this talk is about. And then, of course, finally, the Quartz Window Manager ensures consistent presentation across multiple displays. This is what's allowing things like window drags between a high resolution retina panel and a low resolution external display to work seamlessly. But there are some details which we're going to go into.

Okay, speaking of details, I'd like to bring up Mr. Chris Dreessen, who's going to tell you all about NSImage. Thank you, Patrick. So let's talk about NSImage. As most of you probably know, NSImage can contain multiple NSImageReps. And until now you've probably not been taking advantage of that feature. You've most likely been using a single bitmap representation in your artwork.

So now that you have, let's say, two bitmap representations, how does NSImage pick which one to draw? And the important thing to bear in mind when you're wondering how an NSImage does this is that it doesn't actually make a distinction between high and low resolution. It really just cares about pixels.

Specifically, when you draw, it's going to try to find the smallest bitmap representation that has at least as many pixels as the destination. So a picture is worth a thousand words. In high resolution it's probably worth 4,000 words. So I want to show you what I mean by that.

So we've got our image here that has a 1x PNG and a 2x PNG. And it's probably a little tough to see on screen, but you can definitely tell the 1x one has less detail. Physically they're the same size in points, but the 2x one has way more pixels. And from NSImage's perspective, though, it looks like this.

We've got a 2x representation that's much larger. So if we're drawing without any scaling into a 1x and 2x destination respectively, it's kind of easy to figure out what's going to happen. Draw without any scaling, 1x gets 1x, 2x gets 2x. But what happens if we do something a little unexpected? Let's say we draw to something that's 100 pixels tall by 150 pixels wide. What are we going to get?

And it's not the 1x representation. We're actually going to get the 2x representation. And you can see it's stretched there. And this produces a better quality result because we're working with more pixels from the 2x representation. But there are a few places where this might happen and you might be a bit surprised by this result instead of pleased by the higher quality.

Specifically if you're stretching images, especially three-part images or banners. Like in this case we've got two end caps that aren't going to be scaled and a middle piece that's going to be stretched over the entire center. And what happens at 2x is we're going to notice there's no scaling involved on the end caps and draw the 1x representations.

In the center piece we're going to notice we're covering a lot more pixels than the 1x representation has. And the 2x representation involves stretching a little bit less. And that's probably not what you want, especially if you have a clever artist that's taking advantage of the extra pixels and isn't giving you just scaled up, scaled down artwork.

So if you're hitting this case, we recommend that instead of drawing these yourself, you use NSDrawThreePartImage and NSDrawNinePartImage. And these are going to tile the image instead of stretching it. So what do I mean by tiling? Well, if you take a look at these images here with this grass texture, this is stretching. You can see the middle piece is -- well, I hate to say the word again, but stretched. If we're tiling, we don't actually pull it out like that.

We instead go ahead and draw the image multiple times adjacent to each other. So if your image is only, say, one pixel wide or one pixel tall and you're stretching it like that, you're not going to notice a difference. But tiling gives us the information we need to know that we don't actually have to scale this thing and choose a 2x representation.
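
For reference, a minimal sketch of what that tiling call looks like in a view's drawRect: (the asset names here are placeholders, not from the session):

    - (void)drawRect:(NSRect)dirtyRect
    {
        // Hypothetical three-part banner artwork, each piece with 1x and 2x reps.
        NSImage *leftCap  = [NSImage imageNamed:@"BannerLeftCap"];
        NSImage *center   = [NSImage imageNamed:@"BannerCenter"];
        NSImage *rightCap = [NSImage imageNamed:@"BannerRightCap"];

        // Tiles the center piece instead of stretching it, so NSImage can keep
        // matching the right representation for the destination resolution.
        NSDrawThreePartImage([self bounds], leftCap, center, rightCap,
                             NO /* horizontal */, NSCompositeSourceOver,
                             1.0, [self isFlipped]);
    }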

If you really don't want to tile, like you have a gradient or something you're trying to stretch, consider the API called setMatchesOnlyOnBestFittingAxis:. It tells the NSImage that if one of the image reps fits perfectly on one axis, it's okay to use it even if the other rep fits a little bit better on the other axis but not perfectly.
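
As a rough sketch (the gradient image name and destination rect are assumptions):

    NSImage *gradient = [NSImage imageNamed:@"StretchableGradient"];  // hypothetical asset
    [gradient setMatchesOnlyOnBestFittingAxis:YES];
    [gradient drawInRect:destinationRect   // destinationRect: some rect in the view's points
                fromRect:NSZeroRect
               operation:NSCompositeSourceOver
                fraction:1.0];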

So the other thing I want to mention here is, if you're drawing off screen, which means you're probably using NSImage's lockFocus and unlockFocus, there are a few things you should really be aware of. The first is: try not to use NSImage's lockFocus. And the reason is that your drawing is going to be flattened into a single bitmap. So all of the color space information and resolution information of that bitmap is going to be crystallized there. You're going to throw away detail.

We have a new API in 10.8 called imageWithSize:flipped:drawingHandler:. The drawing handler is a block. So if you were using lockFocus and unlockFocus before, all the code between those two calls you'd now sandwich in your block. And I want to especially call out how this behaves with regards to caching.

Basically, the first time we draw the image, we're going to invoke your drawing block into an offscreen bitmap that's appropriate for the destination. So if we're going to a 1x window, we're going to invoke it against a 1x bitmap. And then if that image is drawn repeatedly there, we're going to draw from the bitmap and not invoke your block.

If that window is then, say, moved to a 2x screen, the next time it draws, we're going to re-invoke your block into a 2x bitmap and then redraw that bitmap as necessary. So you have that caching behavior. You don't have to worry about losing performance by switching to this if you had a lock focus based version before. The other thing I want to mention is if you're trying to manage your own bitmap caches of vector artwork or other things, you may be able to get away with just using this instead of managing it yourself.
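
A minimal sketch of that block-based API (the badge drawing itself is just an illustrative example):

    NSImage *badge = [NSImage imageWithSize:NSMakeSize(64.0, 64.0)
                                    flipped:NO
                             drawingHandler:^BOOL(NSRect dstRect) {
        // Re-invoked per destination resolution; drawn in points, cached as a bitmap.
        [[NSColor redColor] set];
        [[NSBezierPath bezierPathWithOvalInRect:dstRect] fill];
        return YES;
    }];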

There are some cases where you're not going to be able to use that because, say, you're capturing transient state in your drawing and having the block invoked repeatedly might invoke different things and it's not worth your effort to capture that state. Or capturing the bitmap is just fine. And that works, but you need to be aware of a few things.

The first is: you're still going to want to create a multi-rep image. And you're going to do that by explicitly creating an NSBitmapImageRep. And this is what lockFocus is doing behind the scenes. So if you look at our snippet here, it looks like a lot of code. It's actually just two method invocations.

But the important thing I want you to look at are the pixelsWide and pixelsHigh arguments on lines 3 and 4 here, where you see we're multiplying our width and height by whatever scale factor we're targeting. I said we want to add multiple bitmap image reps, so we're going to call this probably once for 1x and once for 2x.

And if you notice the very bottom line, we're calling setSize: on the rep with the size in points. What that's doing is establishing that we have a virtual size in points and a physical size in pixels, and that lets us know the resolution of the image. And that's important when we get to drawing it in a second.
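
A sketch of what that rep-creation code might look like, reconstructed from the description (the 100-point size is just an example):

    NSSize pointSize = NSMakeSize(100.0, 100.0);
    NSImage *image = [[NSImage alloc] initWithSize:pointSize];

    for (NSNumber *factor in @[ @1.0, @2.0 ]) {
        CGFloat scale = [factor doubleValue];
        NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:(NSInteger)(pointSize.width * scale)
                          pixelsHigh:(NSInteger)(pointSize.height * scale)
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:0
                        bitsPerPixel:0];
        [rep setSize:pointSize];          // virtual size in points, physical size in pixels
        [image addRepresentation:rep];
    }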

So the next thing you're going to do is use NSGraphicsContext to render into this bitmap. And this is going to be a sequence of calls where you're going to tell NSGraphicsContext to save the current graphics state so that any existing drawing that's going on has something to return to that makes sense.

You're going to replace the current context, and you're going to do it using this method here: NSGraphicsContext's graphicsContextWithBitmapImageRep:. And that takes the bitmap image rep we just created. And I was mentioning calling setSize: explicitly to communicate that resolution. That's very important here, because this will set up the transformation matrix on the context automatically.

And then in here we're just drawing a red rectangle. And this code doesn't have to care whether we're drawing at 1x or 2x. The scaling handled by the context does that for us. And finally, when we're done drawing, we restore the current graphics state. And that allows us -- we're called within a view's drawRect: -- the view can do other drawing and whatnot. If we don't call that, we're going to have problems.

So the takeaway from that, though, is really that NSGraphicsContext will automatically set up the scale for you if you've built the NSImageRep correctly. And you should really try to take advantage of that. The other thing I want to call out again is you do need to invoke that code multiple times to get the best results: once for your 1x screen, once for your 2x screen.
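
Continuing the sketch above, rendering into one of those reps might look like this (the red rectangle stands in for your real drawing):

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];

    // Drawn in points; the context's transform maps this onto 1x or 2x pixels.
    [[NSColor redColor] set];
    NSRectFill(NSMakeRect(10.0, 10.0, 50.0, 50.0));

    [NSGraphicsContext restoreGraphicsState];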

Some other things I wanted to call out. There are a few methods on NSImage we'd really like you to move off of now: compositeToPoint: and dissolveToPoint:. And the problem is they don't fully respect the context transformation matrix. And you'll notice a lot of our drawing these days at 1x and 2x really ties in with that context transformation matrix. Most of the time we get it right, but there are edge cases. And if you move off of these, you won't hit them.

A good rule of thumb here is that if the method doesn't begin with draw, don't use it to draw the NSImage. Instead, here's the master drawing method for NSImage: drawInRect:fromRect:operation:fraction:respectFlipped:hints:. It sounds like a mouthful, but if you look at this little snippet here, the important part is really just the destination rect parameter. Everything else you can just copy and paste from the code here. It'll be available later. But that'll handle 90% of your drawing cases, probably more.
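
A sketch of that call, with only the destination rect as the interesting part (destinationRect is a placeholder):

    [image drawInRect:destinationRect        // where to draw, in the current context's points
             fromRect:NSZeroRect             // zero rect means "use the whole image"
            operation:NSCompositeSourceOver
             fraction:1.0
       respectFlipped:YES
                hints:nil];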

So that covers what I want to tell you about NSImage. I know some of you are doing OpenGL stuff, and you're going to be interested in this next section about OpenGL and high resolution. And if you just run your app today unmodified on one of these systems, you're going to notice things look scaled up, and that's because we create the surfaces at the standard 1x resolution.

This is the most compatible thing to do because points and pixels still match, but it's not the result you're probably looking for. If you want to have nice, crisp results on these displays, you're going to need to do a little bit of work to opt in to this. And you do that using this new NSView method called setWantsBestResolutionOpenGLSurface:. You pass YES, and that tells us that we can allocate a full 2x surface for you.

And you have to do a little bit more than that, though. If you don't update your glViewport code, you're going to find your drawing in the wrong place. glViewport takes arguments in pixels. And until now, you've probably been getting your arguments for glViewport by just asking for your view's bounds.

In 2x, of course, the pixels and points don't match up anymore. And there's this method called convertRectToBacking: we'd like you to use that will take your local points and convert them into display pixels. And you'll use the results of that to pass on to glViewport. Some other things I want to mention: if you're doing UI stuff, you're probably going to want to incorporate the backing scale factor into your model-view transform.

And that's so things like buttons and text appear the same size physically in the real world as they did before. Otherwise, it's tricky to click very tiny buttons. The other thing is we have way more pixels. You'll probably want to update your texture resources to take full advantage of them. Let's go ahead and show you exactly what I was talking about here with the chess application in Mac OS X. You're probably all familiar with chess.

And let's just run it here. We're on a 2x screen, and this is an unmodified version of the chess program. And if we go ahead and look at our board, zoom in, you're going to notice the pieces are a little blocky. Like, especially if you compare the text up here with the pieces, you'll notice they're not the resolution we want to be rendering at.

So I mentioned we had to opt in. And I'm not especially familiar with Chess, but I do know they use an OpenGL view. So I'm just going to find where their OpenGL view is. They have this MBCBoardView class. And let's check the implementation of it and just find the init method. So here's initWithFrame: in their OpenGL view subclass. And here's the call to super's initWithFrame:. And if we just add a [self setWantsBestResolutionOpenGLSurface:YES] here, we can see the results.

So that looks crisper. I mean, these pixels are a lot sharper, but there's a problem. I'm not quite sure what it is. Oh, oh, the board is really tiny. That's the problem. So I mentioned we had to update our call to glViewport. It's expecting arguments in pixels.

So let's just see where we're calling glViewport. Here it is in MBCBoardView's draw method. And we have this variable called bounds. If we scroll up, we can see that bounds is definitely equal to [self bounds]. So if we just do [self convertRectToBacking:] and pass in the existing bounds argument, now we're going to have a bounds rect in pixels. Let's see what that looks like.

That is the right size. And if we zoom in, we notice these pieces are really crisp. They match our Game Center rendering resolution, too. So that's fantastic. So that was a simple case. There are more complicated ones. But for a lot of cases, this will be exactly what you need to do.

So some other things you should be aware of. glViewport isn't the only thing that takes device-dependent geometry; it's not the only thing that takes pixels. Scissors, stencils, and many other OpenGL functions do expect input in pixels, so you're going to need to modernize your code to use convertRectToBacking: or convertPointToBacking: there as well.
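
Putting the opt-in and the viewport update together, a minimal sketch for a hypothetical NSOpenGLView subclass might look like this:

    // Inside the @implementation of an NSOpenGLView subclass (sketch).
    - (id)initWithFrame:(NSRect)frame pixelFormat:(NSOpenGLPixelFormat *)format
    {
        self = [super initWithFrame:frame pixelFormat:format];
        if (self) {
            // Opt in to a full-resolution (2x) backing surface.
            [self setWantsBestResolutionOpenGLSurface:YES];
        }
        return self;
    }

    - (void)reshape
    {
        [super reshape];
        [[self openGLContext] makeCurrentContext];

        // glViewport wants pixels, not points.
        NSRect backingBounds = [self convertRectToBacking:[self bounds]];
        glViewport(0, 0, (GLsizei)NSWidth(backingBounds), (GLsizei)NSHeight(backingBounds));
    }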

In general, we find it may be easier to just, when dealing with OpenGL, convert all of your inputs into pixel space and then back into whatever your OpenGL world looks like. You don't have to do this, but it tends to simplify things. The other bit, if you have pre-rendered text or GUI elements, you're going to want to re-render those and probably have separate 1x and 2x versions that you can display. Text especially looks not so great when we scale it up or down.

Something else you should be aware of: if you're using multi-sample anti-aliasing or other full-scene anti-aliasing, it's really expensive, especially with regards to memory, and you might find that that memory is better spent on higher quality textures. So you might try turning it off or just dialing down the multi-sampling factor you're using.

Some other notes if you're doing OpenGL full screen. Some of you are capturing the display and changing the resolution, and we'd really prefer if you stopped doing that. Instead, create a window that covers the entire screen. And some of you may have been concerned about performance problems using a window instead of a full screen OpenGL context, and we actually detect this case and go ahead and make sure your bits get to the screen as fast as possible. So there's no performance penalty for having an OpenGL full screen window as opposed to capturing the display. And the other thing this lets us do is present critical system alerts.

So I mentioned this convertRectToBacking: method. What do I mean when I say backing? Let's talk about backing coordinate systems. Backing coordinate systems are really what we're talking about whenever we're referring to bitmaps. And you're most likely going to deal with these in the Cocoa world if you're calling convertRectToBacking: or convertRectFromBacking:. It's a method that exists on NSView, NSWindow, and NSScreen. NSView also has methods to convert points and sizes to backing. Let's discuss the specifics of that coordinate system a little bit.

So the first thing is that the units in the backing coordinate system are pixels. I don't think anyone's surprised by that. And it's the standard Cocoa coordinate system orientation, where the lower left is toward increasingly negative coordinates, and as you go to increasingly positive coordinates, you're approaching the upper right. So that means you can floor a value to move it down or ceil a value to move it up.

And finally, the integral values in the space are pixel aligned. And you'll notice I didn't say anything about absolute coordinates, and that's because we don't actually make any guarantees about what the local-coordinate-to-backing-coordinate transformation is going to look like. Specifically, don't anticipate that just because your bounds origin is 0, 0, your bounds origin in backing coordinates is 0, 0. The view can be rendered to a surface or a layer, or have various flips involved on its way up to the window backing store.

So you're going to see some weird coordinates every now and then. Like you'll throw in a positive view coordinate and get back a very negative backing coordinate. And don't be surprised by that. If you're concerned about distances relative to a certain point in your view, convert that point to backing also, do your calculations in backing space, and treat them as relative coordinates.

Finally, the backing coordinate system is different for every view, window, and screen. So if you use convertRectToBacking: and are trying to round-trip data, make sure you use the exact same object to call convertRectFromBacking: on it. If you mix and match objects doing that, you're going to get really weird results.
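
For example, a round-trip pixel alignment inside a view might be sketched like this (someRect is a placeholder rect in the view's points):

    NSRect backingRect = [self convertRectToBacking:someRect];       // points -> pixels
    backingRect = NSIntegralRectWithOptions(backingRect, NSAlignAllEdgesOutward);
    NSRect alignedRect = [self convertRectFromBacking:backingRect];  // pixels -> points, same view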

So a few of you are here for Core Animation, I'm sure. Let's talk a bit about Core Animation now. Especially if you're managing custom CALayer content, here are some things you should know. One, on Mountain Lion, layer bounds and position are in points. They're virtual. And to get results that are appropriate for the display, you need to be aware of the contents and contentsScale properties.

If you're getting the contentsScale wrong, you're most likely going to see unsatisfactory results. The other thing you should be aware of is if you're playing with the contentsGravity property of your layer, that's going to affect the positioning of the bitmaps within that layer. We'll talk a bit more about that in a moment.

If you're using the drawLayer:inContext: delegate method or subclassing drawInContext:, these will already include the scaling if you are adjusting the contentsScale of the layer correctly. Provided you're doing that, you don't have to modify any of your drawing code. So that's kind of a handy thing to know.

Another thing that's handy to know is that you can use an NSImage as the contents of a layer. This is really convenient. We just have this one-line snippet here. And this works for multi-resolution images. Let me describe how we pick a representation to use in this case. Basically, we go ahead and look at the resolution of the screen the layer tree is on.
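
For reference, that one-line snippet is essentially the following (the layer variable and image name are assumptions):

    // AppKit/Core Animation pick the 1x or 2x rep based on the screen the layer tree is on.
    layer.contents = [NSImage imageNamed:@"cake"];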

So if you're on a 1X screen, we're going to pick the representation most suitable for 1X inside the image. If you're on a 2X screen, we're going to pick the representation most suitable for 2X on the image. And that probably covers 80% of cases you'll care about. It's really handy. But there's some edge cases you should be aware of if you're doing fancier things. And let me illustrate those for you.

So we have our cake image again with 1X and 2X cake variants. And here's a layer tree where we're just displaying a centered layer here. So on 1X, we display the 1X image. And on 2X, we display the 2X image. So far, so good. But let's say we change the transform on the layer, the bounds on the layer, so it's larger now.

What we're going to see is that on the 1x screen the image just gets a little bit bigger. And on the 2x screen, even though we could use the 2x representation and get more pixels to display, it doesn't happen. And that's because we don't actually know the display size of the layer. We just know the resolution of the screen it's on.

I mentioned contentsGravity. contentsGravity is used in conjunction with contentsScale to position the contents within the layer. By default, the contentsGravity is one of the resize modes, which effectively ignores the contentsScale when positioning the contents.

If you're using a non-resize mode like top left here, you'll notice that when we provide the 2x bitmap, Core Animation treats it like a 1x bitmap and displays it in a much larger area. So instead of just showing the wrong number of pixels, we're actually drawing things incorrectly. And that's true for various other non-resize orientations like top right.

So to summarize those images: we can't pick up transforms and bounds changes on the layers. And additionally, if you're using NSImage as layer contents, you need to be sure your contentsGravity is one of resize, resize aspect, or resize aspect fill. Now, suppose you do want to take advantage of bounds changes or transforms, or you want a non-resize contentsGravity; we have API for you. That API comes in the form of two methods on NSImage: recommendedLayerContentsScale: and layerContentsForContentsScale:.

With recommendedLayerContentsScale:, you pass in a desired contents scale. So let's say you're in a 2x window and you have a 3x transform attached to your layer; you would go ahead and ask for a recommended layer contents scale of 6. And NSImage will return the contents scale most appropriate for the image reps in that image.

So if you're doing the standard 1X, 2X bitmap configuration in that image, we'll go ahead and say, "Well, 2 is way closer to 6 than 1, so we're going to say 2 is the desired content scale you want to use." If it were a PDF image rep or something resolution-independent, we would instead probably return the factor you pass in.

And that can be used in conjunction with the next method, layerContentsForContentsScale:. With that, you pass in the contents scale you're going to set on the layer, and we give you back an opaque object to set as the contents of the layer. And if you use those two in sync, you can use all the contentsGravity modes just fine in Core Animation.
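
A sketch of that pairing (the layer, image, and window here are placeholders):

    // e.g. a 3x transform on a layer hosted in a 2x window
    CGFloat desiredScale  = window.backingScaleFactor * 3.0;
    CGFloat contentsScale = [image recommendedLayerContentsScale:desiredScale];

    layer.contentsScale = contentsScale;
    layer.contents      = [image layerContentsForContentsScale:contentsScale];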

So I mentioned the contents scale here, and you're probably wondering: how do you know about that and how do you react to changes in it? At the layer level, we have this new delegate method called layer:shouldInheritContentsScale:fromWindow:. And it returns a Boolean. If you return YES from this method, we're going to go ahead and update the contentsScale of the layer to match the scale we've passed in this method and then call setNeedsDisplay on the layer.

So if you're using the drawLayer:inContext: delegate method, this is a very natural pairing for you to automatically update the contents scale and redraw. If you return NO, we're going to keep our hands off the layer, and you can instead take this opportunity to update the contents and contents scale yourself.

Something you should be aware of: if you do return YES from this, you absolutely need to implement displayLayer: or drawLayer:inContext:. And the reason for this is that when you mark a layer as dirty, which we'll do when you return YES, it blows away the existing contents of the layer.

So if you don't implement displayLayer: or drawLayer:inContext:, you're going to notice the contents of your layer disappear instead of being updated for the resolution you want, which is probably not the effect you're going for. The other bit to be aware of is that this delegate method is not invoked by Core Animation. It's actually invoked by NSView.

So if you add a layer to an existing layer tree that's hosted, we don't know about it, and we can't invoke that delegate method for you. The takeaway from that is when you're creating these layers, you should probably take care to set the content scale manually. And you can do that by asking the window for its backing scale factor, for example.
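
A sketch of both halves of that advice, inside a layer-hosting NSView subclass (the helper name is hypothetical):

    // Returning YES lets AppKit update layer.contentsScale and mark the layer for display;
    // the delegate must then implement displayLayer: or drawLayer:inContext: to refill it.
    - (BOOL)layer:(CALayer *)layer
        shouldInheritContentsScale:(CGFloat)newScale
                        fromWindow:(NSWindow *)window
    {
        return YES;
    }

    // Layers added to an already-hosted tree never get that callback, so seed the scale by hand.
    - (void)attachBadgeLayer:(CALayer *)badgeLayer
    {
        badgeLayer.contentsScale = self.window.backingScaleFactor;
        [self.layer addSublayer:badgeLayer];
    }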

And let's go ahead and see what taking this advice looks like. This is the unmodified version of the app. So on the left is a layer that uses drawLayer:inContext: to draw. And within that it uses the NSImage drawInRect: method as well as NSRectFill with a blue color to make the blue background. The layer on the right is actually set up by setting the background color of the CALayer, setting the contentsGravity to center, and setting the contents to our NSImage. And so it's displaying more or less correctly, but those are some really fuzzy cakes.

If we go ahead and look at our cake@2x.png, the first thing we're going to do is add it to our target. And we've updated our artwork; everything should look great. And we can see the image on the left doesn't look so great. It's still pretty blurry. And the image on the right is a different size. So we're getting very different results than what we had before. And let me show you how we set up that layer.

So in applicationDidFinishLaunching: here, the layer on the left is pretty simple. We just set ourselves as the delegate and call setNeedsDisplay on it there. The one on the right does exactly as I said, where it sets the background color and the contentsGravity, and it calls this configureContentsForView2 method, which just sets the contents to an NSImage.

Now let's use layer:shouldInheritContentsScale:fromWindow:. So let's go ahead and do that, and let's take this one layer at a time. So we're going to start by working with the layer on the left. Let's just add this delegate method here. So we're going to go ahead and run this.

And you'll note nothing has changed. The image still looks just like it did before. So the problem with this is that when we set the layer on this view, it's already hosted in a window. So we don't get the notification that we're adding a layer at all, and this method is never invoked. And the solution to that is to update the contents scale manually.

So I mentioned the window's backingScaleFactor method, and that's something we can use here. So we just add view1.layer.contentsScale = view1.window.backingScaleFactor. And now we see a -- well, it may be difficult for you to see out there, but this is a much crisper 2x variation of the cake image.

Let's go ahead and start working on the right side layer, the one that's using the layer properties to display itself. So I mentioned we're using a contentsGravity of center, which means we can't just set the image as the contents of the layer anymore. So let's go ahead and update this method to use an explicit contents scale.

So in this case, we have our new configureContentsForView2 method that takes a scale. It goes ahead and grabs the cake image again, then asks for the recommended layer contents scale for that image. We go ahead and set that on the layer, and then also pass it to the layerContentsForContentsScale: method and update the layer contents using the result of that. So we've changed our method name here.

Let's go ahead and just start with a 1x scale and see what happens. And we notice now that we are actually getting a pretty decent result. It's still the 1x image; it's not as sharp as it should be. So we want to update this to be similar to our code here where we're manually passing the scale factor. Instead of using view1.window.backingScaleFactor, we're going to use view2.window.backingScaleFactor.

And now you can see we're getting consistent drawing between the left and right layers again, which is exactly what we wanted, and it's using our 2x resources. So that's great. That's what we were targeting. I mentioned we were handling this layer by layer. We added layer:shouldInheritContentsScale:fromWindow:, but we didn't handle layer two in it.

And in this case, it doesn't make a difference because we're not responding to a dynamic screen change, but that's something we will have to respond to in the real world. So you would add code like this to notice, oh, we're talking about layer two? Let's call configureContentsForView2 again with the new scale factor passed into the delegate method. And it's very important that we return NO here. If we were to return YES, we would blow away the contents we just set on the layer and undo all of our work.

All right, so we've seen using that a bit. Now, I mentioned the shouldInheritContentsScale method, so you've probably picked up that resolutions can change. Let's talk about that a bit more. The resolution can really change at any time. You can't necessarily predict that things will hold constant. Someone can hot plug an external display, or mirror or extend the desktop.

And just because the internal display is 2x doesn't mean the external displays are going to be 2x. You do have to deal with heterogeneous environments where you have a 1x display and a 2x display simultaneously. And in those cases, when windows drag between displays, they're going to update automatically.

So let's talk a bit more about how that window updates its resolution. It's going to try to make its backing resolution match the backing resolution of the associated NSScreen. And that means if you're straddling displays, it's going to pick the screen that the largest part of the window is on. If you're offscreen entirely, we're going to do something different: we're going to use the resolution of the highest resolution display attached to the computer.

And when we do change the backing scale factor of the window, we post a notification about it. And that notification is NSWindowDidChangeBackingPropertiesNotification. And that's also posted when the color space or bit depth of a window changes as well. If you're using the window delegate, you can go ahead and implement windowDidChangeBackingProperties: instead, which will pass you the notification as an argument.

Views are similar. We have a new method on NSView called viewDidChangeBackingProperties. You can subclass that. There's no equivalent notification. And that's called when the view is added to a window, or when the window changes its backing resolution, or the color space changes. Here's a little snippet demonstrating what you might do in that method. Here we call super's viewDidChangeBackingProperties.
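
One possible shape for that override, reconstructed from the description (the hosted layer and image properties are assumptions):

    - (void)viewDidChangeBackingProperties
    {
        [super viewDidChangeBackingProperties];

        CGFloat scale         = [[self window] backingScaleFactor];
        CGFloat contentsScale = [self.cakeImage recommendedLayerContentsScale:scale];

        self.hostedLayer.contentsScale = contentsScale;
        self.hostedLayer.contents = [self.cakeImage layerContentsForContentsScale:contentsScale];
    }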

And if you recall from the demo, we use the new NSImage contents-scale-based methods to explicitly manage the contents and contents scale of a layer we're hosting. Something to be aware of: this method, for one, isn't invoked when a view is removed from a window.

And additionally, if you were to call convertRectToBacking: on a view not in a window, the view will act as if it's on the highest resolution screen. So that's consistent with NSWindow. The other bit is, before you add a view to the window, it's doing the same thing. So I'd like to go ahead and bring up our resident text expert, Aki Inoue, to talk about text rendering in high resolution.

Thank you, Chris. Good morning. I'm Aki Inoue. I'm the text guy from the Cocoa Group. Today, I'm gonna cover how you can achieve the best text quality for your application in your high-resolution world. So let's get started with exciting new stuff in Mountain Lion. Actually, there's no new text system API in order to support high resolution.

Why is that? The text system is designed to be resolution agnostic throughout. It's been working with non-identity coordinate systems for years. For example, it's been working with the zooming view in TextEdit, or rendering into totally resolution-independent PDF files. So there was no need to add new API. We are, however, introducing a significant behavior change in Mountain Lion.

Screen fonts. We are deprecating the usage of screen fonts starting in Mountain Lion. So let's review what screen fonts are. A screen font is a variant of your base font, and it uses integral advances instead of the default floating-point values. As you can see, with the base font, the width of the character is a floating-point value that's taken from the font file itself.

On the other hand, with screen fonts, the letter spacing is tweaked so that the origin of each character is aligned to an integral position. Remember, in low resolution, one point used to typically be one pixel, so you get the effect of pixel-aligning the characters in some cases. So with this, we were able to achieve pretty decent text quality on lower resolution displays. And, you know, Quartz could use the same glyph bitmap caching easily. And it used to work with hand-tuned, ancient bitmap fonts.

These days, these advantages are getting less relevant because of newer Quartz technologies such as font smoothing, subpixel quantization, and on-screen tinting. And also, the disadvantages of using screen fonts are outweighing the advantages. For example, because of the rounding, the glyphs are spaced unevenly. And the gap created by this uneven spacing is more apparent, or uglier, when you're running in high resolution.

Also, we are not using kerning or ligatures, the more advanced typographic features, with screen fonts, because these features are designed with floating-point values in mind. So they don't work too well with integer advances. And finally, because of the tweaked letter spacing, it often doesn't have consistent scaling between point sizes. So if your applications are using multiple font sizes, such as graphics tools or presentation tools, you might sometimes encounter surprises caused by this effect.

So as I mentioned before, using the base font gives you the best text quality, both in low resolution and high resolution. Because of that, these floating-point advances are often referred to as ideal advances. And that lets you take advantage of the higher density in high resolution, because we're not rounding to pixel alignment. And we are now enabling kerning and ligatures everywhere by default. And this is how the fonts were originally designed to be used.

And you have uniform scaling between point sizes and different coordinate systems. For those reasons, we've actually been using the base fonts in many places. For example, our system font, Lucida Grande, has been using floating-point advances since Mac OS X 10.0. And also many applications, such as iWork, have been using floating-point advances. And finally, in fact, iOS itself doesn't have the concept of screen fonts at all.

So let's take a look at the screenshot. This is from 10.7. We're showing Helvetica and Times at sizes from 12 to 18. As you can see, the line edges are pretty jagged in some places. And this is how it looks with 10.8. The lines are scaled smoothly and consistently between point sizes. Let's zoom into some of the words. In 10.7, you might notice the glyphs are placed unevenly, and especially between the W and E, the gap is pretty ugly. With 10.8, the glyphs are spaced evenly, and the W and E are placed handsomely using kerning.

So let's take a look at the actual text system API here. As you may know, Cocoa has three main groups of text rendering and measuring APIs. NSLayoutManager, one of the core text system APIs, along with NSTextView and NSTextStorage, provides the power and extensibility of the text engine. The NSStringDrawing API, such as drawInRect:, gives you a convenient way to render NSString and NSAttributedString efficiently. And NSCell renders many user interface control strings. And these APIs can be categorized into two groups, further.

One is for document contents: NSLayoutManager and the other text system machinery. NSTextView and NSTextStorage usually take on this burden to support large documents. And user interface elements are usually rendered using the NSStringDrawing API and NSCell. For these groups, we have had specific API for controlling the screen font setting since 10.0.

For NSLayoutManager, we have the usesScreenFonts method. When it returns YES, NSLayoutManager uses screen fonts; if NO, it doesn't. Similarly, we have the NSStringDrawingDisableScreenFontSubstitution flag. It's used with the extended string drawing API, such as drawWithRect:options:. By specifying this flag, you can disable the substitution of screen fonts with these APIs.
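
A sketch of both controls (the layout manager, string, and rect here are placeholders):

    // Document content path: opt a layout manager out of screen fonts explicitly.
    [layoutManager setUsesScreenFonts:NO];

    // UI string path: disable screen font substitution for one drawing call.
    [attributedString drawWithRect:labelRect
                           options:NSStringDrawingUsesLineFragmentOrigin |
                                   NSStringDrawingDisableScreenFontSubstitution];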

So with 10.7, usesScreenFonts defaulted to YES, and for the user interface element setting, screen font substitution was not disabled by default, so we were always using screen fonts. On 10.8, these were flipped, so the layout manager doesn't use screen fonts and NSStringDrawingDisableScreenFontSubstitution is implied.

So typically you are using the font object. You send an NSFont factory message, such as fontWithName:size:, and you get back a base font. And you pass that font object to one of the text system APIs, and the text framework takes over from there. You can use these fonts to measure and render text through these text system APIs. So your application actually doesn't see the screen font itself. Behind the scenes, the text system swaps the base font you specify with its corresponding screen font dynamically whenever necessary.

There are two APIs for that: NSLayoutManager's substituteFontForFont: and NSFont's screenFont. And actually, this is roughly what the substituteFontForFont: implementation in NSLayoutManager does. It checks whether it's supposed to use screen fonts, and if so, it calls NSFont's screenFont to substitute and get the screen font. So, behind the scenes, the text system sends substituteFontForFont: itself, gets the screen font, and uses it dynamically for measuring and rendering.

So, as I mentioned, because of those changes in Mountain Lion, you can take advantage of the higher density.

[Transcript missing]

Because the widths of the letters are different between screen fonts and ideal fonts, the number of characters that can fit on a line changes. And these are changes you often want to avoid, because you want to have the same appearance of your document between releases. So, if your application falls into this category, you can manage your screen font setting per document using, for example, APIs such as NSLayoutManager's usesScreenFonts.

And actually, we're introducing a new document attribute, NSUsesScreenFontsDocumentAttribute, that you can specify so that your per-document screen font setting can be stored into your document data. And TextEdit is already enhanced to take advantage of the new functionality. So you can look into the sample source and adopt the same strategy in your application. Thank you.

Also, we are introducing a new preference key, NSFontDefaultScreenFontSubstitutionEnabled. It controls the default setting for screen font usage. If your application is linked against the 10.8 SDK, it defaults to NO; that means you are not using screen fonts. And if you're linking against a previous SDK, it's YES, so we're preserving the Lion behavior. So using this key, you can control the default screen font usage in your application explicitly. Now I'd like to switch over to Dan Schimpf, who's going to discuss the intricacies of pixel alignment.

Thank you, Aki. Okay, I'm going to talk about a couple of issues with aligning pixel-based art. This is most helpful when drawing UI controls, like buttons and things. So what are the issues that you may run across? Well, there are some situations in drawing your UI art that may have worked well at 1x that will seem to fail at 2x. It'll be out of line.

And this usually comes down to rounding differences, causing some layout that, again, worked at 1x to change at 2x, just change their values. And this is because there are no odd pixels at 2x. Because everything is doubled, even three points all of a sudden becomes six pixels. So there's no odd pixel values.

So here's a couple examples of things that actually worked. So I have a four-point tall space, and I want to fit something that's two points big inside of it. And you can see at 1x, it works out fine. We can center it just fine. And at 2x, the blue bar is in the same spot. My one point turns into two pixels, and it's just okay. Yay.

The same thing works for odd inside odd as well. We center it, one point turns into one pixel, one point turns into two pixels, everything's happy. The trouble begins when we have even things inside of an odd space: we try to center it at 1x with 1.5 points, so we round it.

That goes up to two pixels to have a good appearance. But at 2x, well, we have enough pixels that we can actually put it where it belongs. So while it's more technically correct, it's all of a sudden in a different spot. It's shifted down a bit. And maybe you've already accounted for that rounding up in your design at 1x, and now it looks wrong.

Just for completeness, here's the equivalent example of an odd-sized item inside of an even-sized space. It shifts down again. So what do we do? How do we see these things? The easiest way to see these things really is to just look at it. I mean, you can do a lot of math, but testing is going to be your key here.

If you have two displays, it's really the easiest. If you can set one at 1x and one at 2x, then you can just drag your window back and forth between the two displays and see how it changes as you move it. So the window switches, as we said before, when the window crosses the midpoint of the screen divide. When a window has more of its area on the other screen, the window rebuilds at that screen's resolution.

So that's where you can observe any visual shifts. Things should stay in the same spot at 2x and 1x, and your eye can really pick those differences out. If you only have one display, don't worry, you can still do this. Take screenshots in both modes. And then open them up in Preview or any other image viewer where you can line them up in the same spot on screen, and scale the 1x screenshot so that, again, they'll occupy the same amount of area on screen, and then just flip back and forth.

And you can see those same kind of visual shifts. So here's an example of a pixel shift. This is a button with a glyph inside of it. And here it is at 1x, but obviously scaled up a bit so you can see it. And I'm going to flip it to 2x.

So it's subtle, but it moved. It moved up. I'm going to flip back and forth a bit, and you can see it. And while if you're just sitting there at 2x, you may not see these sort of things, but it means your layout is wrong, your UI is incorrect. And especially if your user flips back and forth, they're going to notice these kinds of things.

So where's the problem here? Well, this time it actually might be the design. If you can redesign the 1X appearance to eliminate these odd inside of even sort of situations, that's really the best thing to do because then you get to have correct math at 2X as well. If you can't change that because of legacy concerns or historical reasons, or maybe you just like it better, you might have to introduce 2X specific code just to handle this.

So you can experiment with the rounding direction. There's a method, backingAlignedRect:options:. You pass it a rect and it gives you back a rect that's aligned on the backing coordinates. It hasn't changed coordinate spaces, but it's aligned on a good pixel grid. And the options flag that you pass in provides explicit control over how it's rounded in each direction.
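
A sketch of that call with an explicit rounding choice per edge (glyphFrame is a placeholder rect in the view's points):

    NSRect aligned = [self backingAlignedRect:glyphFrame
                                      options:(NSAlignMinXOutward | NSAlignMinYOutward |
                                               NSAlignWidthNearest | NSAlignHeightNearest)];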

So, but if that doesn't work, you may have to add a half a point or one pixel if you've got it in pixel space explicitly when running at 2X. But as you might tell, this is a little fragile and not exactly the cleanest code in the world, so really do this only if absolutely necessary.

So scale factors. We've given you a couple cases in this session alone about where scale factors are important, but there are some cases -- but we want to caution you from overusing scale factors. So some background on coordinate spaces first. We talked a little bit about the back-end coordinate space, but there's some other ones. Cocoa actually has a lot of them.

Each NSWindow, NSView, CALayer, bitmap context that you might have, and OpenGL context has its own coordinate space. To talk about positions, you have to convert between one and the other. And it may seem like a lot of work, but by dealing in the correct coordinate space, your code actually stays cleaner. And the best part is the scale factor is already accounted for in all of these contexts.

So when you want to convert things from one area to another, let's say you've got a view and you want to go to a different view, you use convertRect:toView:, and there are also convertPoint: and convertSize: variants. If you want to go from a view to window coordinates, you use convertRect:toView: again, but you pass in a nil view, and that will get you the base window coordinates. This also applies when you have something like an NSEvent's locationInWindow, the point you get back from an NSEvent.

That's in window base coordinates, so you can convert that from the nil view to get the point back in your view's coordinate space. If you have an NSWindow and you want to take something like a point or a rect to the NSScreen that it's on, you can use convertRectToScreen:. And from an NSView to its hosted CALayer, you can use convertRectToLayer:.
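
A few of those conversions sketched out (the views, rect, and event are placeholders):

    NSRect inOtherView = [self convertRect:someRect toView:otherView];  // view -> view
    NSRect inWindow    = [self convertRect:someRect toView:nil];        // view -> window base
    NSPoint clicked    = [self convertPoint:[event locationInWindow]
                                   fromView:nil];                       // window -> view
    NSRect onScreen    = [[self window] convertRectToScreen:inWindow];  // window -> screen
    NSRect inLayer     = [self convertRectToLayer:someRect];            // view -> hosted layer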

And again, we mentioned this before, but if you have anything and you want to take it to its backing coordinate space, you can use convertRectToBacking:. So where don't you want to use scale factors? Let's say I'm drawing something in my drawRect:, and I really want to get something on a pixel boundary or pixel coordinate. Well, I could take my frame or my bounds, get my window's scale factor, and do some math myself.

But this is actually not going to work out in a couple of cases. If you're in a layer or if you're in some other kind of context, you've got some other scaling applied, and it's also going to be dependent on the window that you're drawing into. So the better thing to do is, again, use these convert-to-backing calls, and that will give you your pixel origin. Just applying your window scale factor is usually not the best approach.

So, some tips for scale factors. First, work in points wherever possible. It makes your code cleaner and simpler. And be prepared for fractional points and positions at 2x, because they're okay. If you're at 3.5 points, that's actually 7 pixels; you're still aligned. You're okay. And you want to convert any coordinates and sizes that you use to the appropriate space before using them. Even NSView to NSView, even if they're in the same parent view, because they may have special view-specific transforms. And if you can get by, don't ask for the current scale factor.

But if you absolutely need to, again, in some of the cases that we've discussed in this advanced session, use your current window or the tightest context you can. If you don't have a window or screen that you can ask for its current scale factor, it might be time to rethink your design. Because again, in some cases where you have a 1x and a 2x display, what is the right answer there? It's hard to say. Okay, and I'm going to hand it back to Patrick here. He's going to talk about onscreen content. Thank you, Dan.

[Transcript missing]

Now, the difference here, and this is the important point, is you need to calculate that image's size. Because it's a CGImage you're getting back, its size, its width and height, are not going to be in points; rather, they're going to be in image space, or pixels.

So in order to create an appropriate image with that CGImage, you actually need to convert from backing to compute that size in points. And you need to do that with the screen object that you captured from in order to get the right result, so that whether it's a 1x screen or a 2x screen, you take that into account appropriately.

Here's a little tip. If you saw the CGDirectDisplayID there, well, how do you get there from AppKit? Well, it turns out that if you ask the NSScreen for its deviceDescription, and ask that for the object for the key NSScreenNumber, that is magically and very usefully the CGDirectDisplayID. So you can use that to sort of connect the dots in the sequence of APIs.
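
A sketch of that whole sequence, including the backing conversion mentioned above (the choice of the main screen is just for illustration):

    NSScreen *screen = [NSScreen mainScreen];
    CGDirectDisplayID displayID =
        [[[screen deviceDescription] objectForKey:@"NSScreenNumber"] unsignedIntValue];

    CGImageRef capture = CGDisplayCreateImage(displayID);             // width/height in pixels
    NSRect pixelRect = NSMakeRect(0, 0,
                                  CGImageGetWidth(capture), CGImageGetHeight(capture));
    NSSize pointSize = [screen convertRectFromBacking:pixelRect].size;  // pixels -> points

    NSImage *screenshot = [[NSImage alloc] initWithCGImage:capture size:pointSize];
    CGImageRelease(capture);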

Okay. Now, talking a little bit about app performance under high resolution. Really, I'm not going to go into nitty-gritty code techniques. I'm really just going to talk about the philosophy of performance under high resolution and give you some general parameters here. So first and foremost, the thing to remember with these products, with this particular product especially, is that your application is going to be processing four to as many as seven times the number of pixels under high resolution. The 7x, of course, comes from these wonderful new expanded-desktop high resolution downscaling modes that you can get to, where it goes up to 1920 by 1200 at 2x.

So that's a lot of pixels. And you might come away from that thinking, how can anything ever possibly be fast if you have that much more? It's almost an order of magnitude more pixel content. What's going to go on? Well, I'd like to give you the message, don't despair.

The hardware is there. The hardware, especially on these new products, is more than capable of handling it, as can be evidenced if you have a chance to play with it. Even the most aggressive system animations that we do and scrolling and a lot of dynamic behaviors, it's more than able to handle that without even breaking a sweat and getting out of low power states or anything like that. In particular, the key thing that we have discovered while developing this product is that most often performance problems are not because you're hitting some fundamental hardware limitation.

Obviously, I want to say here there are cases where if you're like a game or really aggressively focused on GPU programming where obviously the hardware is going to be your bottleneck and there's special considerations for that. But if you're just a regular Cocoa application, it's typically not the case that you're going to be fundamentally limited by the hardware.

And most important is to make sure that your application actually leverages the system graphics technologies as much as possible. Make sure that when you are profiling your application that you're spending most of your time asking the system to do work for you rather than like waiting for one of your threads to give a response to the other one and just waiting around a lot.

Typically we've looked at a lot of applications and it's usually some sort of choreography problem rather than a fundamental software or hardware limitation. Another thing I want to point out on the topic of performance is: be aware of time-space tradeoffs. Because of the significantly larger number of pixels that your application is processing, some caching strategies that may have been advantageous under standard resolution may no longer be advantageous under high resolution. Now that you suddenly have to cache not only 2x-sized images, but possibly both 1x and 2x to be able to handle any possible display, it may now be advantageous to render all the time and stop caching. It's a pretty big change, and you may want to revisit the base assumptions of some of your caching strategies and make sure they still fit.