WWDC12 • Session 500

Game Technologies Kickoff

Graphics, Media, and Games • iOS, OS X • 51:03

iOS and OS X deliver an incredible lineup of technologies for developing cutting-edge games. Join your fellow game developers in kicking off the games sessions of WWDC 2012 and explore the powerful frameworks that enable you to create the most imaginative games possible. Dive into the multiplayer capabilities of Game Center, check out the shared experience of AirPlay, discover the incredible effects of Core Image, and much more.

Speakers: Jacques Gasselin de Richebourg, Joe Gatling, Geoff Stahl, Adam Wood

Unlisted on Apple Developer site

Downloads from Apple

HD Video (909 MB)

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Thank you very much. Welcome to day number two at the Worldwide Developers Conference 2012, and we're here to talk about games. So, as we talked about yesterday, there have been some amazing announcements as far as the scope and the breadth of the applications that all of you have developed: 650,000 applications.

Many of them are games. And we talked yesterday in the platforms kickoff about both of our platforms, iOS and OS X. We're trying to build a great set of game technologies that spans both of those platforms and gives you the best set of technologies and tools to build great games.

So from a game technology standpoint, we have an amazing array of technologies, everything from OpenGL and OpenGL ES to AV Foundation and Core Image. We have great graphics tools. We provide you great hardware: the Retina display, the A5X processor, now Ivy Bridge in our Macs. A fantastic array of technologies. For the games kickoff, we're going to focus on five of these: Game Center, Retina display, graphics tools, AirPlay, and Core Image.

And what I want to do is change some paradigms in your thinking, change the way you may approach some of these technologies, and give you an overview of where we have some new features. There are sessions all week that go into depth on these technologies. So let's jump right in with Game Center.

We talked yesterday about 130 million Game Center players. That's just an amazing number of players. And what's even more amazing is 5 billion scores per week. That's players playing your applications. One of the key things about Game Center is it makes your games more accessible. It makes them sticky, makes them findable, makes people play together.

And I had a developer come up to me after the kickoff yesterday, and they said they had a single-player game, and they realized that a single-player game is really like an island. It is alone. And they had already started working on integrating Game Center. So really, if you're in the minority and have not integrated Game Center into your games, now is the time to do that.

So Game Center, as we announced, is available on iOS and on OS X for the Mac. We also announced yesterday that we have a new Game Center with a bunch of new features. That new Game Center will be available on iOS 6 and for OS X this fall. So you'll have the same Game Center across the platforms to work for your games.

So if you're new to Game Center, let's do a little bit of an overview of what pieces we have. First, we have the Game Center app, and the Game Center app is kind of the hub for the player. It allows players to find their friends, find games, inspect their challenges, and see the leaderboards, see where they rank in the gaming world. That's the Game Center app.

GameKit is your API for Game Center. It includes the API for things like in-game multiplayer invites, and it includes a wide array of API. The interesting thing about this is when we brought GameKit to the Mac, we made a conscious effort to make sure it's almost identical between iOS and the Mac. We don't want you having to rewrite code or put a bunch of #ifdefs around code blocks because we changed the behavior in some small way.

So there are very few differences between the GameKit framework on iOS and the Mac, making it super easy to adopt. And finally, we have the Game Center services, which provide the cloud support: the leaderboards, achievement uploading, the multiplayer invites, all the back end of Game Center. So that's the overview of the pieces.

The things it provides are friends, leaderboards, achievements, multiplayer (real-time and turn-based), voice chat, and discovery. And let's reiterate on discovery a little bit. What we want to do is provide services that make it easy to find your game so you can differentiate. If you have a unique game, it doesn't get hidden amongst the many other games out there.

You can define your game, you can have players attracted to it, they can tell their friends, they can link to it, those kinds of things, and that gets players into your game so you can sell games and be successful. So that's the idea with Game Center. We want to make you successful.

Let's talk about some of the new features. Those were the features we had previously; we've introduced some new ones, and I want to walk you through the basics of them. Later on this week, we'll talk about the code you have to write and the things you have to do specifically for some of these new features. And many of them are built right in, such as sharing scores and achievements. So a player can simply have a great achievement here. He's crazy good at Jetpack Joyride, and he can decide he wants to share it.

The activity sheet comes up in iOS, and he says he wants to share it on Facebook, to let the world know how good he is at Jetpack Joyride. It goes up to Facebook to post, and it appears on the player's Facebook wall. Great. So this again breaks down that small circle around your game and lets other people know about it; it's kind of advertising for your game.

Let's talk about liking games. Super simple. In Game Center now, we have the Like icon. It's right there, and you can simply press it. That likes the game, and what it does is put a post on the player's wall about the game, and the key here is it gives a direct link back to the App Store for your game. So if you adopt Game Center, this being new in iOS and coming to the Mac also, players can like your game, and that gives the outside world a link back into it.

Local multiplayer. Local multiplayer is pretty cool. So you have a bunch of people who've gathered at the conference. You may not all be friends on Game Center right now, but you're sitting next to each other in one of the conference areas, and you actually want to play a game together. So you go to local multiplayer, you click Nearby, and we fill that in with the people who are close to you.

So it's ad hoc; you don't need to be on a Wi-Fi network, and it's easy for kids to set up. Your child may go over to someone else's house and want to play with their friends, and they don't have to get on the Wi-Fi and do all that stuff; they can simply do local multiplayer and play with their friends. So that's local multiplayer.

And challenges. We love challenges. Challenges are cool because you've designed this fantastic, brilliant puzzle game that's a great single-person game with a lot of replayability, but it doesn't connect to other people. Well, challenges make that single-player game a multiplayer game. Someone can go back and forth on your puzzles with their friends, saying, hey, I finished this in 35 seconds, beat that.

So we can go back and forth with challenges. So how does it work? You've earned an achievement here in Temple Run, and you want to challenge your friends for that achievement. So, in the same place where you inspect your achievements, you simply hit Challenge Friends.

It comes up with a text box there that you fill in: Can you beat this score? You send it over, and your friend gets the challenge. It comes up in their Challenges tab, which is new in the new Game Center in iOS 6. They click on it, and they can see who challenged them and what the challenge is, and they can either play now or wimp out and decline the challenge if they decide to. But, of course, they want to play, and they didn't know how good of a Temple Run player you are. They thought they would easily beat you. In this case, they didn't.

You were easily able to beat their challenge, and what you want to do is send back a score challenge. So, in this case, you've collected 1,523 coins, a good amount, and you want to send that back to them. You can send back a score challenge. What's interesting about score challenges is this kind of auto-challenge: once it's sent back, if they beat the challenge, it'll auto-challenge you back. So you can go back and forth trying to one-up each other. Again, replayability for your games.

It makes them sticky and reminds people your games are there. One of your players may not have played your game lately and not have bought any in-app purchases, and then some of their friends challenge them. So they pick the game back up and play it again. So we're trying to add that replayability for your games. So the challenge goes back: right back at you.

And there you go. So that's challenges. It turns that single-player gaming experience, that fantastic one that you've built, into a great multiplayer experience and keeps people engaged in your game. So let's do a demo of a great implementation of challenges. I want to bring up Halfbrick's Adam Wood and Joe Gatling; we've kind of challenged them to see what they could do with challenges in Jetpack Joyride.

We've been playing with Challenges for a few days now, and we really wanted to see how we could integrate this into our game. So as you saw earlier, Challenges work completely from Game Center, but it's also possible to integrate them into the game as well. So we've been playing around with it for a little bit, and we've come up with some stuff that's pretty cool, and we'd like to share it with you.

I sent Adam here a challenge earlier today, and now in front of all of you, we're going to see whether or not he can actually beat my score. So I sent him 974 meters. So let's take it away and see if you can do it. Okay, now, if you've played Jetpack before, the first thing you're going to notice is that there's an extra character on the screen. This is actually me. When I sent the challenge, I also sent along with it a ghost so that Adam has someone to race against.

So the way this works is that when I was playing, the game recorded every movement that I made and every obstacle that was generated. At the end of the game, we uploaded this to a server. Now, when you send a score to Game Center, there is a 64-bit context variable that's associated with it.

And we're using that context to store a key so that we can retrieve this ghost and level information from our own servers. So when I send the challenge to Adam, that score and the associated context go with it. And that's how he's able to download the exact same level and all my recorded ghost data onto his own device. So even though he's still just trying to beat my score, we've added an extra character for him to run against.
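
For reference, here is a minimal sketch of the flow Halfbrick describes, with hypothetical identifiers: GKScore's 64-bit context property carries the key into your own ghost/level store, and the iOS 6 call issueChallengeToPlayers:message: sends the challenge along with it.

```objc
#import <GameKit/GameKit.h>

// Hypothetical identifiers for illustration only:
uint64_t ghostServerKey = 0x1234;        // key into your own ghost/level store
NSString *friendPlayerID = @"G:1234567"; // the challenged player's ID

GKScore *score = [[GKScore alloc] initWithCategory:@"com.example.distance"];
score.value = 974;              // meters run this session
score.context = ghostServerKey; // the 64-bit context rides along with the score

[score reportScoreWithCompletionHandler:^(NSError *error) {
    if (error == nil) {
        // New in iOS 6: issue the challenge programmatically. The context
        // is delivered with the score, so the recipient can fetch the same
        // ghost and level data from your server.
        [score issueChallengeToPlayers:@[friendPlayerID]
                               message:@"Can you beat 974 meters?"];
    }
}];
```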

So it's turned this asynchronous challenge mode into what really feels like an exciting and engaging head-to-head match. So I'm pretty confident that he's not going to be able to beat my score, but let's just see how this plays out. All right, okay, well, Adam always was slightly better than me at this game.

I think he's going easy on me. Now, as was mentioned earlier, this challenge is now going to be sent back to me automatically. So we're going to have this going backwards and forwards. But if he wants, he can forward this challenge on to some of his other friends.

So here you can see he clicks to bring up a friend selection, selects one or more friends, and clicks send. And now that challenge, the ghost data, and the level he was playing are going to be sent on to his friends. And that's how you can use challenges to turn a single-player game into an engaging, head-to-head style experience. Thank you.

[Transcript missing]

So we also have a great streamlined multiplayer UI. This is fantastic. In this case, you can get two to four players; you can remove a player, add a player, and fill in some slots with your friends. If you want to play with friends but one of them isn't available, maybe you do an auto-invite. It's really easy to control, with great pictures. So that's the new streamlined multiplayer UI in the new Game Center on iOS 6.

And we also have multiplayer rematch. Multiplayer rematch allows you to programmatically record that match and re-invite those players to another match. It really helps if you're doing auto-match and you say, "Hey, that was a great game. Let's play again." You don't have to find the same players or remember who they are; you can pick up those auto-matched players again. So that's another great thing we've added to the new Game Center.
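
Programmatically, that's a single new GameKit call in iOS 6; a minimal sketch, assuming self.match holds the real-time match that just ended:

```objc
// Invite the same players (including auto-matched ones) to a fresh match.
[self.match rematchWithCompletionHandler:^(GKMatch *newMatch, NSError *error) {
    if (newMatch) {
        newMatch.delegate = self;  // same players, new match
        self.match = newMatch;
    }
}];
```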

So those are kind of the social aspects. We've also added things like improved authentication, which cleans up the authentication flow for you. And we've added a unified interface, so when we do add new features, you don't have to recode your game; they can be adopted automatically.
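
On the authentication point, a sketch of the cleaned-up iOS 6 flow: you set an authenticateHandler once, and GameKit re-invokes it whenever authentication is needed (for example, when the app returns to the foreground), instead of you calling an authenticate method at the right moments yourself.

```objc
GKLocalPlayer *localPlayer = [GKLocalPlayer localPlayer];
localPlayer.authenticateHandler = ^(UIViewController *loginVC, NSError *error) {
    if (loginVC) {
        // Game Center needs to show UI; present it from a convenient place.
        [self.window.rootViewController presentViewController:loginVC
                                                     animated:YES
                                                   completion:nil];
    } else if (localPlayer.isAuthenticated) {
        // Safe to enable Game Center features.
    }
};
```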

Turn timeouts. One of the things requested by turn-based game folks is for when you have three or four players in a game that does not require everyone to be present, and one player decides, "I'm losing, so I'm not going to play," and halts the game. With turn timeouts, you can set that as a developer.

It's initially set at two weeks, but you can bring it down to whatever turn length makes sense for you and keeps your games flowing. Programmatic invites: you can do invites within your game, which is really nice. It keeps you inside of the game, like we saw with challenges in Jetpack Joyride.

It keeps people inside the game itself. We have host election: we do an automatic job of finding the best host for your game. That's fantastic because we do the hard work for you and figure out who the best host is. And turn match data saving. It sounds like you're thinking, well, what does that mean, how does that fit together? Well, let's say you have a game with multiple moves in a turn; you're playing some kind of strategy game where you move multiple pieces.

But right now, at the end of a turn, the other players usually only see the final results. Well, turn match data saving allows you to show each of those moves to all the other players in the game. So you can do that with turn match data saving.
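
A sketch of the iOS 6 turn-based calls behind these two features, with illustrative state and participant variables:

```objc
// Turn match data saving: persist state mid-turn so other players can see
// each intermediate move rather than just the end-of-turn result.
[match saveCurrentTurnWithMatchData:stateAfterThisMove
                  completionHandler:^(NSError *error) { /* handle error */ }];

// Turn timeouts: end the turn with an explicit timeout so an absent player
// can't stall the game. The default is two weeks; pass your own interval
// to tighten it.
[match endTurnWithNextParticipants:nextParticipants
                       turnTimeout:48.0 * 60.0 * 60.0  // e.g., two days
                         matchData:endOfTurnState
                 completionHandler:^(NSError *error) { /* handle error */ }];
```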

So that's Game Center. It provides the backbone for your social gaming network. Most of you are already on it; for the few of you who aren't, now is the time to adopt it. Hopefully we've given you some great new features that can make the experience for your players even better and attract more players to your game.

So let's talk about AirPlay. We talked about it again yesterday; I kind of want to go through some of the basics. We talked about what it can be used for: streaming audio, streaming video, mirroring the display, and second display. We want to focus on that second display piece. So of course you can mirror to a TV with your iPad.

With OS X Mountain Lion, you can now mirror to a TV on your Mac, and you can do second display on iOS devices. The great thing about second display is the paradigms it adds for your games. We talked about an action game where you're looking up: you can put content on the second display, which lets players concentrate on the big display and have their kind of radar panel or their simple controls on the device.

Note here what Sky Gamblers did: they put really simple controls, cleverly placed for your thumbs, and give you basic information on the second display. They didn't try to overload you with too many controls. So this is a good idea, a good paradigm to think about.

When you slow things down and talk about the shared experience, you have a game where you're passing an iPad around. In this case, we're using the full multi-touch interface, because the person who's playing the game is actually drawing, looking at the iPad. The people playing with them, sharing that experience, are looking at the TV.

Family game night: everyone in this case has a device, and you're sharing the board. Think about this as your classic board game. Everyone has their units, their tiles, their cards, or whatever, and that's on their own device. And since you're doing multiplayer and the shared screen is a single AirPlay device, the devices at the other end can be any set of devices: it could be a Mac down here, it could be any iPads down here.

And finally, a good example from Real Racing: four-up racing action. Divide the screen. You have a big screen; use that real estate, divide it into four quadrants. Everyone has their controller, and you see a heads-up display on the iPad there. So that's what we talked about yesterday; let's talk a little bit more about setting this up.

There's been some great adoption of second display, but we think there's probably more great adoption to come, and it's really, really simple. The nice thing here is that we talked about AirPlay, but second display on iOS also covers plugged-in displays. So if you implement it once for AirPlay, you've got plugged-in displays too: if you plug in via an HDMI connector to the dock port, the 30-pin, it's exactly the same code to support both.

So: set up at launch, configure the display, and handle rotation correctly. Every now and then we see examples of games where you rotate the device and the TV image turns upside down; we'll talk about handling that. And a little bit about design considerations. This is really straightforward. When your app finishes launching, you want to make sure you set up the screens for whatever screens exist.

You also want to make sure you register for two notifications: one to correctly handle screen connections, and one for disconnects. Because you don't know what the user's set of actions will be. They could launch your game and then say, "Hey, I want to plug into the TV." When they plug in at that point, you're going to get the connection notification, and later the disconnect notification, after your game has started.

So make sure you handle those correctly. If they decide they don't want to be playing on the TV anymore, if they hop on the subway to head to work but keep playing along the way, they just disconnect, and you should handle that gracefully and correctly. Then we configure the second display; pretty straightforward. If your screen count is greater than one, you have a second display.

We get the screen and create a new window. We introspect the screen size, grab the size out of it, and create the window. We then create a view controller for it, as you would expect, just like you do for your main display. And finally, super straightforward, you unhide the window, or rather, make sure it's visible.
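
Put together, those steps look roughly like this (a minimal sketch, assuming an externalWindow property and a hypothetical TVGameViewController for the TV content):

```objc
- (void)configureSecondDisplay {
    if ([[UIScreen screens] count] > 1) {               // a second display exists
        UIScreen *external = [[UIScreen screens] objectAtIndex:1];
        self.externalWindow = [[UIWindow alloc] initWithFrame:external.bounds];
        self.externalWindow.screen = external;          // attach the window to the TV
        self.externalWindow.rootViewController = [[TVGameViewController alloc] init];
        self.externalWindow.hidden = NO;                // "unhide the window"
    }
}

// At launch, set up whatever screens exist, then watch for connects and
// disconnects, since either can arrive after your game has started.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(screenConfigurationChanged:)
                                             name:UIScreenDidConnectNotification
                                           object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(screenConfigurationChanged:)
                                             name:UIScreenDidDisconnectNotification
                                           object:nil];
```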

So, handling rotation. This is pretty straightforward, but there's a little nuance here. This should be the default method that you write in your code: override the default with this. The idea here is that for an iPad, you want to handle all rotations; no matter which way someone turns an iPad, you handle it.

But for a phone, per our HI guidelines, users don't turn the phone upside down, and it should not have that rotation. So the phone effectively has three viable rotations: straight up, and each side. Upside down we don't use. This means we'll rotate the image in the system; our Core Animation layers will rotate it correctly for you, so we put it out to both displays you're driving with the correct rotation.

We do this because if you do it in OpenGL, for example, and say, hey, my device got rotated, I'm just going to change my matrices and rotate it when I do my OpenGL, what happens is you'll rotate your content to match the player's angle, and your TV will turn upside down. So you really want to make sure you do it this way rather than doing it in OpenGL. That's the correct way to handle rotation.
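
The speaker doesn't name the method on stage, so treat this as an assumption: on iOS 6, one way to express "all rotations on iPad, no upside down on iPhone" is to override supportedInterfaceOrientations on your main view controller and let UIKit rotate the layers for both displays.

```objc
- (NSUInteger)supportedInterfaceOrientations {
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        return UIInterfaceOrientationMaskAll;              // iPad: all four ways
    }
    return UIInterfaceOrientationMaskAllButUpsideDown;     // iPhone: no upside down
}
```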

And then we talked about many of these ideas with design: where is the user looking? Are they looking up? Are they looking down? How do they control the game, and what should be displayed on the device and on the TV? And the last one, something I want to carry through to the end of the talk, is about updates, about the performance of your application.

There are a lot of things you can do to adjust the performance of your application. We don't have to go back to the old paradigm of scaling everything down to make it perform; there are different ways to adjust this. So when you're using second display, it may not be appropriate to run both displays at full resolution, or to update both at 60 hertz.

Maybe the heads-up display that's acting as your controller is doing great getting the gyro input at a high frequency, but you don't really need to update it at more than 20 or 30 hertz. So you can limit the amount of information going to the second display while concentrating the majority of your performance on the big screen.
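
One possible way to implement that throttling (my sketch, not from the session): give the secondary display its own CADisplayLink and skip frames. A frameInterval of 3 on a 60 Hz display yields roughly 20 Hz.

```objc
CADisplayLink *tvLink = [CADisplayLink displayLinkWithTarget:self
                                                    selector:@selector(drawSecondaryFrame:)];
tvLink.frameInterval = 3;  // redraw the secondary display at a third of the device rate
[tvLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
```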

So that boils down to two basic breakdowns. You have the action game, where everyone's looking up and there are simple controls on the device in front of you; and you have the other kind of game, where people are mainly doing input via the device, and the eyes of the person holding the device are down.

So that's second display. We'd love to see some of the things you can do with it. We think games are great when you plug in, or do an AirPlay mirror, and the device does exactly what users expect and enhances the gaming experience using second display. Users really react to that. So that's AirPlay.

Next, let's talk about Core Image. Core Image is a fantastic technology for image manipulation that's available across both iOS and OS X. And when most of you think of Core Image, you think of this kind of thing: if you're familiar with it, you think it's a photo manipulation API.

So Core Image is the backbone of iPhoto and Aperture, but it's more than that. It provides optimized pixel-level effects, it can chain them together, and it optimizes that chain. What you have here is someone who wanted to take an image and apply an artistic effect: in this case a sepia filter, then a hue shift, and then a contrast adjustment to get the effect they wanted.
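
As a sketch, that three-filter chain might be written like this (parameter values are guesses):

```objc
CIImage *input = [CIImage imageWithCGImage:photo];

CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:input forKey:kCIInputImageKey];
[sepia setValue:@0.8 forKey:kCIInputIntensityKey];

CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
[hue setValue:sepia.outputImage forKey:kCIInputImageKey];
[hue setValue:@0.4 forKey:kCIInputAngleKey];            // radians

CIFilter *contrast = [CIFilter filterWithName:@"CIColorControls"];
[contrast setValue:hue.outputImage forKey:kCIInputImageKey];
[contrast setValue:@1.2 forKey:kCIInputContrastKey];

// Core Image concatenates and optimizes the chain; nothing is rendered
// until you draw contrast.outputImage into a CIContext.
```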

Well, how can you apply that to games? If you look at movies, and you look at games over the last few years, games have become more and more cinematic.

And this gives you a really good hook into that cinematic feel. Look at a movie; let's go back a little ways to The Matrix. The Matrix was color graded: when you're in the Matrix, everything has this green tinge to it, which creates an uncomfortable feeling for the viewer. The real world didn't have that.

You can do the same thing in your game. If you have two areas of play, or when the player is damaged, you can make the player feel uneasy. You can do things at the image level, as a post-processing effect on your game, to add play value.

You see some of the high-budget games doing this. Well, with Core Image, you can bring that to OS X and to the mobile space on iOS, and do the same level of color grading and post-processing effects. So let's look at an example.

So we have Mr. Big Sword here, and you want a glow effect. You could do this in OpenGL using shaders, or make some additional textures, or do some things with dot products; but instead, let's just take that sprite and apply a glow effect. If you're careful about it, you can control the alpha correctly, so you can do this on a per-sprite basis: you render into a texture, apply the effect, and stamp the sprite down in your 2D game, for example, with that Core Image effect in place.

Another example: you've rendered this great scene. I think we saw this last year; it's this great dungeon scene, and you want foreboding. You want this to be darker and to narrow the user's vision. Well, you can easily do that with a vignette effect.

So you do a post-process on it: you apply the vignette effect, a filter we already have, and you bring the scene down in and focus the user. In this case, maybe you have some sort of surprise, a creature coming out from that hallway. You didn't want to just bake the lighting into your scene; you wanted to make it more closed in.

Another example is a sprite-based game. In this case, you have your standard sprite-based shooter, and you want to do something when the player gets damaged and it's a little harder to play, or for a screen transition. You can simply use a blur effect here, a full-screen blur using Core Image.

And the important thing here is that you don't have to write a whole library to do this. We give you the library, we give you optimized effects, and you can chain them together in a million different ways. They're like building blocks: while one block can be interesting, put a lot of blocks together and it gets really interesting.

Here's another example. We took that same scene, but let's continue the story: after you escaped the corridor and got around the corner, you realized it was a control room, and the control room had monitors throughout this haunted mansion. You wanted to indicate to the user that it really is a TV screen or a monitor. So you use Core Image to get this contrasty, noisy monitor effect, and that gives players the idea that it's different from looking at the scene normally.

And again, this is to enhance that production value and make players really attracted to your game. So how does this work? For OpenGL, you render to an FBO, render to a texture, and use that as an input to Core Image. All of this can stay on the GPU; you don't have to come back to the CPU to do it. So it can all stay accelerated.

Core Image is fully accelerated. It comes across, and then you use the result to draw to your screen. You can also replace the word "screen" with "sprite." For example, if you were rendering a very complicated sprite using 3D, you could render the sprite, apply a Core Image effect like the glow, and render to a sprite that you use later as an OpenGL texture back in your scene.
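
Here's a hedged sketch of that GPU round trip on iOS 6, with illustrative names: the offscreen texture is wrapped in a CIImage, filtered, and drawn by a CIContext created on your EAGLContext, so pixels never leave the GPU.

```objc
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext];

// Wrap the texture you rendered offscreen (new in iOS 6).
CIImage *scene = [CIImage imageWithTexture:sceneTexture
                                      size:CGSizeMake(width, height)
                                   flipped:YES
                                colorSpace:nil];

CIFilter *vignette = [CIFilter filterWithName:@"CIVignette"];
[vignette setValue:scene forKey:kCIInputImageKey];
[vignette setValue:@1.5 forKey:kCIInputIntensityKey];

// Draw the filtered result into the current GL drawable.
[ciContext drawImage:vignette.outputImage
              inRect:CGRectMake(0, 0, width, height)
            fromRect:[scene extent]];
```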

One thing to think about here is the update rate for your effects. Not every effect needs to be updated at 60 Hz. For sprites, maybe you can process them statically, or if you have something very dynamic, update them at the rate that sprite updates. You don't need to do everything at the frame rate; you can do things at a rate appropriate to the effect itself. That saves you some processing power, so keep it in mind when you're using Core Image.

Here's the film grain example we showed. In this case, you have a couple of Core Image effects: noise generators to generate some noise, and a minimum filter. The image coming in from the side is combined with the first set of noise and blended together, and then you can blend in another filter. The idea here is that you have multiple blends you can do.

You can blend things in different orders, different pieces. So you can combine a long filter chain and make a very compelling example. So, speaking of compelling examples, I'd like to bring up Jacques to talk about a little example he put together in a few days using Core Image and showing how it can add to your gaming experience. Jacques. Thank you.

Hi there, I'm Jacques. As Geoff just told you, I'd like to show you an example of a little application I was toying with. Now, Core Image is a great API for modifying the effects that you've made. So here I have this simple, kind of gloomy forest where I want a sense of depth. I've created this with just flat graphics: this is procedurally generated terrain, and I've hand-painted a couple of tree trunks and some branches. I'm conveying the idea, but that's about it.

So I can use Core Image here to help me convey what I'm really after a bit better. The first problem here is there's no sense of fog, and what I'm really after is a ground fog. So I'm going to turn on a filter here to apply a little bit of noise and a linear gradient.

And you can see how I have this moving fog that's sort of stuck in the little valleys between the different--

[Transcript missing]

So I'd like to add some more visual effects on top of this. And, you know, a dark, dim forest needs fireflies. Now, I could have drawn these, but really, since I'm using Core Image, I've got all these amazing tools: I just have a point, and I apply bloom to that to get this sort of glow. And it's tiny, so I can do this really, really quickly.

And you see how I've got this little ambiance going. Okay, so I'm going to move on to more full-screen effects now to make this feel a bit more immersive. It doesn't quite feel old enough; not like you're at an old cinema and it's black and white.

That's kind of what I'm after. I'm going to apply a vignette and a camera tracking error to it, and also a varying light bulb. So you can see how it's like a film lit with an incandescent light bulb, being projected, and it's feeling pretty grungy right now.

And these are some simple things you can do with some fairly simple Core Image filter chains. You can see I'm doing all of this in real time; it's about picking what to do when. And then, of course, because this is Core Image, I can pick another crazy filter if I want to. So why not just distort the whole world? And I can just play with it. And this is just a few lines. So those are some of the fun things you can do with Core Image.

Thank you. So as you saw, Jacques used a number of Core Image filters to create some great cinematic effects. What's interesting is, first, this was done in a few days, with very little art, and it did a great job of conveying the idea he wanted by layering these simple effects on top of each other. The alternative would have been to build your own kind of image processing library, which you probably don't want to do. You want to use the tools that, as Jacques said, you're given.

So use Core Image for that; building your own is probably not a good choice. Or you could have done a lot of post-processing effects yourself in OpenGL. But some of these effects really live in image space, and you should recognize when they do. So render your scene the way you want it to be, whether it's a sprite-based side scroller with parallax like Jacques has, or a full 3D application.

And then you can use Core Image to apply those kinds of effects on top of it in real time. Here is a list of some of the filters that are available, on both iOS and OS X. An interesting bit of trivia about that demo: it actually used only six of these filter effects, combined in different manners. So you can see there are almost infinite possibilities for your game, and we can't wait to see what kinds of things you can do with Core Image.

So that's Core Image. It's available for OS X and for iOS, and it's great for post-processing in your rendering pipeline. Normally you would think of it as just a photograph processing environment, and we do use it on the iPhone for auto-enhance, but it can also be used in real time as a great way to add depth, complexity, and visual presence to your game. Core Image.

Yesterday, we announced the new MacBook Pro with Retina display. It's a brilliant, brilliant screen. I hope all of you have had a chance to look at the ones out there; it is absolutely stunning and amazing to get that level of clarity. And we want to talk about using that in the game paradigm. We now have Retina displays across our products, across all of our platforms.

And you can achieve this level of fidelity. This actually looks fantastic up here, but realize that this is not at full pixel resolution; we had to scale it down significantly to get it onto the slide. It looks even clearer when you're playing a game like Diablo 3 on the Retina display.

So when you're developing for OS X, keep Retina displays in mind, and even when you're developing for iOS, keep in mind you have Retina displays there to really utilize that full fidelity. So how do you do that? We've talked about iOS previously; let's review for the Mac. We've done a lot of work in the OS to help you out.

So the first thing is you have to make sure your artwork matches the resolution of the display. This is Reminders before it was updated for Retina display: it had pixelated artwork, because the artwork wasn't at the same resolution as the display.

And when you update it, it looks brilliant. We take care of a lot of this for you: anything up there that's a system font, we're going to handle for you; anything that's a system control, that artwork is already up-resed for you.

That's not a problem. The only thing you have to do is supply the artwork for the pieces that are actually in your application; that's the first step. The second of these four steps is to opt into high-resolution OpenGL. You say you want the best-resolution OpenGL surface, and as a second part of that step, make sure you convert the bounds to pixels and provide those to glViewport. And you see this convertRectToBacking:.

That is a routine, and there's a family of those routines; it's really important to understand them and use them properly. In this case, glViewport is, of course, pixel based: glViewport takes pixels, and self.bounds is in points. So you do that conversion, it gets the right thing, and you have a viewport that spans the entire screen.
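
In an NSOpenGLView subclass, those two steps look roughly like this (a minimal sketch):

```objc
[self setWantsBestResolutionOpenGLSurface:YES];   // opt in to high-resolution OpenGL

// self.bounds is in points; glViewport wants pixels, so convert through
// the backing store.
NSRect backing = [self convertRectToBacking:[self bounds]];
glViewport(0, 0, (GLsizei)NSWidth(backing), (GLsizei)NSHeight(backing));
```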

Eliminate deprecated API. If you're using QuickDraw, move away, move off of it. NSMovieView was great a number of years ago, but right now it does not understand how to do high-resolution graphics. QuickDraw does not understand Retina displays. So get off these deprecated APIs. And I'm not talking about APIs that you're guessing are deprecated; I'm talking about the ones that are actually marked deprecated. You should be able to tell in the latest SDKs which API is deprecated.

Move on to the modern equivalents. Many times we'll point you to a new API; move to those and you'll be doing great. And finally, as I alluded to, make correct use of pixels and points. For example, in this case you get the size of the image in pixels, but you want to pass it to initWithCGImage: in points. So you need to convertSizeFromBacking:, in other words, that family of conversion routines. Realize when you have to convert between points and pixels.

And if you notice, the size here is in points, and the size that came out of the CGImage is in pixels. So understand what your routines take in and what they put out. It's straightforward; just make sure you get that math right. It should be fairly clear when you don't: you'll get fuzzy images, or things that aren't the right size, quarter-sized images of pieces. So it should be fairly obvious as you walk through your application.
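
And the conversion the speaker describes, as a sketch inside an NSView subclass:

```objc
// A CGImage's size is in pixels; NSImage wants points, so convert with the
// same family of routines rather than assuming a 1:1 ratio.
NSSize pixelSize = NSMakeSize(CGImageGetWidth(cgImage), CGImageGetHeight(cgImage));
NSSize pointSize = [self convertSizeFromBacking:pixelSize];
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:pointSize];
```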

So that's Retina displays from the standpoint of the mechanics. But as I spoke of before, one thing we wanted to talk about is moving to the future of the graphics and display pipes and how to think about this. A number of years ago, we had CRTs, and it was fairly easy to scale performance, especially when you were drawing with a CPU that had a limited ability to fill pixels and a minimally accelerated graphics card: you provided the user with a switch that would allow them to pick resolutions. I remember, what game was it? It was a very old Bungie game, and I had a friend who was great at it.

And he played the game in literally a one-inch by one-inch window to get the frame rate out of it. That shouldn't be something we're making users figure out today. A user should not be asking, well, how big do I need to make my screen to play this game properly? You have enough facilities with the APIs we give you to render the game at a reasonable size, understand what the frame rate is, scale that to the different performance of different machines, and fill the screen. So leave the screen at native resolution, and render the game at an appropriate resolution.

To get the performance you need, you have a few choices here. First, if you can render at Retina resolution, fantastic, do it. Go to full Retina resolution on whatever platform you're on, and render at that brilliantly sharp resolution; give the player pixel-for-pixel in your game. If you can't, though, you can always decide to render to a smaller buffer in the background.

So you render to a smaller buffer, and that scales to the screen; it will scale to fit with bilinear scaling. And it turns out that in many cases, if you're rendering somewhere between quarter size and full-size Retina, there's a sweet spot, a good spot for you, where you're effectively doing much less pixel processing but the output looks very similar, very close to native Retina. Not quite the same, but very close, and the trade-off for you is really good.

So again, if you're pixel bound, if your GPU cannot drive the number of pixels you have on your screen, you want to scale that resolution down, reduce the number of pixels, and then you'll be able to bring that frame rate up. A good test for this is to scale your backing store to be smaller and see whether your frame rate changes.

If it doesn't change, that probably means, unless you've done some strange things in your code, that you're not pixel bound and something else in your application is limiting the frame rate. So test that before you start playing around with scaling and assuming it's going to do something in your app. Make sure you're actually pixel bound.
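
A plain-OpenGL sketch of the scaled-buffer idea, with illustrative names: draw the scene into a smaller offscreen framebuffer, then scale it up to the native-resolution drawable with bilinear filtering.

```objc
GLsizei renderW = (GLsizei)(nativeW * 0.75f);  /* somewhere in the sweet spot */
GLsizei renderH = (GLsizei)(nativeH * 0.75f);

glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);   /* offscreen, reduced size */
glViewport(0, 0, renderW, renderH);
drawScene();                                   /* hypothetical scene pass */

glBindFramebuffer(GL_FRAMEBUFFER, 0);          /* native-resolution drawable */
glViewport(0, 0, nativeW, nativeH);
glBindTexture(GL_TEXTURE_2D, sceneColorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* bilinear */
drawFullscreenQuad();                          /* upscale to fill the screen */
```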

And finally, as we talked about, there's this post-processing paradigm: make your game have a lot of visual value for the user. In this case, render to texture. You render to a texture, you can decide what size that is, and then you add a filter onto the back of that and put it out at the final resolution. You can do a lot of different things there. You want to optimize your app for a great user experience.

You don't want the user effectively administering the system, trying to figure out what resolution to pick. You don't want to come up in 1024 by 768 because that's what you've done for 12 years. You want to come up using the full resolution available, make your game brilliant, make them love your game, so when a new player picks it up, the first thing they see is something that just looks awesome. Not "oh, choose a resolution."

"I don't know what resolution I've got, so I'll just pick one," and they may get stuck there. So think about that from a user-experience standpoint. Leave the display in native mode, render your content at the right size, and your users will be much happier for it; they'll have a great, great experience. So that was Retina display.

So the final thing I want to talk about is graphics tools. We talked about being pixel bound. The previous discussion of dev tools showed some of the great tools we have for OpenGL ES. We have three fantastic tools; we introduced them last year, and we continue to improve them. First, the OpenGL ES Performance Detective. That gives you the high-level, really quick, very actionable view of what you can do to improve the graphics performance of your application.

The second tool we provide is the OpenGL ES Analyzer instrument. This hooks into Instruments and gives you that under-the-covers, in-depth look at what the GPU is doing, exactly what's happening on the system, so you can figure out where your bottlenecks are. This is the really pull-the-covers-off, look-inside view of what's going on.

And the OpenGL ES Debugger. The ES Debugger is brilliant, an absolutely brilliant tool. It allows you to introspect a frame of your application: to tear it apart, look down to the minutest detail of OpenGL state, see exactly what commands are being issued and exactly how all the objects in your scene are being drawn. It allows you to really dissect your graphics and understand where your problems are, where your performance issues are.

And now we've added some things to it so it also works as a great development tool as you develop your shaders and modify your applications. All of these are built into the Xcode environment. Let's talk about some new things. Like I alluded to: shaders. Shader edit and continue. Fantastic feature. You have a shader that's doing something in your application and you want to change it, modify it: change the color blending, change how much contrast is in there.

Or change how it handles vertices or fragments. You can do that inside the debugger. You don't have to come out, rewrite, recompile, rerun; rewrite, recompile, rerun. We've all done that, and it's not a great way to debug. This is great: you can fix it, you can change it, you can fix problems, edit, and continue.

We have an integrated OpenGL ES expert. You have this expert panel, and I'll show it in action in a little bit. It really looks after your application. It's kind of the nanny behind you that says, hey, by the way, look at that: you have a redundant call. You've called glViewport with the same coordinates.

So you're wasting your time. And this is good for redundant state changes, for GL errors, for a lot of the common mistakes in OpenGL. And every time you run your application, you can bring it up; it's right there to help you maintain your application and make sure it's really clean.

We have save and load of captured frames. In this case, someone finds a problem: maybe you have an artist, or someone doing some of the scripting, and they hit an issue. They can use Xcode, run the frame to the problem, stop and capture that frame, and then send it off to their OpenGL expert to debug exactly what's going on.

This is a fantastic debugging tool, especially with more than one person, or if you don't have time right now: you saw a problem, you capture the frame, and you look at it and fix it a little later. That's what save and load of captured frames is great for.

Integrated Performance Detective. As the dev tools section talked about, we have the Performance Detective built right in. It gives you that high-level view of what's going on with your application, with very actionable items, and it's now integrated right into Xcode. And we put a ton of time into that.

And of course, the tools are faster, more accurate, and give you more detail. The team has done a fantastic job building brilliant tools, so let's take a look at them and see what we can do. This is a project we created to show off some graphics technologies here.

Let's hide that out of the way, and we'll close that one. So we have our application, and we're going to just run the app. We're going to compile here, and we have our laptop; we're all good. Or, not laptop: our iPad. And we're going to switch to the iPad and see what we get.

So we have this fantastic simulation of this temple with some lights. And it looks great, but I notice that something is missing. The lights inside: you can see them reflecting on the surfaces, but there's nothing casting the light. We've modified this, we've played with it, and we don't really know what's going on. So let's jump into the debugger and try to figure out what the problem is. We're going to go back to the primary display here.

We see this little camera icon right here, so we're going to hit it. And what it's doing, if you look at the top, is capturing the OpenGL ES frame; that's what it says right up at the very top of the screen. So we let it capture the frame: it paused the application, grabbed all the OpenGL calls, and put them into this introspection area here. Let's take a little tour of this so you can see all the pieces. Some of it is hard to read at this resolution,

so I'm going to talk through what some of the areas are. On the side over here, it says light prepass. It has that frames-per-second meter that the dev tools folks talked about. You have your rendering commands: if you notice, this says glPushGroupMarker, and these are glEnable, glEnable, glBlendFunc. These are all the OpenGL commands, every single one, with the parameters that were passed in to render this frame. So you can completely introspect exactly how the single frame was built.

At the bottom area down here, you have the GL context. If I look at glViewport, it gives me the information in the viewport; it gives me the active texture unit. All the things you would expect in OpenGL state are portrayed here.

So when you stop it, you can ask: what was my state? Did I have blending turned on? Where were the texture units set? And finally, over in this section, you have the GL objects. These GL objects are the active objects for the frame right now.

And over here, this is the final frame of the scene. That's okay; I mean, that tells you what it looks like. But let's say you wanted to look at an earlier point as you put the scene together. So we're going to go back to here.

And what you see here, on the right side, is a partially rendered frame. You see a depth buffer, and then an empty stencil buffer. Let's go way back in time, so to speak, and you can see how the frame is actually being rendered. Here we're actually rendering the temple itself. I can hide the wireframe, and you can see that's the actual temple.

If we move forward a little bit, you can see even more. We'll hide the wireframe here, and you can see we've rendered the tree in the temple for part of the lighting pass. So what was the problem we were looking at? As you see this frame, you see lighting on the structure but no lights casting it. So we're going to use the integrated issue detective, and of course we have tooltips. The issue navigator here: we'll click on it.

And if you look at the issue navigator, we see two errors called out. It turns out that our debugging tools will call glGetError at the end of every one of your calls and capture those errors for you, so you can actually see what the errors are. Let's click on an error and see what it says.

What it's saying here is that for this glDepthFunc call, an enumerated argument has an unacceptable value. Well, I know that glDepthFunc can't take GL_FALSE, so maybe I should change that, but I don't think that's what was intended. I think they actually intended glDepthMask here, because glDepthMask can take false. So we'll put glDepthMask. And this other one down here, the second one, has the same problem; we'll put glDepthMask there too.
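
In code, the bug the expert flagged and the fix look like this:

```objc
glDepthFunc(GL_FALSE);  /* wrong: GL_INVALID_ENUM; glDepthFunc expects a
                           comparison such as GL_LESS or GL_ALWAYS        */
glDepthMask(GL_FALSE);  /* intended: stop writing to the depth buffer    */
```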

Let's stop the program and run it again, and we can switch back to the iPad as it builds. So we built, we succeeded, it looks pretty good, and we'll see if we actually get the lights as we expected. Perfect. So that was using the debugger to find a GL error in your application that, as you coded it up or as you changed code, you may have missed.

It now shows the full simulation with the lights. The simulation's beautiful; they did a great job with this lighting effect. So, game technologies. We have sessions all week on all kinds of different game technologies, from media to graphics to tools to Game Center. Look at your schedule and go to those sessions. Thank you very much. Have a great conference.