
WWDC14 • Session 610

Building a Game with SceneKit

Graphics and Games • iOS, OS X • 43:26

Dive into the practical workflow of developing a 3D side-scrolling game using SceneKit. See how to get started, and learn about tools for managing game assets, creating particle systems, and editing nodes. Understand how SceneKit integrates with your art pipeline and learn how to implement lighting, shadows, and other visual effects. Come away from the session with a demo game in-hand and ready to play.

Speakers: Amaury Balliet, Thomas Goossens

Unlisted on Apple Developer site

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Hello.

[ Applause ]

Welcome to Session 610: "Building a Game with SceneKit." SceneKit is an amazing technology that makes it easy to write casual 3D games. Because SceneKit is a high-level API that integrates well with other Cocoa frameworks, it's really easy to write games. We built this demo for WWDC, and it only required a few lines of code.

We believe that building a casual 3D game with SceneKit is really easy, and anybody can do that. It's really fantastic. So, this is a hands-on session, so hopefully you are already familiar with the basics of SceneKit. You should know what a scene graph is; that nodes have attributes such as geometry, camera and lights; and if not, we have great sessions about this, one just before and one last year. I encourage you to check these presentations because they have 3D slides entirely made in SceneKit.

So, in this session, we will start really quick by showing you how to start in Xcode, how to add 3D assets to your project, and have your first scene rendered to the screen. Then we will show you our Bananas demos. It's a great demo because it shows many features of the SceneKit framework, and we will use that demo throughout the rest of the session. We will explain to you how we made it so then you can create your own games. And finally, Thomas will join me onstage to talk about performance and creating custom tools for SceneKit.

Okay, let's get started. The first thing you want to have when building a 3D app is a view to render your scene, and that's just as easy as you'll see. SCNView integrates with Interface Builder, so all you have to do is drag an SCNView from the object library and drop it onto your interface.

Then you open the inspector, where you can set the visual properties, and you simply specify the name of the 3D scene. You click Build and Run, and boom: without having to write a single line of code, you have your first scene rendered on the screen. Now, if you are starting a new project, you might want to start with the new game template. It's really convenient because it creates a universal app that runs on iPhone and iPad, and it has a full-screen 3D view that displays a scene you can interact with.
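
For reference, if you prefer to do the same setup in code rather than Interface Builder, the equivalent is only a few lines. This is a minimal sketch; the scene file name and the `scnView` property are placeholders, not names from the session:

```objc
#import <SceneKit/SceneKit.h>

// Somewhere in your view controller; "level.dae" is a placeholder asset name.
SCNScene *scene = [SCNScene sceneNamed:@"level.dae"];
self.scnView.scene = scene;
self.scnView.allowsCameraControl = YES;          // let the user orbit the camera while prototyping
self.scnView.autoenablesDefaultLighting = YES;   // quick default lighting until you add your own
self.scnView.backgroundColor = [UIColor blackColor];
```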

The way you add 3D assets to your game is with SceneKit asset catalogs. SceneKit asset catalogs are a new feature in Xcode 6, and they allow you to organize and optimize your 3D assets. The structure of SceneKit asset catalogs is preserved when they are copied into your target. Also, they automatically track files that are added to them or removed from them on disk.

They are really convenient because they can optimize your 3D assets for you. For instance, they help with up-axis conversion. SceneKit follows the y-up axis convention, which means that the positive y axis is the one that points up. This is a convention that is followed by many other applications and frameworks, but some exporters do things differently and use a z-up axis convention. With SceneKit asset catalogs, you don't have to think about that. We automatically and transparently convert all the animations and geometries in your scene so that they follow the y-up convention.

We are also able to automatically interleave your geometries at build time. This means that your vertices' positions, normals, and texture coordinates are stored in a single buffer, which makes the GPU really happy and renders your geometries faster. And finally, on iOS we support PVRTC textures. If you have two versions of the same texture, one with the PVRTC file extension - a compressed format that is optimized for iOS - and another version - for instance, a PNG - Xcode will automatically select the PVRTC texture when you target iOS and the regular version when you target OS X.

So, this is how it works: your artists work in their favorite authoring tool; they export all the animations and models you want to use in your app in a COLLADA document. Then you take over and you import this document into a SceneKit asset catalog. But there is much more you can do with Xcode.

To help you have a better understanding of your scene, we built a scene editor into Xcode. This tool is really great, not only because it allows you to have a better understanding of what is inside your scene, but also because it's useful to tweak and refine the scene. For instance, with direct manipulation you can place the nodes where you want, you can rotate them and scale them, and that's much less code to write in your app.

You can also see how nodes are arranged. You can reparent them. You can merge them. You can create and delete nodes as well as node attributes. You can immediately see which node has a camera, light, or geometry attached to it, and it also allows you to have a quick look at all the entities in your scene. Remember that in SceneKit node attributes are shared by default, and this is a great way to know how many unique objects you have in your scene, which helps for performance.

Of course, we have inspectors in this editor. You can edit node properties, and as you make changes, they are immediately reflected in the viewport. This works for node attributes as well. You can edit all the camera, light and geometry properties that are exposed in the raw APIs. And finally, it works on materials.

This is really cool. It allows you to finely tweak the rendering of your objects. You can control exactly how they will render in your scene, because what you see here in the editor is exactly how SceneKit will render the scene in your app. It's a huge timesaver.

In addition to the scene editor, we have a particle system editor. It's really useful to edit particle system properties, and it becomes very convenient when you have to do things that would be very tiresome in code. For instance, finding the right animation curve to control the size of your particles can take a very long time in code. With the editor and immediate feedback, it's really quick and easy.

So, we've already covered a lot here. You know how to work with an artist. You know how to tweak scenes in the editor. You can automatically optimize them using asset catalogs, and you can render them on the screen without having to write a single line of code. And now, to show you how truly simple it is to write a casual 3D game, we built a demo.

So first, it's sample code, so should you have any questions about what you are going to see on the screen, you will be able to dive into the code and see how it was done. And it's a great sample because it illustrates many features of SceneKit: animation, lighting and shadows, physics, particles and advanced rendering. So, let's have a look at the demo.

So, this is our game named "Bananas." As you can see, we are in the jungle controlling an explorer. We use gestures to make the character walk along the path. On the track, we can collect bananas, but there are enemies. We have some animated monkeys that throw coconuts at us. So [inaudible], you can jump and run.

Look how gorgeous the scene looks. We have [inaudible] real-time lighting as well as real-time shadows. Look how the character is lit when she approaches the torches. As we advance in the game, we encounter obstacles. Here is a lava flow, and falling into it is not a good idea.

So here we have to start again, and you will notice how the explorer produces dust as she runs. If you look at the background, you will notice a volcano. The volcano is erupting, and this is where the lava comes from. In fact, everything in this scene is animated: the character and the enemies, the lava, the torches, and the vines as well. And as you can see, everything is 3D in this scene. When the character moves, the camera follows in 3D space and offers new points of view.

We also have a soundtrack and sound effects. There is also a basic UI that shows you how much time is left and your current score. This demo is really meant to run on an iPad, but it works on iPhone too, at 60 frames per second. Okay. Thanks, Thomas. Thank you.

[ Applause ]

So, this was "Bananas." It's an all-Objective-C project, and it's a small project: only 2,700 lines of code for everything you saw onscreen today, for iOS and OS X. So you don't have to have a big team to write a casual 3D game with SceneKit. Bananas was written by a team of one designer and only one engineer. It's really easy to make such games. So first, a quick look behind the scenes. Here is our world. It's a simple track on which the character walks.

Look how palm trees and rocks do not always stand on the ground. This is because your scene should only be made of elements that will, at some point, be visible from the camera. Here is a side view. As you can see, vines aren't attached to anything. They float in the air, and the world suddenly ends. There's no need to have extra geometry pushed to the GPU if it's never rendered onto the screen.

We have low-poly geometry for the mountains and the volcano in the background. It's absolutely fine to cheat when you write 3D games. Here, the scene is sparse, and you have to find tricks to make the scene gorgeous but really cheap to render. So to give you some idea of reasonable numbers, here are statistics from the game.

We have 10 lights in the world, but each object is only affected by three lights at most. We have 200k polygons in the world, but at each frame, only 80k are rendered. And finally, we have at most 50 draw calls per frame, and that will make more sense when Thomas talks about performance.

So of course we use SceneKit asset catalogs in "Bananas." We have about 25 3D documents that store animations, models, textures and particle systems. So how do you combine the many different documents you want to use in your game? Well, let's take this example. Here we have a document which stores the jungle, and another document which stores our animated monkey, and we want to add that monkey to our world. The first step is really easy. All you have to do is load the two scenes.

Where you have to be careful is that you cannot directly add the root node of one scene to another scene. This is because root nodes are not meant to be reparented. What you have to do instead is to retrieve the node you're interested in by using its name, and then add it to the destination scene.
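
A minimal sketch of that two-document workflow; the file and node names here are placeholders, not the actual names used in the Bananas project:

```objc
SCNScene *jungleScene = [SCNScene sceneNamed:@"jungle.dae"];
SCNScene *monkeyScene = [SCNScene sceneNamed:@"monkey.dae"];

// Don't add monkeyScene.rootNode directly: root nodes can't be reparented.
// Instead, retrieve the node you are interested in by name...
SCNNode *monkey = [monkeyScene.rootNode childNodeWithName:@"monkey" recursively:YES];

// ...and add it (or cheap copies of it, which share attributes) to the other scene.
[jungleScene.rootNode addChildNode:monkey];
[jungleScene.rootNode addChildNode:[monkey clone]];
```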

And of course, you can add multiple copies of that node. SceneKit is a high-level API that integrates well with the other APIs of the platform, so we support game controllers to control the character. But we also leverage gesture recognizers, and implemented our own "D-pad" gesture recognizer to make the character jump and go left and right. And on OS X, we also support game controllers and simply listen to keyboard events.

So how do we move the character? First, we have to animate it. Our character is skinned, which means that it has a skeleton with bones, and by animating these bones, we can deform the geometry and make the character adopt different postures. Our character can run, jump and be idle, and we have different animations for those bones in these files.

Animating a character is just as easy as retrieving a Core Animation animation by using the SCNSceneSource class. It allows us to retrieve an animation by using the unique identifier of the animation, which you can find in Xcode. And then, animating the character is just as easy as adding this animation to the character node.
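
Here is a hedged sketch of that pattern; the file name, animation identifier and `characterNode` variable are placeholders (the real identifiers can be read in the Xcode scene editor):

```objc
NSURL *url = [[NSBundle mainBundle] URLForResource:@"character" withExtension:@"dae"];
SCNSceneSource *source = [SCNSceneSource sceneSourceWithURL:url options:nil];

// Retrieve the animation by its identifier, as a CAAnimation.
CAAnimation *runAnimation = [source entryWithIdentifier:@"run_animation"
                                              withClass:[CAAnimation class]];
runAnimation.repeatCount = FLT_MAX;   // loop while the character runs

// Attaching the animation to the character node starts playing it.
[characterNode addAnimation:runAnimation forKey:@"run"];
```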

Okay, so now we are animating the character, but we still have to move it. There are many ways to do that, and here is the technique we used in "Bananas." In our authoring tool, we placed empty nodes at different locations in the scene. We then use a function of time to interpolate between these locations: a smooth parametric curve that goes through each of them. And then moving the character is just as easy as evaluating this function at different times, between zero and one.
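
To make the idea concrete, here is a simplified sketch of evaluating a path built from empty marker nodes. Bananas uses a smooth parametric curve; this sketch only interpolates linearly between markers, and the `markers` array of ordered SCNNodes is an assumption:

```objc
SCNVector3 positionOnPath(NSArray *markers, CGFloat t) {
    t = MAX(0.0, MIN(1.0, t));                      // clamp t to [0, 1]
    CGFloat scaled = t * (markers.count - 1);       // position along the marker list
    NSUInteger i = (NSUInteger)floor(scaled);
    if (i >= markers.count - 1) {
        return ((SCNNode *)markers.lastObject).position;
    }
    CGFloat f = scaled - i;                         // local parameter inside the segment
    SCNVector3 a = ((SCNNode *)markers[i]).position;
    SCNVector3 b = ((SCNNode *)markers[i + 1]).position;
    return SCNVector3Make(a.x + (b.x - a.x) * f,    // linear blend between the two markers
                          a.y + (b.y - a.y) * f,
                          a.z + (b.z - a.z) * f);
}
```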

And for the camera, when the character goes to the right, we simply [inaudible] the camera, and at each frame, we move it by a fraction of its distance to the character. This gives us these nice, smooth camera animations that are really pleasing when you play the game. Note that in iOS 8 and OS X Yosemite, SceneKit has built-in support for physics, and we use collision detection in many places in the game - to make the character stay on the path, and to detect when we are hit by coconuts and when we collect bananas, for instance.

How does it work? Well, each scene has a physics world, which has a delegate. Each time a collision occurs in the scene, the delegate is notified and can react. We can also explicitly perform ray tests, and that's what we use to compute the altitude of the character.
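
A minimal sketch of the contact-delegate setup (the class name and the reaction logic are placeholders, not the Bananas implementation):

```objc
@interface GameController : NSObject <SCNPhysicsContactDelegate>
@end

@implementation GameController
// Called by SceneKit whenever two physics bodies begin to touch.
- (void)physicsWorld:(SCNPhysicsWorld *)world didBeginContact:(SCNPhysicsContact *)contact {
    // Identify the two nodes and react: collect a banana, take a coconut hit, etc.
    NSLog(@"Contact between %@ and %@", contact.nodeA.name, contact.nodeB.name);
}
@end

// During setup:
//   scene.physicsWorld.contactDelegate = gameController;
```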

So we cast a ray and compute the intersection between that ray and the ground to calculate the altitude of the character. Animating items is different from animating a character. For items such as the bananas, we use actions, which were presented in the previous session and are really easy to manipulate programmatically.
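
Both ideas sketched, with placeholder node names and distances (not the Bananas values):

```objc
// Cast a vertical ray below the character and use the first hit as the ground.
SCNVector3 top = characterNode.position;
SCNVector3 bottom = SCNVector3Make(top.x, top.y - 100.0, top.z);
NSArray *hits = [scene.physicsWorld rayTestWithSegmentFromPoint:top
                                                         toPoint:bottom
                                                         options:nil];
SCNHitTestResult *ground = hits.firstObject;
if (ground) {
    CGFloat altitude = top.y - ground.worldCoordinates.y;   // character's height above the ground
    NSLog(@"altitude: %f", altitude);
}

// For items like the bananas, a repeating SCNAction is enough to animate them.
SCNAction *spin = [SCNAction rotateByX:0 y:M_PI z:0 duration:1.0];
[bananaNode runAction:[SCNAction repeatActionForever:spin]];
```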

So here's the lighting in "Bananas." As you can see, we have objects that aren't affected by any light; they use a constant lighting model. For all the other objects in the scene, we add an ambient light, so nothing is ever completely black. Next, we have the directional key light, and then a directional back light to add more contrast. Finally, for the torches and lava flows, we have omnidirectional lights.

With this dynamic lighting, we also want shadows, and we use multiple techniques in "Bananas." First, static shadows. Static shadows are suitable for objects that aren't animated in the scene, such as the palm trees. Static shadows are baked into textures, which means that they are rendered offline in an authoring tool, and that's why they are generally very complex and detailed.

Setting a static shadow is as simple as setting an image to the multiply property of a material. But we also have dynamic shadows, for objects that move, such as the character. For dynamic shadows, we use shadow maps. These are real-time shadows and are suitable for animated objects. And making your light cast shadows is as simple as setting a property.
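
Both in a short sketch; the node names and the image name are placeholders, and the iOS-style UIImage call is an assumption:

```objc
// Static shadow baked into a texture: multiply it over the ground material.
groundNode.geometry.firstMaterial.multiply.contents = [UIImage imageNamed:@"bakedShadow.png"];

// Dynamic shadow map for moving objects: opt the light in.
SCNLight *keyLight = keyLightNode.light;
keyLight.type = SCNLightTypeSpot;   // shadow maps need a light with a defined direction, e.g. a spot
keyLight.castsShadow = YES;
```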

Next, of course, you can mix techniques and use dynamic and static shadows together. After you make your light cast shadows, you can simply exclude the nodes that are using static shadows, so that they don't also cast dynamic shadows. This is achieved by using categoryBitMasks: you use exclusive masks on the light and on the nodes that shouldn't cast shadows.

So this is one kind of dynamic shadow, but we have another one, which we call projected shadows. Projected shadows are real-time as well, but they are simplified, and they are very suitable for low-end devices. Projected shadows use the modulated shadow mode: you set an image to the gobo property of a light, and then every object that is lit by this light has this image projected onto it. So in "Bananas," to make only the floor receive this image, we use a categoryBitMask.
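
A hedged sketch of that setup; the image name and mask value are arbitrary examples:

```objc
// Projected "modulated" shadows: a cheap alternative for low-end devices.
SCNLight *shadowLight = shadowLightNode.light;
shadowLight.type = SCNLightTypeSpot;
shadowLight.shadowMode = SCNShadowModeModulated;
shadowLight.gobo.contents = [UIImage imageNamed:@"characterShadow.png"];   // the projected shadow image

// Category bit masks so that only the floor is lit by (and receives) this light.
shadowLight.categoryBitMask = 0x2;   // hypothetical mask value
floorNode.categoryBitMask  |= 0x2;
```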

Next, particle systems. We use particle systems extensively in "Bananas." We use them for the torches and when the character walks. Using particle systems is really easy. All you have to do is load a particle system from a file and add it to a node. And because particle system properties are dynamic, we can make the character emit a lot of dirt when she runs and emit nothing when she stands idle.
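
Sketched below, with placeholder names and an arbitrary birth rate:

```objc
// Load a particle system edited in Xcode and attach it near the character's feet.
SCNParticleSystem *dust = [SCNParticleSystem particleSystemNamed:@"dust" inDirectory:nil];
[characterFeetNode addParticleSystem:dust];

// Particle properties are dynamic, so the emission can be modulated at runtime.
dust.birthRate = characterIsRunning ? 300 : 0;   // dirt while running, nothing while idle
```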

We also have some nice visual improvements in "Bananas." We use geometry animation for the vines, and we use texture animation for the lava and the volcano. You might have seen that smoke is emitted by the volcano, and that the lava flow is animated. This is done using shader modifiers.

What we do is have a shader modifier that continuously updates the texture coordinates of the lava and the volcano. It is really simple. Next, we have some visual postprocessing. We use SCNTechnique, which is new this year and lets us achieve color effects and image deformation. We use that in "Bananas" when we launch the game, to get this nice grayscale effect.
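
A hedged sketch of a geometry shader modifier that scrolls texture coordinates over time to suggest a flowing surface; the scroll speed is arbitrary and the real Bananas shaders are more elaborate:

```objc
// _geometry and u_time are provided by SceneKit's shader modifier environment.
NSString *scrollTexcoords = @"_geometry.texcoords[0] += vec2(0.0, u_time * 0.05);";

lavaNode.geometry.shaderModifiers = @{
    SCNShaderModifierEntryPointGeometry : scrollTexcoords
};
```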

We also use SpriteKit overlays. SpriteKit overlays are really nice because they are cross-platform - they work on iOS and OS X - and they let you build UIs that work everywhere. In "Bananas," we use that to display a simple UI that shows you the final score, and while you're playing, it shows you the current time and your score.
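
A minimal sketch of such an overlay; the label, font and layout are placeholders and require SpriteKit to be imported:

```objc
#import <SpriteKit/SpriteKit.h>

SKScene *hud = [SKScene sceneWithSize:self.scnView.bounds.size];
SKLabelNode *scoreLabel = [SKLabelNode labelNodeWithFontNamed:@"Helvetica"];
scoreLabel.text = @"Score: 0";
scoreLabel.position = CGPointMake(hud.size.width / 2, hud.size.height - 40);
[hud addChild:scoreLabel];

// The SpriteKit scene is drawn on top of the 3D rendering.
self.scnView.overlaySKScene = hud;
```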

And finally, we use an SKAction to play sound in the game. So as you saw, we have a soundtrack as well as sound effects. So I hope we showed you how simple it is to write casual 3D games with SceneKit. We truly believe that anyone, even a small team, can write games with SceneKit. And with that, I hand it over to Thomas, to talk about performance. Thanks.

[ Applause ]

Thank you. Okay. So let's talk about performance now. Xcode has some great tools to get information about performance, in particular the graphics performance report that you probably already know. In this release of Xcode 6, we are adding a new report, which we call the SceneKit report, that will give you CPU-time information about your game. This report is not available in the first seed of Xcode 6 yet, so you will have to wait for the next seed to have it in hand. But I would like to present how it works now, since we are talking about performance.

So this report will give you timing information about all the different steps of your game loop. You'll remember your game loop looks like this: it's made of both callbacks that you implement for your game logic, and of SceneKit's internal processing, like the rendering itself or the evaluation of physics, animations and constraints.

So the SceneKit report will tell you how many milliseconds are spent in those different steps. And this is for iOS only. But on OS X, we have something equivalent, directly available in the SCNView with the showsStatistics property. If you set this property to YES, it will display a little overlay on top of your view that you can expand to get more detailed statistics. And basically, it contains the same information as the iOS report. Sorry - here it is.

Okay. So this is how to get the information. Now, let's see how to analyze it. If your game is running slow, the first thing to understand is whether you are limited by the CPU or by the GPU. For this, use the graphics report and look at the third column.

It will tell you how many milliseconds are spent on the CPU side and on the GPU side. Both should be under 16 milliseconds if you want to run your game at 60 frames per second. Now let's say we are limited by the CPU. You can switch to the SceneKit report to get details about the different steps.

Then, depending on the numbers that are returned, there are some obvious actions we can take. For example, if it says that most of the time is spent in physics, you might want to reduce the number of dynamic bodies in your scene or simplify the shapes of your bodies. And if it's in particles, you might want to reduce the number of emitters or the number of emitted particles. So these are the obvious things. Now, less obvious is when the time is spent in the rendering itself, or in pushing the OpenGL commands.

That's very likely because your scene requires too many draw calls to render. The number of draw calls is something very important for your frame rate, and if you have too many draw calls, it will impact your CPU time. You can check how many draw calls your scene is issuing with the SceneKit report, by looking at the third column here.

This is for iOS. On OS X, you have the same information in the statistics overlay, in the lower right corner. And if this number is big and you are limited by the CPU, you want to reduce the number of draw calls. To do that, what you can try is to flatten your static objects into one single node.

For example, in "Bananas," we have many plants and palm trees that are static, and instead of adding one node for every plant, we grouped them and flattened them into a single node, so that it ends up as a single draw call that renders many plants at the same time.

To flatten objects, you have two options. You can ask your artist to flatten directly in the 3D software; 3D software has great tools to flatten everything into a single object. This is the recommended way, because everything is pre-computed, so there's nothing to do at runtime, and this is obviously faster. Now, if needed, you can also flatten things programmatically with the flattenedClone method on SCNNode. It will flatten the entire node tree and return a new node, with no subtree, that renders exactly the same.
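
A short sketch of the programmatic option; the group node name is a placeholder:

```objc
// Collapse a subtree of static plants into a single node (one draw call),
// then swap it in place of the original group.
SCNNode *plantsGroup = [scene.rootNode childNodeWithName:@"plants" recursively:YES];
SCNNode *flattenedPlants = [plantsGroup flattenedClone];
[plantsGroup removeFromParentNode];
[scene.rootNode addChildNode:flattenedPlants];
```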

So flattening is really going to reduce the number of draw calls, but don't flatten too much. Because, let's say you have a very large level: if you flatten everything into a single node, it will end up as a giant mesh with millions of polygons, and so you will lose the benefit of the culling of the objects that are not visible.

Your big mesh will always be visible, and so you will push millions of polygons at every frame, which is obviously not good for performance. And also, your huge mesh will be lit by all the lights in the world, which is not good for performance either, and I will explain why later.

So here is how we did it in "Bananas." We first split the level into chunks that are about the width of the viewport. That way, when the character progresses, SceneKit automatically culls the chunks that are not visible, and so they are not pushed to the rendering pipeline at all. And since almost everything in one chunk is flattened, that means that we are only rendering one or two chunks at a time, and so we are really only pushing a small number of draw calls to render the chunks that are visible.

So that's if you are CPU limited. Now, if you are GPU limited, again, you can use the graphics report and look at the third column to check how many milliseconds you are using on the GPU side. And if you are GPU limited, there are two things to look at.

The first one is the tiler; the second one is the renderer and device. This is here, in the second column. So let's consider the renderer and device for now. If your renderer and device usage is at 100 percent, it is very likely because you are either fill rate limited, or you are using fragment shaders that are too complex.

So let's consider the fill rate first. If you're fill rate limited, it means that you are asking the GPU to render more pixels - more fragments per second - than it can actually render. If that happens, you can first try to play with the content scale factor of your view.

By default, SceneKit uses a 2x content scale factor, which means full Retina resolution. But you can try, depending on the device, to switch to 1x or any intermediate value. You can also try to reduce the number of postprocess effects, since they are usually full-screen effects and very expensive in terms of fill rate: typically deferred shadows, depth of field, reflective floors and all your custom postprocessing. Reduce those if you are fill rate limited.

Last, you can also try to play with the anti-aliasing level, by setting the antialiasingMode property of the view to one of the available constants: none, or 2x or 4x multisampling. Note that on iOS, it's already turned off by default, and it's 4x by default on OS X.
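
Two of those knobs in a quick sketch; the specific values are illustrative only:

```objc
// When fill rate limited on iOS, trade resolution and anti-aliasing for speed.
self.scnView.contentScaleFactor = 1.5;                       // somewhere between 1x and full Retina 2x
self.scnView.antialiasingMode   = SCNAntialiasingModeNone;   // the iOS default

// On OS X you might instead step down from the 4x default:
//   scnView.antialiasingMode = SCNAntialiasingModeMultisampling2X;
```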

So that's for the fill rate. Now, the other reason could be that you are using fragment shaders that are too complex. And most of the time, the complexity of your shaders directly depends on the complexity of the lighting in your scene. As a reminder, there are two types of lighting: dynamic lighting and static lighting. With static lighting, all the lighting information is baked into textures, so it's super-fast and it can look really great.

But obviously, it only works with objects that are static, so here we focus on dynamic lighting, which we need to use when objects are moving. One important thing about lights is their area of influence: you can configure the attenuation distance with attenuationEndDistance, and beyond that distance, a light won't influence objects anymore.
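
For example, a torch light's influence could be limited like this (a sketch; the distances are arbitrary scene units, not the Bananas settings):

```objc
SCNLight *torchLight = torchNode.light;
torchLight.type = SCNLightTypeOmni;
torchLight.attenuationStartDistance = 2.0;   // full intensity up to here
torchLight.attenuationEndDistance   = 8.0;   // no influence beyond this distance
```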

And this part is really important when we're getting into performance. What matters is not the total number of lights in your scene; it's the number of lights that influence a given object. So for example, here, I have three lights in my scene, but I configured the attenuation distances so that most of the objects are only affected by one light, or two lights in the worst case. So that means that the objects marked 1 will be relatively cheap to render.

The rendering of the objects marked 2 will be slightly more expensive, but at least no object in that scene is affected by three lights at the same time. Here's how we did it in "Bananas": we placed the torches and the lava flows so that when the character progresses in the game, she is never affected by more than one torch or one lava flow at a time.

Related to lighting: shadows. Shadows are also expensive. Again, there are two types of shadows: dynamic and static. Static shadows are baked into textures with 3D tools, and they are fast, so let's focus on dynamic shadows for dynamic objects. The first thing to consider is what mode of shadow you want to use. You can use real dynamic shadows with the forward mode, which is the default. And you might want to consider projected shadows for low-end devices: with the right shadow image, they can look good, and they are really fast.

If you are using dynamic shadows, there are still a few things you can do to optimize performance. The first thing is to play with the size of the shadow map. When you are using dynamic shadows, SceneKit computes a shadow map at every frame, typically by rendering your scene from the light's point of view. So if you're fill rate limited, setting the shadow map size to a smaller size will reduce the fill rate impact of the shadow map computation.

You can also play with the shadowSampleCount property on SCNLight. This has a big impact on the complexity of the shader that is generated to compute the shadows. Note that on iOS, it is already 1x by default, which corresponds to hard shadows, and it is 8x by default on OS X, for smooth shadows.
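
Both knobs in one sketch; the values are examples, not the Bananas settings:

```objc
keyLight.castsShadow = YES;
keyLight.shadowMapSize = CGSizeMake(512, 512);   // smaller shadow map = less fill rate per frame
keyLight.shadowSampleCount = 1;                  // 1 = hard shadows (iOS default); OS X defaults to more samples
```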

Still about the GPU: one more reason to be limited by the renderer is doing too many texture samples in your game. So the first thing to check is to make sure that you are not using unnecessarily large textures. For example, if a texture is always rendered small on screen, there's no need to ship a huge texture for it.

One more thing: if you're using tons of textures, it's better to pack them into a texture atlas. For that, 3D software has great tools to bake your textures into texture atlases, and you can also use SpriteKit to generate texture atlases for you.

If you need to use very large textures, you can also try to play with mipmapping. Mipmapping improves performance a lot when you are rendering a large texture at a small size, because the GPU will pick a smaller resolution version of your texture. It can also improve the rendering by reducing some aliasing and moiré effects.

It has some drawbacks, though: it takes more time to load, and it also uses slightly more memory. To turn on mipmapping, just set the mipFilter property on the material property. Set it to linear to turn it on and to none to turn it off.
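
For example, on a diffuse texture (a one-line sketch):

```objc
material.diffuse.mipFilter = SCNFilterModeLinear;    // mipmapping on
// material.diffuse.mipFilter = SCNFilterModeNone;   // mipmapping off
```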

OK. So that was the renderer and device part. Now, let's say you are limited by the tiler this time. If you are limited by the tiler, that means that you are pushing too many vertices to the GPU, and so, by extension, that you have too many polygons. You can check how many polygons your scene is rendering at every frame with the SceneKit report, here. This is for iOS; on OS X, you have the equivalent in the statistics overlay, here.

And so if you have too many polygons, obviously the first thing you can try is to reduce the number of polygons in your models. The second thing you can try is to play with levels of detail. SceneKit has support for levels of detail. For example, here I have three versions of the teapot, with more or fewer polygons, and so with higher and lower quality.

And I can group them into a single levels-of-detail array and assign that to my geometry. Then SceneKit will automatically use the right level of detail, depending on how big your model is displayed onscreen. You can associate with each resolution either a distance from the camera or a screen-space radius, to tell SceneKit which level of detail it should use.
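
A hedged sketch of that setup; the geometry variables and distances are placeholders:

```objc
// Lower-resolution fallbacks, switched in when the camera is far enough away.
SCNLevelOfDetail *medium = [SCNLevelOfDetail levelOfDetailWithGeometry:teapotMedium
                                                    worldSpaceDistance:10.0];
SCNLevelOfDetail *low    = [SCNLevelOfDetail levelOfDetailWithGeometry:teapotLow
                                                    worldSpaceDistance:30.0];

// The node's geometry is the full-resolution version; the array lists the fallbacks.
teapotNode.geometry = teapotHighRes;
teapotNode.geometry.levelsOfDetail = @[ medium, low ];
```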

And this can help a lot to reduce the number of polygons in your game, because in this example, the teapots in the background are rendered at a low resolution, and so most of the teapots are using a small number of polygons instead of the huge resolution all the time.

So to sum up all of this: when your game is running slow, you first need to understand whether you are limited by the CPU or the GPU, and if it's the GPU, you have to check whether it's the tiler or the renderer and device. If you're limited by the CPU, you can try to reduce the number of draw calls by flattening your scenes, and if your time is spent in physics or animations, you can try to reduce that. If you are limited by the tiler, you can try to play with levels of detail to reduce the number of polygons, and you can also try to split your scene into smaller chunks to benefit from culling.

If you're limited by the renderer or device, you can try to simplify your materials - which ends up as simpler shaders - simplify your lighting, reduce the size of your textures, and turn on mipmapping if you are using large textures. And if you are fill rate limited, you can also try to reduce the number of full-screen postprocess effects. Now, some other performance notes.

First, about sharing. When you copy a node in SceneKit, by default it shares its attributes. This is ideal for performance. But now, let's say you want to modify the material of Node A's geometry: because the geometry and its material are shared, it will also modify the color of Node B. This is a common pitfall. If it's what you want, that's perfect. Now, if you want a different material for Node B, what you have to do first is to copy the geometry as well.

And since the geometry is immutable, the copy is relatively cheap, because no geometry data is actually copied - it's just shared. Then you simply copy the material as well, and now you can modify material A and material B independently, to have different material colors for your objects.
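
The un-sharing pattern in a short sketch; node names and colors are placeholders, and the UIColor calls assume iOS:

```objc
// Give node B its own geometry and material so it can be tinted independently.
nodeB.geometry = [nodeB.geometry copy];                                   // cheap: vertex data stays shared
nodeB.geometry.firstMaterial = [nodeB.geometry.firstMaterial copy];

nodeA.geometry.firstMaterial.diffuse.contents = [UIColor redColor];
nodeB.geometry.firstMaterial.diffuse.contents = [UIColor blueColor];
```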

Another note, this time about preloading. By default, SceneKit uploads information to the GPU only when needed. That means that if an object is never rendered, nothing is pushed to the GPU, and the first time an object appears onscreen, we compute everything the GPU needs to render that object. Depending on the complexity of the object, this can take some time, and in some cases it can make you miss some frames and suffer from frame drops.

To avoid that, you can preload your objects, if you want, with prepareObjects:withCompletionHandler: on SCNView. You can pass the following objects to this method. If you pass a material instance, SceneKit will pre-compute all the textures that are referenced by this material. If you pass a geometry, it will pre-compute all the geometry buffers - like the vertices, normals, and texture coordinates - and also all the materials that are referenced by this geometry.

If you pass a node, it will preload the entire node tree, including all the geometries that are attached to these nodes. And last, if you pass the entire scene, it can do even more: it will preload the node tree, but also all the shaders that are needed to render the objects. This only works when you pass a scene, because SceneKit needs to know how many lights you have in your scene to pre-compute the right shaders.
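
A minimal sketch of preloading an entire scene before gameplay starts; the completion logic is a placeholder:

```objc
[self.scnView prepareObjects:@[ scene ] withCompletionHandler:^(BOOL success) {
    if (success) {
        // Buffers, textures and shaders are ready; start the game without stutter.
        [self startGame];   // hypothetical game-start method
    }
}];
```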

In "Bananas," for example, at launch we preload the entire scene so that almost all our shaders are ready when the game starts. Okay. Some notes about custom tools and workflow now. I already presented how to manage your assets in asset catalogs, but at some point, you may want to customize your workflow.

For example, by building your own tools that preprocess your assets, or by having your own tools to debug your game. And for this, SceneKit provides some APIs to help you. The first one is archiving; this is new in this release. All the objects of the scene graph API conform to NSSecureCoding, so that allows you to archive whatever object you want with an NSKeyedArchiver, for example.
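
A short sketch of archiving a scene and loading it back; the file path is a placeholder:

```objc
// Serialize the whole scene graph with keyed archiving (NSSecureCoding).
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:scene];
[data writeToFile:@"/tmp/level.scnarchive" atomically:YES];

// Later, restore the scene from the archived data.
NSData *loaded = [NSData dataWithContentsOfFile:@"/tmp/level.scnarchive"];
SCNScene *restoredScene = [NSKeyedUnarchiver unarchiveObjectWithData:loaded];
```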

SceneKit also allows you to export your scene as COLLADA. Here, the advantage is that you can export to COLLADA and import it back into a 3D software package. However, note that only a subset of SceneKit can be exported to COLLADA. If we compare both, the first difference is that archives can be directly loaded on iOS, whereas COLLADA files need to go through Xcode first if you want to load them on iOS.

Then, you can import COLLADA files back into any 3D software, but obviously, you can't import SceneKit archives. Both support the scene graph basics - the node hierarchies, the node names, the geometries, all the materials and the animations; those work for both. But advanced features like actions, physics, particles and even custom shaders work with archives but are not supported by COLLADA.

One more note. As mentioned in the previous session, SceneKit is now fully scriptable with JavaScript. This can be really helpful if you want to customize your workflow or add tools to debug your games, all driven with JavaScript calls. And the "Bananas" sample code we showed here is available on the developer website. We also have three other sample codes available: the little car demo we showed in the previous session, the demo that was shown in the State of the Union, and the 3D slides presentation that was shown in the previous session, which is for OS X only.

For more information, please contact our evangelists, Allan Schaffer and Filip Iliescu. We have gorgeous new documentation available on the developer website as well, and we have a dedicated forum for SceneKit on devforums.apple.com - don't hesitate to ask your questions there. Some related sessions: obviously, the previous session, "What's New in SceneKit." Also have a look at the SpriteKit sessions, since both frameworks can work well together to achieve great stuff. Thank you.

[ Applause ]