
WWDC13 • Session 500

What's New in Scene Kit

Graphics and Games • OS X • 48:01

Scene Kit is a high-level Objective-C framework that enables your app to efficiently load, manipulate, and render 3D scenes. Check out what’s new in Scene Kit and understand how your apps can take advantage of the latest additions. Learn how to extend Scene Kit rendering with custom OpenGL shaders, see how to integrate morphing and skinning, and dive into advanced image effects.

Speakers: Amaury Balliet, Thomas Goossens

Unlisted on Apple Developer site

Transcript

This transcript has potential transcription errors. We are working on an improved version.

OK, thank you. So, good afternoon. My name is Thomas Goossens, and I'm here to talk about Scene Kit. Scene Kit is a framework that was introduced last year at WWDC, and it has been available on OS X since Mountain Lion. The goal of this framework is to simplify the integration of 3D into applications, and these can be any kind of application: presentations, UI, games, slideshows, data visualizations, et cetera.

Scene Kit is built on top of OpenGL to leverage the GPU and it can collaborate with other graphic technologies like Core Image, Core Animation, and GLKit and then I'll talk about that later. Scene Kit is a high level Objective-C API. And basically, it exposes a scene graph and I will introduce what it means right after. For OS X Mavericks, we introduce 20 great new features. We will present some of them later in this presentation. But I will first start with a quick recap of Scene Kit's basics.

So for this presentation, I will start by introducing the concept of a scene graph, and especially what the scene graph looks like in Scene Kit. Then I will show the basics and the usual steps to start an application that uses Scene Kit. Then comes a section on the different ways to extend Scene Kit with OpenGL. Then we will present some of the new features we have in Mavericks.

And we will conclude with some techniques and notes about performance. So let's start with the scene graph. The scene graph approach is the main difference with an API like OpenGL. With OpenGL, you call some draw commands and set some draw states for every object you want to render, one by one and you redo this at every frame. Scene Kit is a more declarative API where you set up a scene, set up some properties of your scene and then you let the framework manage the rendering for you.

So the top level object of the scene graph is the scene represented by the SCNScene class and basically a scene has a root node. A node represents a location in 3D space. A node may have some child nodes to build a hierarchy and each node is relative to its parent node. So it's like layers that are relative to their super layers or views that are relative to their super views.
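As a rough sketch of that idea (the node names here are purely illustrative), a small hierarchy might be built like this:

```
SCNScene *scene = [SCNScene scene];

SCNNode *body  = [SCNNode node];
body.position  = SCNVector3Make(0, 1, 0);      // relative to the root node
[scene.rootNode addChildNode:body];

SCNNode *wheel = [SCNNode node];
wheel.position = SCNVector3Make(0.5, -0.5, 0); // relative to its parent node, "body"
[body addChildNode:wheel];
```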

Then a node by itself doesn't represent anything, I mean nothing that can be rendered to the screen. It's just a position to which you can attach some attributes. The attributes you can attach are the following: a geometry, a camera, and a light. You can share attributes between multiple nodes. For example, if you want to show the same object at multiple locations in your scene, you simply attach the same geometry to multiple nodes. So let's get a quick overview of these attributes.

The geometry first. A geometry represents a surface that can be rendered to the screen. It is made of triangles that are connected together to build the surface. The triangles are defined by vertices, and the vertices may have a normal that indicates the orientation of the surface; Scene Kit uses it to compute the lighting, for example.

Then the geometry may have some texture coordinates that control how images are mapped onto the surface. And finally, a geometry has some materials that control the final appearance of your surface, including the colors and textures. Note that a geometry has an array of materials because sometimes the geometry is split into several elements. For example, this teapot here has four elements, and you can have a different material for each element.

The second attribute is lights. An SCNLight represents a light source in your 3D scene. There are different types of lights: they can illuminate from a point, in a direction, with a cone, or equally in every direction with the ambient light. To add a light to your scene, simply assign a light instance to a node through the node's light property. This light will then illuminate the entire scene from that node, and not only the node it is attached to.
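For instance, a minimal sketch of adding an omnidirectional light (reusing the scene from earlier) could look like this:

```
SCNNode *lightNode   = [SCNNode node];
lightNode.light      = [SCNLight light];
lightNode.light.type = SCNLightTypeOmni;           // shine in every direction from this node
lightNode.position   = SCNVector3Make(0, 10, 10);
[scene.rootNode addChildNode:lightNode];
```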

The last attributes are cameras. A node with a camera represents a point of view that can be used to render a scene. So when you set up a scene, you place your objects in 3D space with the X, Y, Z coordinates. And then you are free to render that scene from any point of view.

So to do this, you add some nodes to your scene and attach a camera to them. Then, to select a point of view, simply set one of these nodes as the pointOfView property of your view. For example, that's what I do here: I added multiple camera nodes to this scene and I can switch to another point of view like this one, or yet another one.

Then a camera has several parameters that control how the scene is projected to the screen. For example, the field of view, if you want to increase or decrease the perspective. So for example, here is a narrow field of view, so almost no perspective, and here is a wide field of view with a strong perspective.
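A small sketch of that setup, assuming a view named scnView and the scene from earlier:

```
SCNNode *cameraNode    = [SCNNode node];
cameraNode.camera      = [SCNCamera camera];
cameraNode.position    = SCNVector3Make(0, 0, 15);
cameraNode.camera.xFov = 30;                  // narrower field of view, flatter perspective
[scene.rootNode addChildNode:cameraNode];

scnView.pointOfView = cameraNode;             // render the scene from this node
```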

So to sum up, a scene is made of nodes that can have child nodes. Then these nodes just represent locations and you can attach attributes to them. For example, a geometry, if I want to render something to the screen. Then I can configure the materials of my geometries to customize the appearance. And I can attach other attributes like for example here, I attach a light to the sun to illuminate from the sun.

With this, you know everything about the model of Scene Kit. Now, let's see how to start an application that uses it. The first thing you will need is somewhere to render. For this, Scene Kit provides SCNView if you want to render into a view, SCNLayer if you want to render into a layer, and SCNRenderer if you want to render into an offscreen framebuffer. So let's consider the simplest and fastest, which is SCNView. To create an SCNView, simply drag the SCNView object from Interface Builder's library onto your user interface and you are done.

Once you have your view ready, you will need a scene to put into it. To create a scene, you have two options: you can create everything programmatically, or you can load a scene from a file. So let's see the first option. To create a scene programmatically, Scene Kit provides a set of built-in primitives like a cube, a plane, a sphere, et cetera, that you can create and configure with simple parameters like width, length, height, corner radius, segment count, et cetera.

Scene Kit also supports 3D text with the SCNText class. This supports text with an extrusion, multiple materials, a chamfer, and, new in Mavericks, even the curve you want for the chamfer if you want to create really fancy text. And regarding the layout, it supports everything Core Text supports: all the fonts, kerning, ligatures, and things like that.

Then, new in Mavericks, we introduce SCNShape. An SCNShape lets you create a 3D object from a Bezier path. Scene Kit takes your NSBezierPath and extrudes it to create a 3D object. To do this, instantiate an SCNShape object and give it your Bezier path and an extrusion depth. Optionally, you can even provide the curve you want for the chamfer, if you want a chamfer.
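In code, that might look roughly like this (the path itself is just a placeholder):

```
NSBezierPath *floorPlan = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(0, 0, 10, 6)]; // placeholder path

SCNShape *shape = [SCNShape shapeWithPath:floorPlan extrusionDepth:1.0];
shape.chamferRadius = 0.1;                 // optional rounded edge
// shape.chamferProfile = someBezierPath;  // optional custom chamfer curve (new in Mavericks)

SCNNode *shapeNode = [SCNNode nodeWithGeometry:shape];
[scene.rootNode addChildNode:shapeNode];
```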

And that's it. So for example, here is a Bezier path that represents the map of the second floor of the Moscone. I can use SCNShape to create a 3D version of it. And same thing for the walls here, they are created from another Bezier path with SCNShape. The last way to create things programmatically is using custom geometry. So using the SCNGeometry class, you can provide the vertex, normals and texture coordinates you want to create your pure custom geometry.

And so you have full control, and you will need this if you want to represent some mathematical functions or any kind of data visualization, for example. So that's for creating scenes programmatically. Now, the other way is to load a scene from a file. And loading a scene from a file is essential because complex scenes are really hard to create programmatically. Complex geometries and complex animations are usually created using very specialized tools.

Scene Kit allows you to load 3D scenes from DAE documents. A DAE document is an XML-based file format that is supported by all the major 3D tools like 3ds Max, Maya, Modo, Cinema 4D, et cetera. A DAE document can describe a lot of things in a 3D scene. Obviously, you can describe the geometry information, but also animations, references to images, all the light settings, the different points of view, and even some more advanced features like skinning and morphing, and I will talk about that later.

So DAE documents are well supported on OS X. You can open them in Preview or with Quick Look to get a preview, directly in the Finder for example. And you can do even more with Xcode. Indeed, Xcode has a Scene Kit editor built in, and this editor will let you preview a 3D file, play the animations, inspect the scene graph, rename nodes, duplicate nodes if you want, edit the materials, configure the lighting, change the point of view, et cetera.

Once you are happy with your scene, it's easy to load it into your application. Usually the first step is to get the URL to your document using NSBundle, and once you have the URL, simply load the scene with sceneWithURL:options:error: and your scene will be created. Once you have your scene, it's also easy to render it: simply assign this scene to your view using the scene property. It's the same thing if you use an SCNLayer or SCNRenderer. And then, any modification you make to the scene graph is automatically reflected in the view. There is no need to call any update or setNeedsDisplay: methods; it's all automatic.
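Put together, loading and displaying a document might look like this (the file name and the scnView variable are hypothetical):

```
NSURL *url = [[NSBundle mainBundle] URLForResource:@"scene" withExtension:@"dae"]; // hypothetical asset

NSError *error = nil;
SCNScene *scene = [SCNScene sceneWithURL:url options:nil error:&error];
if (!scene) {
    NSLog(@"Failed to load scene: %@", error);
}

scnView.scene = scene;  // any later change to the scene graph is redrawn automatically
```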

So let's see what kind of modification you can do. So the scene graph API will let you do everything. You can of course move, scale, and rotate your objects, add some animations, change the colors and images dynamically, change the lighting, duplicate objects, et cetera. And all of these are simply done by modifying Objective-C properties of the objects of the scene graph.

So the usual first step is to retrieve the node you want to manipulate by its name, and this is done with childNodeWithName:recursively:. You can do it, for instance, starting from the root node since it's recursive. And once you have the node you are interested in, you can modify everything with simple Objective-C properties. For example, if you want to move it to the origin, set its position to the vector (0, 0, 0). From the node you have access to the attributes like the geometry, camera, and light. And from the geometry, you also have access to the materials.
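For instance, something along these lines (the node name "ship" is just illustrative):

```
// Find a node by the name it was given in the 3D authoring tool.
SCNNode *ship = [scene.rootNode childNodeWithName:@"ship" recursively:YES];

// Move it back to the origin and tweak its material.
ship.position = SCNVector3Make(0, 0, 0);
ship.geometry.firstMaterial.diffuse.contents = [NSColor redColor];
```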

But you can do more than simply modifying positions, you can also animate everything in your scene. And for animations, Scene Kit includes an animation engine and regarding the API, it uses the same programming model as Core Animation with implicit and explicit animations. And actually, all the properties of the scene graph are animatable implicitly and explicitly.

So implicit animations are the animations that are automatically generated when you modify the properties of the scene. It works like Core Animation: you start a transaction and configure the animation duration and timing function. Note that here, you have to use SCNTransaction and not CATransaction. Then, inside the transaction, you can modify whatever property you like. For example here, I modify the opacity and the rotation of my node. And when you are done, commit the transaction and the animation is automatically triggered.
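A sketch of such a transaction, for some node retrieved earlier:

```
[SCNTransaction begin];
[SCNTransaction setAnimationDuration:2.0];
[SCNTransaction setAnimationTimingFunction:
    [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];

node.opacity  = 0.5;
node.rotation = SCNVector4Make(0, 1, 0, M_PI);  // half turn around the Y axis

[SCNTransaction commit];  // the animation is triggered here
```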

Then, for explicit animations, this time we don't introduce any new API; we simply use the objects from Core Animation. So we support CABasicAnimation, CAKeyframeAnimation, and CAAnimationGroup. For example, here I create a basic animation that targets the rotation of my node. Then I configure its duration and destination value and set it to repeat forever. Finally, I add my animation to my node with the same API as Core Animation, which is addAnimation:forKey:. And this makes my node animate forever like this.
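A sketch of that explicit animation (the animation key is arbitrary):

```
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"rotation"];
spin.duration    = 3.0;
spin.toValue     = [NSValue valueWithSCNVector4:SCNVector4Make(0, 1, 0, 2 * M_PI)];
spin.repeatCount = HUGE_VALF;

[node addAnimation:spin forKey:@"spin"];
```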

OK. But you can do more than just moving and rotating around your nodes. And the materials in particular are something very powerful. A material controls the appearance of the surface. It's represented by the SCNMaterial class. And the material is made of material properties that can contain a color or an image. So let me explain this.

A material is made of the eight following properties. And each of them has a very specific role in the final appearance of your material. The diffuse is the base color of your material. It's what the material reflects when it is hit by some light. It can be set to a color like here, if I set to blue color or it can be set to an image like this earth texture.

Then the ambient property is what the material reflects when it is hit by the ambient light. For example here, it lets the parts of the sphere that are black, because they don't receive any light, still show the earth texture a little bit, thanks to the ambient lighting. Then the specular and shininess control the specular highlight and can be set to a color like this, or an image like here if I want some parts of my material to be shiny and others not.

Then the normal property lets me set a normal map, which is a popular technique to add some details to a surface without adding more polygons. For example here, I use one to add some elevation to the mountains, to add some bumps to the sphere. Then the reflective is an image or a cube map that is used as a reflective environment. For example here, I use a simple image like this one to add some sort of glossy reflection. This image is reflected by the sphere, and note that it combines well with the normal map as well.

Then the emission is a color or an image that is emissive. That means that it doesn't need to receive any light to be visible. For example here, I set an image that represents the lights emitted by big cities. So it's not very visible here but if I switch off the lights, you can see that the other properties go away and the emission is still visible.

Now, let's switch back the lights on. Let's say, I want now to add some clouds over the earth model. I can do it by adding a sphere over the earth. And then, use the transparent property and set an image that will control the transparent areas of my surface. You can use an image with colors and gray scales if you want, it's not just a binary image.

And last, the multiply property is a color or an image that is multiplied to the material color to produce the final fragment. It is usually used for shadow maps and we'll talk about it later. But you can also use it to dim or tint an object. For example, if I set a yellow color, it is multiplied with my material to add a yellow tint.

And once your material is configured, it automatically adapts depending on the light settings. So if I switch off the light again, you can see that the diffuse, ambient and specular are not here anymore because there is no light to reflect. But the emission is still emissive, this time, it is tinted by the multiply property.

And the clouds are still there, but this time they render black because there is no more light to reflect. Regarding the API, it's straightforward. You can access the geometry of your node with the geometry property. To create a new material, simply instantiate an SCNMaterial object and set its diffuse, for example here to a red color. And finally, you assign your material to your geometry to make it look red.
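A minimal sketch of those steps (the normal-map image name is hypothetical):

```
SCNGeometry *geometry = node.geometry;

SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents  = [NSColor redColor];              // base color
material.specular.contents = [NSColor whiteColor];            // shiny highlight
// material.normal.contents = [NSImage imageNamed:@"bumps"];  // hypothetical normal map

geometry.firstMaterial = material;
```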

As I said, every material property can be set to a color or an image. But it can actually also be set to a layer tree, which is very handy if you want some animated content mapped onto your 3D objects. For example here, I use a movie layer to play a movie mapped onto my 3D objects. [laughter] And once you have set your layer tree, you can add child layers and everything. As soon as you modify any layer, Scene Kit will automatically redraw the scene. Again, you don't have to call setNeedsDisplay: yourself.

So we can already do a lot of things with Scene Kit's high-level APIs. But sometimes, if you want to do some really specific task or some more specific rendering, it can still be useful to have access to the lower level, which is OpenGL. And Scene Kit provides the necessary hooks for you to plug your OpenGL code into Scene Kit.

So Scene Kit allows you to plug in your code at several levels, which are the scene level, the node level, and the material level. And new in Mavericks, we introduce the concept of shader modifiers. Let's start with the first one, the scene. You can set a delegate on the SCNView, SCNLayer, or SCNRenderer, and your delegate will be invoked before and after the scene is rendered.

And in your delegate, you can run whatever OpenGL code you want, for example to draw a procedural background with OpenGL, or any kind of overlay with OpenGL. Here, you are totally free of constraints. The context will already be ready and the viewport will already be set for you. Then at the node level, this time you attach a delegate to a node.

And Scene Kit will invoke your code when this node needs to be rendered. Scene Kit provides the necessary information for you to render at the correct location in the scene. For example, here we added a node on top of the hole object and set a delegate that renders some kind of custom particle system with OpenGL.

And note that when you set a delegate on a node, it replaces Scene Kit's rendering. So the typical usage is to set a delegate on an empty node that is placed at the location where you want your custom effect to be rendered. Here is another example of a particle system, this time attached to a child node of the sword object, and so you can see that when the sword moves, the emitter of the particle system moves accordingly.

Next hook, at the material level. This time, you can provide your custom GLSL program to replace Scene Kit's shaders. Scene Kit provides the necessary transform uniforms and geometry attributes that you can bind to your custom uniforms and attributes in your custom shader. For example here, we set a custom program with a custom vertex shader that does the morphing effect, and a custom fragment shader that does the smoke effect. Note that in Mavericks, we added some new APIs for you to bind your custom uniforms and custom attributes in a more efficient manner, basically by using blocks instead of delegate methods.

So with a custom material, you have very fine control over the rendering because these are your custom shaders. However, the main inconvenience is that you need to reimplement everything in your GLSL program, including projecting the vertices, computing the lighting, and managing the textures. It's for these reasons that we introduce the concept of shader modifiers in Mavericks. The idea here is to let you inject snippets of GLSL directly inside Scene Kit's shaders. You do it at some very specific stages, and it combines with Scene Kit's rendering.

So we call these stages entry points and the API looks like this. Basically, you set a dictionary on the shaderModifiers property of your materials; the keys of the dictionary are the entry points and the values are your GLSL code. Let's take a very simple example. At every entry point, you have access to some GLSL structures that you can read or write to modify the rendering. For example, you can read the current output color and modify it the way you want with GLSL, for instance to do a simple invert. Scene Kit provides four entry points, which are geometry, surface, lighting, and fragment.
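As a minimal sketch, that simple invert could be expressed with the fragment entry point like this (on the material from earlier):

```
material.shaderModifiers = @{
    SCNShaderModifierEntryPointFragment :
        @"_output.color.rgb = vec3(1.0) - _output.color.rgb;"  // simple color invert
};
```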

And the geometry entry point lets you modify all the geometry information in model space, so you can modify the vertices, the normals, and the texture coordinates in GLSL. So for example here, I can inject a little code that just modifies the Y position of my vertices to do this wave effect. And so, you don't have to reimplement all the rest, all the lighting and all the textures. You just focus on the effect you want.

The surface entry point lets you modify all the surface attributes, which are the diffuse, ambient, specular, et cetera, in GLSL. For example here, we did a car paint effect by injecting a simple piece of code that modifies these properties. It generates some random flakes by modifying the emission and diffuse properties of the material. But all the rest, all the geometry and all the lighting, is still done by Scene Kit.

Then the lighting entry point lets you change the lighting equation that is applied for each light. For example, here is a basic rendering done by Scene Kit. And here is a shader modifier, so another equation to do some sort of cartoonish rendering. And the last one is the fragment entry point. And this is the very last stage. So at this stage, you have access to all the information from the previous stages like all the surface information, the lighting, et cetera.

You also have access to the current fragment color, and you can modify it the way you want to produce your custom fragment effect. So for example, here is the default rendering in Scene Kit, and here, a fragment modifier that does an x-ray effect. And this is done simply by changing the opacity of the fragment based on the normal of the object. And the color is just simply set to a blue color. Of course, you can combine all of these effects into a single dictionary.

For example, this virus here has a geometry modifier to do the deformation, a surface modifier to do the noise effect, the lighting is tweaked as well to do some sort of backlighting, and a fragment modifier adds this kind of hologram effect on top of it.

Regarding the API, you can set a shader modifier dictionary on the objects that implement the SCNShadable protocol: basically SCNMaterial, SCNGeometry, and all the geometry subclasses. And one thing very useful that I didn't talk about is that you can even declare your own uniforms in your GLSL code. These uniforms are automatically bound to Objective-C. That means you can set their values using Objective-C KVC, and you can even animate your uniforms using implicit and explicit animations with Core Animation.
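For example, a fragment modifier could declare a uniform and you could then drive it from Objective-C, roughly like this (the uniform name is made up):

```
material.shaderModifiers = @{
    SCNShaderModifierEntryPointFragment :
        @"uniform float intensity;\n"
         "_output.color.rgb *= intensity;"
};

// Set (or animate) the custom uniform from Objective-C via KVC.
[material setValue:@0.5 forKey:@"intensity"];
```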

Another thing that's very handy is that you can even declare your custom texture samplers in your shaders. These samplers are also automatically bound to your Objective-C code. That means you can still use KVC to set your texture, by simply assigning a material property instance to the sampler you declared in your GLSL code. And since it is a material property, it works with images and also with layers. So shader modifiers are a very powerful new feature. Now, I'll hand over to Amaury to talk about the other features we added in OS X Mavericks. Thank you.

[ Applause ]

So hello. My name is Amaury and I'm a software engineer in the Scene Kit team. Thomas just showed you the shader modifiers. They are a fantastic tool to bend Scene Kit to your own needs. And we added many more new features in Mavericks to help you build better applications. Today, I would like to talk about six of these new features. First is morphing. Morphing is a popular technique used to animate objects and deform them. It works by interpolating between a base geometry, the geometry of your node, and one or more morph targets.

So for instance, here I have a geometry that represents a map. I also happen to have two other versions of this geometry named Target A and Target B. I will use these targets as morph targets. So for instance, if I increase the influence of Target A, the map is folded like this.

I can also use Target B to fold the map like this. And as you can see, this is fully animatable. You can smoothly transition from one state to the other. And of course, you can use both targets at the same time to combine their effects. So, how does it work? Scene Kit exposes the SCNMorpher class to deal with morphing.
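Set up programmatically, that might look like this (targetA and targetB are assumed to be geometries already loaded from somewhere):

```
SCNMorpher *morpher = [[SCNMorpher alloc] init];
morpher.targets = @[targetA, targetB];  // geometries with the same topology as the base
node.morpher = morpher;

// Blend 70% of target A into the base geometry; this weight is animatable.
[node.morpher setWeight:0.7 forTargetAtIndex:0];
```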

All the morphing information and animations can be loaded from a DAE file, or you can create everything programmatically. It's really up to you. You can also use any kind of geometry for your morph targets, as long as their topology matches the one of the base geometry, that is, they have the exact same number of vertices and the same triangulation. So that is morphing, and you might want to use this feature to, say, animate a face, with different facial expressions as your morph targets. The second new feature is skinning.

Skinning is a very popular technique used to animate objects and characters. It works by attaching a skeleton made of joints and bones to a node. To better understand this, let's show the skeleton used to animate this character. A skeleton is nothing more than a node hierarchy. The nodes being called joints and the segment from a joint to its parent, a bone.

You can simply animate the character by moving the joints. So, let's hide the skeleton and play another animation. How does it work? 3D authoring tools will help you design the skeleton and your animations, and this information can be loaded into Scene Kit via a DAE file. You can also dynamically set the skeleton of a node programmatically and move the joints.

[ Pause ]

Next, depth of field. This is our third new feature. We added a few properties on SCNCamera to provide a depth of field effect. The main two properties are the focal distance and the focal blur radius. So, let's have a look. The depth of field effect allows you to blur areas in your scene.

The focal distance is the distance from the camera at which objects should be sharp, and the focal blur radius is the amount of blur you want to use for objects that are out of focus. So, for instance, if I increase the focal distance, the objects in the background are now sharp while the ones in the foreground are blurred.

I can animate the focal distance and look how the rook stands out of the scene. This is really beautiful. And we worked really hard to make this advanced technique both easy to use and efficient. Right after this session, you can open your projects and add a depth of field effect with no more than two lines of code, and with no effort on your part you will make any scene more beautiful.
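Those two lines might look like this, on whichever node holds your camera:

```
cameraNode.camera.focalDistance   = 16.0;  // distance at which objects are sharp
cameraNode.camera.focalBlurRadius = 8.0;   // amount of blur for out-of-focus objects
```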

So, our fourth new feature is the support of Core Image filters. We added this support in OS X Mavericks and you might already know how to use it because it's the exact same API as on CALayer. You simply provide Scene Kit with an array of filters and they are automatically applied.

So, let's take an example. Here, I have a grid of objects and let's say they are part of a 3D application with a 3D UI. You want the user to be able to make a selection and you want to highlight this selection. This can be done using a CIFilter, a glow filter.
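A sketch of that idea, using the built-in CIBloom filter as a stand-in for the glow effect (the node variables are assumptions):

```
CIFilter *glow = [CIFilter filterWithName:@"CIBloom"];
[glow setDefaults];
[glow setValue:@10.0 forKey:kCIInputRadiusKey];

selectedNode.filters = @[glow];  // applied in screen space, every frame

// Moving the selection is just a matter of moving the filter array.
selectedNode.filters = nil;
otherNode.filters    = @[glow];
```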

If I remove the filter from one node and apply it to another node, I can move the selection. And notice how the glow effect exactly follows the shape of the object even when it's rotating. This is because Core Image filters work in screen space. This kind of effect would be really hard to achieve in 3D space.

Also, Core Image filters apply to the whole node hierarchy. So for instance, if I group all the objects in the background under a common parent and set a filter on this parent, they all get this old TV effect. And of course, you can use any of the built-in CI filters, or you can create your own custom filter written with your own kernel. So, that is Core Image filters, and we think they will be really useful for 3D UIs.

[ Applause ]

Next, our fifth feature is constraints. Scene Kit gives you a lot of freedom. You can manipulate your nodes in any way you want and you can even animate them, either explicitly or implicitly. But sometimes, this is not enough. There are some kinds of behaviors that you cannot implement. So that's why in Mavericks, we added a new way to manipulate your nodes, via constraints. Constraints are applied at render time, at every frame, automatically.

The result of evaluating a constraint is not destructive. The result is used only for presentation purposes and does not modify the model values. Scene Kit exposes the SCNTransformConstraint class to allow you to freely manipulate the transform of a node. You can create a constraint by providing a block of code that will automatically get executed at render time. And in this block of code, you can provide Scene Kit with any transform you want, and that's the one that will be used.

We also provide you with the SCNLookAtConstraint. As its name suggests, it is really useful when you want to make a node always look in the direction of another node. So let's take an example here. I have a ball and several arrows. I will add a "look at" constraint to each arrow in the scene so that they look at the ball in the middle.
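A sketch of that setup (ballNode, arrowNodes, and cameraNode are assumed to exist already):

```
SCNLookAtConstraint *lookAtBall = [SCNLookAtConstraint lookAtConstraintWithTarget:ballNode];

for (SCNNode *arrow in arrowNodes) {
    arrow.constraints = @[lookAtBall];  // evaluated at render time, every frame
}

// The same constraint can drive the camera node as well.
cameraNode.constraints = @[lookAtBall];
```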

And because constraints are evaluated at render time and for every frame, if I animate the ball, the arrows keep looking towards it automatically. And because constraints apply on a node, I can also apply this kind of constraint on the node that holds the camera so that it looks towards the ball. And you can also use it to direct the spot light.

[ Applause ]

So the last new feature I wanted to talk about is animation events. When dealing with explicit animations, it might be really useful to trigger some actions at some specific moments in the animation. An animation event is nothing more than a block of code that gets executed automatically. For instance, here I will have an animation that plays a sound. I will play it again. Here, we set an event that will be triggered when the middle of the animation is reached.
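That might be expressed like this (the sound-playing action is just a placeholder, and the animation is assumed to be the explicit animation created earlier):

```
SCNAnimationEvent *halfway =
    [SCNAnimationEvent animationEventWithKeyTime:0.5
                                           block:^(CAAnimation *anim, id animatedObject, BOOL playingBackward) {
        // Placeholder action: play a sound, spawn particles, etc.
        NSLog(@"Reached the middle of the animation");
    }];

animation.animationEvents = @[halfway];
[node addAnimation:animation forKey:@"bounce"];
```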

OK. And because animation events take every parameter of the animation into account such as the timing function, the speed factor and the repeat count, if I have one animation with only one event but set this animation to repeat, the block of code will be triggered several times. And because you can provide several animation events, you can have one animation with different events that play different sounds.

So these were six of the new features that will help you build new kinds of applications. They will allow you to make more dynamic and more beautiful scenes. But you also want to achieve the best performance, right? So in Mavericks, we added a new tool to help you debug performance issues. By setting the showsStatistics property to YES on a scene view, you will make an overlay appear. This overlay provides you with a lot of information about what's going on in the scene.

One of the most important indicators is the FPS counter. Your quest is to always have this display 60 frames per second. So, if you click on the little gear icon next to the FPS counter, it will show a debug panel. In this debug panel you can do things such as showing the bounding boxes, tweaking the parameters of the depth of field effect, dynamically changing the point of view, and you can also freeze the rendering and go through the steps of the rendering process one at a time. So, using the slider, you can see the rendering process draw call by draw call.

In the overlay, you can also check the number of vertices, polygons, and draw calls. We always want these numbers to be as low as possible to achieve the best performance. So how can you do that? Well, you might not be able to reduce the number of vertices without affecting the quality of your scene. But you might be able to reduce the number of draw calls. And this is what the flattenedClone method is for.

It works by taking a node hierarchy and making a copy of it that will render exactly the same but which is flat, which has no child node. So for instance, here I have a parent node with four children. These children have a total of eleven geometry elements but they use only four materials.

The orange one, the blue one, the purple one and the red one. If I flatten this node, the method will go through each node and merge all the geometries into one single geometry that will have as many geometry elements as materials. So here instead of having eleven draw calls, we have only four which is a huge win performance wise.
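A sketch of that optimization (parentNode stands for the hierarchy being flattened):

```
// Replace a deep hierarchy with a flat copy that renders identically
// but needs only one draw call per material.
SCNNode *flattened = [parentNode flattenedClone];
[parentNode.parentNode replaceChildNode:parentNode with:flattened];
```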

So what about geometries? In Scene Kit, all the node attributes are shared by default: the camera, the light, the geometry, the skinner, the morpher. And this is a good thing in general. But say you want to duplicate a node and change its materials. Because the geometry is shared, if you change its materials, the change would be reflected everywhere in the scene. So, to fix this issue you have to unshare the geometry by explicitly copying it.
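A sketch of that unsharing step, for some node in the scene:

```
// Duplicate a node; the clone shares the geometry (and its materials) by default.
SCNNode *duplicate = [node clone];

// Unshare the geometry so the new material only affects the duplicate.
duplicate.geometry = [node.geometry copy];
duplicate.geometry.firstMaterial = [SCNMaterial material];
duplicate.geometry.firstMaterial.diffuse.contents = [NSColor blueColor];
```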

But the good news is that copying a geometry is really cheap. This is because all the vertex and triangulation information is immutable, so Scene Kit doesn't have to copy it. Once you have a copied geometry, you can change its materials without affecting the other nodes in the scene. So, that was about node hierarchies and geometries. Now, materials. It's really important to keep your shaders as simple as possible. This is not only true for your custom programs and shader modifiers, it's also true for Scene Kit's default rendering.

The more lights in your scene, the more expensive the shader will be. So, a good idea is to keep the number of lights in your scene as low as possible. But what if you can't? What if you want a richer lighting with many lights? Well, a good idea would be to pre-compute all the lighting information and bake it into textures.

And it's the same idea for shadows. Dynamic shadows are really expensive. Whereas pre-computed shadows are fast and they might even look better. 3D authoring tools will help you bake this lighting and shadow information into textures and they can be loaded from DAE files or set programmatically using the multiply property of a material. So let's take an example here.

Let's say I have a scene that represents a dungeon. It is lit by only one light, and because there is only one light in the scene, it's very fast to render. But it doesn't look that great, right? So, you might want a richer lighting here. So instead of adding one dynamic light, let's say per candle, we'll bake this into textures using a 3D tool. So, the first step is to get rid of all the lights. This is the same scene rendered with no lighting at all.

Next, we will apply a light map using the multiply property to achieve this. See, you have colored lights and shadows, and the scene looks much nicer. It's richer. And in fact, this is actually faster to render than the first scene with only one light. So, that's light maps and shadow maps.

What about textures? Well, one obvious thing is that you want to avoid unnecessarily large images. If a texture is never going to be rendered large on the screen, then it shouldn't be large in the first place. Second, if you plan on using the same image for the ambient and diffuse properties, then you should let Scene Kit know so that it can optimize its shaders.

And finally, you might consider using mipmaps. When activated, Scene Kit will pre-compute several resolutions of the image and select the best resolution at render time. This makes things more efficient but it has a cost: pre-computing the mipmaps is a bit expensive, so you might want to use this feature if you need faster rendering more than faster setup. The last thing I wanted to talk about is levels of detail. This is a feature that focuses only on performance. Here, you have two geometries that represent the same object.

As you can see, one is high-res and one is low-res, and you can easily tell the difference between the two. But if I move these objects very far away, at some point it becomes hard to tell which is which. So, the idea behind levels of detail is to build several versions of a single geometry with different polygon counts.

This is actually a new feature in OS X Mavericks, and Scene Kit exposes the SCNLevelOfDetail class. This class encapsulates a geometry that is used for a specific resolution, as well as a minimum distance from the camera at which this resolution can be used. You then simply set an array of levels of detail on a geometry. So in this scene, I duplicated this teapot many, many times. If we hadn't activated levels of detail, there would be more than seven million polygons in the scene, which would kill the performance.
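Setting that up might look roughly like this, with hypothetical full-, medium-, and low-resolution geometries:

```
SCNLevelOfDetail *medium = [SCNLevelOfDetail levelOfDetailWithGeometry:mediumResTeapot
                                                    worldSpaceDistance:20.0];
SCNLevelOfDetail *low    = [SCNLevelOfDetail levelOfDetailWithGeometry:lowResTeapot
                                                    worldSpaceDistance:50.0];

// The full-resolution geometry is used up close; the cheaper ones farther away.
highResTeapot.levelsOfDetail = @[medium, low];
```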

But thanks to levels of detail, Scene Kit rarely uses the best resolution. And since the objects in the background only have 256 polygons, it makes rendering this scene possible. So that was about the new features in OS X Mavericks, and now I hand over to Thomas for a quick recap.

Thank you.

[ Applause ]

So, to conclude, we presented some of the new features we added in this release. We talked a little bit about performance and the new tools we have to help you debug performance issues, and also the new APIs that will help you improve the performance of your scenes.

There are actually many other new features that we didn't have the time to cover today, like exporting to a DAE document, for example. We also support the OpenGL Core Profile, if you want to use the latest features of OpenGL. And more. I encourage you to have a look at these new features in the seed. And you probably guessed that this presentation was entirely written using Scene Kit from the beginning, and we are very happy to share this code with you today.

[ Applause ]

So this code will be available right after the session. It is made as one single file per slide, so each time it's a simple file. It's kind of a collection of fifty small code samples, and every file is independent of the others, so I really think it's a good sample to get started with Scene Kit. For more information, please contact our evangelist, Allan Schaffer. We also have some documentation online on the developer website. And the developer forum is also a good place to ask questions and get answers, and we are very responsive on it. So that's about it, and thanks for coming.

[ Applause ]