WWDC16 • Session 609

Advances in SceneKit Rendering

Graphics and Games • iOS, macOS, tvOS, watchOS • 52:33

SceneKit is a fully featured high-level graphics framework enabling your apps and games to create 3D animated scenes and effects. Witness the biggest leap forward in SceneKit yet with the introduction of its new Physically-Based Renderer (PBR). Dive into new APIs for accurate materials, physically-based lights, HDR effects, and enhancements in Model I/O. Walk through an example game using PBR and see how to integrate its workflow into your development.

Speakers: Amaury Balliet, Jean-Baptiste Bégué, Sébastien Métrot, Nick Porcino

Unlisted on Apple Developer site

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Good morning. Good morning and welcome to Advances in SceneKit Rendering. My name is Amaury and I'm delighted to be here to present to you how we brought SceneKit to the next level with state-of-the-art graphics. We have a lot to cover today. I will start with a quick intro on SceneKit before we dive into the new rendering advances. Next, Jean-Baptiste and Sébastien will join me on stage to present a cool demo, explain how we built it, and present all the new features, such as the great new camera effects. And finally, Nick will join us to present the advances in Model I/O.

So, in a nutshell: as you know, SceneKit is a high-level API under the game technologies umbrella, and it focuses on 3D graphics. It plays nicely with the other game frameworks, and it's built on top of Metal and OpenGL. You can use SceneKit in any situation where you need to display 3D graphics on screen. And when you start to think about it, it's used in a great many places. For instance, we just introduced Swift Playgrounds, where SceneKit makes scenes more visual and helps kids take their first steps in learning how to program.

In Xcode we use SceneKit to create an innovative and extremely useful interface that helps you debug your app's view hierarchy. In iBooks and iBooks Author, people can create rich books with enhanced, interactive illustrations. And of course, SceneKit can be used for games. Last, but not least, you found all sorts of use cases for SceneKit and 3D graphics, and you published thousands of SceneKit-based applications to the Store. So thank you.

[ Applause ]

Now, as you know, SceneKit is tightly integrated with the system. It works seamlessly with all the Apple technologies, and it takes full advantage of macOS and iOS, where it's been available for a few years now. And since we last talked at WWDC, we also introduced SceneKit on tvOS.

All we had to do for the [inaudible] sample code was to add support for game controllers, and it was ready to be played on the big screen. So it's absolutely fantastic to see how the same game and code can run on macOS, iOS, and tvOS. And this year we are closing the loop with SceneKit coming to watchOS.

[ Applause ]

Thank you.

[ Applause ]

So SceneKit on watchOS is a great opportunity to start to think about new interactions and ways to present content on your wrist. Now, as you might imagine, there's a lot to say about SceneKit for the Apple Watch, and we won't have time to cover it all today. But we have a dedicated session, "Game Technologies for Apple Watch," on Friday, where you will learn more about what's available and how to play with SceneKit, SpriteKit, and other technologies.

And if you are new to SceneKit and want to learn more, you can always go online to check previous WWDC sessions, where we explained basic, but also really advanced, features of SceneKit. Okay. So now let's dive into these new rendering capabilities. This year SceneKit puts physically based rendering in the hands of everyone. It means that developers, you guys, get stunning graphics for your apps and games. It's the biggest leap forward in SceneKit's rendering capabilities since its introduction. We rely on the latest advances in 3D graphics and leverage modern technologies to provide accurate rendering and physically based shading.

Now, physically based shading has several requirements, so we start with linear rendering. What you see here is a smooth gradient that goes from zero to one and, as you can see, everything looks perfect. That is, until you compress it so that it can be stored in an 8-bit-per-component image, for instance.

And, as you can see, banding occurs. And that's because our eye is more sensitive to variations in the dark tones. By applying gamma encoding we can assign more values to these darker tones. So, for instance, here's an illustration which shows the difference between storing pixel data linearly or using gamma encoding.
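
To make the idea concrete, here is a minimal sketch of gamma encoding and decoding, using a simplified pure 2.2 exponent (the real sRGB transfer function also has a small linear segment near black, which is omitted here):

    import Foundation

    // Simplified gamma transfer functions with a pure 2.2 exponent.
    func gammaEncode(_ linear: Double) -> Double {
        pow(linear, 1.0 / 2.2)   // dedicates more code values to dark tones
    }

    func gammaDecode(_ encoded: Double) -> Double {
        pow(encoded, 2.2)        // back to linear before doing lighting math
    }

    // A dark linear value of 0.05 is stored as roughly 0.26 of the 0...1
    // range, so an 8-bit image spends more of its 256 steps on the darks.
    print(gammaEncode(0.05))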

Now the thing is that when shading happens, all the lighting information and equations are expressed in a linear space. So in a non-linear pipeline, what you end up with is color data that is read from gamma-encoded textures, processed using linear equations, and then the result is written to a texture or framebuffer. And, as you might imagine, that's not correct. For the final image to be correct, all the operations need to happen in linear space.

So, as an illustration, here is a scene with lighting computed in gamma space. And here is the same scene with shading in linear space. If you compare them, you will see how light falloffs and edges appear harsher in linear rendering. Now, linear rendering is essential for physically based rendering, but it actually applies to any of the SceneKit lighting models, because it just makes the math right. Now, as you know, color is a big thing this year at WWDC, so in addition to gamma correction, SceneKit now handles color management automatically.

So what does that mean? It means that the color profile that is assigned to a texture will now be honored. In any operation that happens between the moment the image is loaded from disk and the moment it's handed to the system to be displayed on screen, we will respect the integrity of the color data.

So a SceneKit-based application will boast the color accuracy of a professional imaging application. Now, as you know, some textures, such as normal maps, are just raw data that happen to be stored as colors. SceneKit knows that, and so it won't color match such images. Now, to help you with that, there's a great new feature in Xcode 8 asset catalogs: texture sets.

In a texture set, one can specify whether an image holds color data or raw data, and then Xcode can automatically convert these images to CPU- and GPU-efficient texture formats. But to learn more about that, we have a session right after lunch, "Working with Wide Color," where the Metal team gets into all the details.

Now, in addition to textures, color management also applies to color objects. So color components are no longer assumed to be sRGB, and if you are creating colors programmatically, it's now really important that you use the right initializer. Here is an illustration with two color objects, one Display P3 and the other sRGB, that were created using the same components.
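
In code, the distinction looks like this; a minimal sketch using the UIKit initializers (the macOS NSColor equivalents are analogous):

    import UIKit

    // Same numeric components, two different colors: on a wide-gamut
    // display, the Display P3 red is noticeably more saturated.
    let p3Red   = UIColor(displayP3Red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)
    let srgbRed = UIColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0) // sRGB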

When working with color pickers, pay attention to the color space that you choose. A menu lets you choose from several color spaces, including device-independent ones such as sRGB and Display P3. And there's also a handy option to display values as floats rather than integers so that they can be easily copy-pasted into code.

And speaking of which, as you know, shader modifiers are a great feature of SceneKit that allows you to customize the rendering. Now, as I said, this year shading happens in linear space. So you must be sure to convert your colors to the linear extended sRGB color space before their components are used in your shading computations. Now, a few notes about backward compatibility.

Linear rendering and color management are automatically enabled whenever you link your app against the new SDK. There's no performance cost in enabling them, but they will dramatically change the look of existing scenes. So, for instance, here's last year's demo, which did not use linear rendering. And here's what happens if you just recompile it.

Now, of course, textures, lighting, and shader modifiers can be reworked with linear rendering in mind. But if you want to deploy your application to older versions of the system, or want to opt out of linear rendering and color management for some reason, we have a way to do that. You can disable them at the app level by specifying a key in your app's Info.plist file.

And then there is wide gamut content. As you know, wide gamut color spaces such as extended sRGB can represent colors that plain sRGB cannot, and they are really important when working with modern hardware. The new iPad Pro and the iMac with Retina Display have wide gamut displays that SceneKit can take advantage of automatically.

All you have to do is bring your wide gamut content, textures or colors, and SceneKit will handle it transparently. Now, wide gamut textures and framebuffers require more memory to hold that data, and that leads to increased bandwidth usage. So should you experience any performance issue, we offer a way to opt out of wide gamut at the app level.

Now let me mention the color gamut showcase sample code that we built in collaboration with the Cocoa and Cocoa Touch team. It's a SceneKit-based application that allows you to see out-of-gamut color components, and it's also really useful because on a wide gamut display you will be able to see what the display brings, since you can simulate a non-wide-gamut display. To learn more about working with wide colors and how to convert color components between color spaces, again, we have a great session this afternoon.

So that was it for accurate rendering, which is a requirement of physically based rendering. Now, what is physically based rendering, and why? Well, you might think that beautiful scenes come from detailed models. And that is definitely true. But shading is what makes objects tangible. Everything you see here on the screen used to be a soup of polygons, and shading is the process of finding the right color for each detail on the screen. So all the highlights, the shadows, and the sense of depth come from shading.

Shading is that magical operation that can bring a scene to life. Now, how does it work? Well, first there is light, which is emitted from a source. When light hits an object, it interacts with matter according to the properties of the surface, and then light is reflected to finally reach your eye, or a camera in this case.

Now, this interaction between light and matter is really complex, and over the years many mathematical models were developed to try to best describe it. Physically based rendering is an approximation of light transport that relies on such mathematical models, and they take into account the physical properties of light and matter.

But, as you know, SceneKit is a high-level API, and we want to allow anyone to benefit from this new lighting model. So we expose a super easy-to-use API so that you can use the physically based rendering that artists love. By the end of this session, you will be able to get from this rendering, which is standard, to a physically based one.

Okay. So in SceneKit we expose physically based rendering from two angles: first, physically based materials, and then physically based lights. So first, physically based materials. Here is a representation of a point on a surface, with a normal indicating its orientation in space. When light hits that point, it's split into two terms: diffuse reflection and specular reflection.

Now, diffuse reflection corresponds to light that goes underneath the surface and is scattered so many times and in so many directions that it appears uniform. The color of the diffuse reflection is the albedo, or the base color, of the object. So when designing the interface for physically based materials in SceneKit, we want to expose an albedo map. Now, specular reflection does not work that way.

Specular reflection is just made of light that bounces off the surface, and so it has the color of the incoming ray. Here is what we call a cube map. It's a collection of six faces that represent the environment around a location in 3D space. When we place a perfectly specular object in such an environment, we see that it acts like a mirror. Now let's take a more realistic example with a plastic ball. As you can see, it's not a perfect mirror. At the center the reflection is dim, but as you move closer to the edge it gets brighter. And actually, at grazing angles all light is reflected.

Now, not all materials have the same reflectivity. What you see at the top is a curve which represents the reflectance as a function of the incident angle, from zero to 90 degrees. You will see that this reflectivity value stays almost constant from zero to 45 degrees, and actually we can use this value to reconstruct the whole curve. Now, gold is an interesting example because it has different reflectivity values for the red, green, and blue components.
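
The curve described here is commonly modeled with Schlick's approximation, which reconstructs the whole curve from the reflectivity at normal incidence (F0). A small illustrative sketch, not SceneKit API:

    import Foundation

    // Schlick's approximation of Fresnel reflectance: f0 is the
    // reflectivity at normal incidence, and reflectance rises to 1
    // at grazing angles.
    func schlickFresnel(cosTheta: Double, f0: Double) -> Double {
        f0 + (1.0 - f0) * pow(1.0 - cosTheta, 5.0)
    }

    // A dielectric like plastic (f0 of about 0.04) reflects about 4% head-on...
    print(schlickFresnel(cosTheta: 1.0, f0: 0.04))   // 0.04
    // ...but nearly everything at a grazing angle.
    print(schlickFresnel(cosTheta: 0.05, f0: 0.04))  // about 0.78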

One last thing to note here is that metals, such as aluminum and gold, have high reflectivity values, whereas non-metals, or dielectrics, have low reflectivity values. And this difference in reflectivity is actually essential for the final look of the object. So in SceneKit we want to expose a metalness map, which indicates which parts of the object are metallic and which are not.

In addition to having different reflectivity values, note that metals absorb all light beneath the surface, whereas dielectrics scatter it. The visual effect of this is that metals have a strong specular reflection and no diffuse reflection, while dielectrics have a lot of diffuse reflection and a specular reflection that is mostly visible at grazing angles. So in SceneKit we reuse the diffuse material property to store the reflectivity values of metals and the albedo of dielectrics. And for the reflectivity values of dielectrics, we just use a global low constant.

So we just reuse the diffuse material property that you already know from the other lighting models. Now, one last aspect I would like to talk about is surface roughness. As you know, no surface is perfectly smooth. At a microscopic level you always have tiny bumps and cracks that affect the specular reflection.

The rougher the microsurface is, the blurrier the reflection will be, because the reflected rays of light are no longer aligned. So again, in SceneKit we expose a roughness map, which indicates which parts of the surface are rough and which are smooth. And this one is a grayscale image.

So we just saw how we derived three fundamental properties, each of which has a clear meaning and comes from physical properties of the surface. Now, creating a physically based material in SceneKit is straightforward. You first create a material, then set its lighting model to the new physically based lighting model, and finally you provide your maps. Let's take an example. We start with a mine cart and only a diffuse map. We then add a roughness map. So, for instance, take a look at the coal: coal is rough, so there is almost no reflection.
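
As a minimal sketch in code (the texture names here are placeholders for your own assets):

    import SceneKit
    import UIKit

    // 1. Create a material.
    let material = SCNMaterial()

    // 2. Switch it to the new physically based lighting model.
    material.lightingModel = .physicallyBased

    // 3. Provide the maps.
    material.diffuse.contents   = UIImage(named: "cart_albedo")
    material.roughness.contents = UIImage(named: "cart_roughness") // grayscale
    material.metalness.contents = UIImage(named: "cart_metalness") // grayscale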

And finally we add a metalness map. So, for instance, take a look at the rails and wheels. Let's take another example: a fire truck. Again, we start with a diffuse map. Then we add a metalness map, and finally a roughness map. So, for instance, take a look at the tires.

Now, one thing I would like to mention: for the metalness, roughness, and ambient occlusion maps, please use grayscale images. Having separate channels for red, green, and blue would just be a waste of memory. Furthermore, if you want to use the same value over the whole surface, you can use a color object or, even better, for these material properties we now accept plain numbers.
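
Continuing the material sketch above, a uniform value can be a plain number instead of a grayscale texture:

    // A constant value over the whole surface; no texture needed.
    material.metalness.contents = 1.0   // fully metallic
    material.roughness.contents = 0.2   // fairly smooth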

So we just saw how we could create a really simple and high-level API to create a wide variety of materials. Here is the same object; on one axis we changed the metalness value and on the other axis the roughness value. Now, remember how we said that we would expose physically based rendering from two angles. Let's now have a look at physically based lights. In SceneKit, lights can be split into three categories. I will start with image based lighting, or IBL, then cover light probes, and finally point lights. So, image based lighting.

As I said, you can use a cube map to describe the environment around a location in 3D space. So when shading a point on a surface, we can consider the hemisphere above that point according to its normal and read the lighting information from the colors stored in this cube map. For instance, here is an object which is lit only using image based lighting; there is no light in that scene. And you can see how changing the cube map dramatically affects the look of the object.

Using image based lighting, all the objects in your scene will have a coherent look and will work nicely together. And using image based lighting in SceneKit is really straightforward: we added a lighting environment property on the scene, and you can simply set a cube map as its contents. What's great is that it works perfectly with the background property. So, for instance, if you take an object and set the same image to the background and lighting environment properties, you will be able to display the object in its context.
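
A minimal sketch of both properties; "environment.hdr" is a placeholder for your own cube map:

    import SceneKit

    let scene = SCNScene()

    // Light the scene with the cube map...
    scene.lightingEnvironment.contents = "environment.hdr"

    // ...and show the same image behind the objects.
    scene.background.contents = "environment.hdr"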

Now, a cube map captures the distant environment, and it's static. So when shading a point on a surface, it's possible that this environment is not actually visible, because you're in a cave or there's another object in between. And that cannot be taken into account with image based lighting, so it does not work very well for occluded objects.

Luckily, we have a solution for that: light probes. Light probes are local probes that are placed throughout the scene, and they capture the local diffuse lighting contribution. So when shading a point on a surface, we can find the four closest light probes and interpolate the lighting from these probes. As I said, light probes are local, and so they can account for occlusion.

And they are implemented in such a way that they are really lightweight and efficient. You can have dozens of light probes in a scene, and we actually recommend that, because the more probes you have, the finer the interpolation will be and the better local lighting information you will have.

Creating a light probe is easy: you create a light and then change its type. That can be done either programmatically or within the Xcode SceneKit scene editor. Now, just like cube maps, light probes capture static lighting information, and this information is baked into the probes either using the Xcode scene editor or the API.
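
Programmatically, that might look like this (placement and baking are left to you or the Xcode editor):

    import SceneKit

    let scene = SCNScene()   // your scene

    let probeLight = SCNLight()
    probeLight.type = .probe   // the new light probe type

    let probeNode = SCNNode()
    probeNode.light = probeLight
    probeNode.position = SCNVector3(0, 1, 0)   // place it where lighting matters
    scene.rootNode.addChildNode(probeNode)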

So we just saw how, using IBL or light probes, you can have indirect lighting in the scene. But of course, if you want direct lighting, you still have access to all the other kinds of lights. So omnidirectional, directional, and spot lights work with physically based rendering. And we actually added new properties so that you can better configure them.

For instance, we added a light's intensity. A light's intensity is expressed in lumens, with a default of 1,000, which is in the order of magnitude of a light bulb. We also added a light's temperature, which is expressed in kelvins and from which we derive its color.
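
In code, these map to two SCNLight properties; a small sketch:

    import SceneKit

    let bulb = SCNLight()
    bulb.type = .omni
    bulb.intensity = 1000      // lumens; the default, about a light bulb
    bulb.temperature = 2700    // kelvins; warm white (the default is 6500)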

And one great new feature: we added a new kind of light, IES lights. IES lights, or photometric lights, can account for any attenuation shape. So while a spot light or omnidirectional light has a perfectly symmetrical attenuation curve, IES lights can better approximate the behavior of a real-world light.

For instance, they can account for shadows due to the frame of the light. Now, creating photometric lights in SceneKit is really easy. Again, you create a light, then you change its type, and finally you provide the URL to a photometric profile, which can, for instance, be downloaded from a manufacturer's website.
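
A sketch of those three steps; the profile name is a placeholder:

    import SceneKit

    let stageLight = SCNLight()    // 1. create the light
    stageLight.type = .IES         // 2. change its type
    stageLight.iesProfileURL = Bundle.main.url(forResource: "downlight",
                                               withExtension: "ies")  // 3. the profile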

So, as a quick recap, we just saw how simple it is to create a physically based material in SceneKit, how all these properties derive from real-world properties of the surface so they are really easy to understand, and how we can work with lights in the context of physically based materials. So with that, please welcome Jean-Baptiste and Sébastien for some great demos.

[ Applause ]

So thank you, Amaury, for this great presentation of the new rendering capabilities of SceneKit. Let's see them in action in the Xcode scene editor. As you will see, almost everything that was presented by Amaury is directly available in the scene editor: you will be able to tweak properties and see the result in real time. So I have a very simple scene opened here, with just one light on this truck. I go to the Materials inspector.

As you can see, we have just two materials on this object: one for the body and one for the accessories, et cetera. So I'm going to select those two. They are currently using the Blinn lighting model, so we'll switch to the physically based lighting model. Now I've set the two materials as metallic.

And, as you can see, there is an issue, because we don't see the reflection of the environment. So we can go to the Scene inspector, where we have to set the lighting environment. For that I will use a cube map, for example this cube map of a parking lot, as the lighting environment.

For now, I'm focusing on the three main properties of the physically based lighting model. So let's move to the roughness value. The roughness indicates how smooth the surface is, and you will see that the rougher the surface is, the blurrier the reflection will be. So if I move the value of the roughness closer to one, I get a blurry reflection, and then almost no reflection at all when we reach one.

If I move back to zero, I have a very smooth surface and, as you can see, the whole environment is reflected in the metal. Now, I'm using just one constant value for the roughness, and I would like to specify a value for each part of the object. For that, I just have to use a roughness map. So let's use a roughness map for the body.

And a roughness map for the accessories. We have the same kind of issue with the metalness: we want to be able to specify which parts of the object are metallic or not, and for that we use a metalness map. So let's set the metalness map for the accessories, a different map. As you can see, the body parts of the object are nonmetallic while the front radiator grille is completely metallic. The final touch is to add the albedo.

And we are done. So that's it: we have a full physically based rendering of this fire truck. I can now switch to the Scene inspector and change the cube map, for example to this cube map of a lighting environment with trees, and I can set it as the background too. So that's it. As you've seen, it's very simple to use the new SceneKit scene editor. And now, to demonstrate this kind of rendering in action, we've built a cool demo that I'm going to show you while Sébastien presents it.

Hello.

[ Applause ]

Thank you.

[ Applause ]

So I'm delighted to present to you our new furry friend for this year: Bub. Bub is a badger. He rides in a mining cart, and he tries to catch gems and boosters for speed. Everything you see is rendered with the new SceneKit renderer. All the materials are physically based, and all the lights too. We also used the usual features of SceneKit, such as actions, animations, and everything you're used to. It's a Swift application that runs on macOS, iOS, and tvOS.

It's fully built with Swift, about 700 lines of code. We placed light probes along the track to take into account the changes in local lighting; pay attention to the light that changes when we go into the caves or the tunnels. We have also added new effects, such as motion blur, which you can see when Bub catches a speed bonus, just like this.

We have a new HDR camera, which is why the exposure changes when there is a bright light or when the environment changes. We also use IBL for the lighting environment. And again, we love the motion blur. You can also see bloom when there are bright lights. And all the materials, as you see, are completely PBR, so we have real reflections on the crystals, on all the bonuses, and on the gems. Once again, you will see the light change; that's tone mapping doing the work. Thank you.

[ Applause ]

So, let's go to the slides now. I will tell you a bit more about this demo. And the first thing that we're very glad to tell you this year is that, as usual, the demo is sample code. Yes. Thank you.

[ Applause ]

You can download the code and all the assets from the developer website, play with it, inspect the scene, and see how we built it. It's 700 lines of Swift code. We think it's pretty simple to understand, and we hope you'll really like what you see and learn a couple of things from the demo.

So this year, to decide on the demo, we discussed it with our artists, and we produced some drafts to capture the design ideas we had. Once we agreed on the design ideas and the workflow, the artists started to model the world. And as it's an iterative process, we really needed tools to be able to ingest the models as they were built and to start programming right away, without waiting for the final assets.

So we have a custom tool that uses the full power of SceneKit in a command line application. The tool imports the DAE files, converts the units to meters, and also places light probes automatically in the scene, because there are more than 200 light probes and we didn't want to place them by hand each time the scene changed.

We used image based lighting. So we have a cube map for the background image and another cube map for the lighting environment. We used the lighting environment to add the reflections and, as you've seen, it's great for outdoor scenes. We also used light probes. You can see these light probes as they are displayed in Xcode; we've highlighted them. You see that from this point of view alone there are already many light probes, so you can imagine how many there are for the whole scene.

So the custom tool places them in the environment and computes them. You can also do it by hand in Xcode but, of course, the more light probes you have, the more tedious it gets. They are essential for the insides, but they also add a nice touch outside by capturing small changes in the scene.

We added light maps for the insides because they override the lighting environment, which is very important for the caves, as the light there is very different; so we have the probes and the light maps that change the light and the mood of this part of the scene. Of course, we use normal maps as usual to add details to the models. We also use baked ambient occlusion maps for much better lighting and rendering.

We have one big point light to simulate the sun. It sits very high in the sky in the scene, and we use it to create dynamic shadows and to improve the global lighting. As I said, all the materials you see in the demo are 100% physically based materials, so we get the nice water ponds reflecting the environment, as well as the crystals.

Talking about crystals, this is a very simple material that we built. It has no texture maps, so it's very simple to create: it's fully metallic, has no roughness at all, and just a diffuse color. So it's a very nice way to create a gem that reflects the environment almost for free.

On the other side of the spectrum, you can see this tower, which is one object with metallic parts and nonmetallic parts. We used, of course, metalness and roughness texture maps to create that. And, as you see, we still have a diffuse color and a normal map to add detail.

So basically the demo uses all the new capabilities of SceneKit: physically based shading and all the SceneKit APIs for materials and lights. We used the Xcode integration and also the new custom tools we built for the workflow. We think it's a great showcase for this year's capabilities and a great sample code for you to learn new things, and we hope you will really like it. Thank you.

[ Applause ]

So, as you've seen, we've quite upgraded what's happening with materials and lights this year. But we also had to change how the camera behaves, because now that we have great materials and lights, we also needed a much better camera. And now that we have lights that are realistic, we needed an HDR camera, or High Dynamic Range camera, because the usual camera has Low Dynamic Range, which is 8 bits per component.

Now we have floats per component, so we can go from very dim lights, such as a candle or a light bulb, to, for example, the sun, which is a very, very bright light. This creates a very high dynamic range that we need to remap to the dynamic range of the screen, and for that we use tone mapping. Tone mapping is the action of remapping the dynamic range of the rendering to the smaller range of the display.

So we need to enable the HDR camera; it's not enabled by default. You can set that in the API or in Xcode. And you can configure the tone mapping: you can change the gray point, the white point, and the range you want to expose. You can also adjust the exposure offset. So, for example, you can have this nice look of the scene, but you can create a low-key one with an underexposed rendering, or overexpose it, just by changing the offset. It's very simple.
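
A minimal sketch of the camera setup described here, using the SCNCamera HDR properties:

    import SceneKit

    let camera = SCNCamera()
    camera.wantsHDR = true        // HDR is opt-in, not on by default

    // Tone mapping controls.
    camera.averageGray = 0.18     // gray point
    camera.whitePoint = 1.0       // white point

    // Under- or overexpose the whole rendering.
    camera.exposureOffset = -1.0  // negative for a low-key look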

Thanks to the new HDR camera, we have also added some very nice effects. The first one is bloom. Bloom is a way to simulate being blinded by very bright lights and reflections in the scene, and it's created by bleeding the light onto the surrounding pixels.
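
Continuing the camera sketch above, bloom is driven by a few SCNCamera properties:

    // Pixels brighter than the threshold bleed onto their neighbors.
    camera.bloomThreshold = 1.0    // only HDR-bright pixels bloom
    camera.bloomIntensity = 1.5    // strength of the effect
    camera.bloomBlurRadius = 10.0  // how far the light bleeds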

You can see in this example that it's a very nice effect, and we can see how it looks in action with a reflection on the roof of the tower. I think it's a very nice way to see how the light bleeds onto the surrounding pixels, and it adds a very nice touch to the rendering.

Next, we have added motion blur, as you've seen in the demo. It smooths out the camera movements. The thing is, when you just add motion blur to the whole scene, this is what you get, and sometimes you don't want to blur everything. For example, we wanted the badger and the cart to be sharp and crisp. So we have a new API that lets us exclude some objects from the motion blur, and the result gives you a nice, crisp look for the subjects.
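
Continuing the same camera sketch, the blur amount itself is one property on the camera (the per-object exclusion API mentioned here is not shown in this sketch):

    // 0 disables motion blur; higher values blur fast-moving content more.
    camera.motionBlurIntensity = 1.0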

We have added a couple of aberrations from real-life camera lenses this year. The first one is vignetting. Vignetting is an aberration of real-life lenses that creates shading in the corners of the image. So you can go from this image to this one. And you can also change the parameters to control the falloff from the center of the image to its borders.

Another aberration we have simulated this year is color fringe. Color fringe is a diffraction of light that happens in the glass of real lenses. It creates magenta and cyan fringes around the lights in the rendering, and we go from this look to this look. This is a very exaggerated one; you can go more subtle to get a nice look.

We have also added a very nice way to change the mood of your scene with color correction. You can change the saturation, go for an almost black-and-white look, or overblow the colors if you want to. And you can also change the contrast of the scene. So you can have the normal look, a desaturated one, or an oversaturated image, and change the contrast.

And the last one is a really great effect: color grading. Color grading enables us to completely remap the colors of the scene to different colors. We use a strip of square images to create a 3D color cube that we use as a lookup table to remap the original colors to new ones. For example, in this case we remap the normal colors that you see on the upper side to a sepia tone, so we go from this look to this one, in sepia. It's very simple to use, and we think it gives a very nice look.
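
Continuing the camera sketch, all of these effects are plain SCNCamera properties; the LUT image name is a placeholder:

    // Lens aberrations.
    camera.vignettingPower = 1.0       // how far corner shading reaches inward
    camera.vignettingIntensity = 1.0   // how dark the corners get
    camera.colorFringeStrength = 0.8   // color fringe offset
    camera.colorFringeIntensity = 1.0

    // Color correction.
    camera.saturation = 1.2
    camera.contrast = 0.1

    // Color grading via a lookup-table image (a strip of square slices).
    camera.colorGrading.contents = "sepia_lut.png"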

So that's all we have for the HDR camera this year. We think it's a very nice upgrade for cameras, and we can't wait to see what you do with it. We've got brand new effects that are cumulative, so you don't have to choose between, for example, bloom and motion blur; you can use them all at the same time. Of course it has a cost, but you can really create very nice images and very cool looking scenes. So now I'll hand over to Nick to tell you about the Model I/O improvements for this year. Thank you very much.

[ Applause ]

All right. Hi, everybody. So I'd just like to start out by covering a little bit of what's improved in input and output of models in SceneKit. This year SceneKit can import models in their native authored format, i.e., not necessarily just triangles as before, but in the topology of quadrilaterals or arbitrary polygons that the authors originally created their content in.

SceneKit, if necessary, will automatically triangulate for you in order to perform rendering. But if you want to use our new tessellation facilities, you're going to want accurate tessellation for good shapes, so you'll need to opt in using the preserve original topology flag. That flag corresponds to the same flag in Model I/O; you bring in the assets and specify this, and it will preserve holes, creases, and all the things that are important for an accurate rendition of the object.
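
In SceneKit, that opt-in is a scene-loading option; a sketch with a placeholder file name, assuming the loading option introduced alongside this session:

    import SceneKit

    if let url = Bundle.main.url(forResource: "character", withExtension: "dae"),
       let scene = try? SCNScene(url: url,
                                 options: [.preserveOriginalTopology: true]) {
        // The scene's geometry keeps its quads, creases, and holes.
        print(scene.rootNode.childNodes.count)
    }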

This year we have moved our subdivision algorithms to a new system, OpenSubdiv 3 from Pixar. You can see in this example that previously we would have imported as triangles, and when you do the tessellation, that box, which we want to smoothly subdivide, comes out a little bit lumpy. If you bring it in preserving its topology, you can see that the quads go to a uniformly round surface, and it looks very nice.

So this kind of facility is great for having lightweight objects that can scale their resolution to your scene, and so on and so forth. Now, the other aspect of input and output that I want to emphasize is that last year we introduced physically based materials and such to Model I/O. They bridge naturally onto all of the SceneKit equivalents. So if you have a high dynamic range camera specified in Model I/O, it will come across without losing any attributes.

So, on to Model I/O. A quick refresher: as the name says, it's for input and output of models for our frameworks and systems. You need this, obviously, to bring data into your apps from wherever you've created things, to translate objects between frameworks such as SceneKit and MetalKit, and so on. And we provide support for a number of standard file formats.

Now, file formats are the method by which things come from your art program into your tools. And historically, the formats that we had have been quite narrowly specialized. For example, they might just bring in a model, or they might just bring in bulk data. Now, the really exciting thing this year is that, in conjunction with our friends at Pixar, we're introducing support for Universal Scene Description.

Universal Scene Description is a new open standard. And the thing that's really interesting and exciting about it is that it's not only a file format that can be both easy to read in ASCII and efficient to load in binary, but it also includes a scene composition engine. That really distinguishes it from any other format that's come before.

It embodies years of practical production experience; Pixar uses this for their films, and "Finding Dory," coming out tomorrow, is rendered entirely from USD files. Now, USD has data types that are specialized for scenes. And it introduces, once again uniquely for an open format, file layering to enable concurrent workflows. Now, concurrent workflows are kind of an awesome thing.

Here is a representation you might get in Universal Scene Description for a typical scene in a film. We have a shot layer; the shot is layered from components: background, characters. The characters themselves might be made out of many components. Now, you can see there are layers in that image of a shot layer there.

That's because not only can you create the scene with all of these things composed, but you can make variations. So the scene description will know that this is, say, take three; maybe the characters come in a little bit faster or a little bit slower. And you can have all of those variations embodied in one file, ready for review.

Now, another really unique aspect of Universal Scene Description is that, as far as I know, it's the only open source file format that allows the specification of classes and variations of objects. You can imagine that you might have some sort of a situation where you have lots of monsters, and they all want to go to university and stuff, and there's like books. In a traditional workflow, you're probably going to find yourself creating your books in your program, saving out millions and millions of different files for every little book, then placing them on your bookcase, and getting it all out for rendering like that.

Now, that is tedious. In games you have things like teams of characters; maybe they all differ in hairstyle and shirt, and you might have to bake those all out. Universal Scene Description allows you to specify classes of objects in a single file. So the class represented here, obviously, is a book, and the file can represent many different geometrical interpretations of books: you've obviously got a wide one, a tall one, and a thick one.

And when you instantiate your book into the bookcase you can tell Universal Scene Description, "I want this book and I want it to be this wide and that tall." And it will provide the information that you need to instantiate that into your runtime, or your shot, or whatever.

The variations that you can have in a single file can vary along many axes. In this case, I'm changing some shading properties. So previously I had all those books; I can make them whatever color I want as well. And the magic of that is, I place the book, and when I finally ask, for the purposes of rendering, "What color is the book on the shelf in this place?" it will work out, according to all of the logic about scene composition that the file and the engine embody, the way that it should be represented.

Beyond that, you can also represent different capabilities in a single file. So what I'm showing here is that at the very low end, say for a wearable device, I might have a low-poly version, and the same file can have one that's suitable for use at the highest rendering capability that you've got.

Now, we've integrated Universal Scene Description across all of our systems and frameworks, so let's start at a very nuts-and-bolts level. If you import a Universal Scene Description file into Model I/O -- I don't expect you to be able to read that -- you're going to get a hierarchy of familiar Model I/O objects, with all the properties that were in the Universal Scene Description file exactly represented, so that you can use the tools provided in Model I/O, such as placing light probes and evaluating them toward optimal positions. However, beyond that, let's say that you're working on a project and your art team just gave you a folder full of stuff.
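
At that nuts-and-bolts level, the import and the bridge to SceneKit might look like this (the file name is a placeholder):

    import ModelIO
    import SceneKit
    import SceneKit.ModelIO

    let usdURL = Bundle.main.url(forResource: "set", withExtension: "usd")!
    let asset = MDLAsset(url: usdURL)

    // The familiar Model I/O hierarchy, exactly as authored...
    let root = asset.object(at: 0)
    print(root.name)

    // ...and the whole asset bridges straight into a SceneKit scene.
    let scene = SCNScene(mdlAsset: asset)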

You can just open that folder with all the stuff you just got, and the Finder will prepare thumbnails for you so you can see what's there. And Quick Look works with it as well, so you can select one of these things, whack the spacebar, and it'll pop up and you can tumble it. Now, of course, Quick Look shows you one thing at a time.

If you want to hold things up for comparison, or maybe your USD file has multiple cameras or something in it that you want to inspect individually, you can bring it up in Preview, and Universal Scene Description works great there. And if you bring a Universal Scene Description file into Xcode, it imports via Model I/O into SceneKit with an exact representation of what was in that file, so you can inspect it in the hierarchy browser, look at the properties, move things around, and add cameras. You can edit scenes and send them back out to USD.

And then you can send it back to your artists and say, "Hey, I've got some edits for you." So finally, it's incorporated into SceneKit. Our friends at Pixar supplied us with Mr. Ray from "Finding Dory," and this is just stock, out-of-the-box SceneKit with the new physically based shading that you just heard all about. We're just playing the movie asset with three seconds of animation, and it looks really, really nice.

So, plugins are the thing that you're going to need in order to incorporate Universal Scene Description into your workflows; they will enable the movement of your assets between people, your content creation programs, and the apps that you make. Now, the plugins, the open source information, availability, schedules, et cetera, are all available on the openusd.org website, which I encourage you to go visit to find out how you can use this in your pipelines and processes. So that's Universal Scene Description.

[ Applause ]

So, a quick summary. SceneKit is available across our entire ecosystem, on every platform; it's kind of an amazing thing. We have physically based rendering for state-of-the-art looks, just a beautiful representation. And HDR cameras and effects give you control over how things are represented and how they look, at really high quality.

And we've got support for Universal Scene Description, which we're really happy to get behind and which we think is going to make a big difference in workflows in the coming days and months. More information on this session, which was 609, is available on the site. There are related sessions: "Visual Debugging with Xcode," "Working with Wide Color," and "Game Technologies for Apple Watch," which you can attend today and tomorrow. And thank you very much.

[ Applause ]