WWDC20 • Session 10613

What's new in USD

Graphics and Games • iOS, macOS • 24:41

Discover proposed schema and structure updates to the Universal Scene Description (USD) standard. Learn how you can use Reality Composer to build AR content with interactive properties like anchoring, physics, behaviors, 3D text, and spatial audio that exports to USDZ. And discover streamlined workflows that help you bring newly created objects into your app. If you're interested in learning more about USDZ as a distribution format, check out "Working with USD." And for more on creating AR content with Reality Composer, watch "The Artist's AR Toolkit." We'd love to hear feedback about the preliminary schemas. After you watch this session, come join us on the Developer Forums and share your thoughts.

Speaker: Abhi Ashutosh

Transcript

Hello and welcome to WWDC. Hello, everyone. My name is Abhi, and I'm a software engineer here at Apple. Thanks for joining our session. Today, we'll take a look at USD as it's used around the world, new workflows enabled by Reality Composer's new USD export functionality, and then take a peek behind the curtains at the new AR USD schemas helping make these workflows possible.

Last year, we introduced Reality Composer for macOS and iOS, with the goal of making interactive AR content creation intuitive and easy for developers. Reality Composer allows you to import your own USDZ content or get started with a great built-in library of static and procedural assets and bring them to life with interactions, physics, anchoring, and more.

This year, we've made it even easier to bring content from a Digital Content Creation tool, or a DCC, into Reality Composer with the introduction of Reality Converter. We've also added support to export content from Reality Composer as a USDZ, which enables new workflows between Reality Composer and a variety of DCCs. To make this possible, we've worked in collaboration with Pixar to develop new preliminary AR USD schemas, which we'll take a look at in-depth later in this session.

For those unfamiliar with USDZ, USDZ is the compact, single-file distribution format for USD. It is optimized for sharing and is deeply integrated into iOS, macOS and tvOS in applications like Messages, Files, Safari and more. If you're interested to learn more about USDZ as a distribution format, including more about its underlying technology, USD, and relevant concepts such as schemas, composition, the stage, prims, and properties, we encourage you to check out last year's session, "Working with USD," in the "related talks" section.

Over even just the last year, we've seen an exciting growth in the adoption of USD and the USDZ file format. USD is being used everywhere, from films to gaming to AR to DCCs. A few examples include Pixar, which uses USD for its films, and Maya, Houdini, Unity, Unreal Engine, and Adobe Aero, which support USD interchange, either as export or import or both. So, why USD? Let's take a high-level look at other formats out there today.

The most basic format is .obj, which essentially contains a single 3D model. It has limited material support and no support for animations. Then there's a large group of more modern file formats. They usually support multiple models that can be laid out in a scene graph and usually support material definitions and animations. USD supports all of this, and is additionally designed to be scalable. Pixar developed USD for use in its films. USD also allows for collaboration, allowing for many artists to work on a single scene without getting in each other's way.

USDZ is the archive package and inherits most of these features and is optimized for sharing. Next, we'll take a look at new workflows enabled by Reality Composer's new USDZ export functionality and then take a peek behind the curtains at the new AR USD schemas that make this possible. When Reality Composer launched last year, it supported the import of USDZ content and the export to Reality File.

This year, we've expanded the artist workflow with the introduction of Reality Converter, which makes it easy to convert content from DCCs to USDZ for import and use in Reality Composer. New this year as well is the ability to export content created in Reality Composer to USDZ. This enables not only new artist workflows between Reality Composer and DCCs, but creates an ecosystem of content creation tools all speaking the same language: USD.

So, for example, we can start by creating custom content in a DCC and use Reality Converter to convert it to a USDZ and import it into Reality Composer. Or we could import a USDZ we found online or exported from another application directly into Reality Composer. And the third option is we can start with any of the great built-in content inside the asset library. Next, we can add functionality specific to Reality Composer, such as interactions or physics to bring our content to life and anchoring to help place it in the world.

We can then export our creation as a Reality File or as a USDZ. Last year, with the export of only Reality Files, we would have been able to share our content online with family and friends, view it in AR Quick Look or use it in an application. However, this is where our content creation story would've ended.

This year, the journey continues. We can take our USDZ and continue making edits in any DCC that supports the USDZ file format. For example, we could scatter our content in Houdini, export it, pick it up in Maya, make a few more edits, export it again, and bring it back into Reality Composer to make a few final additions.

We've designed the new AR USD schemas, which we'll take a closer look at later in this session, so that they are compatible with DCCs and viewers that haven't yet adopted them, allowing you to make edits without losing information and view content as accurately as possible. So let's take a look at an artist workflow.

So here I'm in Reality Composer, and I've brought in some USDZ assets from a variety of different sources. For example, here we have a plane and a toy car from the AR Quick Look gallery available online. I've also worked with artists to create some really nice assets, including this sun asset and this cloud asset.

And finally, I've also worked with our artist to create a nice wooden flag asset that fits well with the rest of our content. We've already brought it through Reality Composer and added a few behaviors. So when I tap on this flag, we'll see that it performs a behavior and then displays some additional content. So let's go ahead and preview that. Again, this is a single USD. So we'll see our content move and additional content show up.

That looks pretty awesome. So what I want to do in this demo now is take all of my USDZ assets, export them, bring them into Houdini, and make them race against each other with the beat of some music that I made in GarageBand. So the first thing we'll want to do is export this content to USDZ, and we can do that by first going into our preferences... and selecting "enable USDZ export." Now when we go to export our content, we'll see two different options. We'll see the USDZ option and our existing Reality File export option. Let's go ahead and export to USDZ. And I've already done this and brought it into Houdini, so let's jump over to our prebaked scene.

So here we have our USDZ assets in Houdini now. We can see our toy car and our toy plane, and I've worked with my artist to position them to create this racing scene. So we can see we've instanced some of our road assets and placed them along, and we've also animated all of our content to some additional music that I made in GarageBand before. We can see the full setup of our scene right here in the Houdini editor, and we can also get a preview of our content right here.

So we'll see in our scene I've also placed our flag asset. Houdini doesn't yet understand the behavior schema, so we won't actually see our flag animate and additional content show up. However, it's still an asset, so we can still place it. When I go to export my content, these behaviors are still inside of the USD, so when we bring it back into Reality Composer or another RealityKit-based application, we should see our behaviors show up again.

So this is looking pretty good, and what I want to do next is export my content so I can bring it back into Reality Composer and add a few final behaviors like a play animation and some audio. So, let's go ahead and export our content from Houdini. And I've already done that and brought it into Reality Composer, so let's jump over to our final scene.

So here in Reality Composer, we have our baked asset from Houdini. We can see it comes in as one big asset together. So we have our instanced road, our toy car and our toy plane, and some instanced clouds as well. And we'll notice that it has a behavior. I've already gone ahead and set up a couple of behaviors here. Specifically, a Tap Trigger, a USDZ Animation and a Play Music action.

The USDZ Animation targets our baked scene and will play that animation that we built in Houdini, which moves the plane and the car to the beat of some music. So let's go ahead and preview our scene. We can see that when I tap on the flag, our original behavior comes through. It's gone all the way through Houdini and now comes back in Reality Composer. And now...

[lively music plays]

...we'll see that our content animates, and we hear some audio. That looks pretty sweet. I think I'm ready to export this now as USDZ and bring it into my RealityKit application and publish it to the App Store. So we've seen how we can start with some content in Reality Composer that we've brought in from various different sources, including the AR Quick Look Gallery and other DCCs.

And we saw how, for example, the flag asset kept the behaviors we gave it in Reality Composer through a full import, export, and import flow between Reality Composer, Houdini, and again, Reality Composer. So that's a look at the new workflows and USD-based content creation ecosystem enabled by Reality Composer's new USDZ export functionality. Next, let's take a closer look behind the scenes at the new AR USD schemas making the export of Reality Composer content to USDZ possible.

Reality Composer enables you to create many different kinds of experiences with features like scenes, AR anchoring, interactions, physics, 3D text, spatial audio, and more. We've worked in collaboration with Pixar to create new preliminary schemas and structures for all of this to enable the export of Reality Composer content to USDZ. As a reminder, schemas are USD's extension mechanism, allowing you to specify new types in the library.

In this section, we want to give you an overview of these new schemas so you can gain an intuition for their design and adopt them in your own content or editor applications. We encourage you to also check out our in-depth documentation available on the Developer site for more information and examples. So let's start with scenes. Scenes are a fundamental part of Reality Composer. A single scene can contain multiple models and specify scene-wide properties, such as gravity, ground plane materials and where in the world our content would like to be anchored.

You can create multiple scenes in a single Reality Composer project and load each individually in an application or stitch them together with the change scene action to create a larger overall experience. The scene structure in USD defines multiple scenes in a single USD file, with scenes being targetable by behaviors. You can also load a scene by name in a RealityKit-based application just like you can for a Reality File.

Let's take a look at the scene structure in a USD. Here we're taking a look at the plain-text version of USD, known as a USDA, for readability. Scenes are structured under a scene library, which is specified using a new type of "kind" metadata called sceneLibrary, on the root prim.

Each Xformable prim under this root prim is considered a scene, and it can contain its own tree of prims defining meshes, materials, behaviors, anchoring, and more, just like in a single-scene USD. Scenes can be marked active by using the "def" specifier and inactive by using the "over" specifier. This also allows DCCs and viewers that haven't yet adopted the scene structure to still be able to view all active scenes. So if we want to swap the "def" and "over" for our scenes, we'll now see the sphere instead of the cube.
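As a rough USDA sketch of that structure, with the cube scene active (prim names here are illustrative, not taken from the session):

```usda
#usda 1.0

def Xform "Scenes" (
    kind = "sceneLibrary"
)
{
    # Active scene: declared with "def", so it still renders in viewers
    # that don't know about the sceneLibrary kind.
    def Xform "CubeScene"
    {
        def Cube "Cube" {}
    }

    # Inactive scene: declared with "over", so it is skipped by default.
    over Xform "SphereScene"
    {
        def Sphere "Sphere" {}
    }
}
```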

In addition, each scene can be given a readable name. This name can be used to load a particular scene from the USDZ in a RealityKit-based application, just like you would for a Reality File. Note that RealityKit, AR Quick Look, and Reality Composer only support a single active scene in a scene library and don't yet support nested scenes. Next, let's take a look at anchoring. Anchoring helps specify where content should anchor in the real world.

For example, one scene can anchor to a horizontal plane, like a table or a floor, and another scene can anchor to and follow an image or a face. The AR anchoring schema is an applied schema that supports specifying the horizontal plane, vertical plane, image, and face anchoring types. Note that ARObject and GeoLocation anchors aren't yet supported in USD.

Let's take a look at how to add anchoring information to a USD. Here we have a basic cube in our USD. We want to anchor this cube to an image in the real world. We can do this by first applying the anchoring schema to the prim... and then specifying the anchoring type. In this case, because we're aligning our content to an image, we use the "image" type, and then specify our related image reference prim.
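Here's a minimal sketch of that in USDA, using the preliminary schema names; the prim names and image file are illustrative, so check the schema documentation for the exact property spellings:

```usda
def Cube "Cube" (
    prepend apiSchemas = ["Preliminary_AnchoringAPI"]
)
{
    # Anchor this prim to an image detected in the real world.
    uniform token preliminary:anchoring:type = "image"
    rel preliminary:imageAnchoring:referenceImage = </ImageReference>
}

def Preliminary_ReferenceImage "ImageReference"
{
    uniform asset image = @poster.jpg@
    uniform double physicalWidth = 30
}
```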

The image reference prim contains a reference to the image, which can be a JPG or a PNG, as outlined in the USDZ specification, and then defines its physical width. This property is defined in centimeters to avoid unit changes due to composition or edits made by a DCC that hasn't yet adopted the schema. Next, let's take a look at behaviors. Behaviors in Reality Composer allow you to easily bring your 3D content to life.

A behavior in Reality Composer contains a single trigger that can target multiple objects, such as a tap, collision, or proximity event, and can contain one or more actions targeting multiple objects, such as Emphasize, Play Audio, and Add Force. Here, we have an example of a tap trigger with a bounce action.

The behavior schema follows this same structure, but also pulls back the curtain and allows for more complex nesting and composition of triggers and actions. The schema defines only three new prim types: a behavior, a trigger, and an action. Specific triggers and actions are defined with data schemas, similar to UsdPreviewSurface, which allows the behavior schema to extend well beyond the initial triggers and actions we've added support for this year. Let's add a behavior to our scene. Here, we've defined a single behavior with a tap trigger and a bounce action, as seen in the video on the previous slide.

The behavior contains an array of trigger relationships and another array of action relationships. Nested in the behavior, we've defined a trigger, which is our tap trigger. The trigger defines itself as a tap trigger using the "info:id" property... and the objects that are observed for tap events which invoke this trigger. In this case, our tap trigger targets our cube prim from the previous examples.

We also have a nested action and have related our behavior's triggers and actions properties to their prim paths. The action defines itself as a bounce action using the info:id and motionType properties. The motionType property is defined by the Emphasize action's data schema, and the bounce action's target is defined with the affectedObjects property, which, again, is our cube. Together with the trigger, we've defined a behavior that bounces the cube when it is tapped.
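Putting those pieces together, a sketch of the whole behavior in USDA might look like this; the info:id tokens follow the preliminary behavior schema documentation, so treat the exact identifiers as assumptions to verify:

```usda
def Preliminary_Behavior "TapAndBounce"
{
    rel triggers = [ </TapAndBounce/Trigger> ]
    rel actions  = [ </TapAndBounce/Action> ]

    def Preliminary_Trigger "Trigger"
    {
        # Tap trigger: observes the cube for tap events.
        uniform token info:id = "TapGesture"
        rel affectedObjects = </Cube>
    }

    def Preliminary_Action "Action"
    {
        # Emphasize action with a bounce motion, targeting the same cube.
        uniform token info:id = "Emphasize"
        uniform token motionType = "bounce"
        rel affectedObjects = </Cube>
    }
}
```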

In the previous example, we only had one trigger and one action for this behavior, but multiple triggers and multiple actions can be related to these properties. When multiple triggers are defined, the satisfaction of any of them will invoke the related actions. When multiple actions are defined, they each run serially, one after the other. We have also encoded the concept of group actions in our action data schema, which allows for serial or parallel execution of related actions.
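As a very rough sketch, a group action could look something like the following; the attribute that selects serial versus parallel execution is an assumption here, so consult the preliminary schema documentation for the real property name:

```usda
def Preliminary_Action "GroupedActions"
{
    uniform token info:id = "Group"
    # Hypothetical attribute name; the schema documentation defines the
    # actual property controlling serial vs. parallel execution.
    uniform token type = "parallel"
    # Illustrative paths to two actions defined elsewhere in the scene.
    rel actions = [ </Behaviors/Bounce>, </Behaviors/PlayAudio> ]
}
```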

Also, behaviors are automatically loaded as part of the scene. If behaviors are defined in a multi-scene USD, they will be scoped to the scene in which they're defined. In this example, we have three behaviors which are all loaded as part of the "My Cube Scene." Next, let's take a look at physics. Physics in Reality Composer helps make your AR content feel at home in the real world.

In Reality Composer, you can define the physical properties of an object, such as its physics material, like rubber, plastic or wood, its collision shape, such as a box or a sphere, and motion type, as well as scene-level physical properties like the ground plane material and gravity. The physics schema allows you to set up a physics rigid body simulation. It does this with schemas for the physics material, colliders, rigid bodies and forces, specifically gravity.

Here we have a wooden ball that we want to participate in the physics simulation as a dynamic object. To achieve that, we're applying the collider API and the rigid body API to our prim. We're using the prim's own geometry in this case to define its convex collider shape. This property comes from the collider API. We're then giving our wooden ball a mass of ten kilograms. This property uses kilograms to avoid unintended scaling due to composition, similar to the anchoring API.
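A sketch of what that can look like in USDA, using the preliminary physics schema names (property spellings per the preliminary schema documentation; values are illustrative):

```usda
def Sphere "WoodenBall" (
    prepend apiSchemas = ["Preliminary_PhysicsColliderAPI",
                          "Preliminary_PhysicsRigidBodyAPI"]
)
{
    # Use the prim's own geometry as its convex collision shape.
    rel preliminary:physics:collider:convexShape = </WoodenBall>
    # Mass in kilograms, independent of stage units.
    double preliminary:physics:rigidBody:mass = 10
}
```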

Next, we can apply a wood physics material to our wooden ball. Here we're defining a wooden material. The physics material schema is an applied schema, so we first apply it to our wood material prim; we've opted for an applied schema so that these properties can be applied to an existing material in the scene without having to create a brand-new prim. We then define various properties of the material, such as its restitution and its friction. We can then apply the material to our wooden sphere.
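Sketched in USDA, under the same assumptions about the preliminary schema's property names (values are illustrative):

```usda
def Material "WoodMaterial" (
    prepend apiSchemas = ["Preliminary_PhysicsMaterialAPI"]
)
{
    double preliminary:physics:material:restitution = 0.4
    double preliminary:physics:material:friction:static = 0.5
    double preliminary:physics:material:friction:dynamic = 0.4
}

over "WoodenBall"
{
    # Bind the material to the sphere with the standard binding relationship.
    rel material:binding = </WoodMaterial>
}
```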

Next, let's make sure our content doesn't fall into infinity by adding a ground plane collider to our scene. We can do this by specifying an infinite collider plane and then marking it as the scene ground plane. For maximum compatibility, we're putting this into the custom data dictionary for the prim so that older versions of USD that do not have this registered can still open the file.
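A rough sketch of that collider plane, assuming the preliminary schema's prim type; the custom data key that marks it as the scene's ground plane is omitted, so check the documentation for its exact name:

```usda
# Ground plane collider. The customData entry that marks it as the scene
# ground plane is left out here; see the schema documentation for the key.
def Preliminary_InfiniteColliderPlane "GroundPlane"
{
    point3d position = (0, 0, 0)
    vector3d normal = (0, 1, 0)
    # Reuse the wood material from earlier for the ground plane.
    rel material:binding = </WoodMaterial>
}
```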

We can then specify the plane's position and normal... and we can also apply a material to our plane. In this case, let's reuse our wood material from earlier. Let's take a look at the scene we've just created. Finally, let's define the gravity in our scene. For fun, let's put our wooden ball on the moon.

We can do this simply by creating a prim in our scene with a gravitational force type. We recommend that there only be one gravitational force per scene. We can define the gravitational acceleration in stage units per second squared using a vector. Together we have a scene that looks like this.
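A sketch of such a prim; the prim type and attribute name follow the preliminary schema documentation as best recalled, so treat them as assumptions to verify:

```usda
def Preliminary_PhysicsGravitationalForce "Gravity"
{
    # Moon-like gravity, assuming stage units of meters.
    vector3d physics:gravitationalForce:acceleration = (0, -1.62, 0)
}
```

Next, let's take a look at audio.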

[whinnies]

Audio in Reality Composer is driven by behaviors, specifically the Play Audio action. This allows audio to be played at the start of a scene or after a supported event, such as the tap of an object. The audio schema, which is distinct from the behavior schema, allows for the embedding of audio content in a USD.

Audio specified in this way will be played back alongside the stage's animation track and can be configured with various playback options, such as aural mode, playback offset and volume. When a USD containing the audio schema is brought into Reality Composer, its audio will play alongside the USD's animation. This can be invoked with the "USDZ Animation" action, which now supports audio controls as well.

If you're working with an editor that doesn't yet support the spatial audio schema, you can use the USDZ Python tool, available on the Developer website, to add it to your USD. Let's take a look at the audio schema in the USD. Here we have a model that specifies audio to be played back alongside the animation in a USD. First, we're defining a brand-new prim type named "SpatialAudio." We specify the audio file itself using the filePath property.

We can also specify the auralMode, which is how the audio will be played back. Spatial audio will emit audio from a specific transform, and non-spatial audio will play the audio without taking transform into account. In addition, you can set the start time and media offset of your content, which respectively begin playback at a specific time in the animation and skip ahead into the audio clip itself.

Since the spatial audio prim inherits from the xformable prim schema, it can be placed in space. By default, it will inherit the transform of its parent. However, we can set our own local transform to offset our audio to play it from a specific location. In this case, we want to play our horse neigh from the horse's mouth.
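A minimal sketch of such a prim in USDA (the file name, times, and transform values are illustrative):

```usda
def SpatialAudio "HorseNeigh"
{
    uniform asset filePath = @neigh.mp3@
    uniform token auralMode = "spatial"
    # Start playback at this point on the stage's timeline...
    uniform timecode startTime = 96
    # ...and skip this many seconds into the audio file itself.
    uniform double mediaOffset = 0.25
    # SpatialAudio is Xformable, so it can be offset to the horse's mouth.
    double3 xformOp:translate = (0, 1.4, 0.6)
    uniform token[] xformOpOrder = ["xformOp:translate"]
}
```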

Next, let's take a look at 3D text. 3D text in Reality Composer allows for the addition of readable content in the scene. Text can be configured with a variety of system fonts and weights and additional options such as alignment, depth, bounding volume, wrap mode, and more. The 3D text schema defines all of these properties as an IsA schema. Let's take a look at the text schema in a USD. Here we're defining 3D text with the content "#WWDC"... with the font Helvetica and a fallback font of Arial. If the chosen main font isn't available on the system, it falls back to the next font in the list.
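A sketch of that text prim in USDA, assuming the preliminary text schema's property names, and including the wrap mode and alignment properties described next:

```usda
def Preliminary_Text "Title"
{
    string content = "#WWDC"
    # Helvetica, with Arial as the fallback font.
    string[] font = ["Helvetica", "Arial"]
    token wrapMode = "singleLine"
    token horizontalAlignment = "center"
    float depth = 0.05
}
```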

We can also define additional properties, such as the wrap mode and alignments. Finally, let's take a look at the new metadata options. These options allow for the specification of a playback mode, that is, whether the animation and audio automatically play or are paused upon load, and scene understanding metadata that specifies whether a scene's content interacts with the real world, as generated by the new scene understanding feature in RealityKit and ARKit on the new iPad Pro with LiDAR scanner. This allows objects to not only fall on content in your scene but also interact with real-world objects in your environment.

Let's take a look at the new playback metadata in a USD. Here we have a USD with an animation, and we're specifying that it should loop and not start automatically. This is a hint to a viewer that it should display a play button so the user can initiate the animation. Auto-play is also disabled for all content coming out of Reality Composer, so that playback can be driven explicitly by behaviors.

Next, let's take a look at the new scene understanding metadata in USD. Here we have a USD with some objects that have physics properties applied to them. In our scene, we're specifying that all content in the scene will interact with the environment as generated by the scene understanding capability in RealityKit.

As next steps, we encourage you to check out the schemas in depth through new documentation available on the Developer website, send us feedback through Feedback Assistant or the Developer Forums, check out Reality Converter in the related talks, and finally, begin adopting the new AR USD schemas in your content and editor applications.

So that's an overview of the new workflows and USD-based DCC ecosystem enabled by USD export in Reality Composer and the new AR USD schemas. We've seen how we can start with content in Reality Composer, bring it to life with interactions, physics and anchoring, export it, modify it in a DCC and re-import it into Reality Composer or a DCC, continuing the content creation story and creating an ecosystem of content creation tools that all speak the common language of USD. We've also taken a closer look at the new AR USD schemas making USDZ export in Reality Composer possible. With this new functionality and these schemas, we're excited to see what amazing things you will continue to create. Thank you, and enjoy the rest of your WWDC.