WWDC25 • Session 273

Meet SwiftUI spatial layout

SwiftUI & UI Frameworks • visionOS • 20:21

Explore new tools for building spatial experiences using SwiftUI. Learn the basics of 3D SwiftUI views on visionOS, customize existing layouts with depth alignments, and use modifiers to rotate and position views in space. Discover how to use spatial containers to align views in the same 3D space, helping you create immersive and engaging apps.

Speaker: Trevor Adcock

Transcript

Introduction

Hi, welcome to “Meet SwiftUI Spatial Layout”. I’m Trevor, an engineer on the SwiftUI team. And in this session, we’ll explore techniques for building delightful Spatial Experiences using SwiftUI. I’ve been putting SwiftUI’s new Spatial Layout capabilities to work, expanding an app I love called BOT-anist. The app allows you to customize fun robots from various building blocks, colors, and materials. You can then use your newly minted bots to tend your own virtual garden.

I love building these little robots, and recently I’ve been working on some new views to catalog my creations. Now, not only can you customize a robot, but you can save those bots and collect a whole host of them. I’m excited to show you some new 3D scenes for browsing through robots. I created all of these experiences with SwiftUI.

If you’ve built 3D experiences on visionOS before, you may have used RealityKit. RealityKit is a great framework for building 3D apps, especially those with complex behaviors like physics simulations. If you’re coming from a SwiftUI background, you may want to build in the declarative syntax you already know. And you may not need all that RealityKit power everywhere in your app. Now, in visionOS 26, you can use SwiftUI’s existing 2D layout tools and ideas to build 3D applications.

When you use SwiftUI layout, you get built-in support for animations, resizing, and state management. That means when I remove a bot from the carousel, SwiftUI can animate the positions and sizes of all the other robots to take up more or less space, and resizing the volume automatically resizes the carousel and each robot inside it. Let's dive into the new tools I used to build these Automaton Arrangements.

But first, these 3D extensions of SwiftUI’s layout system build on existing 2D layout concepts. If working with SwiftUI layouts is new to you, check out “Building custom views with SwiftUI” and “Compose custom layouts with SwiftUI” before diving into this content. In this video, we’ll talk about the basics of 3D SwiftUI views on visionOS, how to customize existing layouts with depth alignments, rotation3DLayout, a new modifier for rotating views within the layout system, and finally, SpatialContainer and spatialOverlay as a way to align views in the same 3D space.

3D views

Let’s talk about views and the layout system. For each view in your app, SwiftUI calculates a width, height, X and Y position. Some views, like a non-resizable image, have a fixed frame, which matches the size of the asset. Some views, like this Color, have flexible frames and will take up all the space that’s provided to them by a parent.

Layouts compose their children into a final frame. The frame of this VStack, shown in yellow, is determined by the space available to it and the children it contains. Here, its height ends up being the sum of the two image views inside it. visionOS behaves the same way, but views are 3D on visionOS. The layout system has the same behaviors, now just applied to three dimensions instead of two.

This means that for each of your views, in addition to width and height, SwiftUI also calculates a depth and a Z position. I often use the border modifier to visualize 2D frames on iOS. Here, I’ve created my own debugBorder3D modifier to visualize 3D frames on visionOS. I’ll show you how I built this modifier at the end of this video using a couple of the APIs you'll learn about in the meantime.

The debugBorder3D shows that Model3D behaves similarly to an Image, but in three dimensions instead of two, taking up a fixed width, height, and depth. While all views are 3D, some have zero depth. Many of the views you use to build planar experiences like Image, Color, and Text occupy zero depth, meaning they behave just like they do on iOS. Some views have flexible depth, in the same way that Color takes up all the available width and height proposed to it by default. On visionOS, certain views, like RealityView, take up all the available depth proposed to them by default.

GeometryReader3D has this same flexible sizing behavior, as well as Model3D with the resizable modifier applied, which has stretched our robot friend like a piece of taffy to fit all the width in this window. It has a bit of a long face in this aspect ratio, though. I’d like to get it back to its original proportions while still scaling it to fit the available space. I can use the new scaledToFit3D modifier in addition to resizable(), causing my robot to maintain the model’s aspect ratio while still sizing up or down to fit the available width, height, and now depth.
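As a rough sketch of what that looks like in code, assuming a placeholder model asset named "Robot" and the scaledToFit3D spelling described here:

import SwiftUI
import RealityKit

struct RobotModelView: View {
    var body: some View {
        // "Robot" is a placeholder asset name. resizable() lets the model fill
        // the proposed width, height, and depth; scaledToFit3D() keeps the
        // model's original proportions while scaling it to fit that space.
        Model3D(named: "Robot") { model in
            model
                .resizable()
                .scaledToFit3D()
        } placeholder: {
            ProgressView()
        }
    }
}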

So where is this available depth coming from? Just like width and height, a window's contents receive a root depth proposal. Unlike width and height, which may be resizable, this depth proposal is fixed for windows. Outside of this depth, your content may be clipped by the system. Similarly, a volume will propose a width, height, and depth to its content, but in a volume, depth is also resizable.

Check out “Designing for visionOS” in the Human Interface Guidelines for more details on when to use a volume or a window. Some views can alter these depth proposals for contained views. In the same way a VStack composes the heights of its subviews, ZStack composes depths. So the depth of this ZStack is the depth required to fit both robots stacked one in front of the other.

And similar to the way VStack may propose different heights to its subviews based on factors like available space, the number of children, and the type of children, ZStack may propose different depths to its children based on the same factors. Here, the RealityView pushes the robot forward in the ZStack, filling all the available depth in the scene.

Existing Layout types and Stacks are actually 3D on visionOS and will apply some sensible default behaviors for depth. In this example, the HStack will carry through a depth proposal from its parent and establish its own depth to tightly fit the two models inside it. The HStack also lines up the backs of these two robots by default.
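A minimal sketch of that default behavior, with two placeholder robot assets:

// The HStack carries the depth proposal from its parent through to each
// Model3D, sizes its own depth to fit the deeper of the two models, and
// lines up their back faces by default.
HStack {
    Model3D(named: "Robot-1") { $0.resizable().scaledToFit3D() } placeholder: { ProgressView() }
    Model3D(named: "Robot-2") { $0.resizable().scaledToFit3D() } placeholder: { ProgressView() }
}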

Depth alignments

We call this concept depth alignment. Depth alignments are a new tool you can use to customize existing SwiftUI Layout types to better accommodate 3D views and depth. If you’ve worked with vertical or horizontal alignments, these are going to feel familiar. I’d like to build a new volumetric window to display my favorite robots with the name and description of each. First, let’s update the code for our robot Model3D to make this more reusable.

I start with a Model3D that’s scaled to fit. I refactor it to use the new Model3DAsset type, which allows me to preload the model for my robot. I wrap this all up in a new ResizableRobotView, which I can use throughout the app. I also remove the debugBorder3D for now.
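A hedged sketch of that refactor; the exact Model3DAsset and Model3D(asset:) initializer shapes are assumptions based on the description above:

// A reusable robot view backed by a preloaded Model3DAsset, created elsewhere
// (for example, from something like Model3DAsset(named: robot.modelName)).
struct ResizableRobotView: View {
    let asset: Model3DAsset

    var body: some View {
        Model3D(asset: asset) { model in
            model
                .resizable()
                .scaledToFit3D()
        }
    }
}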

Now I’ll create a RobotProfile using a VStack containing a ResizableRobotView, plus a RobotNameCard with some details about the bot. There's a problem, though. This card is hard to read since it’s placed at the back of the VStack, and it’s getting a bit lost behind the robot model. Just like you can configure an HStack to align its content on the center, top, or bottom edge, you may want to configure how views are aligned in depth on visionOS. By default, Stacks and Layout types use a depth alignment of back. Now, in visionOS 26, you can customize depth alignments on any Layout type. I’ll update the RobotProfile to use VStackLayout so I can apply the depthAlignment modifier, asking for .front alignment here.
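Here's roughly what that profile could look like; Robot, RobotNameCard, and the asset property are hypothetical stand-ins for the app's own types:

struct RobotProfile: View {
    let robot: Robot   // hypothetical model type

    var body: some View {
        // VStackLayout (rather than VStack) so the depthAlignment modifier
        // can be applied to the layout itself.
        VStackLayout {
            ResizableRobotView(asset: robot.asset)
            RobotNameCard(robot: robot)
        }
        .depthAlignment(.front)   // bring the name card to the front face
    }
}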

You can also use the center or back guides, but I think front is the right choice to make this robot name card legible. Now, I’m never going to forget Zapper Ironheart and its encyclopedic knowledge of obscure facts. Using the standard front, back, or center depth alignments is great if you want one of those three standard configurations. But what if you need something more complex than those behaviors?

I’ve been creating a volume to show my three favorite robots with three of these robot profile views in an HStack. Greg-gear Mendel is my favorite robot, and I’d like to make it a bit more prominent in this view than the other two. In fact, I’ve been thinking about a sort of Depth Podium where the more I like one of these robots, the closer it is to me. So Robot 1 is the closest, then 2, then 3.

From the top down, I want it to look something like this, where the back of the first robot is aligned in depth with the center of the 2nd place robot and the front of the 3rd place robot. I’ll need a Custom Depth Alignment to do this. First, I’ll define a new struct which conforms to the DepthAlignmentID protocol.

I implement the one requirement, which is the default value for this alignment. I use the front alignment guide as the default for our DepthPodiumAlignment. Then I define a static constant on DepthAlignment that uses this new DepthAlignmentID type. Now I can use this depthPodium alignment guide as a depth alignment on the HStack containing each robot.
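Sketched in code, assuming the 3D protocol mirrors the familiar 2D AlignmentID shape (the ViewDimensions3D context type and exact requirement signature are assumptions):

// A custom depth alignment whose default position is a view's front face.
struct DepthPodiumAlignment: DepthAlignmentID {
    static func defaultValue(in context: ViewDimensions3D) -> CGFloat {
        context[.front]
    }
}

extension DepthAlignment {
    // The static constant used as a depth alignment on the HStack below.
    static let depthPodium = DepthAlignment(DepthPodiumAlignment.self)
}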

This will align all the robots on their front face given the default value we just specified for this guide. Now I’ll customize the depthPodium alignment guide on the trailing robot to align its depth center with this guide. I’ll modify the center robot to align its back with the depthPodium guide. The leading robot will continue to use its front guide as the default for this alignment. Here it is in the simulator. With my bots staggered in depth, no one will question that Greg-gear Mendel is first in my heart.
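Put together, the podium might look something like this. The depth overload of alignmentGuide and the layout's depthAlignment modifier are assumed to mirror their 2D counterparts, and first, second, and third are hypothetical robot values:

// HStackLayout so the depthAlignment modifier can be applied, as with
// VStackLayout above.
HStackLayout {
    // Leading robot: uses the guide's default value, its front face.
    RobotProfile(robot: first)

    // Center robot: align its back face with the depthPodium guide.
    RobotProfile(robot: second)
        .alignmentGuide(.depthPodium) { dimensions in
            dimensions[DepthAlignment.back]
        }

    // Trailing robot: align its depth center with the depthPodium guide.
    RobotProfile(robot: third)
        .alignmentGuide(.depthPodium) { dimensions in
            dimensions[DepthAlignment.center]
        }
}
.depthAlignment(.depthPodium)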

Rotation layout

Depth Alignments are great when you want to make tweaks to depth position within an existing Layout. But what if you want to build something even more depth-oriented? Rotation Layout is a great tool for more advanced 3D use cases. You may be familiar with the existing rotation3DEffect modifier, which applies a visual effect to a view to rotate it around a given axis. This modifier is great for basic rotations. But if we place our model in an HStack with a description card about it, and rotate the rocket 90 degrees along the Z-axis, it runs into the card and begins to run out of the volume.

If we apply debug wireframes before and after the rotation effect, it’s a bit easier to understand what is going on. The solid red wireframe is rotated by the effect, but the dashed blue wireframe shows me where the layout system understands the rocket’s geometry to be. The HStack sizes itself and places its content relative to this blue frame. These don’t line up. This is because visual effects don’t impact layout. Which means the HStack doesn’t know about the rocket’s rotated geometry when using rotation3DEffect.

This is true for all visual effects, including scaleEffect and offsets. In all of these cases, the layout system won’t adjust the size or placement of views due to these modifiers. That’s great when you want to animate one view without impacting the frames of others around it. But what if you do? How can we fix this rotated rocket?

Good news. In visionOS 26, we’re introducing a new rotation3DLayout modifier, which does modify the frame of a rotated view in the layout system. When I apply it to my rocket model, the HStack can adjust its sizing and placement to give the rocket and the details card plenty of room.
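A hedged contrast between the two, assuming a "Rocket" asset, a hypothetical RocketDetailsCard view, and an angle-and-axis spelling for rotation3DLayout:

HStack {
    Model3D(named: "Rocket") { $0.resizable().scaledToFit3D() } placeholder: { ProgressView() }
        // Layout-aware rotation: the HStack sees the rotated frame and makes room.
        .rotation3DLayout(.degrees(90), axis: .z)
        // A visual-only rotation would leave the original frame in place:
        // .rotation3DEffect(.degrees(90), axis: (x: 0, y: 0, z: 1))

    RocketDetailsCard()
}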

rotation3DLayout supports rotations of any angle and axis, which means I can rotate my rocket by 45 degrees, which I think really makes it look like it’s blasting off into space. I apply a debug wireframe before and after the rotation3DLayout modifier. This shows the rotated frame of the rocket in red. The wireframe in blue shows the frame of the modified view within the layout system. Notice the blue bounding box is axis-aligned to the parent and tightly fits the rotated frame in red.

Now let’s see how we can use rotation3DLayout to build the robot carousel I showed you at the beginning of this video. I’ll start by borrowing the RadialLayout from “Compose custom layouts with SwiftUI”. This custom Layout type places views in a circle with the circumference defined by the available width and height.

MyRadialLayout was originally written for placing 2D views on iOS, but it works great on visionOS, even when it’s positioning 3D models of robots instead of 2D images of pets. We can use a ForEach to place a resizable Model3D of each robot inside this custom layout. This looks good, but it’s still a vertical experience. I want my robots to be horizontally oriented in the volume.

I’ll apply a rotation3DLayout to the radial layout, rotating the view 90 degrees along the X-axis. What was previously the carousel’s height will now define the rotated view’s depth in the layout system. My carousel is oriented correctly now, but my robots are lying down, sleeping on the job. We can stand them up by counter-rotating each robot inside the ForEach using a second rotation3DLayout of -90 degrees along the X-axis. These drowsy droids are now standing at attention. There’s just one last thing to fix. The carousel is center-aligned inside the volume’s height. I’d like the carousel to be flush with the base plate of the volume.
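Before fixing that, here's a rough sketch of the rotated carousel so far. MyRadialLayout is the custom layout borrowed above, robots and their assets are hypothetical values, and the rotation3DLayout spelling is assumed:

MyRadialLayout {
    ForEach(robots) { robot in
        ResizableRobotView(asset: robot.asset)
            // Counter-rotate each robot so it stands upright once the
            // whole carousel is laid flat below.
            .rotation3DLayout(.degrees(-90), axis: .x)
    }
}
// Lay the radial layout flat: its former height now defines depth.
.rotation3DLayout(.degrees(90), axis: .x)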

The centering is easier to notice with a debugBorder3D applied to the entire carousel. I can use the same strategy I would for a 2D layout: push the carousel down inside a VStack with a Spacer above it. My robots are looking great at the bottom of the volume now.

Spatial containers

Let’s talk about one more pair of tools in your 3D layout utility belt, SpatialContainer and spatialOverlay. There’s one more feature I’d like to add to our robot carousel. Tapping on a robot should select it, showing a controls menu as well as a ring at the bottom of the model, indicating that it's selected.

This ring is also represented as a Model3D. We want the ring to fill the same 3D space as our robot. We don’t want these to stack along any axis. We need a new tool that will place the models in the same 3D space. The new SpatialContainer API allows you to place multiple views in the same 3D space like a series of Nesting Dolls. You can apply a three dimensional alignment to all of the views. Here we line up all the children according to their bottomFront alignment guide.
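A sketch of that container, with hypothetical robot and ring views and an assumed SpatialContainer initializer:

// Every child occupies the same 3D space, aligned to the bottom-front
// corner of the container; SelectionRingView is a hypothetical ring model view.
SpatialContainer(alignment: .bottomFront) {
    ResizableRobotView(asset: robot.asset)
    SelectionRingView()
}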

You could instead line them up according to their topTrailingBack guide. spatialOverlay is a similar tool, which allows you to overlay a single view in the same 3D space as another. Similar to SpatialContainer, it supports 3D alignments. I only have two views to line up, the robot and the selection ring, and I really only care about the geometry of the robot. I’m happy to have my ring resized to fit my robot's size. So let’s use a spatialOverlay to implement our selected robot visuals.

I’ll add a spatialOverlay modifier to our robot model, and if it’s marked as selected, place the resizable ring view as its content. We'll use a bottom alignment to line the bottom of the ring up with the bottom of our robot. I think our robot carousel is looking great. And it’s easy to make even better with all the existing composable SwiftUI APIs.
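As a rough sketch of that selection overlay, with isSelected and SelectionRingView as hypothetical stand-ins for the app's own state and ring model view, and an alignment parameter assumed to mirror the 2D overlay modifier:

ResizableRobotView(asset: robot.asset)
    .spatialOverlay(alignment: .bottom) {
        if isSelected {
            // The ring is proposed the robot's 3D frame, so it resizes to fit
            // the robot and sits flush with its bottom face.
            SelectionRingView()
        }
    }

Let's recap everything we've learned by implementing the debugBorder3D modifier. Here’s the modifier I showed earlier applied to a Model3D.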

I define a debugBorder3D method as an extension on View. I apply a spatialOverlay to the modified content so we render the border in the same 3D space as the view it's applied to. I place a ZStack inside containing a 2D border, a Spacer, and another 2D border. Next, I apply a rotation3DLayout to the entire ZStack to place borders on the leading and trailing faces of the view. Finally, I place this inner ZStack inside another ZStack with 2D borders for the back and front faces. With that, we have borders on every edge.
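Reconstructed from that description, a sketch might look like this. Spacer is assumed to expand along depth inside a ZStack on visionOS, and the rotation3DLayout and spatialOverlay spellings are assumed:

extension View {
    /// Draws 2D borders on the faces of a view's 3D frame.
    func debugBorder3D(_ color: Color = .red) -> some View {
        spatialOverlay {
            ZStack {
                // Back face border.
                Color.clear.border(color)

                // A pair of borders pushed apart in depth by the Spacer, then
                // rotated about the y-axis onto the leading and trailing faces.
                ZStack {
                    Color.clear.border(color)
                    Spacer()
                    Color.clear.border(color)
                }
                .rotation3DLayout(.degrees(90), axis: .y)

                // Front face border.
                Color.clear.border(color)
            }
        }
    }
}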

Next steps

I love how I can compose these existing 2D SwiftUI modifiers with new 3D APIs to make something completely new. There are 3D analogs for many of the layout tools and modifiers you may already be familiar with from a 2D context. Check out the documentation for more of these APIs. SwiftUI is a great tool for building 3D apps, but there are many use cases where you’ll still want to reach for RealityKit, often mixing both in the same app.

Now that your SwiftUI content is 3D, you may need it to interact with RealityKit code. My friends Maks and Amanda have built some amazing additions to BOT-anist using both frameworks together. Check out “Better Together: SwiftUI and RealityKit” for more information. I can’t wait to see what your app looks like in 3D.