Frameworks • tvOS • 39:33
Apps on tvOS entertain, inform, and inspire with their content and interactive experiences. tvOS 12 brings new technologies that help make these experiences even more enjoyable and engaging. Get an introduction to focus engine support for non-UIKit apps, new UI elements, and Password AutoFill. Learn how to bring it all together to create incredible tvOS apps and experiences.
Speaker: Hans Kim
Transcript
Good afternoon. Welcome to What's New in tvOS. My name is Hans Kim, and I'm one of the engineers on the tvOS team. I'm thrilled this afternoon to tell you about some of the new features in tvOS 12. We're going to talk about Password AutoFill, and we'll take a look at some new enhancements in tvOS's focus engine.
And, we'll conclude with some exciting news about some new UI patterns on tvOS. We have a lot of ground to cover, and we have some great demos, so let's begin. And, to get us started, I'd like to invite my colleagues Alex and Conrad to tell us about helping your customers sign into your app.
[ Applause ]
Thank you, Hans. Hello everyone. My name is Alex Sanciangco. And, today I am so excited to talk to you all about Password AutoFill. Signing in is a common experience that users have with apps. For instance, here's an app I've been working on called "shiny," that shows our users an adorable photo of their favorite dogs every day when they log in. In order to make logging in as smooth as possible, we've brought Password AutoFill to tvOS 12, with an experience just as familiar and easy as Password AutoFill on iOS. Let me show you what I mean.
First, I'd like to show you continuity keyboard. As you might remember, continuity keyboard is an iOS feature that allows users to enter text on the Apple TV, using their iOS device. Let's see it in action. I'm going to navigate to that shiny app, and I'll select the email field.
As I expect, I see the continuity keyboard notification appear, which has now been enhanced to offer Password AutoFill as well. So, I'll open that up now. And, I see that my username and password are suggested for the shiny app right there on the QuickType bar. And, I know that with one tap, it'll be automatically filled into the TV, and I'll be signed in. So, I'll do that now. Aw, look at that guy.
[ Applause ]
So, I got the notification on my iPhone because my iPhone and Apple TV are on the same iCloud account. And, if you use the TV remote in Control Center, or the TV Remote app, the experience is similar there as well. But, we have something else brand new we want to show you today, that shows just how easy we've made signing in, even for a guest who might never have used your Apple TV before. And, to demonstrate that, I'd like to invite Conrad up to the stage.
[ Applause ]
Thank you, Alex. Now, suppose that I'm Alex's cousin. I'm visiting for the week. Now, his puppy's pretty cute. But, I really want to sign in to shiny so I can see pictures of my own dog Max, or maybe his adorable sister, Min. Now, I've never used his Apple TV before, so I don't have any of the keyboard options available that he just discussed. Furthermore, since I use a strong password saved in iCloud keychain, I haven't memorized it. I expect that I'm going to have to look up my password, and then type it in character by character on the Apple TV. Well, let's see what happens.
I'm going to go over here. I'm going to go to my phone. And, let's sign in. Oh, and even before I can open settings, or ask Siri to look up my password, my iPhone is already offering to help me AutoFill my password. So, I'm going to open up this notification. There we go. And, by entering a PIN I know that I am securely connecting to this Apple TV. Going to Touch ID. And, my shiny credential appears right at the top, with one tap I'm signed in. Super easy.
[ Applause ]
Aw. Sorry, it's just-- it's Min. OK. Now one thing you might be wondering is how did that work without all the iPhones in the room showing a notification? I think this is really, really cool. The Siri remote is able to locate nearby iPhones on which to offer AutoFill. So, in this case, only my iPhone showed a notification. And that is Password AutoFill for tvOS. I'd now like to invite Alex back up to the stage to show you just how easy it is to make this work flawlessly in your apps. Alex?
[ Applause ]
Thank you, Conrad. Man, I love that demo. I just think that feature is so cool. Password AutoFill for tvOS is part of a suite of features that you might have heard about on Monday, meant to simplify the password experience. These features integrate into existing apps, without you having to make any big changes. And, there's a few things that you developers can do to ensure that Password AutoFill is as great as possible. Let's go over those briefly now.
First, you'll want to ensure that the QuickType bar appears for your text fields. Next, you want to make sure that your app's credentials are suggested on the QuickType bar. And lastly, you want to enable a one-tap sign in experience. Now, let's look at how to do this in more detail.
To ensure the QuickType bar appears for your text fields, you want to adopt UITextContentType. tvOS will automatically try to detect username and password fields, and you can help this process by explicitly marking your text fields. This is as easy as setting a single property on those text fields: simply set your username text field's textContentType to .username, and similarly set your password text field's textContentType to .password. Very simple, and this can be done in Interface Builder as well.
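In code, that marking might look something like this (a minimal sketch; the view controller and its outlets are hypothetical names):

```swift
import UIKit

class SignInViewController: UIViewController {
    // Hypothetical outlets for the sign-in form.
    @IBOutlet weak var usernameField: UITextField!
    @IBOutlet weak var passwordField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Tell tvOS which field holds the username and which holds the password.
        usernameField.textContentType = .username
        passwordField.textContentType = .password
    }
}
```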
So, after adopting the textContentTypes, here's what viewers would likely see in that continuity keyboard notification. And, this is good, because it gives users access to their passwords. But, it's behind that key icon. Too many taps away. What we really want the user to see is this: your app's credentials suggested right on the QuickType bar. The way you get this behavior is by adopting a technology called Associated Domains, which securely tells Password AutoFill which domains to suggest on the QuickType bar. Associated Domains is a powerful technology that enables other features, such as universal links and Handoff.
It creates a secure relationship between the domain the user's accessing and the app that they have downloaded. Password AutoFill then uses this relationship to suggest the exact right credential, right on top of the QuickType bar. To learn how to adopt Associated Domains, I highly recommend checking out "Introducing Password AutoFill for Apps" from WWDC of last year. This talk contains a step-by-step guide for how to adopt Associated Domains for iOS apps, and the steps are identical for tvOS apps.
So, after adopting Associated Domains, this is what the user will likely see in the shiny app after tapping your credential on the QuickType bar. And, this is awesome. The user has their username and password on their Apple TV without having to type a single character. But what we really want the user to see is this: being automatically signed in and seeing their pupper of the day, because they've been signed in with one tap. You can get this behavior by implementing the preferredFocusEnvironments API.
So, after filling in a password, a focus update occurs. And, Password AutoFill will perform the action of the focused button if one is focused. So, you should implement the preferredFocusEnvironments API, so you can provide your login button to the focus engine. Let's look at how you might implement this.
Here I have a sample implementation of the preferredFocusEnvironments API, which returns an array of UIFocusEnvironment objects. First, we'll try to grab text out of our username and password fields. And, if we can, we'll just return that login button to the focus engine. If we weren't able to grab any text, that just means the user hasn't entered any yet. So, we'll return the username text field to be focused. Super simple.
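A sketch of that implementation, continuing the hypothetical sign-in screen from above, might look like this:

```swift
import UIKit

class SignInViewController: UIViewController {
    // Hypothetical outlets for the sign-in form.
    @IBOutlet weak var usernameField: UITextField!
    @IBOutlet weak var passwordField: UITextField!
    @IBOutlet weak var loginButton: UIButton!

    override var preferredFocusEnvironments: [UIFocusEnvironment] {
        if let username = usernameField.text, !username.isEmpty,
           let password = passwordField.text, !password.isEmpty {
            // Both fields have text (for example, after AutoFill), so prefer the login
            // button; with it focused, Password AutoFill can perform its action.
            return [loginButton]
        }
        // Nothing entered yet, so start focus on the username field.
        return [usernameField]
    }
}
```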
So, let's recap some of the major takeaways of Password AutoFill. For customers, Password AutoFill reduces the friction within your app by enabling a one-tap sign-in experience. For developers, the adoption needed on your part is super simple: adopt UITextContentType, adopt Associated Domains so that your app's credentials are surfaced on the QuickType bar, and, lastly, implement preferredFocusEnvironments to enable that one-tap sign-in experience. And that's Password AutoFill for tvOS. We cannot wait for you and your users to get your hands on it. Next, I'd like to invite up Ada to talk about focus enhancements.
[ Applause ]
Thanks, Alex. Hi, my name is Ada Turner, and I'm really excited to talk to you today about some enhancements that we've made to Focus in tvOS 12. Focus is a great way for users to interact with your apps on tvOS. It allows them to gracefully scroll through content and provides playful movement hinting as they interact with the touch surface of the Siri remote.
Apps built with UIKit, SpriteKit, and SceneKit all have native support for focus. However, apps that use alternative methods to render their content have not been able to directly support the focus engine. Today, I am pleased to announce that in tvOS 12, the focus engine now supports apps regardless of how they are rendered.
[ Applause ]
This means that apps written in frameworks such as Metal can now adopt focus directly into their interaction model. This is accomplished by allowing you to conform your own classes to focus protocols, even if they don't inherit from UIKit components. Now, what this means for your apps is they will get state management of what's currently focused, geometric determination of where focus can and cannot move, full accessibility support, and focus movement, hinting, and scrolling that feel just like native tvOS apps.
Before we take a look at the new focus APIs, let's briefly refresh ourselves on some existing focus components. First, we have UIFocusEnvironment. This protocol manages an area where focus interactions can take place. It is notified when focus updates occur, and can influence focus update behavior. In UIKit, UIViewController is a great example of a focus environment.
Next, we have UIFocusItem. This protocol inherits from UIFocusEnvironment, and adds the ability for items to actually become focused. UIKit's UIView and SpriteKit's SKNode are great examples of focus items. And, finally we have the UIFocusSystem, which provides the famously useful functionality of being able to register custom sounds to play during focus interactions.
For more information on how to work with these components, and debug focus, I highly recommend that you watch "Focus Interaction in tvOS 11" from last year. And now, I'm really excited to introduce some new components to the focus engine. First, starting in tvOS 12, we are expanding the functionality of the focusSystem object.
You may now retrieve a focusSystem for a given environment, and access the currently focused item off of the focusSystem. Next, we are introducing a new protocol, called UIFocusItemContainer, which provides geometric context to Focus items. A focusItemContainer is owned by a focusEnvironment, and can locate focus items within specific regions, allowing the focus engine to move focus to the best candidate.
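A small sketch of those lookups (the logging function is just for illustration):

```swift
import UIKit

// Look up the focus system for any environment (for example, a view controller)
// and ask it what is currently focused.
func logCurrentFocus(for environment: UIFocusEnvironment) {
    guard let focusSystem = UIFocusSystem.focusSystem(for: environment) else { return }
    if let focusedItem = focusSystem.focusedItem {
        print("Currently focused item: \(focusedItem)")
    }
}
```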
Next, we have a special type of focusItemContainer, called focusItemScrollableContainer, which adds support for automatically scrolling through content as focus is moved. And, finally, we now supply focusItems with movement hints, which contain raw values that you can use to create visual effects that suggest what direction focus is about to move in.
Now, let's take a closer look at how we can conform our own classes to these protocols. Let's start by implementing a custom focusEnvironment. In order for the focus engine to find your environment, and its child environments or items, you must supply a parent focus environment, and a focusItemContainer. For example, UIViewController might provide its parent viewController as its parentFocusEnvironment, and its view as its focusItemContainer.
FocusEnvironment provides several methods you can use to control and react to focus updates. For example, preferredFocusEnvironments allows you to guide the focus system in selecting which item is focused after view initialization, or after a programmatic focus update. I'd like to call your attention to two specific methods on the focusEnvironment: setNeedsFocusUpdate() and updateFocusIfNeeded(). Your implementations of these methods must call through to the corresponding methods on the focus system.
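Here's a minimal sketch of such a custom environment; the GameScreen class is hypothetical, and forwarding to requestFocusUpdate(to:) and updateFocusIfNeeded() on the focus system is one reasonable way to satisfy that requirement:

```swift
import UIKit

class GameScreen: NSObject, UIFocusEnvironment {
    weak var parentScreen: UIFocusEnvironment?   // whoever presented this screen
    var itemContainer: UIFocusItemContainer?     // knows where our focusable items are

    var parentFocusEnvironment: UIFocusEnvironment? { return parentScreen }
    var focusItemContainer: UIFocusItemContainer? { return itemContainer }

    // Guide the focus system after initialization or a programmatic focus update.
    var preferredFocusEnvironments: [UIFocusEnvironment] { return [] }

    // Forward these requests to the focus system that owns this environment.
    func setNeedsFocusUpdate() {
        UIFocusSystem.focusSystem(for: self)?.requestFocusUpdate(to: self)
    }
    func updateFocusIfNeeded() {
        UIFocusSystem.focusSystem(for: self)?.updateFocusIfNeeded()
    }

    // React to (or veto) focus updates involving this environment.
    func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool { return true }
    func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) { }
}
```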
Next, let's implement a custom focusItemContainer. First, you will need to provide a coordinateSpace. UIView provides itself as a coordinateSpace. If your container is more abstract, you may return an existing coordinateSpace or implement your own. Next, you will need to implement focusItems(in:). This method must return any contained focusItems whose frames intersect with the provided rect. Note that the rect passed to this method is expressed in the coordinateSpace of the container, and the frames of each focusItem you return from this method must also be expressed in that coordinateSpace.
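A minimal sketch of a custom container might look like this (the TileGrid class is hypothetical, and it borrows its coordinate space from the view that renders its tiles):

```swift
import UIKit

class TileGrid: NSObject, UIFocusItemContainer {
    let renderView: UIView            // the view the tiles are drawn into
    var tiles: [UIFocusItem] = []     // each tile's frame is expressed in renderView's coordinate space

    init(renderView: UIView) {
        self.renderView = renderView
    }

    // Reuse the render view's coordinate space rather than implementing our own.
    var coordinateSpace: UICoordinateSpace { return renderView }

    // Return every contained item whose frame intersects the rect the focus engine asks about.
    func focusItems(in rect: CGRect) -> [UIFocusItem] {
        return tiles.filter { $0.frame.intersects(rect) }
    }
}
```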
Next, let's implement a custom FocusItem. Remember that this protocol inherits from FocusEnvironment, so you will need to implement all of those methods as well. In order for the focus engine to move focus onto your item, it must return "true" from canBecomeFocused. didHintFocusMovement(_:) is an optional method that is called whenever the user moves their finger on the touch surface of the Siri remote. It provides the focusItem with a movement hint that contains raw values you can use to create an effect that indicates what direction focus is about to move in.
Finally, you will need to provide a frame. As I said before, this frame must be expressed in the coordinateSpace of the containing focusItemContainer. For example, UIView expresses its frame in the coordinateSpace of its superview, which is also its containing focusItemContainer.
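Putting the item requirements together, a sketch might look like this (LevelTile and its rendering state are hypothetical):

```swift
import UIKit

class LevelTile: NSObject, UIFocusItem {
    weak var parentEnvironment: UIFocusEnvironment?   // e.g. the view that renders the tile
    var isActive = false                              // drawn highlighted while focused
    var hintTransform = CATransform3DIdentity         // applied while the user rests a finger on the remote

    // Expressed in the coordinate space of the containing focusItemContainer.
    var frame: CGRect = .zero

    var canBecomeFocused: Bool { return true }

    // UIFocusEnvironment requirements (UIFocusItem inherits them).
    var parentFocusEnvironment: UIFocusEnvironment? { return parentEnvironment }
    var focusItemContainer: UIFocusItemContainer? { return nil }
    var preferredFocusEnvironments: [UIFocusEnvironment] { return [] }
    func setNeedsFocusUpdate() { UIFocusSystem.focusSystem(for: self)?.requestFocusUpdate(to: self) }
    func updateFocusIfNeeded() { UIFocusSystem.focusSystem(for: self)?.updateFocusIfNeeded() }
    func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool { return true }

    // Redraw in an active state when this tile gains or loses focus.
    func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
        isActive = (context.nextFocusedItem === self)
    }

    // Optional: called as the user's finger moves on the Siri remote's touch surface.
    func didHintFocusMovement(_ hint: UIFocusMovementHint) {
        hintTransform = hint.interactionTransform
    }
}
```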
Now, let's take a closer look at the UIFocusMovementHint object. movementDirection is a vector whose values range between (-1, -1) and (1, 1), representing how close focus is to moving in a particular direction. This value is tied to the path a user's finger creates on the touch surface of the Siri remote. Perspective, rotation, and translation are all values you can use to match tvOS's native interaction hinting. And, interactionTransform combines all three of these values into a single 3D transform.
Next, let's look at how to implement a custom focusItemScrollableContainer. This is a special type of focusItemContainer, and by conforming to this protocol, your container signals to the focus engine that it supports scrolling. For example, UIScrollView conforms to this protocol. It provides three additional properties that allow the focus engine to manage its scrolling behavior. First, we have contentOffset, which is a read/write property representing how far the container has been scrolled. This property will automatically be set by the focus engine as focus is moved, in order to keep the currently focused item on screen.
Second, we have contentSize, which represents the total size of your scrollable content and third, we have visibleSize, which represents the onscreen size of your container. This property is analogous to bounds.size on the UIScrollView. It is important to remember that contentOffset will be set automatically, and it is your responsibility to update your rendered content as appropriate whenever this property is set.
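A minimal sketch of a scrollable container (the TileStrip class and its sizes are hypothetical):

```swift
import UIKit

class TileStrip: NSObject, UIFocusItemScrollableContainer {
    let renderView: UIView            // provides the coordinate space and on-screen size
    var tiles: [UIFocusItem] = []

    init(renderView: UIView) {
        self.renderView = renderView
    }

    // UIFocusItemContainer requirements.
    var coordinateSpace: UICoordinateSpace { return renderView }
    func focusItems(in rect: CGRect) -> [UIFocusItem] {
        return tiles.filter { $0.frame.intersects(rect) }
    }

    // Set automatically by the focus engine as focus moves; re-render whenever it changes.
    var contentOffset: CGPoint = .zero {
        didSet { renderView.setNeedsDisplay() }
    }
    // Total size of the scrollable content.
    var contentSize = CGSize(width: 4000, height: 400)
    // On-screen size of the container (analogous to bounds.size on a scroll view).
    var visibleSize: CGSize { return renderView.bounds.size }
}
```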
Now, let's talk about adding accessibility to our custom rendered apps. It's actually incredibly easy. By implementing your focusItemContainer's focusItems(in:) method, you are providing the focus engine with enough information to allow VoiceOver to assist your users in navigation. Remember to set accessibility labels and accessibility hints on your focusItems in order for VoiceOver to give your users the best experience. I highly recommend that you watch "What's New in Accessibility" from WWDC 2016 for a more in-depth look at how focus and VoiceOver work together in tvOS. And now, I'd like to invite my colleague, Paul, to give us a demo on how to create a focus-powered Metal app.
[ Applause ]
Thanks, Ada. So, I'm working on the settings screen for a Metal-based game. At the bottom of the screen, there are some standard UI buttons. And, up at the top are some tiles for selecting levels. These tiles are actually rendered by the game engine itself. I'm sure you can tell by the incredible 3D graphics that you see up there.
So, I want to be able to select the tiles using the remote. Previously, I would have had to handle events from the remote myself, and implement my own navigation, trying my best to match the feel of the focus engine. In tvOS 12, I can connect these tiles directly to the focus engine. So, let's go ahead and do that.
The first thing I'm going to do is extend my LevelTile class to implement the UIFocusItem protocol. This is what will allow it to become focused. There are a few methods here; I'm just going to direct your attention to a few at the top. In canBecomeFocused, I'm going to return "true." That's straightforward. For parentFocusEnvironment, I'm going to return the MetalView that renders these items. Finally, in didUpdateFocus(in:with:), I'm going to set the tile to draw itself in an active state when it becomes focused.
Next, I need to tell the focus engine about these new items. To do that, I'm going to extend that MetalView that renders them. The view is already in the view hierarchy, and the focus engine already knows about it. So, it's a great place to hook in. And, because this is a UIView, it already conforms to UIFocusItemContainer. And, it provides itself as a coordinateSpace.
The only thing I have to do is override focusItems(in:) to return my level tiles, which of course are now UIFocusItems. And, I can get a performance win here by only returning the tiles whose frames intersect the passed-in search rect. Let's take a look and see how that works.
So, now you can see that the tiles are focusable, and the system even plays a standard sound when they become focused. I can even move focus in between my custom tiles, and the standard UI buttons down at the bottom. There's a problem here, though. The tiles extend offscreen. If I move focus offscreen, I can't see what I'm interacting with.
Of course, what I want is for the tiles to move onscreen as they become focused. So, let's implement that. I'm going to extend that RenderView again, this time to implement UIFocusItemScrollableContainer. Now, the important thing here is to adjust my rendering by the contentOffset. The focus engine will set my contentOffset as focus moves to keep the currently focused item onscreen. Because this is a UIView, I'm going to also update my bounds.origin so that the coordinateSpace conversion continues to work correctly.
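A rough sketch of what that extension might look like (the Metal drawing itself is omitted, and the exact re-rendering call depends on your engine):

```swift
import UIKit

class RenderView: UIView, UIFocusItemScrollableContainer {
    var tiles: [UIFocusItem] = []

    // UIView already provides coordinateSpace; we only override the item lookup.
    override func focusItems(in rect: CGRect) -> [UIFocusItem] {
        return tiles.filter { $0.frame.intersects(rect) }
    }

    // Driven by the focus engine as focus moves. Keeping bounds.origin in sync means
    // UIKit's coordinate-space conversions keep matching what we render.
    var contentOffset: CGPoint = .zero {
        didSet {
            bounds.origin = contentOffset
            setNeedsDisplay()   // or however your engine schedules a redraw
        }
    }
    var contentSize: CGSize = .zero
    var visibleSize: CGSize { return bounds.size }
}
```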
See how that works? So, now you can see that as I focus a tile, it moves onscreen. If I keep going, I get nice, smooth scrolling with the same momentum-- thanks-- nice, smooth scrolling with the same momentum and animation as if this was a UIScrollView. So, this is looking pretty good, but I think we can do even better. What I really want is for these tiles to come alive when they're focused, just like the system elements.
Let's go ahead and do that. I'm going to go back up to LevelTile. And, I'm going to implement an optional method, didHintFocusMovement. I'm going to take the suggested perspective, rotation, and translation values from the UIFocusMovementHint, and apply them when I render the focused tile. Let's see how that looks. Now, as I move my finger on the trackpad, the tile interacts just the way I'd expect. So, now I've added focus support to my custom Metal objects, and they feel just as familiar as if they were written using UIKit. Back to you, Ada.
[ Applause ]
Thank you, Paul. Wow, that was a really Metal demo. With just a few lines of code, we were able to give our Metal user interface beautifully smooth focus movement and scrolling, and delightful interaction hinting, just like native tvOS apps. Now, let's recap all the awesome new focus features we learned about today.
First, we learned how to implement custom focusEnvironments and focusItems, even if they don't inherit from UIKit components. Second, we learned how to use focusItemContainer, so that when users move focus through our apps, it feels just like native tvOS apps. Third, we learned how to use focusMovementHint to make our interface come alive as people interact with the Siri remote.
Fourth, we learned how to use focusItemScrollableContainer to allow people to scroll through our app's content with that smooth, familiar, native feeling. And, finally, we learned how to give our apps full accessibility support simply by adopting these protocols, and providing accessibility labels and hints so that everyone can enjoy our apps.
All of these awesome new features are available today in the Developer Beta, and I highly recommend that you download it, and find out just how easy it is to add focus support to your custom rendered apps. And now, I'd like to invite Hans back on the stage, to tell us about some awesome UI patterns on tvOS.
[ Applause ]
Thank you, Ada. Focus interaction is integral in how we feel connected to the screen across the room. And, tvOS has many common UI patterns that make the most out of it. One such example is a label that animates its text when it's focused. Internally scrolling text, or marquee animation, is a useful technique in presenting variable length strings without altering the label's external geometry.
It's also very effective in visually highlighting where your current focus is. This behavior is very widely used across tvOS, but there hasn't been an easy way to do this. That is, until now. tvOS 12 makes it really easy. All you have to do is set a new property on your label, enablesMarqueeWhenAncestorFocused, to "true."
[ Applause ]
Then, when a view containing the label comes into focus, and the label contains a string that's too long for its width, it will animate the string horizontally in a loop. We've used this API ourselves, and we think you will really like how simple it is to adopt the behavior. So, that's text scrolling in UILabel.
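For reference, the adoption is essentially one line (a minimal sketch; the label text is made up, and TVUIKit is imported here defensively in case the property is surfaced through it on your SDK):

```swift
import UIKit
import TVUIKit

func makeTitleLabel() -> UILabel {
    let titleLabel = UILabel()
    titleLabel.text = "A Very Long Movie Title That Will Not Fit in the Available Width"
    // New in tvOS 12: scroll the text in a loop whenever an ancestor view is focused.
    titleLabel.enablesMarqueeWhenAncestorFocused = true
    return titleLabel
}
```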
But, tvOS has many more idioms and patterns, such as an image and a label that come alive when in focus, an arbitrary view hierarchy that floats as one solid unit, buttons with customizable focus movement and content, and widgets for representing people. And, as you can see, these patterns are ubiquitous across tvOS and Apple's own apps. And, since they've been available in TVMLKit, we've seen them adopted in your apps as well. But, what if your app is based on UIKit? Well, we're really thrilled to share with you that tvOS 12 will make several of these available for your UIKit-based apps.
[ Applause ]
And, we do this through a new lightweight framework, TVUIKit. The first four elements in TVUIKit are poster, caption button, card, and monogram. Let's take a look at each. Poster view is about images. And, TVPosterView is a composed view specializing in presenting an image, and a footer, which itself is composed of up to two labels. When a poster view is in focus, the image grows in size, and the labels move out of the way to give room.
When it relinquishes focus, the image and labels come back to the normal layout. When you provide an image to a TVPosterView, it works out just the right amount by which the image grows. And, TVPosterView is an ideal API for creating a UI like this. It's really easy. And, that's TVPosterView.
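A rough sketch of configuring one (the image name and label text are made up; title and subtitle here stand in for the footer's two labels):

```swift
import UIKit
import TVUIKit

func makePoster() -> TVPosterView {
    let poster = TVPosterView(image: UIImage(named: "PupperOfTheDay"))
    poster.title = "Pupper of the Day"       // first footer label
    poster.subtitle = "Golden Retriever"     // second footer label
    return poster
}
```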
Next is the caption button. Caption button is about calls to action. TVCaptionButtonView is a composed view specializing in presenting button-like content, and a footer, which itself is composed of up to two labels. The content view has a blurred background, and can have either an image or text. When a CaptionButtonView is in focus, it floats, and unlike the PosterView, it increases its size only in the leading, top, and trailing directions. You can also limit the floating motion to horizontal only or vertical only.
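A rough sketch (the content and the exact property names here are assumptions based on the description above):

```swift
import UIKit
import TVUIKit

func makePlayButton() -> TVCaptionButtonView {
    let button = TVCaptionButtonView()
    button.contentText = "Play"                       // or contentImage = UIImage(named: "PlayIcon")
    button.title = "Play Trailer"
    button.subtitle = "2 min"
    // Assumed case name: constrain the floating motion so a row of buttons moves together.
    button.motionDirection = .horizontal
    return button
}
```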
And, when you use multiple TVCaptionButtons and give them a consistent motion direction, you can create a sense of grouping. And, TVCaptionButton makes it really easy to do this. Next is CardView. CardView is about custom views. And, TVCardView specializes in presenting an arbitrarily composed view hierarchy. When a CardView is in focus, its content view floats and all of its subviews move in unison as part of the floating content view.
TVCardView is a great way to create a UI like this. It's really straightforward. Next, is monogram. MonogramView is for representing people. And, TVMonogramView has a circular content image, and a footer, which itself is composed of up to two labels. When you don't provide either a person name, or an image, TVMonogramView provides a generic silhouette. If you do provide a person name, TVMonogramView will create a monogram from the initials.
And, of course, when you provide an image, it will simply use it. When TVMonogramView is in focus, the labels move out of the way for the image to grow in size. When you use a TVMonogramView, creating a UI like this becomes really straightforward.
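Rough sketches of both (the artwork, names, and any property not described above should be treated as assumptions):

```swift
import UIKit
import TVUIKit

// A card that floats an arbitrary view hierarchy as one unit when focused.
func makeLevelCard() -> TVCardView {
    let card = TVCardView()
    let artwork = UIImageView(image: UIImage(named: "LevelArtwork"))
    artwork.frame = CGRect(x: 0, y: 0, width: 300, height: 200)
    card.contentView.addSubview(artwork)   // subviews of contentView move in unison
    return card
}

// A monogram representing a person; with no image, the initials become the monogram.
func makePersonMonogram() -> TVMonogramView {
    let monogram = TVMonogramView()
    var name = PersonNameComponents()
    name.givenName = "Hans"
    name.familyName = "Kim"
    monogram.personNameComponents = name
    monogram.title = "Hans Kim"
    return monogram
}
```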
Now, you might have noticed that there's a consistent pattern among these four elements. Namely, there's a main content view, and an optional header and footer. And, when this composition comes into focus, the header and footer move out of the way, and the content grows in size. This common behavior is encapsulated in a base class, TVLockupView. Some of the things you can customize in a TVLockupView are the explicit content size, which is really helpful when you're laying out multiple lockup views, and the amount by which the content grows when it's in focus. The latter is expressed as directional insets, so you can even specify a different amount in each of the four directions. You may recall that the CaptionButtonView actually uses this.
When you're providing your own content to a TVLockupView, you can take advantage of the TVLockupViewComponent protocol. Whenever the TVLockupView's state changes, it will call the updateAppearance(forLockupViewState:) method on all of its subviews that implement it. This is your subview's opportunity to update its behavior, or customize its appearance based on the state.
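A minimal sketch of a component reacting to those state changes (BadgeView is hypothetical, and the specific state case name used here is an assumption):

```swift
import UIKit
import TVUIKit

class BadgeView: UIView, TVLockupViewComponent {
    // Called by the containing TVLockupView whenever its state changes.
    func updateAppearance(forLockupViewState state: TVLockupView.State) {
        // Emphasize the badge while the lockup is focused (case name assumed).
        alpha = (state == .focused) ? 1.0 : 0.6
    }
}
```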
You can use TVLockupView to create your own widget that responds to focus interaction, or further customize the four special purpose subclasses we just discussed. So, that's TVLockupView and its subclasses. Finally, you may recall seeing something like this. It's simple, but its simplicity disguises just how difficult it is to implement this screen. TVUIKit makes it really easy to do this, and that's with TVDigitEntryViewController. TVDigitEntryViewController manages a fullscreen view that presents a title label, prompt label, a digit view, and the numeric keyboard.
Among the things you can customize in a TVDigitEntryViewController are the number of digits, whether the entry is secure or not, and a completion handler that allows you to process the entered digits. Instead of just talking about it, I'd like to invite my colleague Marshall over to give us a demo.
[ Applause ]
Thank you, Hans. My name's Marshall, and today we're going to take a look at how you can use TVDigitEntryViewController to collect numerical data from your users. So, I have an app here that we call Top Movies which allows me to watch my favorite content, but not all this content may be appropriate for everybody in my household, so what I'd like to do is protect it behind a PIN code, so I can restrict who's allowed to watch it. So, what we have here is a collection view, full of some TVPosterViews that Hans introduced.
So, if we dive into our collection view's didSelectItemAt indexPath. The first thing we want to do is vend a TVDigitEntryViewController, and set appropriate title and prompt text to let the user know they should enter a 5-digit passcode. We set the number of digits to 5, and we set the secure entry flag to "true," since we are collecting a passcode.
Next, we're going to implement the entryCompletionHandler. This returns a string once the user has filled in the total number of digits in the digit view. So, here for now, since I'm working on the app, we're just going to check to see if it's all 1's. If it is, we're going to dismiss the view controller and play the content. Otherwise, we're going to update the prompt text to let the user know that it was an invalid passcode, and we're going to call clearEntry(animated:) with "true."
What this will do is clear out all the digits, and shake the digit view to let the user know that something went wrong. And then, finally, we're going to present the viewController. And, since this is just a viewController, we can use custom presentation styles. So here we're going to use the blurOverFullScreen, which was introduced in tvOS last year.
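Pulling that together, a sketch of the flow Marshall just described might look like this (property names such as titleText, promptText, and isSecureDigitEntry are assumptions based on the description; the "11111" check mirrors the demo):

```swift
import UIKit
import TVUIKit

func presentPasscodePrompt(from presenter: UIViewController, onSuccess: @escaping () -> Void) {
    let digitEntry = TVDigitEntryViewController()
    digitEntry.titleText = "Restricted Content"
    digitEntry.promptText = "Please enter your 5-digit passcode"
    digitEntry.numberOfDigits = 5
    digitEntry.isSecureDigitEntry = true

    // Called once the user has filled in all of the digits.
    digitEntry.entryCompletionHandler = { [weak digitEntry] enteredDigits in
        if enteredDigits == "11111" {
            digitEntry?.dismiss(animated: true) {
                onSuccess()   // play the protected content
            }
        } else {
            digitEntry?.promptText = "Invalid passcode. Please try again."
            // Clears the digits and shakes the digit view to signal the error.
            digitEntry?.clearEntry(animated: true)
        }
    }

    // It's just a view controller, so standard presentation styles apply.
    digitEntry.modalPresentationStyle = .blurOverFullScreen
    presenter.present(digitEntry, animated: true)
}
```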
Let's run this and see how it works. So, I'm going to select my movie. And, we see we get the blurOverFullScreen animation, and we get prompted to enter our passcode. Now, we know that it's all 1's right now, so let's see what happens if I enter all 2's. We see that we get a nice shake animation, and we update the text. Now, we go enter all 1's.
We know that's the correct passcode, so we dismiss the viewController and we can play our content. Now, the user most likely wants to use their own passcode, so what we're going to do is take this settings button here and allow the user to set their own passcode.
So, I've got my [inaudible] action down here. And, again, we're going to vend a TVDigitEntryViewController, and we're going to set the title text to let them know that we're collecting a passcode that will restrict which content can be watched, and again, set the number of digits to 5.
Next, we're going to implement the entryCompletionHandler again. Now, I have an extra variable up here, an optional string called passcodeToVerify. What this is going to do is hold the passcode the user enters the first time, so we can verify it and make sure that they actually entered the correct passcode. So, we see that when our completionHandler gets called, we check to see if the passcodeToVerify is nil.
If it is, this is the first time, so we're going to ask them to verify their passcode, and we're going to call clearEntry(animated:), except this time with "false." We don't want to shake it, because they didn't do anything wrong, but we want to clear it so they can enter it again.
Otherwise, if there is a value stored in the passcodeToVerify, we know they are verifying it. And, we check to make sure they're equal. If they are, we save the new passcode, and dismiss the viewController. Otherwise, we reset the prompt text back to what we originally asked, clear the entry, and then set the passcodeToVerify to nil so they can try again. And, finally we present our viewController.
So, now we're going to go to Settings. It's going to ask us to please set a passcode. Let's go ahead and set it all 1's first. Now it's asked us to verify. But, this time I'm going to enter all 2's. Since we know that was incorrect. So, now let's do it correctly this time. We've got five 1's, we're going to verify our PIN code. And, we can save the PIN code so the user can use it when they need to. And, that's how you can implement TVDigitEntryViewController into your application. I'd like to invite Hans back up on stage.
[ Applause ]
Thank you, Marshall. That was a great demo showing just how easy it is to adopt common UI patterns using TVUIKit. In addition, TVUIKit also has built-in support for localization, including right-to-left language support, and of course, accessibility. TVUIKit is available in the Developer Beta, so please download it and check out the headers. And, as you use it in your app, we think it will really save you time and resources so you can instead focus on what makes your app truly shine. So, that's TVUIKit.
This afternoon we've looked at just a few areas where tvOS 12 can improve your app's user experience and performance. We learned about Password AutoFill, which makes it really easy for your customers to sign in to your app, and it's especially helpful if they have strong passwords. We also saw how we've expanded tvOS's focus engine to support all apps, regardless of how they are rendered. This is truly game-changing.
And, finally we looked at TVUIKit, which makes it really easy to adopt common UI patterns on tvOS. We have a session page with all of this information and more. And, we'll be available at the tvOS labs, and Safari WebKit labs throughout the week. Please drop by and say hello, and bring your questions and code. Thanks for joining us this afternoon. Have a wonderful WWDC.