WWDC17 • Session 228

Making Great SiriKit Experiences

App Frameworks • iOS, watchOS • 50:02

People love Siri so it's essential to use SiriKit optimally for your app to provide a great Siri experience. Learn how to solve common pitfalls related to contact resolution, using Touch ID, and more. Find out how UI tests can benefit your SiriKit extension and how they can speed up the development process by automatically giving Siri text to process.

Speakers: José Angel Castillo Sanchez, Rohit Dasari

Unlisted on Apple Developer site

Transcript


Welcome to Making Great SiriKit Experiences. My name is Jose Angel Castillo Sanchez, and together with my coworker Rohit, we want to tell you all about it. It has been a year since we announced SiriKit. Since then, many applications have adopted it. We have heard the feedback from you, the developers. With that, we have put together four major areas that we want to focus on today.

First, we want to talk about contact resolution. If your application is a messaging, calling, or payments application, you have probably dealt with it. After that, we're going to talk about security. If your application requires authorization to finalize a transaction, we'll tell you how to achieve that in a few simple steps.

Next, we're going to cover custom vocabulary. Siri understands many words in many languages, but there might be some words specific to your application that you want to tell Siri about. Last but not least, we're going to cover UI testing. Since last year, we have made great improvements in how you can automate Siri testing.

Let's get started with contact resolution. For this, I want to use a messaging application you may be familiar with: UnicornChat. We're going to walk through the process of sending a message: text hello to John on UnicornChat. We clearly see that John is who we want to message. In this case, my address book has multiple Johns: John Appleseed and John Baily. From the user's request alone, it's not yet clear which of these Johns I want to message.

In iOS 10.3, we introduced a new property on the INPerson object: siriMatches. siriMatches represents what Siri understood; in this case, the multiple Johns. It can also include homophones of John. You will get an array of INPerson objects representing all the possibilities that Siri understood. Let's walk through the flow of text hello to John on UnicornChat and see how we can take advantage of siriMatches to resolve our contact. Text John hello on UnicornChat. Siri clearly understands that you want to text John.

After that, Siri is going to call into your app extension, specifically your resolveRecipients method. There, Siri is going to provide you an array of INPerson objects, in this case John Appleseed and John Baily. As we saw earlier, John Appleseed and John Baily are part of my address book. They could also be part of the custom vocabulary from my application's own address book.

At this point, the resolveRecipients method gets invoked and it's up to the app extension to make a decision. Do we have enough information to decide which John we want to send the message to, or do we need help from the user to decide between these Johns? In this case, we want to disambiguate.

We are going to return a disambiguation list back to Siri. The list is going to be John Appleseed and John Baily. Siri is going to provide a great experience for our user and ask which one: John Baily or John Appleseed? At this point, it's up to the user to decide which John they want to message.

For the purpose of this flow, let's assume John Baily was the one we chose. Siri clearly understands that John Baily is the contact we want to use. Siri is going to provide back to your resolveRecipients method that John Baily is the chosen INPerson object. At this point, it's clear that we want to message John Baily, and your intent handler is ready to move to the next stage. To recap, we walked through the process when we have multiple Johns in our address book and the flow to disambiguate between them. We decided that we want to message John Baily.

But what if John Baily has multiple phone numbers or email addresses, or a combination of both? It is not yet clear which of these handles we want to use to send our message. So, let's take a step back to the point where the user chose John Baily. Siri understands that we want to message John Baily.

Siri is going to communicate back to your extension through the resolveRecipients method with John Baily. At this point, it's up to your resolveRecipients implementation to access your address book and decide whether you have enough information to know which handle should be used. In this case, as we saw, we have multiple handles, so we need user input. Focusing on the phone handles, we're going to provide a disambiguation list as we did before with the multiple Johns, but this time it contains the multiple phone number handles. As we saw, we have home, mobile, and work.

Siri is going to bring up a prompt to the user. Which one do you want to use? So, in this case, it's pretty clear to the user that they need to pick one handle to send their message. Let's assume the user wants to send to the mobile number.

The user says mobile to Siri. We go through the same cycle, where Siri understands that you want to message John Baily on the mobile number. Siri is going to share that information back with the resolveRecipients method. Here you get the complete picture: we want to message John Baily, and we want to do it on the mobile handle.

At this point, the resolveRecipients method returns success, so we have finished picking our recipient and we can move on to the next stage. As we know, any SiriKit intent implementation has three steps: resolve, confirm, and handle. So, let's take a look at sample code for the resolveRecipients implementation.

Here I have resolveRecipients implemented. The first thing that we're going to do is make sure a recipient was provided. In the case that no recipient was provided, we can ask Siri to request that value from the user, and we do that by returning needsValue. Siri will prompt the user: who do you want to send your message to? In the sample we walked through, the user said they want to message John.

So, we got John. We got John Appleseed and John Baily. Here is where we're going to resolve which of these recipients we want to use, or provide the disambiguation list back to Siri so it can prompt the user. In the case where we need to provide a list back to Siri, we simply return it through the completion handler. Let's see a demo of this in action.

Here I have the resolveRecipients method to be implemented in my UnicornChat application. As we saw, the first thing that we need to do is validate that we got a recipient. Here we're making sure that a recipient was provided. In the case that no recipient was provided, we ask Siri to prompt the user for that value.

Once recipients have been provided, we're going to process them. Here we walk through all the recipients that we got from Siri so we can process them one by one. The first thing that I'm going to do is check with my address book to resolve this recipient. My address book will do the heavy lifting of making sure that we have a contact backing that recipient.

Once that's done, we need to make a decision. The first case is where there's no contact backing that recipient. Going back to our example of John, if I don't have any John in my address book, we simply tell Siri that the recipient is not supported. That will make Siri ask the user to provide a valid recipient.

In the case where we found one, we need to process that recipient. So, let me walk through this piece of code. The first thing that we're going to do is check whether that recipient has one or multiple handles. In this case, we are focusing on the phone numbers.

When there is more than one phone number and we cannot make the decision for the user, we're going to return a disambiguation resolution result with the handles that we want to display. This is going to be used by Siri to ask the user which of the handles they want to use.

In the case where we have a single recipient with a single handle, we simply return success. We tell Siri we found the recipient and we're ready to move on to the next stage. We have a special case here. This special case will cause Siri to ask the user a question: are you sure you want to message John? I use this in cases where I do some level of fuzzy matching, when the contact that Siri requested doesn't completely match the one I have in my address book.

We're missing one more case to handle. This is the case where we have more than one match. As we saw, we have John Baily and John Appleseed in my address book, so we need to disambiguate between them. Here we're using the same technique: a disambiguation, providing the list of matches.

There is one more case. As we saw in the flow that we walked through, first we asked about John, and then when the user picked John Baily we asked which handle. Siri is going to call the resolveRecipients method again, and we need to handle that case: if the handle has already been provided, we can simply return success with that recipient. So, let's see this in action.

In order to do that, I'm going to edit the scheme. You can provide the Siri intent query from the get-go, so you don't actually have to trigger Siri and speak to it. So, when I run my example, it's going to trigger Siri for me and it's going to say text John hello on UnicornChat.

As we see, we have multiple Johns in my address book, so we need to pick one. Let's pick John Baily. As we walked through in the flow, we also have to pick a handle. In this case, we want to pick the mobile handle. At this point, the recipient has been resolved and we are ready to send the message.
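
Pulling the pieces of that demo together, a resolveRecipients implementation along these lines could look roughly like the following sketch; UnicornAddressBook is a hypothetical stand-in for the app's own contact store.

    import Intents

    // Hypothetical stand-in for UnicornChat's contact store. It returns one INPerson
    // per usable handle, so a single contact with three phone numbers yields three entries.
    struct UnicornAddressBook {
        static let shared = UnicornAddressBook()
        func people(matching person: INPerson) -> [INPerson] {
            // A real implementation would search the app's contacts; here we just return
            // the person Siri gave us plus any alternatives it suggested via siriMatches.
            return [person] + (person.siriMatches ?? [])
        }
    }

    class SendMessageIntentHandler: NSObject, INSendMessageIntentHandling {

        func resolveRecipients(for intent: INSendMessageIntent,
                               with completion: @escaping ([INPersonResolutionResult]) -> Void) {
            // 1. Make sure Siri gave us at least one recipient; otherwise ask for one.
            guard let recipients = intent.recipients, !recipients.isEmpty else {
                completion([.needsValue()])
                return
            }

            var results: [INPersonResolutionResult] = []
            for recipient in recipients {
                let matches = UnicornAddressBook.shared.people(matching: recipient)
                switch matches.count {
                case 0:
                    // No John at all: have Siri ask for a valid recipient.
                    results.append(.unsupported())
                case 1:
                    // Exactly one contact with exactly one handle: done. (For fuzzy
                    // matches, .confirmationRequired(with:) asks "are you sure?" instead.)
                    results.append(.success(with: matches[0]))
                default:
                    // Several candidates (multiple Johns, or multiple handles for one John):
                    // let Siri ask the user which one to use.
                    results.append(.disambiguation(with: matches))
                }
            }
            completion(results)
        }

        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // Actually sending the message is outside the scope of this sketch.
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }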

[ Applause ]

We just saw how to walk through the process of resolving a contact when trying to send a message. This technique also applies to any other intent that requires contact resolution. Messaging applications are great for sending messages, but they're also great for reading messages. A great example of that is reading messages on CarPlay. If your application implements the INSearchForMessagesIntent, you will get access to the reading experience on iOS as well as on CarPlay. In iOS 11, we introduced a new property to make it easier to reply to the messages that Siri just read to our users.

That new property is conversationIdentifier. It has been introduced on the INMessage object as well as on the INSendMessageIntent. This conversationIdentifier is going to be used to reply to a message. Siri is going to read a message for you from UnicornChat, and the value of this property is going to help you short-circuit the contact resolution process that we just walked through. Let's see the flow of how this is done. Read my messages on UnicornChat. We want to read the messages that we have in UnicornChat. Siri clearly understands that.

Siri is going to send an INSearchForMessagesIntent to your app extension. Your app extension has implemented the INSearchForMessagesIntent handler. This handler is going to provide the list of INMessage objects that Siri is going to read back to the user. As we notice, the INMessage object contains a property named conversationIdentifier.

This identifier uniquely represents that conversation. With that uniqueness, you can use the conversationIdentifier when the user wants to reply. As we see, Siri finished reading the message and it will ask the user if they want to reply. To continue this flow, let's assume the user wants to reply saying hello.

Siri understands the user saying hello, meaning: I want to reply to this message and I want the content to be hello. Siri is going to send down an INSendMessageIntent with the conversationIdentifier populated. This conversationIdentifier value is the same one that you populated on the INMessage object that Siri used for reading the message.

This value is now provided when the resolveRecipients method is called. With it, we can short-circuit the contact resolution by going straight to our messaging conversation, grabbing that conversation, getting the recipients, and moving on to the next step of replying to the message. To learn more about enabling your app for CarPlay, I recommend you take a look at the video that we released this year.
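
As a sketch of both sides of that conversationIdentifier round trip; the Conversation model and UnicornChatStore are hypothetical stand-ins for UnicornChat's own storage.

    import Foundation
    import Intents

    // Hypothetical model and store standing in for the app's own persistence.
    struct Conversation {
        let identifier: String            // unique per conversation
        let latestMessageText: String
        let sender: INPerson
        let recipients: [INPerson]
    }

    struct UnicornChatStore {
        static let shared = UnicornChatStore()
        func unreadConversations() -> [Conversation] { return [] }
        func conversation(with identifier: String) -> Conversation? { return nil }
    }

    class MessagesIntentHandler: NSObject, INSearchForMessagesIntentHandling, INSendMessageIntentHandling {

        // 1. When Siri reads messages, stamp every INMessage with its conversation's identifier.
        func handle(intent: INSearchForMessagesIntent,
                    completion: @escaping (INSearchForMessagesIntentResponse) -> Void) {
            let response = INSearchForMessagesIntentResponse(code: .success, userActivity: nil)
            response.messages = UnicornChatStore.shared.unreadConversations().map { conversation in
                INMessage(identifier: UUID().uuidString,
                          conversationIdentifier: conversation.identifier,
                          content: conversation.latestMessageText,
                          dateSent: Date(),
                          sender: conversation.sender,
                          recipients: conversation.recipients,
                          messageType: .text)
            }
            completion(response)
        }

        // 2. When the user replies, the same identifier comes back on the send intent,
        //    so contact resolution can be short-circuited.
        func resolveRecipients(for intent: INSendMessageIntent,
                               with completion: @escaping ([INPersonResolutionResult]) -> Void) {
            if let conversationID = intent.conversationIdentifier,
               let conversation = UnicornChatStore.shared.conversation(with: conversationID) {
                completion(conversation.recipients.map { INPersonResolutionResult.success(with: $0) })
                return
            }
            // No conversation identifier: fall back to the full resolution flow shown earlier.
            completion([.needsValue()])
        }

        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // Actually sending the reply is out of scope for this sketch.
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }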

In addition to contact resolution, we also looked at response codes. Last year, when we introduced SiriKit, we introduced a set of response codes that you might be familiar with in the messaging world. For example, when the service is not available, you can return a failure, service not available, and Siri will gladly tell the user what went wrong.

We looked at that and we have improved the response codes for calling intents, messaging intents, and payment intents. In this case, I have an example of a call intent where we introduced an invalid number response code. It helps us be more transparent with the user when the requested call doesn't have a valid number. That way Siri provides a nice experience and it's pretty clear what went wrong. For more information on this, you can look at the documentation.
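
A rough sketch of how a calling app might return that code: the .failureNoValidNumber case name is my reading of the iOS 11 SDK and should be checked against the INStartAudioCallIntentResponse documentation, and isValidPhoneNumber is a hypothetical helper.

    import Intents

    class StartAudioCallIntentHandler: NSObject, INStartAudioCallIntentHandling {

        func handle(intent: INStartAudioCallIntent,
                    completion: @escaping (INStartAudioCallIntentResponse) -> Void) {
            guard let contact = intent.contacts?.first,
                  let number = contact.personHandle?.value,
                  isValidPhoneNumber(number) else {
                // iOS 11 response code: tell Siri specifically that there is no valid
                // number, so the user hears exactly what went wrong.
                completion(INStartAudioCallIntentResponse(code: .failureNoValidNumber, userActivity: nil))
                return
            }
            // Calls are completed in the app, so hand off with .continueInApp.
            completion(INStartAudioCallIntentResponse(code: .continueInApp, userActivity: nil))
        }

        // Hypothetical validation helper.
        private func isValidPhoneNumber(_ number: String) -> Bool {
            return !number.isEmpty
        }
    }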

To recap what we have covered so far, we talked about INPerson.siriMatches, the new property that we introduced in iOS 10.3 that can help you handle contact resolution better. After that, we walked through the process of resolving multiple handles, meaning multiple phone numbers or email addresses or a combination of both.

We also covered the introduction of the conversationIdentifier and how it helps with replying to messages that Siri just read. And we briefly covered response codes and the enhancements we have provided in iOS 11. With that, I'm going to hand over to my coworker Rohit so he can talk to you more about security.

[ Applause ]

Thank you, Angel. Hello everyone. My name is Rohit Dasari. I work on the SiriKit team. So, let's talk about security in the context of SiriKit. Let's go back to the example of UnicornChat which we saw earlier. You see here that the user is able to send a message even when the device is locked. Now, Siri lets this through. Siri makes a decision, based on the intent, about how to balance usability and convenience with security.

In this case, for the messaging case, Siri lets requests through even when the device is locked. Some of you might be required to ask the user to unlock the device before handling intents of this kind. This might be due to some security policies or some corporate or legal requirements that your app has to conform to. There's a way to do this with the current SiriKit mechanisms without any code changes. That is zero code changes.

All you have to do is go into the Info.plist of your SiriKit Intents extension, and in the place where you list the intents that your extension supports, add an entry for the intents you want to restrict. The section is called IntentsRestrictedWhileLocked, which speaks for itself. With this one change, Siri now knows that when the user wants to perform an intent which falls into this category, the device must first be unlocked. Zero lines of code changed.
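
As a sketch, the relevant portion of the Intents extension's Info.plist ends up looking roughly like this; the intent names listed are just examples.

    <key>NSExtension</key>
    <dict>
        <key>NSExtensionPointIdentifier</key>
        <string>com.apple.intents-service</string>
        <key>NSExtensionAttributes</key>
        <dict>
            <key>IntentsSupported</key>
            <array>
                <string>INSendMessageIntent</string>
                <string>INSearchForMessagesIntent</string>
            </array>
            <key>IntentsRestrictedWhileLocked</key>
            <array>
                <string>INSendMessageIntent</string>
            </array>
        </dict>
    </dict>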

But sometimes that's not enough. Unlocking the device is not enough. Let's take the example of an app which users can use to lock and unlock their cars. For an app like this to integrate with SiriKit, it has to handle an intent called INSetCarLockStatusIntent. This is an intent we added in iOS 10.3. Let's take a quick look at this intent. It's a pretty simple intent. There are two properties: the name of the car you want to lock or unlock, and the state that you want to set on that car.

Once the app provides an extension that handles this intent, it can handle commands like this, where the user can say unlock my car and Siri will know to invoke the app extension, give it the intent, and let the extension handle it. Notice I'm not saying the name of the app here.

This is because Siri is smart enough to know when there is only one qualifying app to handle a particular intent, and it will not ask the user to confirm the name of the app or to disambiguate. So, with this intent handled, the example app I have reflects the change in the state of the car's doors by showing it graphically, as you can see here.

So, this is a pretty sensitive transaction. You are unlocking somebody's car. So, for transactions like this, even when the device is unlocked, you might want to present another authorization request to the user. We recommend you use the secure authorization mechanism which is already built into iOS: Touch ID. Using the local authentication API, you can even customize the prompt which is shown along with Touch ID, to give the user some context about why they're being asked to authorize all over again even when the device is unlocked.

In case the user's device does not support Touch ID, or the user has not configured Touch ID yet on that device, the local authentication API allows you to fall back to unlocking the device using just the passcode. You notice it shows the same customized string provided by your app to give the user some context. This is pretty simple to do. Let me show you a demo of how to do it.

So, what I have here is the intent handler for the INSetCarLockStatusIntent of the test app I just showed you. Let us see what this app looks like. As we saw, there is a representation of the user's car, and it also represents the state of the car, whether it's locked or unlocked. In this case, it's locked and I can unlock it with a tap.

I'll set it back to the locked state, and I want to ask Siri to unlock the car using this app. So, as my colleague Angel showed earlier, I'm going to use the scheme editor to enter the request I want to send to Siri, and I'm going to issue the request.

The scheme editor is very handy, in case you haven't realized. It saves you the trouble of speaking to your phone when you are testing, which is really useful in a shared workspace environment. So, in this case, Siri has handled the request, given it to the app, and has reported success. Let us confirm that it actually did unlock the car.

So, now the car's state is updated to show that it has been unlocked. Let's go back to the intent handler. This intent handler extracts the fields that we saw on the intent, that is whether it should lock or unlock the car, and it looks up an internal car object matching the name on the intent.

Then it sets the lock state on the car object based on the input on the intent. And, finally, after having changed the state of the car, it returns a success response to Siri. In case of a failure, or when there is not enough information on the intent, it returns a failure response.

So, notice how this intent handler is unconditionally handling the request that the user gave it. We want to add a prompt here using the local authentication API. So, once we have confirmed that the intent has all the parameters we need, let's call the local authentication API.

In order to call the local authentication API, I'm first generating a string based on the lock state that is requested. That is, if the user wants me to lock the car, then I will show the appropriate prompt. Then, on the local authentication context, I'm going to ask it to evaluate a certain policy.

In this case, I'm asking it to evaluate the device owner authentication policy. This is a policy which attempts to do a Touch ID prompt first and if that doesn't work it falls back to the device passcode prompt. Next, I pass in the prompt string which I just generated in the previous line, and finally I have the reply block. So, let's fill in the reply block.

If the user has granted access via Touch ID or a device passcode prompt, I'm going to go ahead and change the state of the car as I did earlier without prompting, but this time I'm actually waiting for the user to give me access. And then I'm going to call the completion block with the success response to tell Siri that request has been completed successfully.

I also need to tell Siri when there is a failure. That is, if access was not granted, I need to tell Siri that there was a failure, and I do that by returning a failure response using the same completion block. So, with this change, let us go and see if unlocking the car prompts the user for authentication. So, let's restore the car state back to being locked and run this. I'm using the same request as earlier, which is unlock my car.

So, you notice during the Siri request a prompt comes up. If the simulator were configured for Touch ID, it would have shown a Touch ID prompt. In this case, I configured it to use the device passcode prompt. So, let me enter my super secure password. Please look away. And once I've entered the password, the request went through. Let us confirm that it actually went through by launching the app. Yep. The car was unlocked.

[ Applause ]

So, with a few lines of code, we were able to add a local authentication prompt during a Siri request. Using the local authentication API allows you to do multiple forms of authentication: Touch ID, if supported and configured, or a device passcode if not. The place where you call this local authentication API is in the handle method of your intent handler, before calling the completion block. And because the local authentication framework is part of the system, Siri knows to coordinate with it, so that if the user has made a request over Hey Siri, the request waits for the prompt to be responded to.
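
Putting that together, a minimal sketch of such a handler might look like this; the step that actually updates the car's state is left as a comment because it would be app-specific.

    import Intents
    import LocalAuthentication

    class SetCarLockStatusIntentHandler: NSObject, INSetCarLockStatusIntentHandling {

        func handle(intent: INSetCarLockStatusIntent,
                    completion: @escaping (INSetCarLockStatusIntentResponse) -> Void) {
            // In the real handler this string would depend on whether the request is
            // to lock or unlock; a fixed prompt keeps the sketch short.
            let reason = "Confirm that you want to change the lock state of your car"

            let context = LAContext()
            // .deviceOwnerAuthentication tries Touch ID first and falls back to the
            // device passcode, matching the behavior described in the demo.
            context.evaluatePolicy(.deviceOwnerAuthentication, localizedReason: reason) { granted, _ in
                if granted {
                    // updateCarLockState(from:) would be the app's own data-layer call.
                    // self.updateCarLockState(from: intent)
                    completion(INSetCarLockStatusIntentResponse(code: .success, userActivity: nil))
                } else {
                    completion(INSetCarLockStatusIntentResponse(code: .failure, userActivity: nil))
                }
            }
        }
    }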

To learn more about the local authentication API, we refer you to this talk from 2014. Some of you might be working on apps which require a payment to be completed before a Siri request can be satisfied. In this example, the user wants to book a RainbowCar on UnicornRides, and you need the user to make a payment before you can accept this request.

We recommend you use the secure Apple Pay API for handling payment transactions during your Siri request. Using the Apple Pay API gives you the same advantages as using the local authentication API: you get coordination with Siri, and the place where you call this API is the same, in the handle method before you call the completion block.
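
As a rough, hedged sketch of what that could look like: a small helper that presents the Apple Pay sheet from the handle step. The merchant identifier, amounts, and the ride-booking wiring are all hypothetical; treat this as an outline rather than the session's actual sample code.

    import PassKit

    // Hypothetical helper for a ride-booking extension; only the payment step is sketched.
    final class RidePaymentHelper: NSObject, PKPaymentAuthorizationControllerDelegate {

        private var controller: PKPaymentAuthorizationController?
        private var paymentCompletion: ((Bool) -> Void)?

        // Call this from the handle method of the ride-booking intent handler,
        // before calling the intent's completion block.
        func requestPayment(completion: @escaping (Bool) -> Void) {
            paymentCompletion = completion

            let request = PKPaymentRequest()
            request.merchantIdentifier = "merchant.com.example.unicornrides"   // hypothetical
            request.supportedNetworks = [.visa, .masterCard, .amex]
            request.merchantCapabilities = .capability3DS
            request.countryCode = "US"
            request.currencyCode = "USD"
            request.paymentSummaryItems = [
                PKPaymentSummaryItem(label: "RainbowCar ride",
                                     amount: NSDecimalNumber(string: "14.99"))
            ]

            // PKPaymentAuthorizationController presents the Apple Pay sheet without
            // needing a view controller, so it can be used from an Intents extension.
            let controller = PKPaymentAuthorizationController(paymentRequest: request)
            controller.delegate = self
            self.controller = controller
            controller.present { presented in
                if !presented { completion(false) }
            }
        }

        func paymentAuthorizationController(_ controller: PKPaymentAuthorizationController,
                                            didAuthorizePayment payment: PKPayment,
                                            completion: @escaping (PKPaymentAuthorizationStatus) -> Void) {
            // Hand payment.token to your payment processor here (omitted).
            completion(.success)
            paymentCompletion?(true)
        }

        func paymentAuthorizationControllerDidFinish(_ controller: PKPaymentAuthorizationController) {
            controller.dismiss(completion: nil)
        }
    }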

Now let's talk about teaching Siri new words. Let's take this example of an app called UnicornPay which allows the user to query the balances on their accounts. An app like this might also allow users to give names to accounts in the app. Now, these names are user configured, so they need not be real English words. In this example, we have an account called NestEgg, which would have been a word, except there is no space. It's made of two English words which, without help, Siri might not be able to recognize.

When we introduced SiriKit last year, we talked about some mechanisms that will allow you to help Siri understand words like these. These words are not unique to an app, but are unique to a particular user of that app. Examples of these are names of photo albums, names of workouts, and so on.

The mechanism that we recommend you use to teach Siri these words is the INVocabulary API. For a refresher on custom vocabulary, we refer you to the talks from last year when we introduced SiriKit, but today let's dive deeper into the INVocabulary API. The API is pretty simple. It has one method which takes an NSOrderedSet of vocabulary strings, and this set is meant for a particular vocabulary item type; for example, a photo album name or a workout name.
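
In code, registering user-specific strings like these could look roughly as follows; the account names are made up, and .paymentsAccountNickname is assumed to be the appropriate INVocabularyStringType for this example.

    import Intents

    // Teach Siri the user's custom account names (the plain-string API).
    // This is called from the app itself, not from the Intents extension,
    // whenever the user's account names change.
    let accountNames: NSOrderedSet = ["NestEgg", "RainyDayFund"]   // example values
    INVocabulary.shared().setVocabularyStrings(accountNames, of: .paymentsAccountNickname)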

Note that this is an ordered set, which means that Siri gives preference to the words towards the beginning of the set. But also note that these are strings that you're teaching Siri. Plain strings with no context around them. That might have some limitations. Let us see what those limitations are by diving deeper into how this would actually work.

Here we have an application whose private store has some accounts which are named by the user. To tell Siri about these accounts, the app would use the INVocabulary API and send those strings using the method we just saw. With this knowledge, Siri is now aware of these new words. The next time the user makes a request that uses one of these words, Siri is capable of looking up the knowledge that the app has given it, identifying candidates in the user's input that match a custom word like this, and recognizing it correctly, as the user would expect. Once this word has been recognized, Siri will be able to give it to the application's Intents extension and expect it to use that word to look up the corresponding object in the store.

But note that this is just a plain string with no context around it. So, your application might not have enough information to look it up unambiguously. It might have to do a fuzzy match; there might not be an index based on that string in your application's store. So, this is not an ideal situation. In iOS 11, we thought we could make this better.

We added a new API which allows you to send Siri objects, any objects, as long as they conform to the INSpeakable protocol. Again, these objects have to be in an ordered set which follows the same rules as the strings in the older API, and they are meant for a particular parameter type. Let's take a deeper look at what this protocol is. The INSpeakable protocol is a detailed description of the vocabulary item that you are teaching Siri.

The main property on it is the actual word that you're teaching Siri. In this case, it would be the word NestEgg without a space in it, or some other creative misspelling of a workout name, or it could even be a word which has numbers or special characters mixed in.

You can also give Siri hints about how to recognize the words that you are teaching it. In this case, they are spelled in a sounds-like pattern, which is the same pattern you would use for configuring phonetic names in your Contacts app. With each INSpeakable object, you can also provide a vocabulary identifier which can uniquely identify this object in your app's store. It's a plain string, so we don't restrict what value you can give this identifier, but using a GUID or something like that will help you match this object unambiguously in your app's data store.

Finally, the INSpeakable protocol also allows Siri to provide alternative matches for words which Siri thinks the user might mean. This is analogous to the siriMatches property that my colleague Angel talked about earlier, where an INPerson object, which actually does conform to the INSpeakable protocol, can have candidate alternative matches exposed as siriMatches. With this new protocol and API, let us take a look again at the same example and see how things have changed.

So, the application wants to tell Siri about the same two account objects with the customized names. It would call the INVocabulary API, but this time it would give it the objects themselves, and Siri will use the INSpeakable parts of these objects. When the user makes a request containing a word that matches one of these new objects Siri has learned about, Siri will recognize the pattern and replace it with the expected spelling. And when communicating back to the app's Intents extension, instead of giving it a plain string without any context, Siri will now give it the INSpeakable object. Because this INSpeakable object has the vocabulary identifier, which, as I said earlier, can be a unique ID for that object, the lookup on the app side for this object becomes easier.
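
A sketch of the object-based registration, using the concrete INSpeakableString class rather than a custom INSpeakable type; the identifiers, names, and pronunciation hints are examples.

    import Intents

    // Each account is now registered as a full INSpeakable object instead of a bare string.
    let nestEgg = INSpeakableString(vocabularyIdentifier: "account-1138",     // example unique ID
                                    spokenPhrase: "NestEgg",
                                    pronunciationHint: "nest egg")            // sounds-like hint
    let vacationFund = INSpeakableString(vocabularyIdentifier: "account-2187",
                                         spokenPhrase: "VacationFund",
                                         pronunciationHint: "vacation fund")

    // iOS 11 API: pass the objects themselves, again as an ordered set.
    INVocabulary.shared().setVocabulary(NSOrderedSet(array: [nestEgg, vacationFund]),
                                        of: .paymentsAccountNickname)

When the extension later receives one of these back on an intent, the vocabularyIdentifier travels with it, so the account can be fetched by ID instead of by fuzzy name matching.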

So, we hope that with this new API, requests like this, where the user issues a request with a custom vocabulary word, can be handled by your app without worrying about doing fuzzy matching against your app's data store. The other case where you might want to teach Siri new words is when there's a word which is unique to your app, but common to all users of your app. An example of this would be a ride-booking app which has a branded or customized vehicle name, or a workout app which might have a non-English workout name. In order to teach Siri words like this, we recommend that you use the AppIntentVocabulary.plist.

Once you provide this plist to Siri, in examples like this, where a ride-booking app has customized its ride names, which are not real words in this case, as you can see, RainbowCar without a space is not a real word, users can now say those words and expect Siri to recognize them.

The way you register words like this with Siri is to use the custom vocabulary plist, where the main pieces are the intent parameter for which you are providing the custom vocabulary item. In this case, this parameter would itself be of type INSpeakable, and inside that parameter the subfields of the INSpeakable object can be specified.

For example, you can give a vocabulary identifier for the SparkleCar which you are teaching Siri about, the actual name that you're teaching Siri as the spoken phrase, and the pronunciation hint, similar to the one we saw earlier for user-specific vocabulary. Having done this, Siri will now be able to understand requests like this, where the user is using customized vocabulary words which are specific to your app.
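
As a rough illustration of that structure, an AppIntentVocabulary.plist entry for SparkleCar could look something like the following. The key names follow the app-specific vocabulary file format as I recall it, and the parameter name (INRequestRideIntent.rideOptionName), identifier, and pronunciation are examples; check the AppIntentVocabulary.plist reference for the exact keys.

    <key>ParameterVocabularies</key>
    <array>
        <dict>
            <key>ParameterNames</key>
            <array>
                <string>INRequestRideIntent.rideOptionName</string>
            </array>
            <key>ParameterVocabulary</key>
            <array>
                <dict>
                    <key>VocabularyItemIdentifier</key>
                    <string>sparkle-car</string>
                    <key>VocabularyItemSynonyms</key>
                    <array>
                        <dict>
                            <key>VocabularyItemPhrase</key>
                            <string>SparkleCar</string>
                            <key>VocabularyItemPronunciation</key>
                            <string>sparkle car</string>
                        </dict>
                    </array>
                </dict>
            </array>
        </dict>
    </array>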

Now let's talk about how you can test the quality of your SiriKit integration with UI testing. We introduced UI testing a couple of years ago as a mechanism for you to write UI tests, automation tests, inside your project in Xcode. Now that Siri is supported in the simulator, you can run these tests on any hardware configuration without actually needing to have the hardware with you.

These UI tests are automatable, so you can keep them running automatically in your CI or other integration pipelines to make sure that code changes are not breaking existing functionality. Using these UI tests also gives you a certain degree of language independence, so you can test Siri integration in a language which is not the primary language of your development. We will see an example of this shortly.

For further info about UI testing, we refer you to the talk from 2015, when UI testing was introduced. To enable UI testing with Siri, the main API that is new in iOS 11 is a way to refer to the Siri service on the device. Using this reference, you can feed Siri request strings directly into the Siri service in code. And because this is done in code, you no longer need to do a manual test of your Siri integration. Let us see a demo of this.

So, I'll take you back to the same app that we saw earlier, where we added an authentication prompt while unlocking a car. It's the same app: there's an image showing the locked state of the car, and using Siri I'm able to lock and unlock the car.

So, if I want to test my Siri integration in this app, how would I do it as a manual test? I would first launch the app and set the state to what I expect it to be. Let's say I want to test whether my lock requests are working, so I would want to start it off in an unlocked state. Then I would issue a request to lock the car. So, let's issue that request.

OK. And then I would need to verify that the request was actually handled successfully. In order to do that, I would need to launch the app and visually inspect its state. So, it looks like when I asked Siri to lock the car, the car was locked, so the test succeeds. That was the manual test. How would I automate a test like this? Using the UI automation API, I can create a UI tests class in my project by adding it to my UI testing bundle, and create a test like this.

So, a test is basically a function with these four steps. First, I need to set up the test, then I need to invoke Siri, then wait for Siri's response, and then confirm that Siri's request actually went through and that the app actually handled the request. So, let's start by setting up a test.

So, in order to set up the test, I've created a helper method to set the car lock status to false, that is, to unlock the car. So, before running the locking test, I'm unlocking the car. I created this helper method using the UI recording mechanism; if you refer to the UI testing framework talk, you can learn how to use UI recording to generate code like this.

Then, for invoking Siri, I'm going to use the new API which I talked about. I'm feeding in the request lock my car. Next, I'm going to wait for Siri's response. In order to wait for Siri's response, I'm using this API called wait, and this wait method takes two parameters: an expectation and a timeout.

The expectation I'm waiting to be fulfilled is a predicate expectation, and I'm using a predicate which basically sleeps for 5 seconds. It's nothing fancy; I think it's enough time for Siri to respond to this request. So, the predicate waits for 5 seconds, and I wait for that expectation with a 10-second timeout. So, that was the waiting-for-Siri's-response piece. Next, I need to confirm that the state of the app has actually changed.

In order to do that, I have tagged one of the UI elements in my app with an accessibility tag, which I am now using to extract that view, and then I compare the value of the accessibility label on that view to see whether it matches what I expect. So, now that we've automated the test that we just did manually, let us run it and see if it does what we expect.
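
Assembled, the test being described could look roughly like this; the Unlock button and the carLockState accessibility identifier are hypothetical stand-ins for the demo app's UI.

    import XCTest

    class UnicornCarLockUITests: XCTestCase {

        func testLockingCarViaSiri() {
            // 1. Set up: launch the app and put the car into a known (unlocked) state.
            let app = XCUIApplication()
            app.launch()
            app.buttons["Unlock"].tap()   // hypothetical control recorded with UI recording

            // 2. Invoke Siri with the request text; no speaking required (Xcode 9 / iOS 11).
            XCUIDevice.shared.siriService.activate(voiceRecognitionText: "Lock my car")

            // 3. Wait for Siri to handle the request. Nothing fancy: a predicate that
            //    just sleeps for a few seconds before reporting fulfillment.
            let siriHasResponded = XCTNSPredicateExpectation(
                predicate: NSPredicate { _, _ in
                    Thread.sleep(forTimeInterval: 5)
                    return true
                },
                object: nil)
            wait(for: [siriHasResponded], timeout: 10)

            // 4. Confirm the app's state actually changed, via the accessibility
            //    identifier set on the view that shows the lock state.
            app.activate()
            let lockStateElement = app.otherElements["carLockState"]
            XCTAssertEqual(lockStateElement.label, "Locked")
        }
    }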

Because this is an automated test, it frees me up to do real development work instead of spending my time testing something whose expected behavior I already know. So, you notice that it waits for Siri's response for 5 seconds and then, finally, it launches the app and checks whether the state matches what it expects.

[ Applause ]

And these indicators of happiness tell me that the test has actually passed. Now, Siri is supported in many languages and I am most comfortable in English, but I might want to test my app's Siri integration in Mandarin. Now, I don't speak Mandarin, but I can look up the utterances for this intent on the developer website. So, the intent I'm looking for is INSetCarLockStatusIntent.

And in the documentation for this intent we actually have examples of sample utterances in all the languages that Siri supports. So, I am going to take the example utterance for Mandarin and feed it in here. And I'm going to run this test on a device which is configured to use Siri in Mandarin. So, let me first bring up that device. So, it launches the app, sets the initial state, invokes Siri, gives it the request, waits for a response. And then it launches the app again and confirms that the state matches what it expects.

[ Applause ]

Again, happiness. Yay! So, it was that simple to automate my SiriKit integration testing. So, let's recap what we heard today. From my colleague Angel, we learned how to handle situations where users need to be prompted to disambiguate between contacts with the same name, or between handles of a particular contact.

We then saw some ways to reauthorize Siri requests using the local authentication API. Then we saw how we can teach Siri words which are specific to users or specific to apps, and then, finally, we saw some demos of how we can do UI testing of our SiriKit integration even in languages that we are not conversant in.

For more information about the sample code and the APIs that we talked about in this talk, please refer to this URL. We also have some other talks which are relevant to your SiriKit integration, including the Apple Pay and CarPlay talks; the talks from previous years that we referred to are listed here as well. Thank you for coming, and we hope you have enough information to create great SiriKit experiences after this talk. Thank you everybody.

[ Applause ]