
WWDC20 • Session 10216

What's new in ResearchKit

Frameworks • iOS, macOS, watchOS • 30:48

ResearchKit continues to simplify how developers build research and care apps. Explore how the latest ResearchKit updates expand the boundaries of data researchers can collect. Learn about features like enhanced onboarding, extended options for surveys, and new active tasks. Discover how Apple has partnered with the research community to leverage this framework, helping developers build game-changing apps that empower care teams and the research community.

Speakers: Pariece McKinney, Joey LaBarck

Open in Apple Developer site

Transcript

Hello and welcome to WWDC. Hello, everybody, and welcome to "What's New in ResearchKit." My name is Pariece McKinney, and I'm a software engineer on the health team. Later in the talk, we'll also be joined by my fellow colleague and software engineer, Joey LaBarck. Thank you for taking the time to join, and we're extremely excited to show you all the new updates ResearchKit has to offer. There's quite a bit to cover, so let's jump right in.

In order to make things easier to follow this year, we created a ResearchKit sticker pack, where each sticker corresponds to a particular topic of this talk. At the end of each topic, we'll collect the sticker for that particular subject and slap it on the back of our laptop, which is pretty empty at the moment. Now that everyone knows how to follow along, let's get started with our first topic: community updates.

Each year, we're excited about the new apps that take advantage of our frameworks to advance health and learnings in various health areas. To name a few, the Spirit Trial app built by Hello Thread was created in support of a clinical trial on advanced pancreatic cancer. Also, CareEvolution and the NIH launched the All of Us app to speed up health research and breakthroughs by building a community of a million or more people across the US with the aim to advance personalized medicine.

We've also seen apps utilize our frameworks to build high-quality apps very quickly in response to COVID-19. The Stanford First Responder app aims to help first responders navigate the challenges of COVID-19. And the University of Nebraska Medical Center 1-Check app aims to provide real-time situational awareness of COVID-19 to investigators. Last year, Apple also announced and released the Research app, which heavily utilizes ResearchKit, while paving the way for conducting large-scale studies all through your iPhone.

Last year, we also announced that we would release a newly redesigned website, and we're proud to share that website with you, which launched in late 2019, at researchandcare.org. On our overview page, you can read about the frameworks and their capabilities and features before diving in to create your own app.

If you navigate to the ResearchKit page, you can find even more information about the modules it provides, as well as case studies that showcase amazing examples of studies and programs built in the community, like the one you see here. We also announced our new Investigator Support Program, through which researchers can submit proposals to receive Apple Watch devices to support their studies.

You can now read about that on our website, and learn how to reach out to us if you're interested in the program. And lastly, we welcome all of you to reach out to us through our website so that we can hear about all the amazing work you all are doing. Now that we've collected our community update sticker, let's move on to the next topic, which is onboarding updates.

For the vast majority of study-based apps, the onboarding views are usually the first thing your participants will see and interact with. Knowing this, it's extremely important to convey exactly what the study is, and what the participant should expect if they decide to join. As you can see here, we've moved towards leaning on the Instruction-Step's capability to support custom text and images so that you can have complete control over the content you wish to display. Let's take a look at the code to create this step.

After importing ResearchKit, the first thing we'll do is initialize the instruction-Step and pass in a unique identifier. After setting the title and detail-Text, the last thing we have to do is set the image property to the health_blocks image seen in the previous slide. In the second step of our onboarding flow, we're using Instruction-Step again, but this time, we also incorporate body items, which is an extremely useful feature to further educate your participants.
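
For reference, a Swift sketch of that first instruction step might look like this (the identifier and copy are illustrative; the image name comes from the slide):

```swift
import ResearchKit
import UIKit

// First onboarding screen: an instruction step with a custom title, detail text, and image.
let welcomeStep = ORKInstructionStep(identifier: "welcomeStepIdentifier")
welcomeStep.title = "Welcome to Our Health Study"                 // illustrative copy
welcomeStep.detailText = "Here's what to expect if you decide to join."
welcomeStep.image = UIImage(named: "health_blocks")               // image shown on the slide
```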

In this example, we use SF Symbols for our icons, but it's important to note that everyone watching this video has access to these icons and more. So, if you're interested, feel free to download the SF Symbols app to find the icons that match your use case. Let's take a look at the code to create this step.

Much like the previous code slide, we initialize the ORK-Instruction-Step and pass in our unique identifier. After setting our title property, we also set our image property again, but this time we pass in an informed_consent image seen in the slide before. Next, we initialize our first body-Item, making sure that we pass in an image and that we set our body-Item-Style to dot-image. The last thing we have to do is append our newly-created body-Item to the body-Items array that sits on the Instruction-Step.
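
A sketch of that consent instruction step with a body item, assuming ORKBodyItem's text/detailText/image/learnMoreItem/bodyItemStyle initializer and the .image style behave as described above (the SF Symbol name and copy are placeholders):

```swift
import ResearchKit
import UIKit

let consentStep = ORKInstructionStep(identifier: "informedConsentStepIdentifier")
consentStep.title = "Before You Join"
consentStep.image = UIImage(named: "informed_consent")            // image from the slide

// One body item per bullet point, using an SF Symbol as the icon.
let dataCollectionItem = ORKBodyItem(text: "We'll ask you to complete surveys and activities.",
                                     detailText: nil,
                                     image: UIImage(systemName: "heart.text.square"),  // placeholder symbol
                                     learnMoreItem: nil,
                                     bodyItemStyle: .image)

// Append the newly created body item to the step's bodyItems array.
consentStep.bodyItems = (consentStep.bodyItems ?? []) + [dataCollectionItem]
```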

Now let's take a look at an enhancement we've made to our Web-View-Step. Previously, presenting the user with an overview of the consent document and collecting the signature for it were handled by two different steps. Now we've added the signature capture functionality to the Web-View-Step so that you can present the overview of the consent document and ask for the participant's signature within the same view. Let's see how this step is created.

The first thing we have to do is initialize the ORK-Web-View-Step, passing in an identifier and the HTML content you wish to display. And the last thing we have to do is set the show-Signature-After-Content attribute to "true," and this will ensure that the signature view is shown below the HTML content when the step is presented.
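
In Swift, that might look roughly like this (the HTML is a stand-in for your consent document; the showSignatureAfterContent property name follows the talk):

```swift
import ResearchKit

// Consent overview rendered from HTML, with the signature view appended below it.
let consentHTML = "<h1>Consent Overview</h1><p>Details of the study go here.</p>"   // placeholder content
let webViewStep = ORKWebViewStep(identifier: "consentReviewStepIdentifier", html: consentHTML)
webViewStep.showSignatureAfterContent = true   // show signature capture under the HTML content
```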

This year, we're also introducing the Request-Permission-Step. Previously, if you wanted to request access to Health data, you would have to do so outside of the ResearchKit flow, which meant you had to create the necessary views to ask for access and maintain those views yourself. Now all you have to do is initialize the Request-Permission-Step and pass in the HealthKit types you want access to, and we'll do the heavy lifting of requesting the data for you. Now you can do more with less code while making the experience and flow of your app much better. Let's take a look at the code to create the ORK-Request-Permission-Step.

We start off by creating a set of HK-Sample-Types, and these represent the types you want to write access for. Then we create a set of HK-Object-Types, and these represent the types you want to read access for. Next, we initialize the ORK-HealthKit-Permission-Type, making sure that we pass in the HK-Types-To-Write, and HK-Types-To-Read sets created above.
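
A Swift sketch of this setup, including the request-permission step the next paragraph describes. The HealthKit types are illustrative, and the exact class and initializer spellings — shown here as ORKHealthKitPermissionType(sampleTypesToWrite:objectTypesToRead:) and ORKRequestPermissionsStep(identifier:permissionTypes:) — may differ slightly between ResearchKit versions:

```swift
import ResearchKit
import HealthKit

// Types the study wants permission to write…
let typesToWrite: Set<HKSampleType> = [
    HKObjectType.quantityType(forIdentifier: .bodyMass)!
]

// …and types it wants permission to read.
let typesToRead: Set<HKObjectType> = [
    HKObjectType.quantityType(forIdentifier: .heartRate)!,
    HKObjectType.quantityType(forIdentifier: .stepCount)!
]

// ResearchKit presents the HealthKit authorization UI for these types on our behalf.
let healthKitPermissionType = ORKHealthKitPermissionType(sampleTypesToWrite: typesToWrite,
                                                         objectTypesToRead: typesToRead)

// The step takes an array of permission types; here it contains only the HealthKit one.
let requestPermissionsStep = ORKRequestPermissionsStep(identifier: "requestPermissionsStepIdentifier",
                                                       permissionTypes: [healthKitPermissionType])
```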

And the last thing we have to do is initialize the ORK-Request-Permission-Step, and pass in an array of permission types that currently only has our HK-Permission-Type within it. That brings us to a close for the onboarding update session, so let's collect our sticker. Now that we have our sticker, let's keep moving forward and talk about survey enhancements.

Before we show any questions, we always want to give the user some insight on what the point of the survey is. To do this, we use an Instruction-Step again to provide some brief context, but this time, we also provide an icon image that is left-aligned to the screen, which is all handled by ResearchKit.

In the second step of the onboarding survey, we use our ORK-Form-Step to collect basic information about the participant. But as you can see here, we've made some UI improvements by using labels to display errors, as opposed to previously using alerts, which didn't always make for the best user experience. Let's fix those errors and move on to the next step in our onboarding survey.

In the third step, we preview the new SES-Answer-Format, which can help present scale-based questions, much like the example here, where we ask the user to select the option that they feel best depicts the current state of their health. Let's look at the code needed to present this step. The first thing we do is initialize the SES-Answer-Format and pass in the top-Rung-Text and bottom-Rung-Text, as seen here. The last thing we have to do is simply initialize our ORK-Form-Item and pass in the SES-Answer-Format created above.
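
A minimal Swift sketch of that question (rung text and identifiers are illustrative; the initializer labels follow the talk):

```swift
import ResearchKit

// Ladder-style question: pick the rung that best reflects your current health.
let sesAnswerFormat = ORKSESAnswerFormat(topRungText: "Best possible health",
                                         bottomRungText: "Worst possible health")

let healthLadderItem = ORKFormItem(identifier: "currentHealthFormItemIdentifier",
                                   text: "Select the option that best depicts the current state of your health.",
                                   answerFormat: sesAnswerFormat)
```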

In this step, we use a Continuous-Scale-Answer-Format and a Scale-Answer-Format to get information on the participant's current stress level and pain level. In the past, if the user wasn't comfortable enough to answer the question, or simply didn't know the answer, they would either leave the question blank or provide an answer that wasn't accurate because the question might be required.

Now we've added the ability to use the ORK-Dont-Know-Button with select answer formats. This will allow the participants to select the "I don't know" button when they don't want to answer the questions presented to them. You can also pass in custom text, as seen here, to replace the default "I don't know" text. Let's take a look at the code for the second scaled question to see how we added the "Don't know" button and added custom text.

First, we initialize the ORK-Scale-Answer-Format and pass in all the required values. Next, we set the should-Show-Dont-Know-Button attribute to "true." Then we set the custom-Dont-Know-Button-Text to "Prefer not to answer," and this will override the default text "I don't know." The last thing we have to do is initialize the ORK-Form-Item and pass in the Scale-Answer-Format created above.
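
In Swift, that second scale question might be set up like this (the scale bounds are illustrative; the shouldShowDontKnowButton and customDontKnowButtonText property names follow the talk):

```swift
import ResearchKit

// 0–10 scale for the participant's current pain level.
let painScaleFormat = ORKScaleAnswerFormat(maximumValue: 10,
                                           minimumValue: 0,
                                           defaultValue: 0,
                                           step: 1)

// Let the participant opt out, with custom text in place of "I don't know".
painScaleFormat.shouldShowDontKnowButton = true
painScaleFormat.customDontKnowButtonText = "Prefer not to answer"

let painLevelItem = ORKFormItem(identifier: "painLevelFormItemIdentifier",
                                text: "How would you rate your current pain level?",
                                answerFormat: painScaleFormat)
```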

In the last question of the survey, we use an ORK-Text-Answer-Format to collect any additional information the participant thinks we should know. Previously, we supported setting the maximum character count, but there was no visual to let the user know what the limit was, or how close they were to reaching it.

Now we've added a maximum character count label so that the user can have a much better idea of how much information they can provide and base their response off that. We've also added a "Clear" button so that the user can remove any text that they've typed in. Let's check out the code to make this happen.

First, we initialize the ORK-Text-Answer-Format. Then we begin to set some properties on the Answer-Format, such as setting multiple-Lines to "true," setting maximum-Length to "280," and setting hide-Character-Count-Label and hide-Clear-Button to "false" to make sure that both of these UI elements are shown when this step is presented. And the last thing we have to do is initialize our ORK-Question-Step and pass in the text-Answer-Format created above.
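
A Swift sketch of that final question (the hideCharacterCountLabel and hideClearButton property names follow the talk; the question text is illustrative):

```swift
import ResearchKit

// Free-text question with a visible character count and a Clear button.
let textAnswerFormat = ORKTextAnswerFormat()
textAnswerFormat.multipleLines = true
textAnswerFormat.maximumLength = 280
textAnswerFormat.hideCharacterCountLabel = false   // show the running character count
textAnswerFormat.hideClearButton = false           // show the Clear button

let additionalInfoStep = ORKQuestionStep(identifier: "additionalInfoStepIdentifier",
                                         title: "Anything else?",
                                         question: "Is there anything else you think we should know?",
                                         answer: textAnswerFormat)
```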

After finishing the onboarding survey, we present the participant with the new ORK-Review-View-Controller. One of the biggest challenges for any study is making sure that the data entered by the participant is accurate. As humans, making mistakes in our everyday lives is very common. So when participants fill out surveys, it might be safe to assume that a small mistake might have happened. To help alleviate this problem, ResearchKit now provides a Review-View-Controller that will allow the participant to view a breakdown of all the questions they were asked, and the response they gave. If they want to update any of those questions, they can simply click "Edit" and update their answer.

Let's look at the code to present the Review-View-Controller. First, we initialize the Review-View-Controller, which requires us to pass in a task and a result object, which, in this case, we get from the task-View-Controller object passed back to us by the did-Finish-With-Reason delegate method. But keep in mind that you can also initialize your task separately, and also pass in a result that may have been saved at an earlier time.

Next, we set ourselves as the delegate, and this requires us to implement the did-Update-Result and did-Select-Incomplete-Cell delegate methods. Then, after setting our review-Title and text, we're done creating our first Review-View-Controller. Now that we've finished reviewing our survey enhancements and collected our well-deserved sticker, let's move on to the next topic, which is active tasks.
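
The transcript doesn't spell out the exact ORKReviewViewController initializer or delegate method signatures, so treat the following Swift sketch as an assumption-heavy outline built only from the names mentioned above:

```swift
import ResearchKit
import UIKit

class SurveyCoordinator: UIViewController, ORKTaskViewControllerDelegate {

    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        guard reason == .completed, let task = taskViewController.task else {
            dismiss(animated: true)
            return
        }

        // Assumed initializer: the talk says the review controller is created from
        // the task and the result handed back by the task view controller.
        let reviewViewController = ORKReviewViewController(task: task,
                                                           result: taskViewController.result)
        // Becoming its delegate (not shown) requires implementing the didUpdateResult
        // and didSelectIncompleteCell callbacks mentioned in the talk.
        reviewViewController.reviewTitle = "Review Your Answers"        // property name from the talk
        reviewViewController.text = "Tap Edit next to any answer you'd like to change."

        dismiss(animated: true) {
            self.present(reviewViewController, animated: true)
        }
    }
}
```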

Let's first take a look at the improvements we've made to our hearing task. For the Environment-SPL-Meter-Step, we added a new animation that clearly indicates if you're within the set threshold for background noise, as seen here. We also made updates to our dB-HL-Tone-Audiometry-Step by tweaking the button UI, adding better haptics, and changing the progress indicator to a label to make it clearer to the participant how far along they are. We also added calibration data to support AirPods Pro. Let's collect our hearing sticker before moving on.

Now that we have our hearing sticker, let's chat about our next topic, which is 3D models. When running a study, giving your participants clear and informative visuals to explain a specific concept can be invaluable, especially if they can also interact with it. Using 3D models to do this is, by far, one of the best solutions to educate your participants, while also increasing engagement.

However, writing the code necessary to present 3D models and maintaining it can be cumbersome to say the least. So, whether you're trying to present something as simple as a human hand, or something more complex such as the human muscular system, we've made the process of presenting 3D models much easier for you by creating two new classes.

Those two classes are the ORK-3D-Model-Step, and the ORK-USDZ-Model-Manager. Using these two classes, you can now quickly present 3D models within your ResearchKit app by... first, adding a USDZ file to your Xcode project. Second, creating a USDZ-Model-Manager instance and passing in the name of the desired USDZ file that was imported in your project. And third, initializing an ORK-3D-Model-Step with the USDZ-Model-Manager instance and presenting it.

And just like that, you can now present high-quality 3D models that your users can interact with and touch without having to maintain any of the code yourself. Before moving on, we wanted to point out that we're well-aware that creating your own model can take a good amount of time.

However, there are models accessible to you online that you can download for free to practice with. So, in the upcoming example, we use a toy robot and a toy drummer 3D model that are both publicly available at the URL seen here. Let's get started. In the first example, we'll present the toy robot model. Selection has been enabled, and the user is required to touch any part of the model before continuing.

In the second example, we'll present the drummer model. Selection is disabled, but certain objects on the model have been pre-highlighted to draw the user's attention. The user also has full control to inspect the highlighted areas. Let's take a look at some code to see how simple it is to present a 3D model.

First, we initialize our USDZ-Model-Manager, passing in the name of the USDZ file that we wish to present. Next, we set a few properties on the Model-Manager, such as allows-Selection, highlight-Color, and setting enable-Continue-After-Selection to "false" to ensure that the user isn't blocked from moving forward. Next, we pass in our optional array-Of-Identifiers, where each identifier maps to a specific object on the model we want to highlight before we present it. Then the last thing we do is initialize the ORK-3D-Model-Step and pass in the USDZ-Model-Manager created above.
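
Roughly, in Swift (the USDZ file name and object identifier are placeholders; initializer labels and property names follow the talk and may vary by ResearchKit version):

```swift
import ResearchKit
import UIKit

// The USDZ file must be bundled with your Xcode project.
let modelManager = ORKUSDZModelManager(usdzFileName: "toy_drummer")
modelManager.allowsSelection = true
modelManager.highlightColor = .systemBlue
modelManager.enableContinueAfterSelection = false              // don't block Continue on a selection
modelManager.identifiersOfObjectsToHighlight = ["cymbals"]     // model-specific object names to pre-highlight

let modelStep = ORK3DModelStep(identifier: "drummerModelStepIdentifier", modelManager: modelManager)
```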

So, some of you might be wondering, "Why create a Model-Manager class instead of just adding that functionality to the 3D-Model-Step itself?" And the reason is to make the process of creating a custom 3D model experience much easier for any developer interested in doing so. To understand it further, we need to learn about the parent class of the USDZ-Model-Manager, which is the ORK-3D-Model-Manager. Let's take a look.

The ORK-3D-Model-Manager class is an abstract class that we've created which shouldn't be used directly. The point of the 3D-Model-Manager is to be subclassed, while requiring the subclass to implement specific features that we believe every 3D model experience should have. So, after creating your subclass and making sure that these features are handled, you can then move forward to add all the extra functionality you want, as seen here with the USDZ-Model-Manager.

If the talk were to end here, we definitely believe that using the USDZ-Model-Manager could create endless possibilities for your ResearchKit app. However, we are open source, and we always encourage members of the community to contribute to help push ResearchKit forward. With that being said, we're excited to announce that someone from the community has also taken the opportunity to create their own Model-Manager class.

BioDigital, an interactive 3D software platform for anatomy visualization, has provided the ORK-BioDigital-Model-Manager class so that their already powerful iOS SDK can now be integrated easily into any ResearchKit project. Some of their features include: presenting custom models created via their admin portal, programmatically adding labels, colors and annotations to any model loaded within your app. And since all of our digital models are loaded via the web, you can dynamically add new models to your project without having to update any code. Let's see a couple of examples in action.

In the first example, we use an instruction step to inform the user that we'll present an interactive human model. This could be used in many situations that most of us have experienced, such as visiting a physical therapy clinic or an orthopedic physician's office where you're usually given a piece of paper to describe your pain, or circle the area on some kind of picture. Now we can get rid of paper and make the experience much better.

As you see here, we've loaded our model while also being presented with a card that contains useful information that can be updated via BioDigital's admin portal, or their SDK. Users can also interact with the model so that they can reach and view the exact areas of interest. After clicking on the muscle where pain has been experienced, we're also presented with a label that can give us even more information on that specific organ. This can be updated via BioDigital's admin portal, or locally through their SDK.

In the next example, we imagine a scenario where a patient has visited a hospital for chest pain. After receiving a CT scan, the physician would like to give a visual to show the patient the exact arteries that are experiencing blockages. To do this, we'll present an interactive 3D human heart model with dynamically-added annotations to specific coronary arteries, all done directly through BioDigital-Model-Manager class.

As you can see here, we presented our heart model along with another card view for additional information. The user can then interact with the heart model and select the programmatically added annotations to find more information on the severity of each individual blockage. Let's take a look at the code to present the animated heart model.

After importing ResearchKit and HumanKit, which is provided by BioDigital, we first initialize the ORK-BioDigital-Model-Manager instance. Then we set a couple of properties that were inherited from the ORK-3D-Model-Manager class, such as highlight-Color and identifiers-Of-Objects-To-Highlight. Then we focus on some properties and instance methods added by BioDigital, such as identifiers-Of-Objects-To-Hide, the load method, where we pass in the ID of the model we want to present, in this case, the heart model, and the annotate method, where we pass in the identifier of the object we want to annotate, in this case, the right coronary artery. After setting the title and text, the last thing we have to do is initialize the ORK-3D-Model-Step, and pass in the BioDigital-Model-Manager created above. To find out more information about BioDigital and their SDK, visit their GitHub page, seen here.
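
As a rough Swift outline only — the method names below (load, annotate, identifiersOfObjectsToHide) are taken from the talk, and the exact BioDigital/HumanKit SDK signatures, argument labels, and model or object identifiers are assumptions:

```swift
import ResearchKit
import HumanKit   // BioDigital's SDK

let bioDigitalManager = ORKBioDigitalModelManager()

// Properties inherited from ORK3DModelManager.
bioDigitalManager.highlightColor = .systemRed
bioDigitalManager.identifiersOfObjectsToHighlight = ["right_coronary_artery"]   // placeholder identifier

// BioDigital-specific additions described in the talk (exact signatures assumed).
bioDigitalManager.identifiersOfObjectsToHide = ["pericardium"]                   // placeholder identifier
bioDigitalManager.load("heart-model-id")                  // ID of the BioDigital model to present
bioDigitalManager.annotate("right_coronary_artery")       // object to annotate dynamically

let heartModelStep = ORK3DModelStep(identifier: "heartModelStepIdentifier", modelManager: bioDigitalManager)
heartModelStep.title = "Coronary Arteries"
heartModelStep.text = "Tap an annotation to learn more about each blockage."
```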

Now that we've collected our 3D model sticker, I'll hand things over to my teammate Joey to talk about building a custom active task. Take it away, Joey. Thanks, Pariece, for those awesome updates coming to ResearchKit. Today, I'm going to be showing you how to create your very own custom active task.

So, we've collected a bunch of stickers already... and to collect our front-facing camera sticker, we will walk through the process of creating an active task in ResearchKit. Then we will open Xcode and implement a custom application to show off our new task. Our task is going to show the user a preview of what they're recording in real time. We'll let the user control when to start and stop recording, and show a timer for how long they have been recording.

Additionally, the user will have the opportunity to review and retry the recording, in case they want another take. Before we get into it, I want to give you a quick refresher on the relevant classes and protocols included in ResearchKit to help you accomplish this. First, your application needs to create an ORK-Task object. ORK-Task is a protocol which your app can use to reference a collection of various step objects. Most applications can use the concrete ORK-Ordered-Task or ORK-Navigable-Ordered-Task classes included in ResearchKit.

The task object you create is then injected into an ORK-Task-View-Controller object. This object is responsible for showing each step in your task as they are de-queued. Your application has no need to subclass ORK-Task-View-Controller, so you can use it as-is. Additionally, ORK-Task-View-Controller is a subclass of UI-View-Controller internally, so you can present it in your app as you normally would any other View-Controller in UIKit.

Finally, the ORK-Task-Result is an object which contains the aggregate results for each step in your task. The results that are collected from the task are then delegated back to your application upon completion of the task, using the ORK-Task-View-Controller delegate. This is the essential round trip from your application into ResearchKit, and then back. Since I'm going to be showing you how to create your very own active task, we need to dive one level deeper with some coding examples that will set up our active task.

So first, our application needs to create a collection of ORK-Step and ORK-Active-Step objects to make up the data model of our task. Since we will be creating a Front-Facing-Camera active task, what we will really create is a task which includes an Active-Step subclass. First, import the ORK-Active-Step header from ResearchKit and create a new subclass of ORK-Active-Step. We'll name this class ORK-Front-Facing-Camera-Step.

I'm also going to add three additional properties here I would like to configure. An NS-Time-Interval, to limit the maximum duration we want to record for, and two Booleans for allowing the user to review their recording, as well as allowing them to retry their recording. Next, we will declare the View-Controller type to display when the step is de-queued.

In the case of an ORK-Active-Step, this should be a subclass of ORK-Active-Step-View-Controller, which you can implement similar to any UI-View-Controller in UIKit. The ORK-Task-View-Controller presenting your task is responsible for instantiating the associated View-Controller of each step in your task. Here's a quick look at the interface of our ORK-Front-Facing-Camera- Step-View-Controller, which subclasses ORK-Active-Step-View-Controller.

In our ORK-Front-Facing-Camera-Step, we will declare the type of View-Controller to associate with the step, so we can override the step-View- Controller-Class method of the superclass. In this case, we will return the ORK-Front-Facing-Camera- Step-View-Controller class object. You can use a custom UI-View to represent the content of your step.
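
Putting the last few paragraphs together, a Swift sketch of the step subclass might look like this (the session's own code is Objective-C, and the property names are illustrative; ORKFrontFacingCameraStepViewController is defined in a later sketch):

```swift
import ResearchKit

// Custom active step that records the user with the front-facing camera.
class ORKFrontFacingCameraStep: ORKActiveStep {

    /// Longest recording we will allow, in seconds.
    var maximumRecordingLimit: TimeInterval = 30

    /// Whether the user may play back the recording before submitting.
    var allowsReview = true

    /// Whether the user may delete the recording and record another take.
    var allowsRetry = true

    // Tell the task view controller which view controller class to instantiate
    // when this step is dequeued.
    override class func stepViewControllerClass() -> AnyClass {
        return ORKFrontFacingCameraStepViewController.self
    }
}
```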

So here, our ORK-Front-Facing-Camera- Step-Content-View is a simple subclass of UI-View. I've declared some View events here which we can use to pass relevant events back to our View-Controller, as well as a block typedef. Inside of our interface, we have a method to set the block parameter to invoke when events are passed from the content view.

Since we want to give the user a preview of the recording in real time, we will pass the AV-Capture-Session to the content view, and internally, the content view will set up an AV-Capture-Video-Preview-Layer. And finally, we have added a method to start our timer with a maximum duration as well as a method to show certain recording options before submitting.
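
A Swift sketch of that content view's interface (class, enum, and method names are illustrative; the session's version is Objective-C):

```swift
import UIKit
import AVFoundation

// Events the content view passes back to its step view controller.
enum FrontFacingCameraContentViewEvent {
    case startRecordingTapped
    case stopRecordingTapped
    case retryTapped
    case submitTapped
}

class ORKFrontFacingCameraStepContentView: UIView {

    private let previewLayer = AVCaptureVideoPreviewLayer()
    private var eventHandler: ((FrontFacingCameraContentViewEvent) -> Void)?

    /// Block invoked whenever the user interacts with the view's controls.
    func setViewEventHandler(_ handler: @escaping (FrontFacingCameraContentViewEvent) -> Void) {
        eventHandler = handler
    }

    /// Attach the capture session so the user sees a live preview while recording.
    func setPreviewLayer(with session: AVCaptureSession) {
        previewLayer.session = session
        previewLayer.videoGravity = .resizeAspectFill
        layer.insertSublayer(previewLayer, at: 0)
    }

    /// Start the on-screen recording timer, capped at the step's maximum duration.
    func startTimer(maximumDuration: TimeInterval) {
        // Timer and countdown label omitted from this sketch.
    }

    /// Show the review / retry / submit options once a recording has been captured.
    func showRecordingOptions() {
        // Buttons here would call eventHandler(.retryTapped), eventHandler(.submitTapped), etc.
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer.frame = bounds
    }
}
```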

We can now add this content view into our View-Controller. We already have a reference to an AV-Capture-Session which is initialized in another method. We also have a property to reference our ORK-Front-Facing-Camera-Step-Content-View. By the time we reach view-Did-Load, we are ready to initialize our content view. Next, we will handle events coming from our content view. We'll use weak-Self here to avoid a reference cycle.

We will add our content view as a subview. And finally, we will set the Preview-Layer session using our AV-Capture-Session from before. After our step finishes, ORK-Active-Step-View-Controller asks for the ORK-Step-Result. This is your View-Controller's call to add the appropriate results and any data you collect to be delegated back to the application when your step finishes. In our case, ORK-Front-Facing-Camera-Step-Result is going to be a subclass of ORK-File-Result. We have also added an integer property so we can keep track of how many times the user deleted and retried their recording.
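
And a sketch of the view controller wiring just described (again in Swift rather than the session's Objective-C; configuring the capture session's camera and microphone inputs is omitted):

```swift
import ResearchKit
import AVFoundation

class ORKFrontFacingCameraStepViewController: ORKActiveStepViewController {

    // Configured elsewhere with the front camera and microphone inputs and a movie output.
    private let captureSession = AVCaptureSession()
    private var contentView: ORKFrontFacingCameraStepContentView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Initialize the content view once the view hierarchy is ready.
        contentView = ORKFrontFacingCameraStepContentView()

        // Handle events from the content view, using weak self to avoid a reference cycle.
        contentView.setViewEventHandler { [weak self] event in
            self?.handleContentViewEvent(event)
        }

        // Add the content view as a subview and hand it the capture session for the live preview.
        view.addSubview(contentView)
        contentView.frame = view.bounds
        contentView.setPreviewLayer(with: captureSession)
    }

    private func handleContentViewEvent(_ event: FrontFacingCameraContentViewEvent) {
        // Start or stop recording, show the review options, and so on.
    }
}
```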

If we revisit the View-Controller, we override the superclass's result method to append our custom ORK-Front-Facing-Camera-Step-Result. First, we create an instance of our ORK-Front-Facing-Camera-Step-Result. Then we set the relevant parameters, such as the identifier, content-Type, retry-Count, as well as the file-URL. Finally, we append our new-Results into the current-Results collection and return. This effectively completes the implementation of our custom active task. Let's jump into Xcode and try it out.
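
The exact form of the result override depends on how ResearchKit's result accessor bridges into Swift, so this sketch only shows the custom result class and a helper that builds the result you would append to the superclass's ORKStepResult inside that override (names are illustrative):

```swift
import ResearchKit

// File-based result that also records how many times the user deleted and retried.
class ORKFrontFacingCameraStepResult: ORKFileResult {
    var retryCount: Int = 0
}

extension ORKFrontFacingCameraStepViewController {

    /// Builds the custom result to append to the results array of the ORKStepResult
    /// returned by the superclass, as described in the paragraph above.
    func makeCameraStepResult(recordingURL: URL, retryCount: Int) -> ORKFrontFacingCameraStepResult {
        let cameraResult = ORKFrontFacingCameraStepResult(identifier: step?.identifier ?? "frontFacingCameraStep")
        cameraResult.contentType = "video/mp4"   // content type of the captured movie
        cameraResult.fileURL = recordingURL      // where the recording was written
        cameraResult.retryCount = retryCount     // how many takes the user discarded
        return cameraResult
    }
}
```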

So here I have a demo application that I've been working on which includes ResearchKit as a submodule. So I'm going to go ahead and I'm going to create a method which allows us to construct and present our task. Inside of this method, we're going to instantiate our steps that are part of our task. So I'm going to go ahead and create an instruction-Step which welcomes the user to the task. Then I'm going to use the ORK-Front-Facing-Camera-Step that we just created. We'll set the maximum recording limit to about 15 seconds, and we'll allow the user to retry and review their recording.

Then we'll go ahead and add a completion step, thanking the user for their time. So, now that we have all of our steps, we'll go ahead and create an ORK-Task object. So here we have an ORK-Ordered-Task and we'll include all of the steps from before... then create a task-View-Controller object injecting our task as well.

Then we present this task-View-Controller. In view-Did-Appear, we can go ahead and present the Front-Facing-Camera active task, and conform to the ORK-Task-View-Controller delegate protocol. Then we can make ourselves the delegate for the task-View-Controller. Then respond to the did-Finish-With-Reason ORK-Task-View-Controller delegate method. Inside of this method, we're going to check to see if the currently presented View-Controller is the task-View-Controller, and then dismiss it. And we'll go ahead and try to extract the ORK-Front-Facing-Camera-Step-Result. And once we have that result object, we can go ahead and print the Recording File URL, as well as the retry-Count.
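
Pulling the demo together, the app-side code might look roughly like this in Swift (step identifiers and the camera-step property names are illustrative and reuse the sketches above):

```swift
import ResearchKit
import UIKit

class TaskListViewController: UIViewController, ORKTaskViewControllerDelegate {

    /// Builds and presents the front-facing camera task.
    func presentFrontFacingCameraTask() {
        // 1. Welcome the user.
        let instructionStep = ORKInstructionStep(identifier: "welcomeStepIdentifier")
        instructionStep.title = "Welcome to WWDC!"

        // 2. Our custom camera step: roughly 15-second limit, review and retry allowed.
        let cameraStep = ORKFrontFacingCameraStep(identifier: "frontFacingCameraStepIdentifier")
        cameraStep.maximumRecordingLimit = 15
        cameraStep.allowsReview = true
        cameraStep.allowsRetry = true

        // 3. Thank the user for their time.
        let completionStep = ORKCompletionStep(identifier: "completionStepIdentifier")
        completionStep.title = "Thank you for your time!"

        // Wrap the steps in an ordered task and inject the task into a task view controller.
        let task = ORKOrderedTask(identifier: "frontFacingCameraTask",
                                  steps: [instructionStep, cameraStep, completionStep])
        let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
        taskViewController.delegate = self
        present(taskViewController, animated: true)
    }

    // MARK: - ORKTaskViewControllerDelegate

    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        // If the task view controller is currently presented, dismiss it.
        if presentedViewController == taskViewController {
            dismiss(animated: true)
        }

        // Extract our custom camera result from the aggregate task result.
        let stepResult = taskViewController.result.stepResult(forStepIdentifier: "frontFacingCameraStepIdentifier")
        if let cameraResult = stepResult?.results?.first as? ORKFrontFacingCameraStepResult {
            print("Recording file URL: \(String(describing: cameraResult.fileURL))")
            print("Retry count: \(cameraResult.retryCount)")
        }
    }
}
```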

So let's go ahead and run this on the device. Okay, so here we have our instruction-Step that we created, and it welcomes us: "Welcome to WWDC!" So here we have our preview-Layer session, in which we can see our recording in real time. We'll go ahead and click "Get Started." So here we have our Front-Facing-Camera-Step, and I'll go ahead and create a recording. Hello and welcome to WWDC. So let's go ahead and review this video.

Hello and welcome to WWDC. Okay, let's go ahead and just retry that 'cause I didn't like that. Hello and welcome to WWDC. I think that one was good, so I'll go ahead and submit. Here's our completion-Step thanking the user, and we'll go ahead and exit the task gracefully.

If we go back to Xcode, we should be able to see in the console that we have printed the Recording File URL as well as the retry-Count. This concludes our demonstration for today, and we have implemented our own custom Active-Step in ResearchKit and constructed the task in our application. We then extracted the results object to verify our result. We hope you enjoyed. Thank you. Pariece, back to you.

Thank you for that demo, Joey. We hope everyone viewing enjoyed it, and we're very excited to see what you can do with the new Front-Facing-Camera-Step, or any task that you decide to create yourself. Now that we've collected our Front-Facing-Camera-Step sticker, that brings to a close all of our ResearchKit updates this year. But before moving on, let's go over all the stickers we've collected throughout our talk.

First, we talked about community updates, where we mentioned a few apps that have leveraged our frameworks over the past year, our new website at researchandcare.org, and the new Investigator Support Program. Then we moved on to onboarding updates. We spoke about the new additions, such as body items, inline signature functionality and the Request-Permission-Step. Then we talked about survey enhancements, where we previewed new error labels, the "I don’t know" button, and the Review-View-Controller, to name a few.

Then we talked about hearing test UI updates, where we previewed UI enhancements to the Environment-SPL-Meter and Tone-Audiometry-Step. Then we moved on to 3D models, where we went over and previewed the 3D-Model-Step, the USDZ-Model-Manager, and the BioDigital-Model-Manager classes to add 3D models to your app. And last, but not least, Joey walked you through the process of building your own active task, while also previewing the functionality of the new Front-Facing-Camera-Step. We have a pretty solid collection of stickers here, but it wouldn't be complete without the final ResearchKit sticker.

For more information on the topics discussed today, feel free to visit the resources shared here. As always, we want to remind everyone watching that we are open source, and we welcome anyone using or interested in ResearchKit to visit our GitHub repo shown here and contribute to help the framework grow. Thank you again for taking the time to watch our talk, and we're looking forward to seeing the powerful apps and experiences you will create with ResearchKit. Thank you.