Tools • iOS, OS X • 47:47
Unit testing is an essential tool to consistently verify your code works correctly. Learn how Xcode 6 takes this to the next level with support for performance testing, baselining, and integration with Xcode bots to continually monitor your performance over time and devices.
Speakers: Brooke Callahan, Wil Turner
Transcript
Good morning.
[ Applause ]
All right. You're happy to see me. That's good. My name's Wil Turner, and with my colleague, Brooke Callahan, we'll be talking to you about testing in Xcode 6. So let's get started. We're going to cover several topics today.
To start off, just to motivate this, think about the benefits of testing. Why should we bother in the first place? Then we'll get into what it takes to add tests to a project; maybe you have old projects that don't have tests. And then we'll talk about some new features in Xcode 6 that allow you to test asynchronous systems, and also testing features that allow you to catch performance regressions.
So why should we test? Obviously, every moment we spend developing is an investment of a resource. Our resources are our time and our colleagues' time maintaining the code base that we create. So the obvious thing about testing is it helps you find bugs, and there are many classes of bugs that you find with testing. There are regressions: cases where you've shipped your product, you make a code change or add a feature, and in the process something breaks, and now your 1.1 or 1.2 has a bug and your customers are unhappy.
We hate regressions. Tests are a fantastic way to ensure your code ships without regressions. There are also bugs where performance changes: you make a change in your code and now suddenly something is taking a lot more time to execute, and maybe that's only on some devices and not others. So tests, again, can be a great way to catch performance regressions.
The other, less talked about and less obvious benefit of testing is that it codifies the requirements of your project. Especially if you're sharing code with other engineers: you create classes, and classes have APIs, and a lot of times, because Objective-C and Swift are great languages for expressing the semantics of an API, we think they're self-documenting. We think, okay, our job is done.
But in truth, they don't really account for all the possible permutations of inputs to those functions. So if you write tests, those tests help codify the requirements of those APIs. Another engineer can come along, make code changes in that area, and the tests help them understand what the expected behavior is.
So to get started with tests, you may have a project that doesn't have tests already. You want to add tests to that project, and then the obvious thing to do is make sure that those tests pass. Alternatively, you may start with a new project. In that case, you have the option of a different workflow where you create tests first and then write code that passes those tests; that's sometimes referred to as test-driven development.
So now you're set up. You've got your tests and they're passing. You can move into what we consider a continuous integration workflow. You start off in a green state where everything is working, and you're making code changes: adding features, fixing bugs. And at some point, one of these code changes introduces a bug.
And now, because you've written tests, your continuous integration identifies that bug, and you know right away, right after that first code change, that the bug has been introduced. So then you can take the necessary steps to fix the bug and return to a green state. And the way you really want to think about it is that a green state represents known quality. Having tests and having continuous integration ensures that your project remains in a known state of quality all the time.
So to get started, let's talk about how testing works in Xcode. Xcode ships with a framework called XCTest, and this is a framework for testing. It provides a set of APIs that lets you create tests, run the tests, and express expectations, passes, and failures. It all starts with the base class XCTestCase. To create tests, you subclass XCTestCase and then add test methods. These test methods follow a naming convention: they return void, they're prefixed with the word "test" in lowercase, and they take no parameters. The remainder of the method name is at your discretion; it should be a name that conveys the purpose of the test. Inside these test methods, you can use the assertion APIs we provide to report failures. For example, XCTAssertEqual compares two scalar values, and if they don't match, it outputs a string and reports a failure to the test harness.
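As a minimal sketch of those conventions (the class, method, and values here are hypothetical examples, not from the session):

```objc
#import <XCTest/XCTest.h>

// Hypothetical example of the XCTestCase conventions described above.
@interface CalculatorTests : XCTestCase
@end

@implementation CalculatorTests

// Returns void, is prefixed with "test", and takes no parameters.
- (void)testAdditionOfTwoIntegers {
    NSInteger sum = 2 + 2;
    // If the two scalar values differ, XCTAssertEqual reports a failure
    // to the test harness along with the message string.
    XCTAssertEqual(sum, 4, @"2 + 2 should equal 4");
}

@end
```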
So in Xcode, we manage a lot of different parts of our projects through targets. And we have test targets for managing tests. Test targets build bundles and these bundles contain the compiled test code and also resources that you would want to use in the test. Maybe you have data files that drive your tests.
These go in your test bundle, but you don't really want to ship them with your application; you want them to stay with your test bundle. Test targets are automatically included in new projects. If I go today and create a new application, you'll see there's a test target and a test class already there to start me off and get me right into testing from the beginning.
You can also add test targets to existing projects. This may be something you want to do for a project that has no tests, or it can also be a step you take to help organize your tests, because you can have as many test targets as you want, and it's useful sometimes to run just this test target or that test target, or to run them all together. So let's think for a moment about how tests run. Because they are bundles, these are not executables that can be launched by themselves. Instead, we need to host them in an executable process.
Generally, we inject these into your apps. So your tests can be written against your application and can access all the code in your application, which means when we run the tests, we run them in the context of your application. Alternatively, you can run them in a hosting process that's provided by Xcode.
Resources for tests, as I mentioned before, are not in the main bundle; they're in your test bundle. So when you go to load them, don't use NSBundle mainBundle. This is something that trips people up; I think a lot of us just have NSBundle mainBundle on autocomplete in our heads. Instead, we want to use NSBundle bundleForClass and pass the test class. That ensures that you're going into the test bundle to locate that resource.
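For instance, a sketch of loading a fixture this way (the file name here is hypothetical):

```objc
// Locate the resource in the test bundle, not the main bundle.
NSBundle *testBundle = [NSBundle bundleForClass:[self class]];
NSURL *fixtureURL = [testBundle URLForResource:@"TestData" withExtension:@"xml"];
NSData *fixtureData = [NSData dataWithContentsOfURL:fixtureURL];
XCTAssertNotNil(fixtureData, @"Fixture should load from the test bundle");
```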
So running tests: Xcode lets you run tests in a number of ways. The simplest way is Command-U, which takes your active scheme and runs the tests that are associated with that scheme. You also have buttons in the Source Editor's gutter next to each test method that let you run just that test, or next to the class to run all of the tests in that class. And you have a similar set of buttons in the Test Navigator.
And you can also run tests using xcodebuild, so if you have a set of scripts that you've used to create your own automation setup, you can use xcodebuild. You pass it the test action and you pass your project. Tell it which scheme (this is essential because your project may have many schemes) and the destination. And you can have multiple destinations, so if you want to run on multiple devices, you can pass these all with distinct destination flags and xcodebuild will run them.
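An invocation along the lines described might look like this (the project, scheme, and destination names are placeholders):

```sh
xcodebuild test \
    -project MyApp.xcodeproj \
    -scheme MyApp \
    -destination 'platform=iOS,name=iPhone' \
    -destination 'platform=iOS Simulator,name=iPhone Retina (4-inch)'
```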
When your tests are done, where do you see the results? Again, there are a number of places where we display the results of the tests. The first is the Test Navigator, where you'll see a green checkmark or a red icon indicating failure. Also the Issue Navigator: if you hit a test failure, you'll see not only the failure but the reason it failed, and if you've used our assertion macros, this is where you'll see that error string that says, you know, 50 was not equal to zero, or whatever the assertion you were testing.
They also show in the Source Editor's gutter, and then finally, most useful is the Test Report, where we show all the tests that have run, the logs associated with them, and some additional data that we'll get to later in the presentation. So let's get started and see what it looks like to add tests to an existing project.
Let's take a look at how this is put together. To simplify things for the demo, I've tweaked it a little from the sample code that's online, but basically the guts of this app is this NSOperation subclass that parses the XML data. NSOperation, as you know, is designed to provide concurrency and be run in the background. This class has a delegate that it calls back with certain events: when parsing is done, when it's parsed a certain number of objects, or if it encounters any kind of error.
So we have this property here, parsedEarthquakes, and that's something you can access safely at any time to see the array of parsed earthquake objects. I'd like to write a test that validates that this parse operation behaves correctly. So let's start with a clean slate here.
We're going to reset this a little bit so you can see what it looks like to add tests to an existing project. So we have our project, and at this point there's no test target. To add one, I'm going to go to the File menu and select a new target. Under Other, I'm going to select the Cocoa Touch testing bundle. And I'm going to name this target; I'll put a 2 in the name just to be safe.
And now we notice a few things. We see a new group here; inside of that, we see a template test file. And if we look at the scheme, we'll also notice that under the test action, we have the one that I removed, but more importantly, we have this new one that just got added.
There are a few template methods in here. I'm just going to remove these so we can start clean and build our test out. So first, to recap, what I want to do is write a test for this operation subclass. I'm going to import the header so that I can access the APIs.
The next thing to do is add a test method. And just to reiterate from earlier: this method returns void, it's prefixed with test, and it takes no parameters. The rest of the name is just a name that I've come up with that, to me, represents what this test is for.
Now, I like to sometimes go through and outline the steps I'm going to take as comments before I write the code; it helps me organize my thoughts. So here's the summary of what the test is going to do: we're going to get a URL to a resource, and that's a resource of XML data that's in the bundle. We're going to load that into memory. We're going to create the parse operation, then we'll run the operation directly.
And finally, we'll check the number of earthquakes and see if it's correct. So if I'm going to get a resource from the bundle, I need to add the file. I have this file here; let's see, I'm going to copy it into my project as a resource.
I'll copy it in. And this is just some data that I downloaded in advance because I want to take the network out of the equation when I write my tests. There's nothing wrong with testing the network, but in general if you keep your tests simpler, when they fail, it'll be easier to figure out why they failed.
So in this test, I'm not interested in validating that the network works; I'm validating that the parsing works. By having this in my bundle, I eliminate that as a concern. So to get the URL, I'm going to go into the bundle and get a resource URL, and then the next thing I do is just load that as NSData.
The next step is to create the parse operation; that's the class we're trying to test here. I'm going to create it with this data, and I'm not going to set a delegate because in this test I don't need to. The next thing is to run the operation directly. Now, we know that NSOperation is designed to be run in the background, but operations allow you to invoke them directly and synchronously by just calling start. That means when we call start, it's going to run, and when it returns, it's finished running.
So at that point, I can check the results and see if they are what I expected. Now, I have zero here. I actually know that there are more than zero, but I'd like to run this and let you see what a failure looks like before we go any further. So I'm hitting Command-U to run the test.
It ran quickly, and you can see here's this message: the test failed, 55 is not equal to 0. Okay, so there are 55 earthquakes in there. So let's change the expected value to 55 and run this again. And now we've passed our test. You see that same pass indicator there in the Test Navigator, and we see it here in the test log that we parsed and succeeded.
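Put together, the passing test from this demo looks roughly like the following sketch. The fixture file name is made up, and the APLParseOperation initializer is assumed from the description above rather than taken verbatim from the sample:

```objc
- (void)testParseOperation {
    // Locate the XML fixture in the test bundle, not the main bundle.
    NSURL *url = [[NSBundle bundleForClass:[self class]]
                  URLForResource:@"earthquake-data" withExtension:@"xml"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    XCTAssertNotNil(data, @"Could not load the XML fixture");

    // Create the parse operation; no delegate is needed for this test.
    APLParseOperation *operation = [[APLParseOperation alloc] initWithData:data];

    // Invoke the operation directly and synchronously; when -start
    // returns, parsing is finished.
    [operation start];

    // Validate against the known contents of the fixture.
    XCTAssertEqual(operation.parsedEarthquakes.count, (NSUInteger)55,
                   @"Expected 55 earthquakes in the fixture");
}
```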
So that's what it takes to add tests to a project: we created a new test target, then we went in and added the test code; we created test methods and used the assertions to validate the state we expect. So I'm going to switch back to slides now. Okay, let's get on to what's new in Xcode 6. We have new APIs and improvements to our tools.
So the first thing I want to mention is we've added some improvements that help with the compatibility story. I'll get into more detail, but this basically means which versions of our OS you can target using XCTest. We've also added Instruments integration for tests, and Brooke will talk to you more about that later on.
Finally, we've got new APIs, and as I mentioned earlier, these are for both asynchronous testing and performance testing. So first, XCTest on older iOS versions. Originally, XCTest shipped as part of the iOS release itself, which meant that when we shipped it, you could only run it on versions of iOS that had it.
That's changed now: we're shipping XCTest with Xcode itself. This means that as we add new features to XCTest, you don't have to worry about whether or not they're going to be available where you're running your tests, because you're always going to be testing with the current version that's in Xcode. And it also means that we can target older versions of iOS. Any version that Xcode supports, XCTest will also support, and that means iOS 6 and later.
[ Applause ]
So that's probably a good time to mention OCUnit, which is the legacy technology that XCTest was derived from. In Xcode 5.1, we deprecated OCUnit. We're not adding new features to it, and we really think the time is now to move to XCTest, because we've added new features to it, the integration is better, and OCUnit is no longer where we're focusing our energies.
If you have existing targets that you want to switch from OCUnit to XCTest, we recommend you use the migrator in Xcode to do so. To do this, you just go to the Edit menu and select Refactor, Convert to XCTest. It will update the project settings, and all of your test classes will get updated as well.
The reason for using the migrator is that some of the build settings associated with this are not accessible in the Xcode UI. Some people in the past have tried to do the migration manually, and you're not able to do it 100% by hand. If you really do want to do it yourself, there's a different way, which is to create a new test target, which is guaranteed to have exactly the right settings, and just move your existing tests into it manually.
So now let's talk about asynchronous testing, one of our new APIs in Xcode 6. More and more APIs on our platforms are themselves asynchronous. They have block callbacks that are invoked when they're done and that may get run on different queues. They have delegate callbacks that may be deferred by the run loop. They may make network requests, which we absolutely know should be handled asynchronously. Or they're doing background processing, like our NSOperation here.
So this creates a challenge, because tests themselves run synchronously. To help you with that, we've added APIs that allow you to create a synchronous control flow that manages asynchronous activities. We do this through what we call expectation objects, and these describe events in your test that you expect to happen at some point in the future, hopefully not too distant.
With these objects, XCTest has an API that will wait for them to be fulfilled. It takes a timeout and a completion handler that's called either when the timeout hits or when all the events are fulfilled. And you can be waiting on multiple asynchronous events at the same time.
So let's look at a code example. UIDocument, as you may know, has openWithCompletionHandler, which is an asynchronous open. That's great, because a large document might take a little bit of time to open, and you don't want to stall the user's interactive experience while you're waiting for it.
So let's write a test for that. The first thing I do is create an expectation. These expectation creation methods take a string, which is simply a description for your benefit: if we hit a timeout, it tells you what we were waiting for. So the more descriptive you make it, the easier your life will be.
Then we create a document, which I'm not showing here, and we call openWithCompletionHandler. You'll notice I haven't filled out the completion handler; it's just an empty block at this point. We'll get back to it in a moment. And then finally, I call waitForExpectations with a 5-second timeout. That's probably a little on the long side, but just to be safe here.
So what we have now is a synchronous flow: we create the expectation, set up the document, open it, and then wait. That's the synchronous flow within the test. Asynchronously, later, that block will be called back. Inside the handler, I'm going to do two things: I'm going to use one of our asserts to validate that the opening was successful, and on top of that, I'm going to call fulfill on the expectation, which will cause waitForExpectations to return, because now all of the expectations I created have been fulfilled. So let's see what that looks like if we add that to our SeismicXML tests.
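Assembled, that slide example looks roughly like this sketch (the fileURL property is assumed to be set up elsewhere in the test case):

```objc
- (void)testDocumentOpening {
    // Describe the future event; the description appears in timeout failures.
    XCTestExpectation *expectation =
        [self expectationWithDescription:@"document open"];

    UIDocument *document = [[UIDocument alloc] initWithFileURL:self.fileURL];
    [document openWithCompletionHandler:^(BOOL success) {
        // Validate that the open succeeded...
        XCTAssertTrue(success, @"Document failed to open");
        // ...and fulfill the expectation so the wait below can return.
        [expectation fulfill];
    }];

    // Blocks until all expectations are fulfilled or 5 seconds elapse.
    [self waitForExpectationsWithTimeout:5.0 handler:nil];
}
```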
Okay, so we go back to our test. It's right here. The first thing I'll do is rename this to, let's say, testParsingInTheBackground, because that's closer to our actual real-world usage anyway. The first part of the test stays the same: we're still going to use that same resource file and load it into memory.
Now, before we create our parse operation, this is where things are going to change a little bit. If you think about it, here we're running the parse operation directly. Instead of that, let's run it in an operation queue. So that looks like: create an NSOperationQueue and add the operation to it. That means the operation is going to start running immediately, but it's going to do so in the background.
Now, if I leave the test as is, because that running is happening in the background, our test just continues on; it's not stopping or waiting. If we run this test now, what we're going to see is that we failed, because at the point where we're evaluating this, parsing isn't done. It's off in the background somewhere, and we don't know whether it's finished yet.
So what we want to do is wait for it to be finished before we check. I'm just using a 2-second timeout here. These timeouts are largely at your discretion: if you make them really, really small, then you may have cases where something takes a little longer to run and you get a failure that's not really a failure.
So now we can wait, but before we wait, we also need an expectation that describes what we're waiting for. Think about the operation: how do we know it's done? Well, it has this delegate API with callbacks, so let's look at what those were again. ParseOperation's delegate callbacks are didParse, didFinish, and didFailWithError. Well, didFinish looks like exactly what I want. So going back into my test: if I'm going to be the delegate, I need the test itself to conform to the delegate protocol.
Then I need to implement the didFinish method. Inside that, I'm going to operate on some kind of expectation and report that it's fulfilled. But since we're not in the same context there, we'll need to track that expectation as a property. So we'll just add an expectation property to our test.
We'll set it up before we create the ParseOperation. So we've created our expectation; "parsing finished" is what we're calling it here. And then inside our delegate method, we will fulfill it. So let's just run this. Now, a secret: this is actually going to fail, because I've left something out intentionally, but I want you to see what failure looks like when we hit a timeout. "Asynchronous wait failed: exceeded timeout of 2 seconds with unfulfilled expectations 'parsing finished'." So we never fulfilled our expectation. Well, the reason is pretty simple: I actually forgot to say that this object is the delegate.
So once we hook that up, we run again. This time everything passes, because we get the callback, we fulfill the expectation, we unwind from the wait, and then when we evaluate the number of parsed earthquakes, it's 55. So just walking through this again: we used an expectation to describe a future event.
We wait for the expectation, and then when that event occurs, which in this case is in a delegate callback but could be inside a block handler or some other context, we fulfill it. That causes the wait to return. So this allows us to handle an asynchronous activity in a synchronous fashion.
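A sketch of the whole delegate-based test, with the delegate protocol, callback, and fixture names approximated from the description above:

```objc
@interface ParseTests : XCTestCase <APLParseOperationDelegate>
// Tracked as a property so the delegate callback can reach it.
@property (nonatomic, strong) XCTestExpectation *parsingFinishedExpectation;
@end

@implementation ParseTests

- (void)testParsingInTheBackground {
    NSURL *url = [[NSBundle bundleForClass:[self class]]
                  URLForResource:@"earthquake-data" withExtension:@"xml"];
    NSData *data = [NSData dataWithContentsOfURL:url];

    self.parsingFinishedExpectation =
        [self expectationWithDescription:@"parsing finished"];

    APLParseOperation *operation = [[APLParseOperation alloc] initWithData:data];
    operation.delegate = self; // forgetting this line causes the timeout

    // Run the operation in the background instead of calling -start.
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue addOperation:operation];

    // Unwinds when the didFinish callback fulfills the expectation.
    [self waitForExpectationsWithTimeout:2.0 handler:nil];
    XCTAssertEqual(operation.parsedEarthquakes.count, (NSUInteger)55);
}

// The delegate callback fires when parsing completes.
- (void)parseOperationDidFinish:(APLParseOperation *)operation {
    [self.parsingFinishedExpectation fulfill];
}

@end
```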
Blocking like this is fine for testing; I wouldn't recommend you do it on the main thread of your application code [laughter]. Okay, so let's switch back. That's what it looks like to write an asynchronous test with the new APIs in Xcode 6. Now Brooke's going to talk to you about performance testing.
[ Applause ]
Thanks, Wil. So, it can be easy to introduce performance regressions into your code, and historically, finding these issues can be time consuming and expensive, because you need to manually use your application to find them. Now, Apple has some great tools for investigating performance issues, and performance testing is a way to tell you when to do that investigation.
So we'll look at some new APIs that we've added to help you measure performance and detect regressions. We'll also see how the measurements that these APIs make are surfaced in Xcode's UI, including test failures due to regressions. And now that Xcode is going to be reporting performance issues as failures, it ought to give you an easy way to do that investigation, so you can now profile your tests with Instruments. The easiest way to measure is to use the new measureBlock API. This takes a block of code and runs it 10 times, measuring the duration each time and showing the results in Xcode.
So for example, I've got a test here that I'm writing where I want to measure the performance of using a file handle. I'll call self measureBlock and then the code I want to measure: it's creating a file handle, using it, and then closing the file handle. Now that I've got this test, I'm going to want to profile it in the case that Xcode tells me something went wrong. You can do this from Xcode's Source Editor or the Test Navigator, from their context menus, and there's also a command for this under the Product menu.
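That test might look like the following sketch; createFileHandle and useFileHandle stand in for the demo's helpers and are hypothetical:

```objc
- (void)testFileHandlePerformance {
    // measureBlock: runs the block 10 times, measuring each duration.
    [self measureBlock:^{
        NSFileHandle *handle = [self createFileHandle];
        [self useFileHandle:handle];
        [handle closeFile];
    }];
}
```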
It's important to keep in mind that when you profile your tests, this uses settings from the scheme's Profile action. The most obvious difference here is that when you're just running your tests, that uses a Debug configuration by default, but profiling will use a Release configuration, and there may be some difference in behavior there. Let's take a look at the demo. Here I have a Mac version of the project that Wil showed you earlier, and I've already got a couple of tests that use the measureBlock API. I'm just going to run my tests now.
So I don't know if you can see this, but Xcode is telling me that this test is doing 4,000% worse than it did before. So that's not so great. How am I going to figure out what's gone wrong here? I could look at the test itself to see, you know, is there anything obvious that the test is doing wrong? Well, it appears to be loading its file from the test bundle, parsing it, and then validating that the earthquakes are right. I could look and see if there are any local changes in my project. No, there aren't. I could look and see how this test compares to all the other tests in my project. And actually, this one's doing badly, but the other one's doing a little better.
So it looks like I'm just going to have to profile it. I can do that by right-clicking on this button in the Source Editor and selecting Profile, or I can do it from the Test Navigator by right-clicking on my test and selecting Profile. I'll select the Time Profiler instrument and click Record.
Now, Instruments has launched my application, run my tests, and it's done. I'll zoom in a bit so you can see this. I'm just going to invert the call tree and expand this. From here, I can see my test being called: XCTest is eventually calling the block that I've given it, and it's calling this validateProperties, which is calling some really expensive function.
Well, that's probably not what I want, so let's look into that. I'm going to right-click on validateProperties, use Reveal in Xcode, and zoom out. Yeah, it is expensive. Okay, so let's just get rid of it. I'm going to delete that and run my test again. And it passed. Great.
So what you've just seen is that you can use the new measureBlock API to measure performance and detect regressions. You can view these results in the Source Editor and the Test Report, and you can also profile your tests with Instruments. So you've just gotten a taste of how performance testing works in Xcode; now let's go into details. First of all, setting baselines. For XCTest to report a test as failing due to a regression, it needs a fixed point to compare against, and baselines let us do that.
Then standard deviation: XCTest should only report a performance regression when something's actually gotten worse, and XCTest uses standard deviation to determine the spread of the measurements and tell us how reliable they are. And finally, measuring precisely: we have some additional APIs that let you be even more precise about what you want to measure.
So let's look at baselines. The baseline is the average from a previous run that you've specifically selected for comparison. Once you've set a baseline, XCTest will use it to detect regressions: it will fail a test if the new average has increased from the baseline average by 10% or more, and it will ignore regressions of less than a tenth of a second. This is to eliminate a source of false positives.
Baselines are stored in source, so you commit them to your repository and share them with your team. And they're stored per device configuration. So when I run tests on my iPhone 5s, they're going to use the same baselines that are used when Wil runs the same tests on his iPhone 5s. But at the same time, if I were to run these tests on a simulated iPhone 5s, it would not use those baselines, because that's a different device configuration.
So let's talk about how you set baselines with Xcode. The first time I run a test, I'm going to see an annotation like this, a gray diamond with a white dot on it, telling me there's no baseline for the test. That lets me know that it didn't do any comparison at all; it's just showing me the average time that the test took.
If I click on that annotation, I'll get the performance results popover, and I can get this from the Source Editor or from the new Test Report. From here I can see the current average for the test, and I can also see how the individual measurements that were taken differ from that average. So here I can see that measurement number 8 was the longest one. Now, when I'm ready to, I can click Set Baseline, and Xcode will copy the current average to the baseline.
If I need to set this again, I can click Edit and either accept the new current average or type in whatever I want as the baseline, and I can also edit the maximum standard deviation from here. Now that I've set the baseline, the next time I run my test I'll see a different annotation, a gray diamond with a checkmark in it, telling me that this test is doing 4% worse than the baseline it's compared against.
If the test were to do a lot worse, the test would fail and I'd get an annotation like this telling me that, you know, it's 68% worse in this case. I can also see these results in that new Test Report. Also from the Test Report, I can get the performance results popover by clicking in the time column for those measurements. And if you want to get at the raw values that Xcode gets from XCTest, those are available on the Logs tab of the Test Report.
So let's talk about how XCTest uses the baseline average. Here I have a dataset from one run of a test. There are 10 measurements for the 10 invocations of the block, and they all come in around the 1-second mark, with a 1-second average. I'll set the baseline for my test, which fixes that in place.
Now, the next time I run my test, if I were to get a dataset like this, the average would be 1.5 seconds, and the test would fail because that 1.5 seconds is well outside the bounds of the allowed 10% regression. So XCTest is going to fail the test if the new average has increased from the baseline by more than 10%, but it's going to ignore regressions of less than a tenth of a second.
But is the average enough? Let's look at that same dataset again. All the values come in around 1 second, with a 1-second average. If I were to run this test again and get a dataset like this, it hasn't really regressed: the average is still 1 second, but there are some values over 1.5 seconds and some under 0.5 seconds.
So it isn't a true regression in terms of the average, but something's gotten worse. If I were to get a dataset like this, I would want to investigate it. So the average just doesn't tell the whole story, and XCTest is going to use the standard deviation to indicate the spread of the measurements.
If we look at that first dataset one more time, we see that the standard deviation for these numbers, clustered tightly around 1 second, is 6%, while the standard deviation for the much more spread-out dataset is 40%. The way XCTest uses this is: if the standard deviation for the new dataset is more than 10% of the current average (a threshold you can adjust from that popover), it will fail. But, just like with the average, it will ignore a standard deviation of less than a tenth of a second, again to avoid false positives.
So what causes excessive standard deviation? Well, one thing is if the block that you're measuring is doing network I/O or file I/O; that tends to vary pretty wildly. Another cause of high standard deviation is a block that just isn't doing the same work each time. For example, the block being measured might set up some expensive global state the first time through and never again, or the block might just be affected by uninitialized variables.
And lastly, another thing that can cause high standard deviation is if the system is just busy with other processes; short-running tests are especially sensitive to this. So how does XCTest detect issues? First of all, if there's no baseline average, it's done. It's not going to try to do any analysis of the measurements.
If there is a baseline average, first it will check whether the standard deviation is more than a tenth of a second and more than 10% of the current average; if it is, it will fail for that. Otherwise, it will check whether the average has increased by more than a tenth of a second and more than 10% of the baseline; if it has, it will fail for that. Otherwise, it passes.
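As a sketch (not Apple's actual implementation), the decision logic just described amounts to something like:

```objc
// Hypothetical restatement of the checks described above.
// maxStdDevPercent defaults to 10 and is editable in the popover.
BOOL testFails(double average, double stdDev,
               double baseline, BOOL hasBaseline, double maxStdDevPercent) {
    if (!hasBaseline) {
        return NO; // no comparison without a baseline
    }
    if (stdDev > 0.1 && stdDev > average * (maxStdDevPercent / 100.0)) {
        return YES; // measurements are too spread out
    }
    double increase = average - baseline;
    if (increase > 0.1 && increase > baseline * 0.10) {
        return YES; // average regressed beyond the allowed 10%
    }
    return NO;
}
```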
So how can we minimize standard deviation? Well, one way is to only measure the code that's important to you. Let's look at how we can do that with that previous example. Here, this test is doing the work of setting up a file handle, using it, and then closing the file handle each time we run the block. But if I only want to measure the time spent using the file handle, what I might do is just move the setup and teardown work outside of the block.
But what if I can't do that? Sometimes the work that you want to measure requires some setup work that needs to be done each time before the measurement. For that, we have an additional API, measureMetrics:automaticallyStartMeasuring:forBlock:, which you can use to measure just part of the block that's being called.
This API expects an array of metrics to measure, and currently only time is supported. We'll also need two more APIs, startMeasuring and stopMeasuring; you can use these to isolate which part of the block you want to measure. You can call each of these once per block invocation, and if you're going to call startMeasuring, you need to pass NO for automaticallyStartMeasuring.
So let's see how that would work with our previous example. First, we'd change the method call to measureMetrics, passing time as the metric and NO for automaticallyStartMeasuring. Then we call startMeasuring right before we use the file handle and stopMeasuring immediately afterwards. So now this block is doing the work of setting up the file handle and tearing it down each time, but it's only actually measuring the duration spent in useFileHandle.
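In code, that restructuring looks roughly like this (again with hypothetical helper names):

```objc
- (void)testFileHandlePerformance {
    [self measureMetrics:@[XCTPerformanceMetric_WallClockTime]
        automaticallyStartMeasuring:NO
                           forBlock:^{
        // Setup runs on every iteration but isn't measured.
        NSFileHandle *handle = [self createFileHandle];

        [self startMeasuring];
        [self useFileHandle:handle];
        [self stopMeasuring];

        // Teardown is likewise excluded from the measurement.
        [handle closeFile];
    }];
}
```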
So let's look at this in practice. Despite the name of this project, I'm actually thinking about adding a JSON parser to it, and I've got a test that I'm running here that's loading a file from the web and parsing it with NSJSONSerialization. I think I'd actually like to turn this into a performance test, just for my own sake. I'll do that now by calling measureBlock and pasting in the code, and I'll click to run my test.
So here we can see that it took, it says, on average 1.91 seconds with 114% standard deviation, and it's telling me there's no baseline average for this test. Well, that's not bad, but I know that since it's doing the expensive work of loading this file from the web each time, I actually don't want to measure that.
I just want to measure the time spent in NSJSONSerialization. So what I'll do is delete this and call measureMetrics, passing time as my metric and NO for automaticallyStartMeasuring. Then all I need to do is call startMeasuring right before NSJSONSerialization, and I can run my test.
Great! So previously it was taking 1.91 seconds, and now it appears to be taking no time at all. So that's pretty good [laughter]. I wish all my APIs were that quick. But I actually want this to measure something, you know, something that Xcode will report, something significant. So what I'm going to do is change this to use a larger file, so we actually get some values here.
And this is going to take a little bit longer, because what it's actually doing is calling this block 10 times, loading the file from the web each time and then parsing it. So now the test is taking, it looks like, 1.21 seconds, and it's got a very low standard deviation, which is pretty good.
So I'm going to click to set this as my new baseline. And since that load seemed to take a little while, I'm just going to move it out of the block: since the data it's loading is actually immutable, I don't need to load it every time. I'm just going to do that once when the test initially starts. I'll click to run my test again.
And it's still got a result in the same ballpark, and still a pretty reasonable standard deviation. Well, now that I'm not actually doing any work before startMeasuring, I don't need all this. So I'm just going to delete that and change this back to call measureBlock. I'll run it again.
And I should get similar results, and yeah, it looks like I do. So the last thing I'd like to do with this is, like Wil showed us earlier, we kind of want our tests to be self-contained. We don't want them loading files from the network. Since I've actually already got a copy of this file in my project, I'm going to use NSBundle bundleForClass to load this all-month file from my test bundle. I'll run it again.
And here we go. So now I've got a new performance test that's measuring the time this takes to run, and if something were to change, I'd find out. The last thing I want to do is commit my changes. Here we can see the changes that I've made so far: I got rid of that really expensive function, I changed my test to be a performance test, and we can also see the baseline that I've added here. I don't think you can tell from this, but this is actually a file that's stored inside the project bundle itself. So I'll commit that, and now it's saved there for the next time I need it. Okay.
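For reference, the finished test from this demo comes out roughly like the sketch below. The resource name follows the all-month file mentioned above; the other details are approximate:

```objc
- (void)testJSONParsingPerformance {
    // Load the fixture once, from the test bundle rather than the network.
    NSURL *url = [[NSBundle bundleForClass:[self class]]
                  URLForResource:@"all-month" withExtension:@"json"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    XCTAssertNotNil(data, @"Fixture should load from the test bundle");

    // Only the parse is measured.
    [self measureBlock:^{
        NSError *error = nil;
        id json = [NSJSONSerialization JSONObjectWithData:data
                                                  options:0
                                                    error:&error];
        XCTAssertNotNil(json, @"Parsing failed: %@", error);
    }];
}
```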
[ Applause ]
So what you've just seen is that you can use the new measureBlock APIs to measure performance and detect regressions. You can tell Xcode what the baseline is, to specify what constitutes a regression for your test, and XCTest will use the standard deviation to inform you of the spread of the measurements. And finally, when something does go wrong, you can always use Instruments to profile your tests. Now I'd like to invite Wil back up.
[ Applause ]
Okay, great. So just to go back through everything we've talked about today: we started off thinking about why we should test in the first place. What are the benefits? It helps us identify bugs before we ship, and it also helps us describe and really think about the impact of the APIs we're presenting in our project. Then we talked about how you add tests, and also how to organize tests into targets in your project, the test methods, loading resources from the test bundle, and using the assertions inside your tests.
And then we talked about the new asynchronous testing API, which allows you to get a synchronous control flow around asynchronous tasks. And then Brooke took you through performance testing and also the Instruments integration that lets you profile your tests. That's a really powerful feature, because your test code is going to be exercising critical paths in your project, and being able to easily hop into Instruments and analyze time profiles or allocations is a really great tool for you to have.
And if you have questions, we encourage you to contact Dave DeLong, our tools evangelist, and there are a few related sessions. Early on, I talked about the continuous integration workflow, and XCTest combined with Xcode Server makes that possible: you can set up a server that will, on commit or on a schedule, check out your code, run all of your tests, and report the results. There's a session on that later this afternoon, right here in this room. And that's it. Thanks a lot, folks.
[ Applause ]