WWDC08 • Session 917

What's New in Core Data

Tools • 49:05

Core Data is a rich framework for managing your application data and persistence on Mac OS X. Join us for a preview of Core Data's upcoming Snow Leopard features including Spotlight integration and lightweight schema migrations. Learn about powerful new fetch request options, sophisticated additions to using predicates, and generating aggregate statistics. Discover the new API, notifications, and changes under the hood that will take your Core Data development to the next level in Snow Leopard.

Speaker: Adam Swift

Unlisted on Apple Developer site

Downloads from Apple

SD Video (536 MB)

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good afternoon. Welcome to session 917, What's New in Core Data? My name is Adam Swift, and I'm an engineer on the Core Data team. In this session, we're going to talk about some of the new functionality and APIs that we've introduced in Snow Leopard. The content that is in the session will be covering just the things that are available on the Snow Leopard seed. None of this really applies to Leopard.

You can try it out with the DVD you hopefully picked up earlier in the week. We're going to talk about the new Spotlight integration work. We've done some work to make it easier for you to develop your Core Data models and evolve them over time. And we've also introduced some new APIs you can use to boost your application performance. We're breaking this up into four sections. First, we're going to cover the Spotlight integration. Then we'll go into SQL generation, read-only property data, and then lightweight migration to wrap it up.

So let's start with Spotlight search integration. Spotlight is an amazing search technology, and it just works wonderfully on Mac OS X. Your users want to use Spotlight to search your application data in a Core Data application. But sometimes this is a problem because Spotlight Search returns matching files. And particularly, when you've got a non-document based Core Data application, all your record data lives in one file. And your search results, you want to open individual records, not the whole file. So, for example, when you search for guacamole, you want the recipe, not the cookbook.

So what we've done is we've come up with a way to create files to represent the records you want to search. And what this really translates into is each file corresponds to an individual managed object. The path for the file encodes the managed object ID and the store identifier. This plays well with Spotlight because Spotlight can search for, index, and return individual files representing the results that your users want to search for. And when they open up the file, your application is registered to open it by the file extension or UTI.

So how does this all work together? We keep the files in sync with the store. We take care of the hard part. We create, delete, and touch these marker files every time you save changes to your managed object context. And the file activity triggers the indexer, which uses your importer to populate the index.

When you make changes to one of your objects, we update the marker file. If you delete one of your objects, we delete the marker file. This new Spotlight importer is available on the Snow Leopard seed and it has a new project template called the Core Data Application with Spotlight Importer.

So the new template provides the basic implementation, the basic source code you need to integrate the new Spotlight infrastructure into your application. It provides the code to fetch a managed object for a marker file, and it also provides the code to import the properties you've configured to be indexed in Spotlight.

How do you mark your properties to be indexed in Spotlight? Well, we've added some new API and new interface to the data model design tool that lets you mark individual properties to be indexed in Spotlight. So let's talk about the big picture here. How do all these pieces work together? You've got your Core Data Application. There's the store. There's Spotlight. Well, when your user is editing data in your Core Data application and they save their changes to the store, Core Data propagates the changes that were applied there to update the marker files.

The changes to the marker files trigger file system events which the Spotlight importer notices and then uses the file path information to look up your managed objects from the store, fetches the property data from those managed objects, and then populates the Spotlight index with that information. Then at a later time when your user performs a spotlight search, they've searched the spotlight index, which returns the marker files matching their query. And when they open those files, they're registered to open up in your application.

So how does this actually work on the file system? We provide a support folder where all of the marker files are saved, as well as some other resources that we use to support this feature. By default, the marker files are stored in a support folder next to the persistent store, but you can override that, as the Spotlight template does, with an option you provide at store load: the external records directory option. In order to actually create the marker files, you need to specify an option at store load time that provides the marker file extension. The file extension is the hook that lets Core Data know that you want marker files to be synchronized to your data store.
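For illustration, a minimal sketch of passing those two store-load options, assuming a persistent store coordinator named coordinator; the paths and the "recipe" extension are placeholders, and the option constants are the ones in the Snow Leopard Core Data headers:

    // Open a SQLite store with the Spotlight marker-file options.
    NSURL *storeURL = [NSURL fileURLWithPath:@"/Users/me/Library/Cookbook/Cookbook.sqlite"];
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        @"recipe", NSExternalRecordExtensionOption,          // marker file extension (the "hook")
        @"/Users/me/Library/Cookbook/ExternalRecords",
        NSExternalRecordsDirectoryOption,                     // optional support-folder override
        nil];
    NSError *error = nil;
    NSPersistentStore *store =
        [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                                  configuration:nil
                                            URL:storeURL
                                        options:options
                                          error:&error];
    if (store == nil) {
        NSLog(@"Store failed to open: %@", error);
    }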

Now, the template provides code to get access to the managed objects from the store, but you really want to customize it. You have access to everything that is available in your data store. And there's some new API to allow you to extract the details of what's available from the path to the marker file.

You can get access to the managed object ID, the entity name, the managed object model path, the store path, and the store identifier purely by looking at the path of the support file, or marker file. These are all available as keys returned in a dictionary.
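A sketch of pulling those keys out of a marker-file URL, using the NSPersistentStoreCoordinator class method and key constants as they appear in the Snow Leopard seed headers:

    #import <CoreData/CoreData.h>

    // Derive store details purely from the marker file's URL.
    static void LogMarkerFileElements(NSURL *markerFileURL) {
        NSDictionary *elements =
            [NSPersistentStoreCoordinator elementsDerivedFromExternalRecordURL:markerFileURL];
        NSLog(@"entity name: %@", [elements objectForKey:NSEntityNameInPathKey]);
        NSLog(@"object URI:  %@", [elements objectForKey:NSObjectURIKey]);
        NSLog(@"model path:  %@", [elements objectForKey:NSModelPathKey]);
        NSLog(@"store path:  %@", [elements objectForKey:NSStorePathKey]);
        NSLog(@"store UUID:  %@", [elements objectForKey:NSStoreUUIDInPathKey]);
    }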

By registering the UTI and the file extension for the marker files with your application, the Launch Services system will make sure that those files open in your application. But it's up to you to provide the application-specific behavior to actually show the records that correspond to the matching marker file. So in general, that means highlighting or opening up the managed object that corresponds to that marker file. So I'd like to give a quick demo showing how you can take advantage of the Spotlight integration work.

I'm running this on the new Xcode 3.2 on Snow Leopard. And we'll use the new project assistant and find... There we go. Application projects. The new Core Data application with Spotlight importer. And we'll create a real quick, easy cookbook application. I'm going to set up a very simple model just to demonstrate how this all works with a recipe entity.

We have two attributes. We want a recipe name. That'll be a string. And we want to have directions for the recipe, which we'll also make a string. Obviously, this is a fairly simplified model. And here you can see a new part of the property inspector that allows us to select that we want these properties to both be indexed in Spotlight. So we want the name and directions to be searchable in Spotlight searches. So we're done with our model. And the next step, we'll set up a simple user interface for the app.

And this gives me a chance to show you a great feature in Interface Builder if you haven't seen it, which is the Core Data Entity Assistant. So that allows me to select my entity, the recipe, choose the type of view I want for it. So I'll set that up with a basic interface. And there we go.

The IB guys get credit for that one. And I think that's all we need to do for our interface. That's all we need in order to create some simple recipes. But the other work that I need to do to enable the Spotlight searching is I need to customize my importer.

[Transcript missing]

So I know that there are a number of parameters in here I need to set. And one is the display string. And this will be the string that gets displayed in the actual Spotlight search results. So we want to pretty print this a little bit. We'll say cookbook recipe. And then we'll fill in -- turn on line wrapping.

And we'll just put the name of the recipe in there. Okay. And the other thing we want to do -- if we look at this, I should have mentioned that this is a custom class provided by the Spotlight template. It's the My Spotlight Importer template class, and it gives you one method to import a marker file at a certain path. So it's one place to put in all your code for actually populating the Spotlight index.

And it contains some code to put together the display string, which I just customized. And then the other part is iterating through all of the attributes of the entity being indexed, corresponding to the marker file, and checking for each attribute whether it's configured to be indexed by Spotlight. If so, I want to go ahead and add that attribute to the Spotlight index. In this case, we're just going to use the raw field value for that attribute, and we'll put it in the index as the MDItem text content.

Now, if I had some relationships and I wanted to put in some related data, there's some code in here that I could use to fill in those details as well. But since I've got such a simple model, I don't really need to worry about that for this one.

So that's all we need to do to actually get the property values populating into the index. The rest of the work we need to do is simply providing the file extension information and the UTIs so that all of the pieces know how to work together. And I'm going to use a little shortcut here to help me find all the places I need to do that. So I'm going to search through the whole project. And that will show me everywhere I need to customize the UTI information. And we'll just call it mycookbook.recipe. So this is identifying and providing a unified type identifier for the marker files.

And then we need to also provide a file extension for them. So I'll call that recipe. Now we also need to configure our application to know that these marker files should be opened in our cookbook application. So I'll put the file extension in here too, recipe. And then we also need to put it in the UTI for our marker files. So that was my company -- no, mycookbook.recipe. There we go. All right, that's done. And the last remaining piece is to put in the file extension and UTIs in the Spotlight importer, which we will also be building for the project.

[Transcript missing]

So at this point in time, we should have a functional application. And I'm going to go ahead and try and give it a run. And there we are. OK. So I'm going to add a recipe for guacamole.

[Transcript missing]

We'll put in a recipe for another fine condiment, salsa: mix chopped tomato, onion, and cilantro with some lime. Okay. So those are our simple recipes, and we are done with this part of the job. So I'm going to quit that. Now, I won't actually be able to perform my Spotlight search yet because I haven't built the Spotlight importer and I haven't installed it in one of the known Spotlight locations. So let me quickly build the Spotlight importer. And then I will need to copy it into the standard library Spotlight directory. So let's... Where did I put my cookbook application? Grab the importer.

Put it in there. And then with a little luck, let's see what matches tomato. Oh, look at that. And it launches our application. And if I had done the work to add the logic to select the recipe that I'd specified by marker file, then it would have selected the proper recipe as well.

And that's that. Can we go back to slides? So we think this is a really major improvement in making your lives easier for integrating Spotlight because the really hard part of that is making sure that you've got the data synced up in the marker files and in your data store.

And what we've actually done is we aren't putting the data in the marker file. You can access it just using the encoded managed object ID and the reference to the store. So one of the things I mentioned earlier on, but it went past pretty quickly, is that I was talking about non-document-based applications.

So as for document-based applications, let's talk about what we're doing with those. The marker file synchronization pairs the persistent store file with the Spotlight support directory. These two work together, and we use file paths a lot to make this work. Unfortunately, we can't put the Spotlight support directory in a document wrapper, because the Spotlight indexer won't actually go into a wrapper file to do its indexing. It won't see the files, and so the marker files won't get indexed.

Now, putting the support folder into another location outside of the document wrapper doesn't work very well, doesn't scale very well, because documents tend to move around. They get copied off of your computer and onto some other computer. So for document-based Core Data applications, we recommend that you continue to use the Tiger template for building Spotlight importers, which is the Core Data document-based application with Spotlight importer template.

So quickly recapping, try out the new template on your Snow Leopard seed. There's a lot of helpful details about what you're seeing in the importer readme file. And then if you have questions and you want to learn more, please come to the lab tomorrow. So let's move along to SQL generation. And what we focused on for SQL generation is giving you more opportunities to improve the performance of your application.

And for those of you not coming from a database background, SQL may not be a familiar term. It stands for Structured Query Language, and it's the language we use every time we talk to SQLite in Core Data. Any time we fetch, insert, update, or delete objects, it's all done through SQL. And in SQL's view of the world, data lives in tables filled with columns and rows.

There's a pretty direct correlation between SQL and Core Data. A table corresponds to an entity, a row corresponds to a managed object, a column corresponds to an attribute, and in Core Data's parlance, a predicate translates to a WHERE clause in SQL. So a predicate fills the role of filtering objects when you perform a fetch. And when we're dealing with a SQLite database, the whole performance advantage is based on only retrieving the objects that you really need to use.

So in order to achieve that, and only have to interact with the objects that you want as the result, we need to generate SQL that's logically equivalent to the predicate. Then we let SQL do the filtering, and we get better performance. You fetch only the managed objects you want, you get to do the filtering in the database, and you avoid doing unnecessary work when you're able to use SQL to filter your fetch results: the unnecessary work of allocating space for objects that you aren't going to use and populating their values from the database.

And what we've done in Snow Leopard is we've expanded our coverage of what predicates can be translated into SQL. Beyond the performance benefits, this also means less work for you. The alternative is writing special-case code for unsupported predicates, which means fetching all of the objects into memory and then evaluating the predicate in memory.

And let's take a look at how that actually gets realized as code. For a supported predicate, you simply set the predicate on your fetch request. For an unsupported predicate, you have to fetch all the objects back and then filter the results in memory with the filteredArrayUsingPredicate: method.
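As a rough sketch of that difference, assuming a managed object context named context, a Recipe entity, and a placeholder unsupportedPredicate standing in for whatever predicate SQL generation can't translate:

    // Supported predicate: SQLite does the filtering for you.
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:@"Recipe"
                                   inManagedObjectContext:context]];
    [request setPredicate:
        [NSPredicate predicateWithFormat:@"name CONTAINS[cd] %@", @"tomato"]];
    NSError *error = nil;
    NSArray *matches = [context executeFetchRequest:request error:&error];

    // Unsupported predicate: fetch everything, then filter in memory.
    [request setPredicate:nil];
    NSArray *everything = [context executeFetchRequest:request error:&error];
    NSArray *filtered = [everything filteredArrayUsingPredicate:unsupportedPredicate];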

So what specifically have we added? We've added a bunch more functions in SQL: support for the arithmetic functions; bitwise operators like AND, OR, and left shift; string functions; regular expressions; and a whole new category called aggregate functions, which provide results like average, max, min, sum, and count.
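For example, a predicate that does arithmetic on an attribute can now be pushed down into SQLite instead of forcing an in-memory evaluation; the attribute name here is illustrative:

    // Arithmetic in the predicate format string is now translated to SQL.
    NSPredicate *overBudget =
        [NSPredicate predicateWithFormat:@"replacementCost * 1.1 > 20.0"];
    [request setPredicate:overBudget];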

Now let's take another look at regular expressions. This is a great addition because it means you've got full support for the predicate behavior of regular expressions, but now it can be expressed in SQLite, so you're no longer having to evaluate the regular expression in memory. And this has full support for Unicode strings, including case and diacritic insensitivity.
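A short sketch of a regular-expression predicate, using the MATCHES operator with the [cd] modifier for case and diacritic insensitivity (attribute name assumed):

    // MATCHES takes an ICU-style regular expression; [cd] makes the
    // comparison case- and diacritic-insensitive.
    NSPredicate *regex =
        [NSPredicate predicateWithFormat:@"name MATCHES[cd] %@", @".*guac.*"];
    [request setPredicate:regex];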

But you need to be mindful of performance if you are going to do case- or diacritic-insensitive regular expressions. They are expensive, and you might want to consider creating a derived attribute that represents your property value with case and diacritics already normalized away. And if that just confused the heck out of you, I'd encourage you to go to Creating Efficient Data Models with Core Data at 5 o'clock. Melissa will be presenting that, along with a lot of other information about how to design your data models for better performance.

So another category of feature we've added is something called fetch offsets. And where this comes in handy is when you're trying to work with an extremely large set of data, you've got millions upon millions of rows. And you want to process those rows, but you don't want to have the overhead of reading them all into memory at once.

So what fetch offsets allow you to do, when you combine them with fetch limits, is work with your data in batches. A fetch offset lets you specify the index from which you want your fetch results to be returned. So let's take a quick look at a code sample to try and make this a little clearer.

In this code sample you can see I've set a batch size of a thousand objects, and then a loop that walks through, sets the limit to the batch size, and sets the fetch offset to the current index multiplied by the batch size. And then for each batch of a thousand objects that comes back, I can do something useful, and never do I have to fetch in all the objects at once.
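The on-screen sample isn't in the transcript, so here is a minimal sketch of that batching pattern, assuming an already-configured fetch request and context:

    // Walk a very large result set in batches of 1000 instead of
    // fetching every object at once.
    static const NSUInteger kBatchSize = 1000;
    [request setFetchLimit:kBatchSize];
    NSUInteger batchIndex = 0;
    NSArray *batch = nil;
    do {
        [request setFetchOffset:batchIndex * kBatchSize];
        NSError *error = nil;
        batch = [context executeFetchRequest:request error:&error];
        for (NSManagedObject *object in batch) {
            // ... do something useful with each object ...
        }
        batchIndex++;
    } while ([batch count] == kBatchSize);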

So there are a lot of functions I listed in that table. The function syntax and usage is documented on your Snow Leopard seed, in the Foundation framework, in NSExpression.h. And if you do try to call a function that isn't supported, we'll throw an exception for you. So along with the extensions to SQL generation, we've added another way of accessing your data in SQLite. Right now, when you're fetching data, you're fetching managed objects or you're fetching managed object IDs.

When you want to access property data, you access the property data through the objects. And that gives us a way to track the changes to property data and associate it with the objects that it belongs in. But there are times, and there are certain problems, where you want to view your data across the lines of properties, and you're not as interested in the objects. You want to look across all objects at property values.

Some examples of that are when you want to see distinct values across the set of objects, like the unique dates in a list of events. Or if you want to perform some statistical analysis, for example on financial data: you want to see some totals or an average value. You're really not interested in individual objects; you're interested in the properties, irrespective of the objects.

So if you want to do this now, it means fetching in all the objects and analyzing the property data across all of them, and you definitely incur some performance costs by doing that. You're reading in a lot of extra data that you won't be using. So we've introduced some more API that will let you individually select the properties you want to fetch. And beyond that, we've introduced a new subclass of property description, which is the expression description. An expression description is a way of encapsulating an expression into something that we can treat as a property.

So when you're specifying an array of properties that you want to fetch from an entity, your expressions can represent functions or key paths or constants or any other kind of expression that makes sense in that context. The properties you can specify are attributes or relationships, and you can even specify your properties as simple key paths, string key paths.

So to really get the most power from this, though, in order to get that unique set of event dates like I was talking about before, you want to be able to only retrieve the rows that are different. So you're fetching just for the event date. There are a couple of new things in here; let's walk through them one at a time.

First is the API to say that you want to return only the distinct results of your fetch. Second is identifying the properties that you want to fetch back. So in this case, as I've been talking about, we're specifying the event date. And then the last thing is, well, you're not fetching objects.

You're not fetching managed object IDs right now. So what are you fetching? How are we getting these results back to you? Well, we've introduced another return type, which is the dictionary result type. So you'll get back your distinct results as a set of immutable dictionaries. And why are they read-only? Because they may be calculated values, and they may be derived from -- taken from -- different entities. They're really not something that you can work with in a meaningful way and expect that we can propagate changes back to the database.

So your results will be returned as immutable dictionaries, and the keys in those dictionaries will be based on the set of property descriptions that you provided to us. In the case where you've provided a property description, we'll use its name. Or if you've used a string key path, then we just use the raw string as the key.
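Putting those pieces together, a sketch of the distinct event-date fetch described above; the Event entity and date attribute are assumed names:

    // Fetch the unique event dates as read-only dictionaries.
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:@"Event"
                                   inManagedObjectContext:context]];
    [request setResultType:NSDictionaryResultType];    // rows come back as dictionaries
    [request setReturnsDistinctResults:YES];           // unique rows only
    [request setPropertiesToFetch:[NSArray arrayWithObject:@"date"]]; // string key path
    NSError *error = nil;
    NSArray *distinctDates = [context executeFetchRequest:request error:&error];
    // Each element is an NSDictionary keyed by @"date".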

You can combine this feature with working with managed objects in very powerful and useful ways. One of the ways is by specifying that you want some object IDs back in your results. You can specify self to get the receiving entity's object IDs. Or if you specify a relationship description, or a key path that terminates in a relationship, then we'll send back the object ID for the related object. A key path that traverses a to-many relationship, however, will generate an error. So I'm going to demonstrate the read-only data sample code that is included, or is available for download, from ADC.

I'll tell you a little bit about it, and then we'll give it a run-through and talk about how it works. This read-only data project demonstrates the read-only property fetching feature, and it uses a pretty simple model. You've got books, and you've got friends. Books have a replacement cost and a title, and you can loan your books to friends.

The project also includes some pre-canned data, so it's easy to run. And we'll take a quick look here. I don't know if -- can you guys see that at all? Maybe not. We've got four books, The Fellowship of the Ring, $10 replacement value, Two Towers, Return of the King, The Hobbit. And then you've got two friends, and two of your books have been loaned to friends. So I'm going to go ahead and build the project.

And then we'll look at the work that it does. There are three individual fetches using read-only property data that occur running this application. Let's see, I'll start with the first one here. It makes use of one of the new expression description objects. And what this expression description will represent is the total replacement cost of all of your books.

And the way we calculate that is by specifying a function expression to calculate the sum. And then the key path to the replacement cost on the books entity. Let's see if I can have a pre-canned script to run this with some additional SQL logging. Well, maybe I don't. What is this called? Run read only. Oh, okay.
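As a hedged reconstruction of that first fetch, using NSExpressionDescription with the sum: function; the key-path and result names follow the demo's model and are otherwise illustrative:

    // Total replacement cost of all books, computed by SQLite.
    NSExpression *costKeyPath = [NSExpression expressionForKeyPath:@"replacementCost"];
    NSExpression *sumExpression =
        [NSExpression expressionForFunction:@"sum:"
                                  arguments:[NSArray arrayWithObject:costKeyPath]];

    NSExpressionDescription *totalCost =
        [[[NSExpressionDescription alloc] init] autorelease];
    [totalCost setName:@"totalReplacementCost"];
    [totalCost setExpression:sumExpression];
    [totalCost setExpressionResultType:NSDoubleAttributeType];

    [request setResultType:NSDictionaryResultType];
    [request setPropertiesToFetch:[NSArray arrayWithObject:totalCost]];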

[Transcript missing]

Which is a... Well, let's see. This is getting way too busy here. So what I did is I turned on the SQL logging so we can see the actual SQL command being issued to SQLite. And then in the project itself, we log the results of the fetch.

So the fetch was, as I said, specified with an expression that applies the sum function to the replacement cost key path, and then takes the dictionaries that are returned by the fetch,

[Transcript missing]

I don't know if you can -- I guess maybe you can see that. It basically just shows that the work to calculate that result was done in the database using a SELECT command. So for the second fetch, we're actually specifying two properties.

[Transcript missing]

And then we perform the fetch and log the result as read-only data fetch 2. So let's see what that result looks like. And you can see here, the result we got back was a series of dictionaries containing "loaned to" with managed object IDs pointing to the friend that the book was loaned to, and the title of the book.

And in our last fetch, we're going to add one more property description to the set of properties that we're going to fetch, and that is an expression description representing the managed object ID of the book. And we'll give it a name, book ID. The expression result type is the object ID attribute type. And to represent the managed object ID, we use the NSExpression for the evaluated object.
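A sketch of that last property description; the "loanedTo" relationship name is an assumption about the sample model:

    // Return each book's managed object ID under the key "bookID",
    // alongside the title and the loaned-to friend's object ID.
    NSExpressionDescription *bookID =
        [[[NSExpressionDescription alloc] init] autorelease];
    [bookID setName:@"bookID"];
    [bookID setExpression:[NSExpression expressionForEvaluatedObject]];
    [bookID setExpressionResultType:NSObjectIDAttributeType];

    [request setPropertiesToFetch:
        [NSArray arrayWithObjects:@"title", @"loanedTo", bookID, nil]];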

And when we look at the results of that fetch, we can see that in addition to the title and the friend that the book was loaned to, we now also get a reference to the book's managed object ID, returned under the key book ID. So this is a very flexible, powerful feature. You can get a lot of use out of it. There are a lot of other uses, which I hope you'll get to explore when you try it out yourself. Can we switch back to slides?

All right. So the last subject we're going to cover here is lightweight migration. When we talk about models and stores in Core Data, we all know that a store must be opened with the model that created it. In Leopard, we added a new feature we called model versioning and store migration.

And it is a really powerful feature that lets you create a mapping model to describe changes that can be very complicated and flexible, and a migration manager to perform those changes to migrate data from one store to another. This feature is designed to handle any kind of migration: very flexible, very powerful. And you can learn a lot about this and other tips and tricks for working with Core Data on Leopard at Miguel's session coming up at 3:30, after this one, in Russian Hill.

One of the things we found about the workflow of developing a data model over time is that there are sort of two categories of changes you see. For major version releases and major new features, you have to introduce major changes to your data model. But during the bulk of the development time, when you're really working with the code and you're eking out performance improvements and tweaking features, your changes tend to be pretty limited in scope. And in many cases, they're trivial.

This happens a lot. You tend to get a lot of model churn during this development process, especially when you're trying to focus on performance. And what we found in our experience -- and others have given us the same feedback -- is that it can be hard to keep up with mapping models over time as you're trying to deal with these many small incremental changes. So you create version 1 of your data model, and then you realize you've made a typo or forgotten some small feature or detail in your objects. And so you add 1.1.

Well, at that point in time, the people who have been testing your application and giving you the feedback on your application, they want to be able to preserve the data that they've built up so far. So you need to build a mapping model to make that possible, to translate the data from one store to the next.

Well, then you've gone and realized that there was another change you needed to make. Somebody came in with a last minute request. So you come up with 1.2. And then again, to preserve the data that people have been working with, you add another mapping model to carry the data from 1.1 to 1.2. But then there's the people who never tried or installed the version that used 1.1, so you need yet another mapping model to get from 1.0 to 1.2.

So, over time, obviously, this becomes very time consuming, and there has to be a better way. What we've discovered is that during this real heavy model evolution, the time when you're making all these changes, most of those changes are pretty small. So we figured out how to create a mapping model by analyzing two data models with small changes between them and inferring a mapping model directly from them. And we've introduced some new API to let you call this directly on NSMappingModel.
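In use, the new class method looks roughly like this, assuming the source and destination models are already loaded:

    // Infer a mapping model by analyzing the differences between two
    // versions of a data model.
    NSError *error = nil;
    NSMappingModel *mapping =
        [NSMappingModel inferredMappingModelForSourceModel:sourceModel
                                          destinationModel:destinationModel
                                                     error:&error];
    if (mapping == nil) {
        NSLog(@"Could not infer a mapping model: %@", error); // changes weren't inferable
    }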

And we've taken it a step further to allow you to specify another key at the time that you load your store to automatically infer a mapping model and then perform the migration. Just as with the original migration support, we use the store version hashes to look up the source data model, which you need to have available for us to be able to create a mapping model between the old version and the new.
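At store-load time, that combination is a sketch like the following, pairing the automatic-migration option from Leopard with the new infer-mapping option; coordinator and storeURL are assumed from earlier setup:

    // Ask Core Data to infer the mapping model and migrate automatically.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
        [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
        nil];
    NSError *error = nil;
    [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil
                                        URL:storeURL
                                    options:options
                                      error:&error];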

So what kind of changes can we actually handle for inferring a mapping model? What's inferrable? Well, it basically comes down to can we do the right thing with your data? Is it unambiguous what we should do with the data that you've created in the old version of the store when we copy it over and migrate it to the new version?

So what are the kinds of things that we can do? Well, if you delete an attribute, that's obvious. We just get rid of the data for that. If you rename an attribute, well, we can just save the data under the new name. But if you're adding an attribute or a relationship, then it either needs to be optional or provide a default value. Otherwise, we don't know what data to put into the new store.

Let's talk about renaming. When you're looking at two data models, you can't really tell whether you've got one attribute being deleted and another being added, or just one attribute being renamed. So you need to use the renaming identifier: you set the renaming identifier to the name used in the original model. That way we know to take the value from the old data and put it in under the new name in your new model.

Now, there's new API to support this, and there's also support in the new modeling tool in the seed of Snow Leopard where you can specify directly in the interface the renaming identifier in your model. So I'd like to take a minute to demonstrate this feature. And we'll use the old cookbook application we built just a few minutes ago.
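For illustration, a minimal sketch of setting a renaming identifier in code, assuming a loaded destination model named newModel and hypothetical attribute names:

    // In the new model, "directions" is a rename of the old
    // "instructions" attribute, so migration carries the values forward.
    NSEntityDescription *recipeEntity =
        [[newModel entitiesByName] objectForKey:@"Recipe"];
    NSAttributeDescription *directions =
        [[recipeEntity attributesByName] objectForKey:@"directions"];
    [directions setRenamingIdentifier:@"instructions"];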

So we've realized that with our cookbook, we forgot to include the author of our recipes. Now, before I go and change the model to add this attribute, I want to be able to preserve the data I already created. I already put together some recipes, and I don't want to lose it.

If I change my model, I won't be able to read it in unless I keep a copy of the old model so that we can infer the mapping and then perform the migration. So I'm going to use the data model menu to add a model version. And that way I'll preserve the existing version of my model, but be able to make a change to a new version so I can introduce the author. And we'll make that a string.

So I also need to make a change to the template code, because it's going to look for the old name of the model, a non-versioned model. And I've just created a model version bundle that contains two versions of my model. So in order to make sure I get the right

[Transcript missing]

What I also need to do is add the keys to my options dictionary so that when I load the store, it will automatically migrate the...

[Transcript missing]

So it worked. All right. Can we switch back to slides, please? Okay, so one of the things that we've done on top of the lightweight migration work and inferring a mapping model is to try and do migrations in place within SQLite.

And this has been driven by a desire to deal with large data files and be able to perform all of the work to migrate data from one version of your data model to another version of your data model without having to read in any objects and to make the changes directly in SQLite using SQL to perform the table changes.

And we can migrate some of the inferred mapping models with the in-place migration in SQLite. What we support is all the inferable changes to attributes that were listed before: deletes, renames, adding an optional attribute or an attribute with a default value, and changing types, optionality, or transience. The in-place migration support is part of a bigger feature for store-specific migration. So if you have developed your own custom store, you can specify your own custom migration manager class to perform a custom migration yourself. And that's available through some new API on NSPersistentStore.

In terms of debugging lightweight migration, you can use the existing migration logging feature via the user default com.apple.CoreData.MigrationDebug. And a note on the SQLite in-place migration: the in-place migration manager will only attempt a migration if the mapping model has been inferred. But if you want to try it out and see what's actually happening under the hood, you can turn on the SQL logging and see the changes that are happening to your data.

We've got a couple more Core Data sessions coming up after this one. There's Core Data Tips and Tricks at 3:30 in Russian Hill, and Creating Efficient Data Models with Core Data at 5 o'clock. And then tomorrow we're available at 2 o'clock for the Core Data lab. For more information, contact Michael Jurewitz, Developer Tools Evangelist, [email protected]. Or ask your questions on the Cocoa Dev mailing list at [email protected]. And please send us all the feedback you can on what we've introduced and what you might want in addition, using the bug reporter website, bugreport.apple.com.