Digital Media • 1:09:05
With Image Capture Framework, applications can acquire images directly from digital cameras and scanners. Now your applications can support the most popular forms of digital image capture with a single API. This session explores the Image Capture Framework in-depth and explains how you can integrate it into your products.
Speaker: Werner Neubrand
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it may contain transcription errors.
Good afternoon, everyone. I'm Travis Brown, the graphics and imaging evangelist, and I'd like to welcome you to the Image Capture Framework session, which is session 515. At last year's WWDC, we introduced Image Capture, which is Apple's technology for giving users a really great experience with digital cameras: essentially, they can just plug their digital camera in and get images off the camera very seamlessly. A key point is that Image Capture has always had APIs available for developers to essentially extend their application with the ability to work with a huge number of models of digital cameras. That fact is also very important, because one of the things we're going to be talking about this year is scanner support, which is being introduced into Image Capture.
And so in one sense, you can have a single set of APIs in your application that allow you to retrieve images off a digital camera or access images being taken from a scanner. So it's a very exciting story. To finish out the story, I'd like to invite Werner Neubrand from the Image Capture team onto the stage. Hello, everybody. So I'm going to talk about the Image Capture framework today. We have a pretty tight agenda, so we will look at Image Capture in general very briefly, then look into the scanner support for Image Capture, and actually also talk about the TWAIN framework and some other changes in Jaguar for the Image Capture framework itself. And at the end, we will have a short Q&A. Image Capture, where does it fit? Well, you probably saw this setup a couple of times this week, and Image Capture technically is part of the Carbon framework.
So the Image Capture framework is part of the Carbon framework, but it can be used from all your Cocoa applications and from all your Carbon applications. What is really meant when we talk about Image Capture, the framework? Well, we treat the framework as a single, central piece of technology that deals with image capture devices. We are focused on still images. The nice thing about Image Capture is really that it abstracts the device specifics, meaning you can write a single application that works with multiple devices without knowing too much about the various devices.
It also brings you a driver architecture to create your own camera or, as you will see later, scanner modules. And it, of course, supports standard architectures and protocols. For cameras, for example, we have the PTP protocol, and we support mass storage devices. And we support those devices over FireWire and USB.
So if you look at this slide, you get basically an idea of what components are involved. First of all, on the left-hand side, we have a camera, the hardware that you connect to your machine. It will be recognized by the system, and a specific camera module that deals with this hardware will be launched. This camera module talks to the Image Capture framework, which really knows about multiple connected devices and can handle multiple clients. One client that you all get for free whenever you install Mac OS X is the Image Capture application. And now, just as a reminder: when we talk about Image Capture, a lot of people just see the Image Capture application, but they have no idea about the underlying framework.
So there's really more; it's not just the application. For example, iPhoto is just another client. It's also using the Image Capture framework; it's just a replacement front end. And your application fits in there as well. If you want to write an application, and we'll look at some code samples later on, it's very easy to do that: write an application that uses Image Capture to access images on a device.
Well, that was last year's slide. This year, we are going to add scanner support. So this year, it's basically the same setup; we have different modules. Scanner modules also get launched when the scanner is connected, talk to the Image Capture framework, and the framework talks to your application, or your application talks to the framework.
So before, I was mentioning abstracting the device. How do we do that? Well, we do that by introducing the concept of objects and properties. Objects are the things that are used to represent a device, an image or a file on a device, or a folder. And properties are the things that deal with the real data. Both of them, objects and properties, are identified by a type and a subtype.
Now, last year we were showing you a slide like this, and it actually still holds for Jaguar; the setup is exactly the same. At the top, we have a device list; that's a single object, and it's always there. That object has references to all the other devices that are connected. Now, if you look at camera two in this case, camera two has three images, so we have three image objects. Camera two also has, in this case, two properties: one for the name and one for the icon.
Image 3, I'm just showing that here, but all the other images have exactly the same: they also have properties, name and icon. And then there's a way, using Image Capture APIs, to really go and walk through the tree, allowing you easy access to all the objects and properties.
But that's all good and fine. We were finding out that having multiple properties, a property for every possible piece of data, is just a hassle to work with. We would actually end up splitting up all the metadata that belongs to an image; we were about to introduce, I don't know, 20, 30, 40 different properties. And it's just a hassle to get those, because you would have to do a call for each property, and you would probably have to know the property type and subtype up front. So the idea was: we're introducing an XML property dictionary.
And that's actually what we did, and the current system, 10.1, already has that in there. We reduced the actual properties to just the file data and the thumbnail data, the preview icon; these are the only properties that basically stay. Well, all the others that were introduced in 10.0 also stay, just for backward compatibility. But the idea is to reduce the properties to just the real data, the image data and thumbnail data, and all the rest is really handled with XML property dictionaries.
So again, thumbnail and actual image data are in properties. All the rest, everything else, is handled in XML property dictionaries. How do you get to a property dictionary? Well, there's a new call, and it actually barely fits on that first line, it's such a long API name: ICACopyObjectPropertyDictionary. What it does is: you pass in an object, and you get back an XML dictionary for it. It's really the best way to get to image or device information.
For a device object, you actually get some basic information about the device, like the device name, a reference to the device icon, and all that. And for image objects, you get access to the metadata. So if we just look at the device and focus on that, we see we get the name and, for example, also the device capabilities: can this device take pictures? Can this device delete pictures? Can it synchronize the clock?
All these things are just returned within the device dictionary. Another interesting thing that is returned is actually a fast way to access all the data, all the images, on your device. And this is actually done in two flavors. There's one flattened-out sub-dictionary; actually, it is a CFArrayRef, so it's an array of dictionaries that contains information about all the images that are on the device. And there's also a hierarchical representation, the tree structure.
And this one contains not only images, but also other data that's on the device, for example the path information, how the data is really represented in the file system. We'll have a look at that in a second. And if you look at these two, we see they really have all the basic information that you need in order to access all the data on the device. For the image object, we also return the name and some metadata information. So basically, everything that we can extract using QuickTime, the graphics importer, we extract and put into a dictionary, which is actually a very easy and convenient way to get to all the information that you might be interested in.
So the best way actually to learn about that is to see it in practice. What I want to do is go over here and launch an application that's actually part of the Image Capture SDK; it's called Image Capture Browser. What this application does is basically represent the tree structure that we have inside a device. So right now I have one camera connected. If I select that one, then it's talking to the device and brings back the information that it can get. In this case, everything in bold is a property. So we have two properties for that device: the volume label and a camera icon.
It also has a directory, and that directory has a file name as a property; it is 001001. That's very interesting, but actually that's the store number on that device. That directory has another sub-directory, which has another sub-directory, and that one actually has the images. So now we can select an image, and we will get some information about it. These are the properties that belong to this first image, like the file name, DCP0477. We have thumbnail information and the image size, so it's about 236K.
image data, the image width, and height. That's all good and well. So basically, this application is now showing all the different objects up here, and properties. Just before, I mentioned that it's really better and easier to just work with the dictionaries. So actually, down here I'm displaying the dictionary for the device selected up here. For the camera, as part of the dictionary, we have capability information. This camera supports DEL1, which means it can delete one image; the camera can take new pictures; and the camera is able to synchronize its clock. Then we have the flattened-out directory information that really has all the images in it. If we look at the first one here, we see we have an image file name, the file size, a reference to a thumbnail property, and a data property. So when we want, for example, to download or get the data for the thumbnail, all we have to do is pass this property into the ICAGetPropertyData call.
And we have additional information, like whether that file is locked or not. If we scroll down, we see this is the flattened-out structure. I don't know how many images we have on here; I guess a couple. Then we get information about which device module is handling this device; here we see it's the PTP camera app doing that. We learn about the file type, and we learn what the ICA object for the device is. And down here, we have the tree structure: basically the same information as in the data structure, but now really represented as an exact copy of the layout on the device's memory card.
Okay, so this is just the basic setup, and the image capture browser allows you also to look at the data and tree dictionary basically directly, and it draws you a nice outline. So you see we have one device list. That's the DC4800 camera, and these are the images, but I can also switch to the tree view and then see really the layout on the memory card.
So let's actually, as the next thing, look at some sample code to walk the tree, if you want to do it all by yourself. This is a small application, actually a command-line tool, that we're going to look at. All it does in its main is get some parameters; we're not interested in those now. All we want to look at is the dumpChildren function, which takes an object, that's the device list object, and prints out some information. If we look at the dumpChildren function, all it does is take an object and an indent, the number of blanks, and for the object that we pass in, we do a get-child-count. We want to find out how many children, how many sub-objects, are referred to by this first object that we look at.
We get back the number in countPB.count, and now we loop over those: we go from zero up to the number of children, and we call getNthChild, passing in our loop variable as the index. This call, getNthChild, actually returns information about the object. And remember, I was saying an object is identified by type and subtype; this call returns the type and subtype, and that's actually what we are dumping down here.
And if the object type is a directory, or the type is a device, then what we do is call the same function; it's just a recursive function. We call into the same function and dump that object's children. So if we actually execute this on the command line, it really lists the files that are in here. So now, let's go back to the slides. There's another interesting call that we want to look at today, and that's ICADownloadFile. ICADownloadFile is a single API that allows you to download a specified ICA object.
So the call takes, as all Image Capture API calls do, two parameters. The first one is a parameter block, and the second one is a completion proc. If the completion proc is NULL, then it's a synchronous call. Otherwise, it's executed asynchronously, and after completion of the call, your completion proc will be called.
So the parameter block for ICADownloadFile looks like this. You're basically specifying the object, and you're specifying a directory FSRef that is really the destination directory. Then you specify some flags. The flags could be one of the listed ones, like delete after download, create a custom icon, set the file type and creator, or embed a ColorSync profile (very convenient, just a single flag that you have to set),
or rotate the image. And the other fields that follow, like the file type, file creator, rotation angle, and so on, are only used if you specify the corresponding flag. On return, this call just returns an FSSpec, oh, sorry, an FSRef; we're moving away from FSSpecs. Okay. So, well, actually, again, let's look at some source code. So, quit the terminal, connect the camera.
What we want to do, actually: I want to run the application first, and then I'll show you what it really takes to do that. So the application comes up with a table view. It lists some names, these are the images on disk, and the file size. We can select one image and download it to the Pictures folder. So let's do that. Here it says where it was downloaded to. And actually, if we go to the Pictures folder, we should see it. So what does it take to write an application like that? Actually, it's very simple. So let me open it.
We see that in this small Cocoa application, after we awake from nib, we update our files and then install a notification. Well, updating the files is actually very simple. First, we get the device list, and then we get the nth child; we're interested in the first device that is connected. Then for that child, we use the new call, ICACopyObjectPropertyDictionary.
So we do that down here. On return, that dictionary has information about the device, for example the device name, which is keyed under ifil. We use that to set the window title. And then all we do is take our own data array and set it to whatever we have in the device dictionary under the data key.
That's all we do. Now, how does that now really display some information? Well, very simple. What we are doing is, since data array is a data member of this my window controller, we just take the count and return that as number of rows for the table view. Okay?
And when we are asked to return the values that we are going to display in the table, what we are doing is, well, we look at the identifier of the table column. So if the identifier is the index, well, then we just return basically the row that gets passed in. If the identifier is the name, then we are asked to display the file name.
Well, that's easy to do. All we have to do is look at our data array and get the nth object out of it, which is a dictionary. So we're looking at the object at the index row that gets passed in, and that dictionary contains, under the ifil key, the name of the file.
And that's all we return. Down here, if we are asked for the size to display, we basically do the same thing: get the dictionary that's at that row position of the data array, and then ask for the data that's specified by the iSize key.
So that's what it's displaying. What we want to do is really download the file, so we are listening for the mouse-down on the download button. The target for the download button is this download method, and what we're doing in here is: we want to get the selected row. Once we have the selected row, we ask for the ICA object value to get the download object. Then we find the Pictures folder on the appropriate disk. Once we have that, we set up the parameter block for ICADownloadFile.
Then we execute ICADownloadFile, so it will download the specified object. After that, we want to display where the file was downloaded to, so we're just updating an info string. And we don't want to have that there for a long time, so after four seconds, we clear that value. So once again, let's quickly run it. So now it was downloaded, and if we go to the Finder: yes, that's the one we just selected, and here it is. So there's really not much code involved in order to do that. Okay, so let's go back
and talk about a completely new area for Image Capture, and that's the scanner support. We've been asked to support scanners for quite a while, and I guess now it's really in: for Jaguar, we will have scanner support. I'm going to give you a small architecture overview and talk about some additional APIs for scanners. The architecture overview is very simple because, I guess, I told you already everything at the beginning of this session: it's the same. We have device modules that now also support scanners. The same thing that's true for cameras is true for scanners: it's an ICA object, and it will have properties. But the handling is a little bit different: the handling of scanners in the Image Capture architecture is really session-based. For cameras, it was easy to just support multiple clients at the same time, so you could really have two or three applications running at the same time, talking to the same device; you could handle that very easily.
For scanners, it's a bit more complicated, because there's really no atomic scan operation. You really get some scanner parameters, you set some scanner parameters, and then you start the scan. So it's really a session that you have to work with, and all our new APIs, like ICAScannerOpenSession and ICAScannerCloseSession, work with a session ID.
So the first one basically returns a session ID that you're going to use on all subsequent ICAScanner APIs. There's a scanner-initialize and a scanner-status call. Scanner initialize basically sets the scanner to a default mode. Getting the scanner status will report the current scanner status and give you information about the device.
These are very important calls, the get- and set-parameters calls, because set and get parameters allow you to really control the scan area, the resolution, and everything you want to do. And these calls, you'll see that in a second, are really set up around property dictionaries that you pass around. The get-parameters call returns you a dictionary filled with all the information that the scanner module puts in, and on the set-parameters call, you fill in information about the scan that you're going to start with the next call, ICAScannerStart.
So let's have a look at scanner on... Mac OS X. Sorry, I forgot to switch. What I'm going to do is hook up the scanner. And what you should see is that we will recognize the device. So this is a small application that just shows what devices are connected. So we have a type 1 scanner connected. And let me just look at a small... application here. Let me run it first.
So what this does is basically it's just exercising the different APIs. So after I opened the session, I did get back an ICA scanner session ID. And now I can, for example, get scanner parameters. And scanner parameters have information about the device. See, we have tablet height, resolution minimum, maximum, some basic information. I can actually set the parameters.
And by just doing that, I'm filling in some information, for example the resolution in X and Y, the bit depth, and all that. So I'm downloading the parameters to the scanner, and with the scanner start I can actually exercise the downloaded parameters. You probably can't hear it, but the scanner is doing something currently; this small app is just not displaying anything useful. Let's look at the code, for example, for the get parameters. For get parameters, all we do is create an NSMutableDictionary.
We set that as the dict in the ICAScannerGetParameters parameter block, and just do an ICAScannerGetParameters call. After that, we display the result; you saw that in the window, all the information that we got back. And then we release the dictionary. For doing the set parameters, we basically do the same. See, here I do get the parameters first, but I'm going to add information to them, and I'm doing that down here: I set a user scan area. And the user scan area is something that I set up here.
So for the user scan area, I'm setting up and specifying the color mode, bit depth, resolution, and the offset, width, and height. Now, this is a very simple setup. You can add many more parameters, but this is actually just enough to do a low-resolution scan of the entire scan area.
Looking at the scanner start: well, all it takes is a session ID, and then the start. Currently, this plug-in that we have produces a scanned image directly in temp, and I'm just not displaying it right now. However, we of course also have a new version of the Image Capture application that basically does the same thing. The Image Capture application now handles not only cameras, but also scanners.
The way we do it is we have come up with a plug-in concept for the UI within the Image Capture application. Basically, it allows you and us to have multiple devices with different UIs, all within Image Capture. So, for example, if you are a camera vendor and you want to have your own UI for a specific camera, then it's very, very easy to plug that in.
So in this case, the scanner is really handled by the scanner plug-in, and what you can do is a fine scan, a better scan, of the selected area. There are some parameters that you can set, but that's just the very first UI. We're still working on it, and it will be a fleshed-out version in the Jaguar release.
And, well, another important thing: as for all camera devices, it's also true for scanner devices that we will have device-specific parameters in the Image Capture application for Jaguar. So we could, for example, have device-specific profiles, or settings for whether you want to download and create custom icons or not. That's the simple-to-use scanner UI within the Image Capture application; there's really one application that does both cameras and scanners. Let me switch back. And what about the scanner drivers? Well, scanner drivers are supported by Image Capture, and we support basically two flavors. One is the Image Capture native ones; these are scanner modules that are very similar to the camera modules that we support. And we support TWAIN data sources.
Let's look at the Image Capture ones first, the native ones. Well, we again have a scanner device framework that's very similar to the camera framework. It has a whole bunch of code that you normally have to deal with when talking to the device, and more administrative things; for example, whenever a device gets connected, you have to register it with Image Capture so that Image Capture can talk to it. All that code is really provided by us in this framework: one for cameras, one for scanners.
And the idea behind it is that by just having you implement a couple of functions that talk to the hardware directly, it will be a lot easier for you to create scanner or camera modules. One thing that we were also asked about by a couple of vendors was button support. A lot of the new scanners have a button, or multiple buttons, on the front where you can click to do a copy, click to do an email, and all that.
And yes, we will support that. Again, we will have something in the scanner framework that deals with that. And we can do it because what you have to do to enable it is really just put some information in a special plist. This camera or scanner module has a device info plist that contains information about the device, and you also have to provide some information about what we have to look for when a button is pressed. So, next thing: the TWAIN framework.
We will have a short overview of the TWAIN framework on Jaguar, look at data sources, and also at client applications. Well, TWAIN was established as an industry-wide standard, and it's used all over the place; I mean, for a long, long time we have had TWAIN data sources. However, the support on Mac OS 9 was unfortunately not really that great, because for a long time it was not updated and all that. But for Mac OS X, I guess we have a really good solution.
So the three key components that are really important for understanding the overall TWAIN framework are: we have client applications, very similar to the Image Capture client applications; we have a data source manager, very similar to the Image Capture framework; and we have data sources, very similar to an Image Capture camera or scanner module. The really important part for TWAIN and Jaguar is: TWAIN is part of Jaguar. For the first time on a Macintosh operating system, we are installing TWAIN by default.
And the nice thing is, if you already have an application running with the current beta of TWAIN, it will work; there's no need to rev it. I have to say a few words about the current version of TWAIN. Well, it's a CFM shared library, and that really makes it hard for Cocoa or Carbon Mach-O applications to use it. So TWAIN, the organization, came up with a better solution, and we will see it in action: it's now a Mach-O framework on 10. So there's no need to rev client applications. One thing, though: we will stop installing the TWAIN CFM shared library. So if you have an application or a data source with an installer that used to install the CFM shared library, please rev it for Jaguar; do not install the library, because we will install it with Jaguar. We will install the DSM basically at two different locations. We have, as I said before, TWAIN as a native Mach-O framework in System/Library/Frameworks.
And we have some glue code for CFM-based applications in System/Library/CFMSupport. There's also a new location for the data sources. So if you are going to install a data source, it's no longer in the Application Support folder; it's really in System/Library/Image Capture/TWAIN Data Sources.
So the TWAIN DSM was rewritten to be Mach-O based. And the nice thing is that it then supports Mach-O based Carbon and Cocoa applications. And, of course, they made sure that TWAIN supports both Mach-O and CFM based data sources, so both flavors will work. There are a couple of issues, though, if you're currently writing a DS.
Unfortunately, you have to rev. So the client application does not have to do anything, but DS writers have to rev, and they have to rev in the four points that we want to look at now. The first one is packaging. It used to be that data sources were just CFM shared libraries, a single binary.
Well, that doesn't work well with the framework and the overall Mac OS X, so we ask you to really create bundles. These could be CFM or Mach-O bundles, but put them into a bundle. This actually gives you a lot of advantages, like localization: you can have multiple languages of your DS in a single bundle. Event handling, well, that has also changed. When TWAIN came up, I guess that was 1994 or so, the beginning of the '90s, the event model on the Macintosh was really built around WaitNextEvent.
Well, for a new Carbon application with the Carbon event model, or a Cocoa application, there's really no easy place to hook in your WaitNextEvent for that. It was really bad, because before calling into WaitNextEvent, you had to call into the DS and give it some idle time. Then you get the event from WaitNextEvent, and after that you have to ask the DS: is this event for you? The DS would say no, it comes back to the application, and the application handles it. Kind of bad; it's really a polling mechanism. We want to get away from that. So a new DS has to support the Carbon event model.
It should not be too hard to convert an existing DS to the Carbon event model. All you do is, basically: wherever you were called and asked to handle an event, you now install a Carbon event handler for that window or control, and then your handler gets called directly and you handle that event.
The old WaitNextEvent model was also used to pass information back from the DS to the client application. Of course, that's not working anymore, so what we had to do was introduce a callback mechanism. So a new TWAIN client (you will see some source code for a TWAIN client written as a small Cocoa app) will not get data back from the DS based on a WaitNextEvent call; it will get called via a callback. And you will get informed through that callback whether there's an image transfer ready or whether the DS will close.
We also ask you to support an optional feature: UI-less operation. What we really want is for the user to choose whether to use the original TWAIN UI or our very simplified UI within Image Capture. So in the Image Capture application, there's an option for which one to use; the user can choose.
And we ask you, as a TWAIN DS developer, to provide us with a device info plist and add it to your bundle; that device info plist should contain some information about the device itself. By just doing that, we will be able to detect, without loading the TWAIN DS code, whether a connected device is handled by a given DS.
This has a great advantage over the current model. You will see that when we use, for example, Photoshop to drive a TWAIN DS: I guess all they can do, and there's really no way around it, is go through the installed DSs and add them to a menu. They cannot, at runtime, detect whether a DS is able to handle the connected device; even if there's no device connected, they still list all the data sources. And then you open the DS, the DS gets loaded, and it tells you after maybe 10 or 15 seconds: well, I couldn't find the device on the bus. That's kind of a bad user experience. It would be nice to just dim that menu item, or not even show it at all, if the device is not connected. So we will get to that once we evaluate the device info plist.
So for Twain clients: the previous model, as I said, was built around WaitNextEvent, and it still works. If I showed you Photoshop running on the new Twain framework, Photoshop was not modified; it just runs. What we do is take the event calls from Photoshop, pass them back immediately, and still use that mechanism to communicate from the DS to the Photoshop application. The important thing here is that newer Carbon applications, meaning Carbon event-based applications, and Cocoa applications really have to register callbacks.
And the callbacks look like this. It's very similar to the regular DSM_Entry call. All we pass into this call is DAT_CALLBACK and MSG_REGISTER_CALLBACK. The callback function itself could look like this: it has the same parameters, and you get back a message. The message could be, for example, a close-DS request, in which case your client application just calls TWDisableDS and closes the DS. So let's have a look at the sample.
Well, the first thing, just to show you, is Photoshop 7. We have here Epson Twain for Jaguar. The Epson folks helped us out by providing a new native DS that uses the Carbon event model, and you see the standard Photoshop application just working. I can select this and do a scan. Hopefully it shows up. Yep. Okay, so that's Photoshop without changing anything. But now let's look at a sample application.
Here's the sample app controller. Let's see. In applicationDidFinishLaunching, what we do is initialize Twain. Initializing Twain means we have to set up the application identity and call into TWInitialize. TWInitialize is a function that's actually part of some Twain sources in the Twain SDK, so you don't have to write it yourself; just use it. This is basically straight out of the SDK: use it and fill in some information like the language, country, and all that. After Twain is initialized, we register a callback. That, too, is a function that we actually added to the Twain sources.
It basically makes exactly the call I was showing you earlier. Well, that's it for launching. Then we have Select Data Source, which is the regular TWSelectDS, part of the Twain SDK. And we have the place where we do a TWAcquire. Unfortunately, the native image type is currently still a PICT handle.
So we get back a PICT handle and have to convert it into an NSImage, because we want to display it. We do that with an NSImage alloc and initWithData, passing in the image data, and then set the image member; that's basically it. So let's run this small app.
So that's the new native Mach-O Twain DSM showing its selection dialog. You noticed before that Photoshop is not using that dialog; Photoshop just examines the DSes directly and adds the ones it finds to the Import menu. This application is using the selection dialog. And we can do an acquire. Acquire, in this case, brings up the Twain DS.
And we can scan in the image. Actually, it's currently still open; let me close it. So we just scanned in an image in a Cocoa application with just a couple of lines of code. Okay, so we now have two different ways to talk to a scanner: Image Capture and Twain. In the sample application here, we were just using Twain APIs to do the scanning. The next question, really, is for you.
How do both work together? Do you have to install one and remove the other, or anything like that? Well, actually, we will have a Twain bridge. That means we will have a piece of code, an Image Capture device module, that talks to a Twain DS. So you can stay entirely in the Image Capture world, the ICA API world, and still use the Twain DS.
One advantage of having this Twain bridge is something I mentioned before: button support. The Twain bridge gets launched whenever the scanner is connected. Once the Twain bridge is running, it monitors the device's buttons. You press a button, the Twain bridge listens for it, and then, depending on the selection you made before, it triggers the correct action.
Device arbitration is just what I said. And now the question for you: which framework should you use? It's really up to your needs. Image Capture gives you an easy way to work with both Image Capture and Twain devices. Twain is probably more powerful because you have more control. The Image Capture approach is really to keep it simple, to make it simple for a lot of applications to just have an Acquire or Import button. But if you really want to go into the depths of controlling the device, Twain might be the right way.
Now let's look at some other changes in the upcoming Jaguar release. First of all, the Digital Hub. Digital Hub application launching is something new in Jaguar. We actually got asked a couple of times: how do I change the hot-plug action on a current Mac OS X system? A lot of people install iPhoto, and iPhoto may be their default application for a while, but if they want to change that to Image Capture or another third-party application, how do they do it? Well, unfortunately, they had to go to the Image Capture application. Although it has only three pop-ups, that was not obvious to a lot of people, so we got a lot of questions about it.
So I guess the better solution, and not only for image capture devices but for a lot of other devices, since we're claiming to be the ideal digital hub, is this: we will have, and you probably saw it this week already, a Digital Hub panel in System Preferences. Let me just show you that. Here we go.
Going in here, you see System Preferences come up with a Digital Hub section. For photography, you can now choose what to do when a camera gets connected, when you insert a picture CD, when you connect a scanner, or when you press the scanner's button. Quite often you will want to do nothing when the scanner is connected, because the scanner is often connected all the time; you want to activate your application only when you push a button on it. That's the way to launch it.
And while we are here with the upcoming changes, one thing I want to show you as well is what's new in the new version of the Image Capture application. I said before that one application, thanks to the plug-in mechanism, is now able to handle multiple devices and present multiple UIs. Here, the same application is showing the UI for a camera. The first difference from the current version is that one pop-up is missing: the hot-plug action. That's now part of the Digital Hub. And these options, as I mentioned before, are now device-specific.
So you can specify a profile per camera, and it will stick with that camera. You can have download folders per camera. Say at home you and your spouse each have a camera, each of course a different model. You connect them, we detect that and keep preferences per device, and then you can download the images to the appropriate download locations.
As for downloading, there's also a bit of new UI here. One very nice thing is that you have the pop-ups in this window as well. You can go to the data browser window, select a couple of images, specify a download folder for those two, press Download to download the two, and then download the next four to a different location.
A couple of things we added to the UI here: for example, you can adjust the thumbnail size. And say the first images are not that great; you can delete those, and maybe these you want to download, and you can easily do that. So that's the new application.
One upcoming change is extended event notification. The current mechanism was not well suited to propagating events from the device directly to the client application. Currently we support device connected, device disconnected, memory card removed or inserted, and image taken; this camera can actually take pictures from within the Image Capture application. And whenever you delete an image, the client application also gets notified.
But some cameras produce events of their own. For example, with the camera connected, you take pictures on the device itself, so there's no interaction from the client application, and we still want to propagate that to the client application. For that we had to extend the current event mechanism with a new notification parameter block, adding things like an extended raw event type and a way to pass data, via an event data cookie, from the camera module to the client application.
Registering and unregistering work much as before. You register by specifying an object, which can be nil, and the notifications you're interested in; pass zero in both cases to get notified of all events. To unregister, you basically use the same parameters as for registration, except you specify NULL as the completion proc.
So in this case here, we are just registering for all events. And in the unregistration for all events, you see the difference: my completion proc is replaced by NULL. The DeviceInfo.plist, which we've talked about a couple of times before, really gives us a way to identify more features of the device.
For example, we currently use it to display a nice camera icon: depending on the model you connect, the Image Capture application comes up with a really nice icon for that device, and that's done based on information in the Info.plist. We're also going to extend it and add more information, for example a ColorSync profile. The ColorSync profile and device class information are added to the Info.plist, which allows us to embed the correct profile in your images later on.
One other new feature is a pass-through mechanism: basically a simple way to control device-specific features. If you know exactly which device is connected, and most likely that's a vendor-specific application, and you want to send some private calls and private commands to the device, that's now easy to do with the pass-through mechanism. Image Capture just treats it as a blob of data; we send it to the device, and the device knows what to do with it.
Metadata for input devices. Of course, there are some color issues, and you probably learned earlier this week that Mac OS X is really a fully color-managed environment. In the ColorSync session, you also learned about a couple of weak links in that whole ColorSync workflow. One weak link is right at the beginning: when we get images from a digital camera, they may not contain ColorSync profiles, but the user may want to embed one. So what is the best way to do that? Well, a good way is: whenever I'm going to embed a profile, I use the default profile for input devices. A better way is for the device itself to specify a profile for all its images. And the best way is for the device to specify a per-image profile.
And now it's important that your application, when dealing with images, preserves the metadata. Preserving metadata also means preserving embedded profiles: if the device embeds a profile, preserve it. There are mechanisms to do this very easily. QuickTime has, for all its graphics importers and exporters, ways to do it: you can get the metadata on import and set it on export, and likewise get the profile on import and set it on export. Please do that.
And I guess the last thing I'm going to show you is a small application we have called CameraCheck. The idea behind it is to give users a chance to hook up their camera and then see the capabilities of that camera. In this case, I have the Image Capture application; I just quit that and launch CameraCheck. It comes up with a dialog saying a couple of things about the application; it tells you about its preferences and shows the connected device up here. All you do is hit Run. It then exercises a couple of the ICA commands, runs through, and spits out the results. For a user who connects a camera for the first time and really wants to find out the capabilities of the device, it will find out, for example, that this device is able to take pictures. So the result is okay.