Mac OS • 55:55
The Image Capture Framework enables your application to acquire images directly from digital cameras and scanners. This session explores these capabilities and explains how to integrate them into your product. Image Capture driver development for digital cameras and other imaging devices will also be covered.
Speakers: Travis Brown, Steve Swen
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.
Good morning everyone. I'm Travis Brown. I'm the Imaging Technology Manager for Mac OS X, and I want to welcome you to session 117, Image Capture Framework. You've already seen Image Capture demonstrated in the keynote: you can take a USB digital camera, connect it with a USB cable, and Mac OS X will recognize that a camera has been plugged in and will then bring up a very lightweight image browser to enable the user to select which pictures they wish to download. There's another dimension to Image Capture, and that is the Image Capture Framework.
Underneath Image Capture there is a set of APIs that developers can use to make their applications savvy with digital cameras. So you could easily take an application such as a database application, add a field that could capture an image off a digital camera, and put that image in the database. That would be possible to do using the framework. So there are all sorts of very interesting possibilities for applications to create new features for their users in conjunction with digital cameras.
One of the really interesting points I want to make with some statistics is that the digital camera market is exploding. It's one of the fastest growing computer peripherals available. One interesting statistic is that digital camera sales will fairly soon surpass those of film cameras. So the way users interact with photography and images in the future will very much be driven by digital cameras. It really makes sense for your applications to become aware of how to work with digital cameras and add value.
An additional interesting point brought up in the keynote is that 60% of Mac users have digital cameras. And they would really love to have seamless integration of their applications with their digital cameras. So now I'd like to invite Steve Swen on stage to continue the presentation.
Thank you, Travis. Good morning. Before we start talking about Image Capture for Mac OS X, let's review the current situation on Mac OS 9. As you know, Mac OS 9 has no system-level image input architecture implemented. So all the major device manufacturers have to ship custom applications and drivers, and sometimes Photoshop plug-ins. Applications have a hard time working with devices because doing so requires knowledge of device-level communication. That means you have to know how to talk to each individual device using its device protocol.
So with Image Capture for Mac OS X, what we're trying to do is provide a device-level abstraction that enables applications and devices to communicate through this abstraction layer very easily. It's a collection of APIs and a set of system services. Like I said before, it gives applications easy access to imaging devices such as digital cameras. We also provide very tight system-level integration to take advantage of other Apple technologies such as QuickTime, ColorSync, and Quartz.
Here's a block diagram view of the Image Capture Framework. At the bottom layer is Darwin, the open source kernel. Right on top of it are the application services: Quartz, OpenGL, and QuickTime. The Image Capture Framework is part of the Carbon library. However, you can access Image Capture directly from either Carbon applications or Cocoa applications.
There are three major components in the Image Capture Framework. The first one, the most critical one, is the framework itself, its set of APIs. We have a high-level API for applications to use and also a low-level API for devices. We also ship a set of built-in support for cameras using standard protocols; I'll get to that later. And we ship the Image Capture application, a lightweight application that allows users to preview and download images from a camera.
Here is another view of the Image Capture Framework. On the left-hand side, you have your camera. Once you connect the camera to the computer, Image Capture will load the right camera module for that device. Then your application and the Image Capture application can communicate with this device through the device abstraction layer, using the Image Capture Framework APIs.
So we have built-in camera support for four different classes of devices. The first three are camera devices that use protocols, so you can connect the camera directly to the computer: the PTP class, the Digita class, and the mass storage class. We also support a number of removable media.
PTP class. PTP stands for Picture Transfer Protocol. It's also known as PIMA 15740. PIMA is the standards organization that defined the standard. It's an industry standard group including platform vendors like Apple and Microsoft, and device manufacturers such as Kodak and Sony. This protocol enables full control of camera functions. You can set the camera exposure level, take a picture, retrieve pictures, synchronize the clock, or even download music files. It's a very powerful protocol.
It is transport independent. The USB working group has already defined the class for PTP devices. And Mac OS X 10.0, our first implementation, supports a set of PTP commands that enables "plug and download" operations. Typically you go out and take a whole bunch of pictures, and you come back and plug your camera into the computer, and we can download all the images for you.
Recent cameras from Kodak and Sony support this protocol. So if you want to get experience with how the PTP protocol works, or you want to test it with your application, you can get one of those Kodak cameras, like the DC4800 or the MC3, or a Sony DSC-S75. They are on the market today.
The second class is the Digita class. This is a vendor-specific protocol by FlashPoint Technology. Some Kodak and HP cameras use this protocol. Since it's a vendor-specific protocol, we rely on device name matching to load the proper module to support your camera. So if your camera uses this protocol, let us know so we can add it to our database.
The third class is mass storage. This is the most commonly used protocol. It basically mounts the camera's storage as an MS-DOS file system on your desktop. The advantage of doing this is that it's pretty simple, easy to implement, and a familiar concept for users. However, the disadvantage is that it's a really limited interface to a camera, because the only way to control it is through file system commands. So you cannot really get and set camera-specific properties.
There's a number of removable media we can support as well: CompactFlash, SmartMedia, Memory Stick, Microdrive, and the good old floppy disk, SuperDisk, and recordable CD. This is a good way to get your images into the computer if your camera is not USB equipped or your camera's protocol is currently not supported by Mac OS X. As long as you can take your storage device, find a card reader, and plug it in, and as long as you can mount your storage device on the file system, we can read it.
Although the storage device approach is easy to implement, we cannot treat the device like a camera. We cannot ask it where it stores its files, or whether it's a camera device at all, so we rely on a few hints. First of all, the device has to mount as an MS-DOS file system. Then we check: does it have a standard DCF file structure? DCF stands for Design rule for Camera File system, which is defined by the standards organization.
If we cannot find any of that, we look for whether this is one of the non-standard file structures stored in our own database, and we try to match that. So if you have a file structure that is not standard, not using an MS-DOS file system or the DCF file structure, please let us know. Even if you do use a standard file structure, by default we search the DCIM folder, which is the folder that stores still images. If you have movies and sound files stored in other folders, please let us know so we can make sure we search through your folders.
And again, unlike a PTP camera, we cannot ask the camera for its image files, sound files, and video files. So we rely on a set of standard file formats that we recognize for download. Currently, the mass storage interface supports JPEG, TIFF, and GIF for still image formats. For video it supports QuickTime and MPEG, and for audio it supports MP3. Again, if you store files whose type is not on the list, but the type is important to your camera's function, please let us know so we can include it in the download.
So if you're a camera vendor, there are three things to remember. First of all, use standard protocols. PTP or mass storage, both are excellent choices. You don't have to worry about shipping a driver or installing software, and the user experience will be great. The second one is also very important: send your prototypes to us to ensure compatibility. Send early, send often, send multiple copies. Engineers can be bribed. And the last one: if you have a custom file structure or file system, make sure you send us the file structure map so we can include it in our database.
The third part of the Image Capture Framework is the Image Capture application. This is a lightweight application that allows users to browse and download images from camera devices. You can also define a set of hot plug actions and automatic tasks to customize your workflow. We also ship a number of scripts that allow the user to do common tasks, such as building a web page.
Here's the UI for the Image Capture application. The top part is the camera: it shows you what camera device you currently have connected. You can have multiple cameras connected at the same time. The middle part defines hot plug actions and automatic tasks. And in the last part you specify where you want to download your files to.
One thing about download synchronization: camera devices are increasing in resolution at an amazing rate. You can find cameras with 3, 4, 5, even 6 megapixels on the market today. So you can imagine the file sizes are getting bigger and bigger. We try to avoid unnecessary downloads by relying on a couple of things in Mac OS X.
First of all, we use the file name. If the camera file you're trying to download has an identical file name in the download folder already, and the sizes of the two files, in the camera and in the download folder, are identical, we will skip that download.
In the future, we will use the capture date stored in the metadata, in addition to the file name, to do the synchronization. So if you're an application developer, make sure you preserve the metadata; don't strip it out. And if you're a camera vendor, because the picture serial number is part of the file name, make sure you don't recycle the numbers until you run out of all the combinations.
Hot plug actions. A hot plug action defines what you want to happen when you connect your camera. There are three possibilities here. The first one is that when you plug in the camera, we open the Image Capture application for you. You will see the same UI I showed you earlier, and the user can browse and select images to download. That's the default behavior. Or you can select automatically download all.
The advantage of doing that is you can automate the system. When you plug in the camera, no UI will come up. All the files on the camera will be transferred to the specified download folder automatically. And if you specify an automatic task, that task will get invoked, and all the downloaded files will be passed to that application for post-processing.
One thing to point out is that even if you select the default, where a UI comes up and you select which files you want to download, you can also specify the post-processing automatic task, and after downloading, it will call that automatic task as well. The third action you can specify is none. Basically, nothing happens; that's all it does in Mac OS X 10.0, the first release.
The idea here is that the user may want to use his own favorite application to handle this camera plug-in event. So we will not invoke the Image Capture application if you select this one. In the future, we want to make this more useful: you will actually be able to select an application to launch when the hot plug event happens, and we will call that application for you.
Automatic tasks, like I said before, are for post-processing. After you've downloaded your images, we call the automatic task you specified with all the downloaded images. They can be AppleScripts, applications, or aliases. Currently, you have to store these in the /Library/Image Capture/Scripts folder so we can pick them up in the UI that shows you which one to select. In the future, we're going to put this into a preference so you can select it; you won't have to put it into this location to get it picked up.
One advantage of automatic tasks is that today, not too many applications are Image Capture savvy. By that I mean they don't call the Image Capture APIs to do things like downloading pictures. However, if your application can receive an open event with all these files passed in, you are benefiting from the Image Capture Framework already, because in that scenario the user can select your application as the post-processing step.
Future directions for the Image Capture application. There are a few areas we're trying to improve. One is a metadata browser. Metadata is the per-image information: such things as exposure level, color space, creation date, size, resolution, and all this good stuff. It helps applications downstream use the metadata to do better image management and processing.
We also want to improve the browser so you can do per-image rotation and deletion. Instead of using a script, you can actually rotate an image in place and delete without downloading. Third, we're going to add a preference panel. The preference panel will allow you to select, like I said before, which application you want launched when you plug in the camera. And you can also specify which automatic tasks you want to see in the pop-up menu.
A couple of words on scanner support for Mac OS X. Image Capture will support scanner devices in the future. However, make sure you port your scanner to Mac OS X today instead of waiting for Image Capture. There are a couple of easy ways to do that. First of all, you can port your Mac OS 9 scanner application to Carbon on Mac OS X. That's probably the least effort involved in some cases.
Or you can support your scanner device using the TWAIN interface. Just yesterday we had the TWAIN announcement that the TWAIN organization is porting the TWAIN DSM to Mac OS X. So if you'd like to use the TWAIN interface, you can write a TWAIN DS for your scanner device and ship that for Mac OS X now. And we will figure out a way to provide backward compatibility once we support scanner devices in Image Capture natively.
So although I know you've probably seen the Image Capture demo many times this week, hopefully we'll show you something different. I want to introduce my colleague, John Nagy, to come up here and give you a tour of Image Capture and a couple of special things we did for you.
Okay, thanks Steve. I'm John Nagy. I'm one of the Image Capture engineers. There you go. Does this still work? This one turned on? Okay. I'm going to give you a demo of how Image Capture works from a user standpoint, and along the way show you where, as a developer, you can write applications that fit in at one of two different places. So, Image Capture is all about connecting devices, so I've got a camera here. I'm going to put it in PC connect mode and plug it in.
So you see the Image Capture app just launched by itself here. Like Steve was saying, the reason it did that is because the hot plug action is set to open the Image Capture application when the camera is plugged in. So you can think of these hot plug actions as what happens when the camera is connected. You can also think of them as how to select which files you want to download.
If you have the application open up, the user gets a UI and can download some of the files or all of them. With download all, there's no UI. The app doesn't come up. All you see is a little progress bar with icons and the file name, and the images are downloaded. Or none: nothing happens. Connect the camera, nothing happens.
So again, from a developer standpoint, you could, if you wanted to, write an app that would take the place of this Image Capture application. That's kind of the lowest level to get in there and do something with the images. You can use the API to get notifications of when the camera is connected and then bring up your own UI that does whatever you want. So that's the first place to slide in here.
So, hot plug action: you see it at the top here in the application we did. You've got a pop-up of the different devices; if you had multiple devices, you could pick which one you wanted to download from. Download folders: on Mac OS X, let me go ahead and show you, in case you don't have Mac OS X yet. But you should. In every user's home directory, there's a Pictures folder, a Music folder, and a Movies folder. That's just there by default.
Although it's called Image Capture, some devices, like Steve said, can also take movies or store MP3s. So when the Image Capture app downloads these files, it'll sort each file type into the right folder. Pictures go in the Pictures folder, movies go in the Movies folder, and MP3s in the Music folder. That's what that radio button means.
The user can also select some other folder by clicking this button and picking a different folder. So hot plug actions are which files to download, download folders are where they go, and the automatic task is what you want to do with the files after they've been downloaded. This is pretty straightforward.
It's basically any app that you can drag files onto. So you could make an alias to Photoshop and put it in this list; after the images are downloaded, they all open up in Photoshop. Or you can have something more complex that was really designed to open multiple images at the same time.
So we ship a bunch of apps that the user can choose from. These are all AppleScripts that have been saved as applications. But you could put any kind of application that accepts multiple files being dragged onto it in this list. For now, I'm going to choose the format 3 by 5. It's going to build a series of web pages with the images all scaled down and sized to fit, laid out on a page so you can print them out. I'm going to download some and pick which ones I want to print out here.
There are two views: an icon view like this, and a list view that gives you a little more information. Probably in the future we'll have more of the EXIF data over here, but for now it's just image and file size. So I'm going to pick four of these here. Actually, you know, let me open up the Pictures folder first.
Just so you can see them show up. So you get the icon, file name, progress bar. Once they finish downloading, it's going to launch this AppleScript application. The app is going to go through each image, figure out if it's wider than it is tall or taller than it is wide, and then sort it onto the various pages. It generates these web pages here. So you can open this up in Explorer. Let me turn off some of these extra bars here.
So you can think of this as being just a sheet of paper. I picked four because you can fit four 3x5s on a single sheet of paper. If I'd selected six images, there would be four on this page and then two on another page, so you just print each one. Explorer is really nice because it has a nice print preview set up where you can adjust the settings so you print at the full size. Turn off headers and footers and turn on print wide pages. Make sure the printer is turned on. Nice glossy paper.
So I'm using photo paper, so I'm going to go in here and set that so it looks nice, and hit print. So that's it. Now what's nice about Explorer is that even though on screen the images look pretty small, like 72 DPI, when it gets printed out it will use the original image data. So you're not limited to 72 DPI; it will use the full image size when it prints, so you get nice, high resolution photos.
So that's an example of an application that is designed to work with multiple files, called from the Image Capture application. And that's the second place that, as a developer, you can write apps that fit in. That's the lighter weight approach, maybe a little more user friendly, with a cohesive UI for users: write an app that deals with multiple files and have the user make an alias and put it in that Scripts folder. So I'll show you how that works.
The Scripts folder is in /Library/Image Capture/Scripts. Now, although they're called scripts, you can really think of them as applications. It doesn't have to be a script; it can be any kind of application. And you can add to this by just making an alias and putting the alias in here.
I've got this app called Add to Database that Werner wrote as an example of what you might do. So I'm going to add an alias to the Scripts folder. And now when I launch the Image Capture app, it shows up in the pop-up here. So I'm going to select that and pick some more images.
You see, when the app first launches, it gets all the file names first. Once it has all the file names, it goes through and gets all the thumbnails. The thumbnails don't require getting the whole image file and shrinking it down; there's actually an EXIF thumbnail tag inside each image, so that's why it's pretty quick to get the thumbnails. So I'm going to pick some images here.
Say these eight. So again, it's going to download. What this application does is work with FileMaker Pro: it adds each file to a FileMaker Pro database, and it extracts and puts in the database the EXIF data that's in each image. It turns out that all the cameras we've used are really great because they put this EXIF data in with the image file.
As a user, you might not even know what's in there, because a lot of apps don't really show the data. But there's all kinds of good stuff in there: whether the flash was turned on or not, the exposure time, all kinds of really useful information. So this application puts all of that in a database.
Once you have it in the database, in FileMaker you can use different layouts to see as much or as little information as you want. Here's one layout that just has the pretty basic information: thumbnail, date, what kind of camera it came from, the exposure, and whether the flash was on or not. There's another layout here that shows more information.
[Transcript missing]
So here's the page that got printed out and of course you can't see it from back there, but it looks pretty good.
So a couple of other things that are kind of nice about Mac OS X from an Image Capture standpoint: you can change the size of icons. You can make icons really big. Take this Pictures folder here: I put it in icon mode, open this up, go to View Options, and say keep arranged by name.
I can turn this icon size up pretty high, and then just in the Finder you get a nice preview of what the image looks like. That's pretty convenient. The second tip about Image Capture is that you can customize the toolbar in Mac OS X. Do Customize Toolbar, and you can add, let's see where it is, Pictures. There are buttons for your Pictures folder, your Movies folder, and your Music folder. So you can add those to the toolbar.
And then no matter where you are, by clicking Pictures, you can go right to your Pictures folder. It's kind of nice. So that was an example of two applications. One of them is an AppleScript that we ship that users can use to print out 3x5s. The second one is one that Werner did that extracts EXIF data and puts it in a FileMaker Pro database. Maybe that gives you an idea of the kind of apps you can write that deal with multiple files. So with that, I'll hand it over to Werner.
Hello, my name is Werner Neuprand and I'm one of the Image Capture engineers. By now, you should have a good idea about what Image Capture is, and you've actually seen it in action. So let's look at what you would have to do in order to use Image Capture from your application. First, I'm going to talk about the application-level APIs. That means writing an application that uses Image Capture and gets some data from a device.
Before we actually look at the APIs, just a few words about the whole idea of how to use them. We have an object-based API, which means we are dealing with objects and properties; more on that later. As Steve mentioned, we are device independent: whether it's a scanner or a camera, scanners in the future, cameras now, we support it. And we are transport independent: USB devices, FireWire devices, and whatever else you can think of.
When we talk about objects, an ICAObject is an opaque structure. You use it as a reference whenever you want to access data from the camera. Objects have types and subtypes; these basically identify the object. Objects can contain properties. Properties, the ICAProperties, are where the actual data is stored, and properties as well have types and subtypes to identify them. So the simple model is: we have one object that may contain another object, a reference to another object, and a property.
We always have one very special object at the root of the tree, and that's our device list. That's always there. Even if you don't have a camera or other device connected, you always have the device list. In this case, we are showing multiple devices: two cameras and two scanners are connected. And what will happen? Well, if we have a closer look at camera two, you will see we have three images there.
And we will have some properties for the device. For example, the name: you saw that in the Image Capture application; we had the name and an icon. So the icon, of course, is also a property. The clock could be a property. We could have a whole bunch of properties.
And it's the same thing for the images. Images have properties, like the image data. They have a thumbnail property. They have the image width and height, the image name. All of these are properties. So if you look at that tree now, you will see that in order to get to the image data, the only direct access point you have is the device list. From the device list, you go to the device. From the device, you go to the images.
And from the images, you go to the image data. So device objects you access via the device list, and image objects via the device object. Once you have the device object, you get the images, and then you iterate over all the properties of each image to get to the actual image data.
All our APIs have two parameters: they all take a parameter block and a callback, a completion proc. It's a good idea to always clear the parameter block first, so do a memset of the whole block, fill in whatever you want to pass into the call, make the call, and then, depending on whether you make it synchronous or asynchronous, extract the data.
So that's what an API looks like. It always starts with ICA, for Image Capture Architecture, and returns an OSErr. You pass in a parameter block and a completion proc. The completion proc is how you decide whether you want to make your calls synchronous or asynchronous.
If you pass in nil as the completion proc, it's a synchronous call, which means you make the call, the camera does whatever you tell it to do, it comes back, and you have the result. For asynchronous calls, you specify your completion proc, you call the Image Capture API, it immediately returns, and then, whenever the data is available from the device, it calls your completion proc.
Actually, that's what we do in the Image Capture application: all our calls are asynchronous. The nice thing about that is you get a pretty snappy application. The UI keeps updating without the application having to be multi-threaded. So it's a single-threaded application and it still feels responsive.
The completion proc has a single parameter, an ICAHeader, that you have to typecast to whatever call result you're expecting. We will have a look at that in a bit; I will show you how to write a small Image Capture application, and then we will look at it.
If you look at all our APIs, we can group them into more or less three groups. We have some basic functions, then we have APIs that deal with objects, and then of course the APIs that deal with the properties to get to the real data. Among the basic functions, we have a very important accessor, and that's get device list. Remember, that was the top of the tree; it's the only object that's always there, even if there's no camera connected.
Then we have object send message. That's very useful whenever you want to send a message to, for example, the device, like take a picture. You can also send messages to image objects, like delete this image object. And of course, as you will see whenever you look at the Image Capture application, if you launch the application with no device connected and then connect a device, it updates the UI automatically. That's done using the ICA register event notification: once you register for it, you get a notification whenever a device is connected or disconnected.
So once again, ICAGetDeviceList gives you the object for the device list, and you can send a message to a device: take a picture. Now for the object-related functions. Well, it's quite clear: to use the device object tree, you have to walk through the tree.
And that's basically what the object-related functions do. You want to find out how many children a given object has. If you look at the device list, in this case it would return two, meaning we have two devices connected. For the second device, index one (we start counting with zero), we would get back a child count of three, meaning we have three objects, three images in that case.
As I said before, each object has a type and subtype, and we can get to those using Get Object Info. So the type for the first one would be device list. Then we have device camera, device scanner, and all those. And for the images or sound files, we would have an identifier saying it's an image, it's audio, it's a movie.
If we want to get to a specific object, like in this case where we already have the device and we want to get to the second image: first of all, it's a good idea to do a get child count to find out how many images we have. And if you want to iterate over all of them, you use get nth child. Get nth child takes an index and returns an object reference. Once you have an object, you can always get to the top of the tree.
To the device list using the Get Device List call, or you can get to the top of the device-specific object by using the Get Root of Object that will always give you the camera device. Get Parent of Object might be useful if you have a directory structure, which we are currently not supporting in our UI. Currently everything is flattened out. Also the PTP and Class 1 drivers currently flatten out everything. But in theory, you could have like a directory object, and all those children can access the directory object by using the Get Parent of Object.
We have RefCons that you can set and get. And then we have a whole bunch of setters and getters for properties. Same deal: if we have an object and we want to get the number of properties, then we just do an ICA Get Property Count, which returns the number of properties. And to access a property, you can do that in two ways. One way is to iterate over all the properties: Get Nth Property, pass in 0, 1, and so on.
Or if you know exactly what you're looking for, like in this case, you could just look for the image width. And instead of now looping over all properties, you would just do a get property by type and pass in image width as the OS type that you're looking for.
You can get information about properties using Get Property Info, and you can get and set the property data. In Mac OS X 10.0, Set Property Data is not implemented, but we will do that in the future, so you will be able to modify properties and upload images or sound files. Get Property Data: this call is basically the way to get to the actual image data or thumbnail data or whatever you have stored in properties.
Then, in order to navigate around a bit again, there's the Get Root of Property, which gives you the device object, and Get Parent of Property, that's the object that contains this property.
[Transcript missing]
We have an application called Image Capture Browser. And what it does is just look for devices, objects, properties. In this case, I was doing a Get Device List and then getting the number of objects for that device list. Well, we have only one. So there's one camera connected.
So I select that camera. And then you see all the information flows in. Now, this is, again, an example of writing it in an asynchronous mode. So as you saw, just clicking on it, it really updates the UI immediately. And the data that's not available just flows in whenever it gets available. So that's the camera that John was using. So with all the images, I can, for example, get the volume label. So it's an HP PhotoSmart.
The T is missing. That's a bug in the 1.0 version: when you get the volume label, you have to pass in not the actual length but the length incremented by one in order to get the null termination. So you can get the camera icon, it shows you the number of bytes. You can click on an image and then it will display. We currently have four properties: the image data, thumbnail, file name, and file size.
[Transcript missing]
Switching over to the project builder, what I'm trying to do now is show you how easy it is to write an application that uses image capture. So we start from scratch. And we look, oh, actually, I have a small application already prepared. And you'll see it's almost from scratch.
So I switch over to the resources. So it's a small Cocoa application, and all it has is the main. And then let's go to the interface builder. And then, well, you see everything is just empty. And I could now start switching over and adding an image view to do that. And what this small application should do is just capture thumbnails from the camera.
Well, I could do that, but what I also could do is go to Preferences and actually use a palette, and that's the ICAView palette. So you'll see that. And all I do is drag that over. I save the app. I didn't change anything in here. Now let's see what happens if we run this.
It launches and it shows the thumbnail. Well, actually it does more. It has a slider, and it can now go through all the images that are on the device. And that was without writing a single line of code. Well, not quite. Another thing it can do: it allows you to drag the image. So whenever you are in Mail, you can just drag it over to Mail and insert it there. So let's quit this one and let's see where the magic really lies.
If we go to the classes, we see there's an ICAView.h and .m. Now actually, if you look at the documentation for Interface Builder and how to write Interface Builder palettes, it tells you that there are three ways to actually include the code. One is, in your project you just have the sources. That's what I do.
The other one would be you just use the precompiled files, the .o files, and add those. or create a small framework and use that. Well, in this case, since I want to show you how this view actually works, I was including the sources. So let's go to this one and have a look at the sources.
If we do awakeFromNib, that's the call actually that you will get whenever your application is launched, when the Nib file is loaded. What I'm doing here, that's a tricky thing because I want to work...
[Transcript missing]
And initializing the view is actually very simple. Because all I do is-- All I do is I call register event notification. I want to get notified when a device is connected and when a device is disconnected.
Then, well, that's where we are not quite up to the standards. The best way would actually be to get, for the thumbnail data, the size of the property -- that's something in the property info -- so you would know how big of a thumbnail you really have in that camera. Well, in this case, I know it's 128 by 128, so I'm cheating a bit.
And then, after allocating this buffer, I'm doing a rescan for devices. So let's look at the first part, Register Event Notification. So what we do: we have an ICARegisterEventNotification parameter block. And as I told you, it's a good idea to do a memset first, so clear it out. And then I set the RefCon.
In the header, to self -- well, in this case, it's not needed because, as you can see down here on the call itself, I'm doing a synchronous call. So the RefCon might be nice, but it's not needed. I pass in null as the object, which, for the event notification, means notify me on all object changes. So I'm not interested in a specific object, but in all of them. And then the notify type that got passed in, which was device added or device removed.
So just make the call and that's it. Now the rescan -- like here. So the last thing we did in our initialization was rescan for devices. Rescan for devices is also pretty straightforward. What it does: well, first, if we never got initialized -- so remember, I did not write a single line of code for that project outside the view definition. So I did not initialize the view, did not do anything. So what I have to do now is have the image view that's in the nib file connected to the slider. That's what I'm doing here.
And then, if we never got the device list before, I will get it and just keep a reference to it. So, getting the device list: again, I have a parameter block that I clear. And then I call GetDeviceList, and do that again in a synchronous way, passing null as the completion routine. So the device list will be set inside this GetDeviceList parameter block.
Now, if we did get a device list, what I want to do is find out how many devices are connected. So I just go ahead and pass in, as the object I'm interested in, the device list, and do a Get Child Count: how many children do we have? If the child count is zero -- so we have no cameras connected -- then I just call my "no device found," which dims the slider and brings up a generic icon.
Get Nth Child. Well, what I do is I want to get the first camera. This very simple app just looks at the first camera, index zero, and then tries to get the device. So if we did get the device -- well actually, yeah, this one first.