Configure player


WWDC Index does not host video files

If you have access to video files, you can configure a URL pattern to be used in a video player.

URL pattern


Use any of these variables in your URL pattern; the pattern is stored in your browser's local storage.

$id: ID of the session, e.g. wwdc2004-722
$eventId: ID of the event, e.g. wwdc2004
$eventContentId: ID of the session without the event part, e.g. 722
$eventShortId: shortened ID of the event, e.g. wwdc04
$year: year of the session, e.g. 2004
$extension: extension of the original filename, e.g. mov
$filenameAlmostEvery: filename from the "(Almost) Every..." gist: ...

WWDC04 • Session 722

QuickTime and the Motion Picture Industry

QuickTime • 1:04:39

In this session, we reveal some of the phenomenal uses of QuickTime in the motion picture industry today. From creation to production, to marketing and consumption, media experts discuss specific examples of QuickTime's contribution to world-class media projects.

Speakers: Glenn Bulycz, Anton Linecker

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good morning, everybody. I'm Glenn Bulycz on the QuickTime team. Thank you for getting up and getting in here. There's definitely a worm for these early birds. I'd also like to remind you that tomorrow morning at 9 o'clock, session 724, QuickTime in the Professional Media Workflow, is a tremendous session. I've been working on it quite a bit.

I designed today's session so that developers and engineers could get a sense of what's going on in Hollywood, in the motion picture industry, in post-production, and get a sense of sort of the boom that's happening. Clearly, personal computers have had a significant impact in print and then on-air graphics and television, but over the last six, seven, eight years, they've gone into motion picture creation quite a bit. We're really lucky to have these two speakers. Anton is at Technicolor, and we couldn't decide whether his title was Video Guy or Director of Technical Services there. And Scott Simmons runs a company down in Los Angeles as well that's doing some incredible work.

So, we'll hold questions to the end and we'll start immediately so we can get through this material, and I hope you enjoy it. Anton? Thank you. My name is Anton Linecker. I'm the Technical Operations Supervisor at Technicolor Creative Services in Hollywood. What we're going to talk about today is how Final Cut Pro and QuickTime are revolutionizing the HD workflow in Hollywood.

Pretty much with this seminar, we'll go over how high definition works, what the workflow is for it, and how to work with a 23.98 project. Basically, in the motion picture industry, when we talk about high definition we're talking about 24p, or as some people call it, 23.98. So that's what we're going to be dealing with in this particular seminar.

We're going to be talking about how to offline cut your show and then go into an online in Final Cut Pro and how to think differently about high definition. What you're not going to learn in this session is how to code anything. I'm a video guy. I'm sorry.

So basically, a little bit about myself. I give seminars at Technicolor to directors, producers, cinematographers, editors, talking about how to shoot and edit high-definition footage for eventual film-out purposes or for finishing in high-definition. And in the process of doing these seminars, inevitably, I talk about the traditional HD workflow.

And with the traditional HD workflow, people have been doing HD for a while, and they've been limited to a certain extent by the technologies that have been available. High-definition equipment is very expensive. People used to use Avids almost exclusively, and they set up their whole workflow based on the film model.

So that means having standard-definition down-conversion tapes where you have your high-definition media. Because you don't have a D5 deck or an HD deck in your edit suite, you have a facility down-convert it for you to a standard-definition NTSC cassette. And in that process, you take 23.98 media and you have to somehow get it to 29.97 NTSC. And to do that, you actually introduce something called 3:2 pulldown, where you take the fields of the video and repeat them in a 3:2 pattern so that you add the six frames that are missing between that 24 and 30.

I'm going to use 24 and 30 interchangeably here with 23.98 and 29.97; they're in essence the same thing. So you have this 3:2 pulldown that's added to your tapes for the standard-definition workflow. And then you basically take those standard-definition tapes and either work in 29.97, or remove the 3:2 and work in a 23.98 workflow.
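
As a rough illustration of that cadence, here is a minimal sketch; the A/B/C/D frame labels and the common 2:3:2:3 field pattern are the conventional textbook illustration, not something specified in this session:

    # Minimal sketch of 3:2 (2:3) pulldown: map 4 progressive 24p frames
    # onto 10 interlaced fields (5 frames of 30/29.97 video).
    def pulldown_32(frames):
        """frames: list of 24p frame labels, length divisible by 4."""
        fields = []
        pattern = [2, 3, 2, 3]  # classic cadence: A gets 2 fields, B gets 3, ...
        for frame, repeats in zip(frames, pattern * (len(frames) // 4)):
            fields.extend([frame] * repeats)
        # Pair the fields back up into interlaced video frames.
        return [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

    print(pulldown_32(["A", "B", "C", "D"]))
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]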

And that's a couple of steps in a workflow right there. You have EDLs, edit decision lists. We'll talk a little bit more about that shortly. But you have to somehow get the information from your offline cut, what you've basically been editing for four or five months, until your feature is finished, to the online room. The online room is where you do your mastering, where you take that standard-definition project and make it into a high-definition finished master that you can air or broadcast, or you can take out to film, whatever your project entails.

And then another thing about standard-definition down-conversions is that you have a change in aspect ratio. Your high-definition tapes are widescreen; they're 16 by 9. Yet when you do a down-conversion to a standard-definition tape, you're working in 4 by 3, normal television size.

And you can work it a couple of ways. You can do a squeezed standard-definition tape, where if you look at it on a normal monitor, people are tall and skinny, and then they get unsqueezed to do a widescreen. Or you have a center extraction, where you only take the center information and cut off the sides. Or you can do a letterbox extraction, where you shrink everything down and you have the black on the top and the bottom. These particular methods have ramifications for when you go to your mastering, which we'll talk about shortly.

So, in talking and giving these seminars and explaining the traditional workflow for the 240th time, something occurred to me: the traditional HD workflow is hard, and it's unnecessarily so. The entire idea that you work at a different timecode rate than what your master tapes are, that you have to change this aspect ratio, all these particular parts of the workflow make it unnecessarily hard. And so I end up having people ask me, why do we have to work with these 29.97 cassettes when our masters are 23.98? Why are we dealing with 4x3 when it's actually 16x9 that we shot?

You then have the EDLs, the Edit Decision List, that's basically 30-year-old technology. Ironically, it's still something we use today, and a lot of facilities depend on it, because it has been that kind of Rosetta Stone that we use to talk between editing products. But an EDL is just a text file.

It's a text file that has timecode values for your source tapes, timecode values for your master tape, and little descriptions, but it's all text. They're pretty limiting because they're text files. Depending on the format that you have, you might have two levels of video and four levels of audio, for example; some have two levels of video and two levels of audio.

Titles and effects, transitions, they're only partially supported. You get a little note saying that you have a dissolve there of a certain type, but you don't know if you have an iris dissolve or a fade up or something like that. That's not actually in there. And motion effects, where you take your video and blow it up slightly or move it around, are not supported at all. So, if you're working on modern equipment, modern edit systems, you have a lot more control.
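
To make the point that an EDL really is just text, here is a minimal sketch that splits one hypothetical CMX 3600-style event line into its fields; the reel name and timecodes are invented purely for illustration:

    # A hypothetical CMX 3600-style EDL event: event number, source reel,
    # track, transition, then source in/out and record in/out timecodes.
    line = "001  TAPE01  V  C  01:02:03:00 01:02:08:00 01:00:00:00 01:00:05:00"

    def parse_edl_event(line):
        parts = line.split()
        return {
            "event": parts[0],
            "reel": parts[1],
            "track": parts[2],       # V, A, A2, ...
            "transition": parts[3],  # C = cut, D = dissolve, ...
            "src_in": parts[4], "src_out": parts[5],
            "rec_in": parts[6], "rec_out": parts[7],
        }

    print(parse_edl_event(line))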

You take a Final Cut Pro project, for example, you have 99 video tracks. You have 99 audio tracks. If you take a Final Cut Pro project from one system and take it into another, all of the special effects, the transitions, the titles, speed changes, all of those things come across.

And so, if you're working in a Final Cut Pro environment and you want to make an EDL, you actually have to hold yourself back and edit in a way that the EDL will work nicely, that it will play nice with whatever system you end up going to for your online.

So I kind of came to this little conclusion that there should be a better way to do high definition. And so, with the help of the QuickTime team, the Final Cut Pro team, and hardware support from AJA and Pinnacle, we've come up with a different way to do high definition. And I think it's actually a better workflow. And better for me means simple, because I don't want it to be a complicated thing.

So this particular workflow is used now at Warner Brothers and 20th Century Fox, and where HD is really prevalent is in the independent community; we have several independent filmmakers that are using it, and Showtime is going to be doing three shows, starting in about two weeks, for their next season using this particular system. So before I go too much into what the actual system is, I'll tell you a little bit about the basics of HD, and why this change, this paradigm shift, is so important.

With HD, pretty much everybody knows the reason you want to work with HD is that you have great visual quality. If you have an NTSC signal, you're working with 720x486 lines of resolution if you're talking DigiBeta, or 720x480 if you're DV. From that, you can go up to a high-definition signal where you're 1920x1080.

So you have drastically more pixels and lines of resolution to work with, and the picture is so much nicer. What you also have is that NTSC is an interlaced format. It draws a single frame with two fields of video, and that's clocked to our electrical system; we've got 60 hertz electrical, and so we have 60 fields that come together to make the 30 frames of video, or 29.97. That's kind of an old thing. So you have these interlaced fields that come together.

And so what ends up happening is that you have the first field draw half of the information, and then the second field comes in like a zipper and completes the picture for you. With high definition, if you're talking about 24p, it's progressive, and you have a true whole image from top to bottom drawn at once.

So it's more filmic in that regard, and that's the whole appeal of working in 24p HD. That quality comes at a price, and the price is data rate. Depending on what kind of format you're doing, you're talking anywhere from 90 megabytes a second to 160 megabytes a second.

But that's a lot of data to get on and off your drives. And so this is not the type of thing that you can do off of FireWire. It's not the type of thing that you can do off of a single IDE drive. So at our facilities, to get a speed like this, we use an Xserve RAID maxed out: 14 drives RAIDed together in a RAID 50 to get the data rate fast enough to handle HD.

And not only that: since the data rate is so high, you use up a lot of hard drive space very quickly. So take a look at this difference right here. Here you have DV (it's rounded up), which is a compressed SD signal, the native format as it's recorded on tape. Then you have SD 10-bit: if you take a DigiBeta tape and you do 10-bit uncompressed, you're working at 27 megabytes a second. So when you do the jump to HD, you see uncompressed HD actually jumps quite a bit. You have 126 megabytes a second for 1080p.

And then for 1080i, 29.97, you're talking 160 megabytes a second. So you're chewing through hard drive space, and uncompressed HD is not really very useful or practical for offline editing. Because if you're a traditional feature motion picture in Hollywood, you'll shoot about 60 or 70 hours of footage, and at uncompressed HD data rates you're talking somewhere on the order of 60 to 70 terabytes of storage that you need for five or six months while your editor and your director are whittling down that 60 hours to an hour and a half. That's not a practical thing to keep track of, and it's very expensive, because you need to buy Xserve RAIDs or large SCSI arrays. So working in uncompressed HD is not practical at all.
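
The back-of-the-envelope arithmetic behind those figures, as a sketch: it assumes 10-bit 4:2:2 sampling at roughly 20 bits per pixel, which is about what the quoted 126 and 160 MB/s correspond to; the exact terabyte total depends on the format and overhead assumed, but either way it lands in the tens of terabytes:

    # Approximate uncompressed data rates and storage, assuming 10-bit 4:2:2
    # sampling (about 20 bits per pixel on average).
    def mb_per_second(width, height, fps, bits_per_pixel=20):
        return width * height * bits_per_pixel / 8 * fps / 1e6

    hd_24p  = mb_per_second(1920, 1080, 24)      # ~124 MB/s (quoted as ~126)
    hd_2997 = mb_per_second(1920, 1080, 29.97)   # ~155 MB/s (quoted as ~160)

    # Storage for a feature's worth of dailies at the 24p rate:
    hours = 65
    terabytes = hd_24p * 3600 * hours / 1e6
    print(round(hd_24p), round(hd_2997), round(terabytes, 1))  # 124 155 29.1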

So because of that, the traditional workflow had this down-conversion mentality. Since working in uncompressed HD isn't practical, and there was no other solution available before, down conversion was the only way to go. But you have all the hang-ups that come along with that standard-definition workflow. At Technicolor, we were thinking that we wanted to stay digital.

We didn't want to go down to a Beta SP. We didn't want to go down to a DVCam. We wanted to give them QuickTime movies delivered on FireWire. And those QuickTime movies are direct descendants from the HD. We'll take D5HD, HDCam, and bring it in and transcode it on the fly to an offline format that the company requests.

And that offline format can be Photo JPEG, which is very good quality at a very low data rate, very nice for offline editing, particularly if you have a lot of hours of footage. We can go to DV, and we'll do a 16 by 9 DV, or DV50. And since about two or three months ago, we can go to DVC Pro HD with the new Final Cut Pro HD version. We'll talk a little bit more about that and show why that's an interesting workflow.

So with the Final Cut Pro HD media, we are giving high quality QuickTime movies. The time code that you see in your Final Cut Pro system corresponds exactly to the master tapes. And so there's no difference between time codes that you're dealing with. And so when you go do your mastering of your movie, all the time codes line up and it simplifies things there. You don't have to change the lists, go through any kind of conversion processes.

Like in Cinema Tools right now, we have something where you can go from a 29.97 list to a 23.98 list; you don't need to do any of that. The timecode comes across exactly. The data rate is low, and it can be as low as 1/50th of the original data rate.

That's in particular with the Photo JPEG. And it preserves the 16 by 9 aspect ratio. What's nice about that is that you see everything that you have on the HD tape. You can see if there's a light stand in the shot on this side, whereas if you do a 4x3 extraction, you might have missed that.

You can time things so that they happen properly; when someone walks out of frame, you know exactly when they leave, whereas if you have a 4x3 center extraction, they would seem to leave the frame far earlier. So, the first show that we did this with was a Warner Brothers independent movie called Around the Bend, which is coming out, I believe, in August. It was shot on 35mm 4-perf negative. We telecined it, meaning transferred it to high definition, to HD D5. We picked D5 because it's a 10-bit format.

Then we captured it on the fly from the D5 and we went to half HD res. And so the resolution that they were working at was greater than standard definition. It was 960x540. It was perfect 16x9. The average data rate was 2 megabytes a second. So we went from 126 megabytes to 2.

And that two megabytes is less than DV, by the way. It's not quite half, but it's close. They edited the whole show on Final Cut Pro 4.1, so it is possible to use this workflow in 4.1. Right now we're on 4.5, or Final Cut Pro HD; this was a couple of months back.

And to view their footage, they had multiple options. When they were editing in Albuquerque, they had a DVI projector hooked up to their edit system. And so when they looked at their footage, they actually went through the DVI and projected it onto a 20-foot screen. And so they actually edited the movie, and it looked like they were in a theater, which is very nice.

They also were able to go out to their 23-inch cinema display, which was quite nice, high resolution. And the look is far better than what people traditionally have worked with in an offline situation. Traditionally, offline media looks very blocky, very low res. This was far better. They could see focus. They had very good color saturation.

So, with that particular project, we've now finished it. By the way, we also gave them a CinemaTools database, which has all the key code information of their film, so that we're able to do a key code cut list. But when we went online for their preview screenings, they did HD preview screenings.

I don't know if many of you are familiar with Los Angeles. There's a fantastic theater at the Grove in the Fairfax district of Los Angeles. They had preview screenings in HD. To do the preview screenings, the editors gave us a FireWire hard drive. On that FireWire hard drive, we had a project file, which had each of their reels.

They were media managed so that all of the excess media was clipped off, and so when we go in to digitize at high definition, we're only capturing the stuff that we need. The timecode of that media was exactly as it was on the HD tapes. We edited this on a Final Cut Pro HD system with an AJA Kona card at 10-bit uncompressed, so it would take in from the D5s exactly the clips that it needed.

It then also took in all of the transitions, filters, color effects, speed changes; even titles came in perfectly, and the titles were placed in exactly the proper space on the screen. We also had them give us a QuickTime movie of each reel. That QuickTime movie had a timecode burn of the original D5 burned into it, and we then did a picture-in-picture while we were editing it.

So you see right here, we have one screenshot from the movie where you can see that we have exactly the right spot visually. And if anything should go wrong, which nothing did by the way, but if there was a discrepancy, for example, you know, sometimes we have productions where they shoot and they have two tape ones and they didn't name it differently. And so, maybe we put in the wrong tape one and digitized from that. We would have seen a difference in the top right-hand corner.

And then we would have been able to see, okay, the timecode that we captured was right, but perhaps the tape was wrong, and we would be able to go in and address the problem. So, if we switch to number two, I'll show a little example of having the picture-in-picture. And we can have it playing so you can see each and every edit comes in properly. This is some San Francisco footage that the QuickTime people gave me.

And so this is a really nice way to just go through and know that your edit is proper. You can also just step through each shot and go to the edit points, and you can go back and check. So for going through and making sure that each and every edit is correct, it's a very nice way to work.

Previously, what you would have is you would have a chase cassette. And you would have a beta SP, very low res, that you would run in parallel and watch on two screens to see if everything matched up. This was nice because it let us go in and we could see on the edit machine right away if we had any problems. I'll just advance a little bit here. OK, we can go back to slides.

So, in terms of a workflow, it was very streamlined. They hardly ever had to make tapes, and when they needed to make a tape, it was quite easy. They bought no extra hardware, by the way; they just used a dual G5 and a Cinema Display. They did have an Xserve RAID, but they had no video cards, nothing.

So, in order to make tapes, what they did was take the Photo JPEG sequence and dump it into a DV sequence. It resized automatically as a letterbox image. They did a render, which was faster than real time. And with Final Cut Pro 4, when you play out a 23.98 sequence, it adds the 3:2 back in as necessary to go out to tape when you're on DV. So they were able to make cassettes for their sound editors, for their negative cutters, all of that quite easily.

So, in terms of this workflow, how could we make it better? Well, one thing that's really nice and makes it better is with the introduction of Final Cut Pro HD, we have access now to DVC Pro HD. And DVC Pro HD is really revolutionary in terms of how we work with HD. It's the same impact as DV had four or five years ago, six years ago.

Because before DV was introduced, editing video was really hard, and you would always have kind of so-so quality. You would have a little capture card with RCA inputs, you'd capture it in, you'd get your little QuickTime movie, and it would be really hard to edit. And it was low-res and all that.

Then DV came along, you had FireWire, you were able to plug that into your computer, you didn't need a video card, anything like that. And you could edit in either iMovie, Final Cut Pro, Final Cut Express. And that changed a lot of things. You had now something that looked pretty much like Beta SP quality, which had been a professional video product. And you had easy accessibility to it.

You could have it on your internal IDE drives and all that.

With DVC Pro HD, you have the same thing. DVC Pro HD allows you to edit in true HD. This is not half-resolution HD, it's true HD. But you don't have to get hit with the uncompressed HD data rate. So basically, with 24p DVC Pro HD, it runs at 5.8 megabytes a second.

As compared to the uncompressed, 10-bit uncompressed, which runs at 126. So you see a massive data rate difference between the uncompressed and the DVC Pro HD. The reason for that is that DVC Pro HD is the same format the DVC Pro HD camera writes to tape.

And the heads on the DVC Pro HD camera can't write 126 megabytes a second, so they had to find a way to kind of shoehorn it down to a more manageable data rate. So in its normal state, you have this DV100, where it's 100 megabits; that's with a small b, not a big B.

And of that, we then only take 24 frames. It's set for 60; we take 24, so the other 36 frames get lopped off and we save that data rate. So if you take a look at the chart of the differences between the HD formats, DVC Pro HD at 24p is only slightly more than DV.
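
Roughly how that 5.8 MB/s figure falls out, as a sketch; the exact on-disk rate also depends on audio and container overhead, which are not modeled here:

    # DV100 / DVC Pro HD: nominal 100 megabits per second of video essence,
    # of which a 24p recording keeps only 24 of every 60 frames.
    video_mbit_per_s = 100                      # small b: megabits
    full_rate_MB_s = video_mbit_per_s / 8       # 12.5 MB/s at the full 60-frame rate
    rate_24p_MB_s = full_rate_MB_s * 24 / 60    # ~5 MB/s of video essence

    print(full_rate_MB_s, rate_24p_MB_s)        # 12.5 5.0
    # Audio tracks and container overhead push the practical figure toward
    # the ~5.8 MB/s quoted in the session.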

Now, to traditionalists, having an offline format that's 5.8 megabytes a second is still high. Granted, you are still going to use quite a bit of hard drive space. But the advantages are so great. You're now editing in complete HD res. This is not some fake HD; it's real HD. You have real-time effects. You have HD playback through a Cinema Display or DVI projector. Okay, if we can go back to the computer...

I got a little bit of black before this, but... So what I was showing before was actually the DVC Pro HD. And with this format, I'm actually playing off of a FireWire drive, a little LaCie bus-powered drive. So it really changes the rules for HD. You know, I didn't pull out an Xserve RAID; it's just a tiny little FireWire drive. And we can go back to the slides again.

And if you have that ability, you can go out with a DVI projector, project it on a 20-foot screen, edit it, perfect HD. You can lay back to the new Panasonic 1200 deck. It has FireWire in, and so you don't need any special cards, video cards or anything. You literally plug a FireWire 400 into it, and you can lay back your tape with the same efficiency as you do DV currently.

You can use Compressor to make standard DVDs, which is very nice; they're pristine in quality. And the minimum system requirement is one PowerBook or iBook with a gigahertz processor and a gig of RAM. That's it. I was editing HD flying back from NAB at 10,000 feet on Southwest on battery power. That's really cool. And so that's the future of HD editing now. So that's it for me.

Can everybody hear me? I guess so. Good morning. I'm Scott Simmons. I'm the visual effects supervisor for Live Wire Productions. We do feature films and large format films as well. I'm just gonna poll the audience real quick. Who has heard of the term digital intermediate? Okay, there's a few of you. Okay.

This is Roar: Lions of the Kalahari, which is an IMAX feature that we worked on last year. It's been released this year and it's doing very well. I'll give you a little bit of background. It's a giant-screen film produced by National Geographic with acclaimed director Tim Liversedge. Tim Liversedge is a very well-known wildlife documentary filmmaker.

It was shot on location in Botswana; that's where the Kalahari is, or part of it. Nine months of post-production, and that's not all the DI process; that's also music and the final part of it. And the digital work, the DI work, was done on desktop systems. These weren't put through very expensive, proprietary black boxes.

So what is a digital intermediate? And as a producer asked me, why isn't it called digital advanced? Well, it's an intermediate step. It's a middle step before you go to a release print. So that means that every frame of the picture has been touched by the computer. That means things that used to be done optically or in the lab, such as dissolves or color timing and whatnot, are all being done on the computer now.

Basically, what you end up with is a new color negative, and that gets output to an answer print. Music gets added, obviously. Previous IMAX pictures, and everybody's heard of IMAX movies, I'm sure, were conversions. They weren't done from frame one to the end credits as a digital intermediate; they're conversions of other pictures that are finished ahead of time. So this is pretty cool, because this is the first time that something this large has been put through this kind of pipeline.

So let's talk about the workflow a little bit. All the film was scanned by Imagica. Imagica is a large-format film service bureau. Each frame was 4096 by 3112 resolution; we just call it 4K. Each frame is a 10-bit Cineon log file. For those of you that don't know what Cineon is, Cineon is a format that Kodak developed that basically describes the gamut, the luminance and color range, of a piece of motion picture negative.
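
As a sanity check on the file sizes mentioned below, a sketch assuming the usual Cineon packing of three 10-bit channels into a 32-bit word, and a scan height of 3112 lines (the height implied by the roughly 50 MB per frame quoted in a moment):

    # 10-bit Cineon: R, G, B packed three-to-a-32-bit-word, i.e. 4 bytes per pixel.
    width, height = 4096, 3112      # assumed scan dimensions for the 4K frames
    bytes_per_pixel = 4
    frame_bytes = width * height * bytes_per_pixel
    print(frame_bytes / 1e6)        # ~51 MB per frame, matching the ~50 MB figure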

[Transcript missing]

So you can imagine by the end, I think for a portion of the show, there was about three terabytes worth of data.

We get that data in, we database the scans so we know where they are, where they came from, and what the lengths of the shots are, which is really important. And now here's the interesting part. We took those really huge files (obviously you've got a throughput issue when each frame is 50 megabytes), and what we did is convert them to 8-bit log QuickTime.

So there is a logarithmic conversion of the 10-bit Cineons to the QuickTime file format. Obviously the QuickTime file format gives us the ability to scrub through images; we can output proxies and get things reviewed fairly quickly, as opposed to a sequence of Cineon frames, which you can't really scrub through very easily.
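
A minimal sketch of the kind of requantization involved; the black and white points of 95 and 685 are the conventional Cineon reference code values, used here as an assumption, and the actual conversion they used is not described in the session:

    # Map 10-bit Cineon log code values (0-1023) down to 8 bits (0-255),
    # keeping the useful part of the range. 95 and 685 are the conventional
    # Cineon black and white reference points; values outside are clipped.
    def cineon10_to_8bit(code, black=95, white=685):
        code = min(max(code, black), white)
        return round((code - black) / (white - black) * 255)

    print([cineon10_to_8bit(v) for v in (95, 400, 685, 1000)])  # [0, 132, 255, 255]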

Then we had to look at the frames and clean them and de-grain them; we'll get into that a little bit more. And then we had to examine the shots to see if the pipeline that we were putting them through was actually going to benefit the shots. And in all but four cases, we were using QuickTime movies. So imagine that: we're going from a file format, the QuickTime movie standard, that's traditionally used in video or even high def, to something that's much, much bigger, something that's projected on a 70 by 100 foot screen without artifacts.

And we're able to do that with QuickTime because we've used it for other feature films. We know it; we're used to working with it. We know how to get the best results out of it. So the QuickTime movie is basically a very good sub-sample of those Cineon files. It's the best of those 10 bits, now in 8 bits.

After examining it, then we had to do color grading and effects; we'll get into that a little bit more. We'd render to FireWire drives, and the FireWire drives would get shuttled off by hand to CFI, which is the film recording lab, and it's now part of Technicolor. And we'd get those back and project them in dailies. The dailies would go to an IMAX theater. You've got to see an IMAX movie in an IMAX theater to see what it looks like.

And before we really got into doing takes, or final shots, we did some tests. So we would do Cineon versus QuickTime tests, and we would compare what we were doing to the motion picture to the original scans. And we knew right away we were on the right track. Everything was looking really great.

So, there are a lot of challenges from doing a giant screen picture. Obviously, you're dealing with throughput, you're dealing with file sizes, you're dealing with labs, possibly, that aren't used to working with QuickTime. We have to do color grading, and color grading is more than, you know, the tint knob on your television set.

Color grading is fixing the shots, if the shots need to be fixed, and then creating a look for those shots that makes them look warm or cool, or has something to do with telling the story, time-of-day kinds of things. Then we added a lot of visual effects, believe it or not, and we'll get into the technology too.

So, a challenge on this particular motion picture is that the director didn't know it was going to be an IMAX feature. He thought it might be an IMAX feature. And so he started shooting 70mm, which is the traditional format for IMAX, and then realized, man, these cameras are heavy, these cameras are loud, and I can't go within 20 feet of the lions

with a 70 millimeter camera because it just, they don't like it. They will move away. So he ended up shooting, oh, 75, 80% of the motion picture in 35 millimeter. So there's a problem right there. How do we get the 35 millimeter to look like the 70 millimeter? Because it's going to end up in IMAX. He also shot with different film stocks. Don't ask me what they were. He doesn't know and we couldn't figure it out.

So we had to kind of balance things and make things come to a center. And then we have to conform a motion picture that was not necessarily shot for an IMAX screen to the IMAX specs. And then because of the filming conditions, a lot of what he did was he'd be shooting in the middle of the day in the summer. Summer in Kalahari can get up to 150 degrees. and he'd have his film sitting in a cooler, basically.

And he just grabbed whatever he could get. And if he was running out of film, the PA would go get some more film, who knows what it was, and stick it in there. So he's not shooting in the best of conditions. So there's a few cases where we had to do some restoration work because the film had been damaged or the three layers of the emulsion, the red, green, and blue, were actually starting to separate. Fortunately, that didn't happen too much. So let's get into the challenges. 35 millimeter film is four perforations tall and runs up and down through the projector. And let's say that's a 35 millimeter still.

"Seventy. That's a big difference. In fact, I like that a lot. Let's go back. That's 16 times the area of 35 millimeter. That is a 50 megabyte frame versus a 5 megabyte frame. Four to five is usually what 2K is. And we have to make... Look like that.

That's a big challenge. So how do we do something like that? First of all, we have to reduce the 35mm grain. We have to identify what's grain versus texture. We don't want to get rid of grain and all of a sudden the lion's coats or the antelope's coats just disappear.

They don't want it to look like solid browns. You still have to have some texture. Then you also have to preserve edge detail, which is the real trick. Because if you can determine what the edges are, then you're preserving that resolution of the image versus noise or grain in this example.
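
One very simplified way to picture that grain-versus-edge decision, as a sketch rather than their actual pipeline: smooth a pixel only where the local contrast is small, so real edges such as antlers are left alone:

    # Toy edge-preserving smoother on a 2D list of luminance values:
    # average a pixel with its neighbors only where the local contrast is low,
    # so fine grain is flattened but real edges survive.
    def degrain(img, threshold=8):
        h, w = len(img), len(img[0])
        out = [row[:] for row in img]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                window = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1], img[y][x]]
                if max(window) - min(window) < threshold:   # looks like grain, not an edge
                    out[y][x] = sum(window) // 5
        return out

    flat = [[100] * 6 for _ in range(6)]
    flat[3][3] = 104                                  # a speck of grain
    flat[1][1] = 200                                  # a hard edge pixel
    print(degrain(flat)[3][3], degrain(flat)[1][1])   # 100 200: grain averaged, edge kept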

So here's an image cut in very, very close of one of the shots. You can see the grain. You can see the red, green, blue. And believe it or not, this has been color corrected as a first step. And what we had to do is, you know, determine what the antlers are, the antelope, clear up that sky because we've got a solid sky. I mean, you're really going to see the grain. And this is what we ended up with.

It's a much cleaner picture. You can see edge detail. You can see that there are antelope there. This one actually has been color corrected and color graded. The difference with color correction is, in this example, we got film that was sort of blue, because he would shoot without a color correction filter on the camera.

We have to bring it to gray. We have to create what's called a gray balance look. That's important because now we show it to the director and the producers and say, "Okay, this is what you shot. This is what you saw when you were on location." They say, "Yes, it is." Then you color grade it as a second step. Color grading it is basically, "Well, we want it to help you tell the story." It doesn't look like Mutual of Omaha's Wild Kingdom necessarily. We want it to look like a feature film so everything is sweetened a little bit.
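
A minimal sketch of the gray-balance step, with assumed arithmetic rather than their actual tool: scale each channel so that a sampled patch which should be neutral comes out neutral:

    # Gray balance: given the average R, G, B of something that should be neutral
    # (a gray card, concrete, etc.), compute per-channel gains that neutralize it.
    def gray_balance_gains(patch_rgb):
        r, g, b = patch_rgb
        target = (r + g + b) / 3.0
        return (target / r, target / g, target / b)

    def apply_gains(pixel, gains):
        return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

    gains = gray_balance_gains((70, 110, 130))   # a bluish patch that should be gray
    print(apply_gains((70, 110, 130), gains))    # -> (103, 103, 103), i.e. neutral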

Conforming is another issue. This is what's called a sacred master. A sacred master basically says, "This is the area you've got to play with in your film." So if you look at this, this is a field chart. There's two things. That area in the red is credit safe.

So if you imagine going to a movie, like next door, and you watch the credits, they're going up and down or dissolving, they're filling the frame. You can't do that with an IMAX picture. Because where that plus sign is, is the center of viewing. It's not the middle; it's the center of viewing.

So the center of viewing in an IMAX picture is about a third of the way up. So that's where your eyes are looking. And you've got all this extra space over your head that just makes you feel like you're there. It just sort of surrounds you. But where the audience is looking, it's about a third of the way up. So credits obviously can't go all the way to the top. So we're looking for that sweet spot in the center. Like I said before, he's not necessarily shooting for that. He's not necessarily framing all of the lions or whatever the action is for that sweet spot.

So, the center of interest in an IMAX or large-format picture is not necessarily the center of the frame. So what do we do? Here's a couple of examples: on the left we have the guinea hens, and their focus spot is actually about halfway up. And on this shot, this helicopter shot, obviously the center of interest is way up at the top. So what do we do? We switch to demo.

So for the guineas, we have to lower the shot in the frame, and here's where the effects come in. We create a digital extension. We have to do that a lot. Same thing for the helicopter shot. In this case, we're completely replacing the sky. Everything has to be tracked. So it becomes not just a reframing issue or a composition issue, it becomes a full-blown effect.

Restoration, we have a lot of dirt and scratches. I think you can see a little bit up there. I'll go ahead and play the movie. This is a pretty good example. Again, this is just a little part of it, a little part of that image. This is much, much bigger. It's probably eight times larger than the part that I'm showing you right now.

Like I said, you have dust and scratches. You've got the emulsion starting to separate. We have to correct all of that. This is part of the digital intermediate process. We're not just coloring it, we're not just correcting it; we're making the frames as perfect as possible, because that's what you're going to see.

It's not happening in any other place. It's happening in the computers and it's happening to fix these kinds of things. Just for fun, when we had the wrap party for the crew, I had a little contest to see who could figure out how many pieces of dirt got cleaned.

We had a little gift bag of stuff like that. The software actually keeps track of that, because I'm sort of masochistic; I wanted to know how many pieces of dirt we cleaned up. We cleaned up 583,000 pieces of dirt. Some of that, let's say 100,000, was removed procedurally. Procedurally basically means we had a computer analyze the image and sort of fix it. It's not that simple, but that's what we did. The rest of it had to be done by hand.

Over 400,000 pieces of dirt had to be done by hand. And here's the reason for it. Why isn't there software? Yeah, there's some software, but it doesn't work like the human eye does. It'll make mistakes. And if you make a mistake at 4K, it's pretty obvious that your mistakes are getting magnified pretty largely. And there's also dust in the shot. There's also heat waves, there's mirages, there's distortions going on.

So you have to use your eye to get rid of that stuff. And that was the major production part for us. Can we go back to the other? Color grading. Okay. Part of color grading, I put stabilizing in there because I'm not sure where else to put it.

He's shooting on location in difficult situations. Not every time is the camera locked down, and also you've got heat distortion going on in the image, so the image is moving around a lot. If he's flying in a helicopter, the camera's strapped underneath it with bungee cords. Sometimes the bungee cords aren't tight enough and you get some jitter going on in the frame. So that has to be stabilized, because you want the shots to look as perfect as possible. Color correcting, like I said, we're balancing from gray first and then we're creating a look.

Okay, here's a good example. This is a frame from the original scan. It's a little hard to see, but it is dark, it is green, it is muddy, and this is what we did to it. So we gray balanced it, created a color look for it, and now you can tell it's a lioness and her cub. We also output to film quite a lot.

We had National Geographic come in to see what we were doing, and they saw the before and afters and just gasped. They just thought, you can't salvage this. There's no way you could do it. Well, we did. The reason we did is we started off with the 10-bit Cineons, full range of the film negative, and we selected the best part of that to work in QuickTime.

We also created color boards, or color panels. This is basically what you have to do when you're working in a sequence. You have to create a color look that will work from shot to shot. You can't work on a single shot and say it looks good and the client says yeah and you're signed off on it, it doesn't work like that.

Every shot has to cut to one another. In this case, we're trying to preserve the color of the animals more so than the sky. The sky is pretty close, but if you turn around and you look at the sky behind you, it's just going to be a different color. What we have to nail down is what the texture and color of the animals looks like.

Then on top of that, we do visual effects, shot enhancements. You saw a little bit of that before where you just have to create extensions or whatnot. And we also did some all CG-ish shots. The first shot of the movie, which was a little disconcerting to us, was a complete CG shot of a star field. Star fields are really hard to do. I mean, you have the landscape and the stars above it, and we're tilting down to the Southern Cross.

Stars are hard to do. It's really dark. If you're going to see compression at all, you would see it in something like that. But we worked it out so you don't see that. We did a lot of CG maps, and we have a cosmic zoom. Let's go back to demo, please.

Here's another reason why digital media is great. This is an entire sequence, but it's one shot dissolving into the next. So all the color grading has to work from one shot to the next. And here's our cosmic zoom. Again, center of interest is down towards the middle.

This looks amazing on a 100-foot screen. It feels like you're flying in for a landing. And speaking of large, we had to get satellite imagery for that part of the sequence, and we ended up with a texture map that was 60,000 pixels wide. That's the Okavango. We're sort of orienting ourselves. Look at the nice sky. Look at that nice sky. And that sky is all digital. We do a matte dissolve into another portion of the satellite image. And we fly past the salt pans and land right at the watering hole. Go back to the slides, please.

Here's another special effect. If you notice, the lion is facing right. Don't know why. The lion's supposed to be facing left. The lion wasn't cooperating. Flop the image; life's going to be good. But this is right after the opening shot of the stars. So this should be dawn, but it's not dawn. It should be dawn. So let's combine it with something like that. Let's put them both together. Let's go back to demo.

That's a cool shot. I like that one. So completely, again, this is part of color grading. It's an effect, but it's also part of color grading. It's also part of the digital intermediate process. You've got a director that says, you know, I want a shot that I couldn't get, and so we created it. Can I just go back to slides, please? So let's talk about technology a little bit. Some surprising things we learned on this show in particular. We've done IMAX feature work before. We've even done stereo work before. So we weren't really surprised that QuickTime could hold up.

I mean, we've done a slit-scan stereographic, and if you're going to see compression, you're going to see the 8-by-8 grid that you see in typical QuickTime compression. You're going to see it in something like that, or you're going to see it in something like the star field shot. But we were surprised by some simple things. The Apple 17-inch Studio Displays were the best monitors we've ever used.

It blew us away. They're better than the Cinema Displays we're using; don't know why. But what we saw on those little 17-inch displays, we saw on the IMAX screen. We're looking at a 17-inch display, and they're looking at a 70-foot tall screen, and they matched.

We obviously did a lookup table, a hardware lookup table for the monitors, so that they all stayed consistent, and also kudos to CFI for making things look like they're supposed to; our film outputs were pretty good. FireWire drives: absolutely a necessity. We were rendering to the FireWire drives and checking them, and then boom, they go out the door, go to the lab, they get recorded, they get filmed out. So these large FireWire drives were just a godsend, because this show was about throughput.

Dual-proc machines. Software: After Effects, believe it or not, and Combustion. There were two other companies that worked on this, Imagica and Sassoon Film Design; Imagica used some Shake. And procedural methods. I can't stress enough how important procedural methods are, because procedural methods make you use the tools that you have and come up with a new solution. Procedural methods are what we use to define what is grain versus texture. What is an edge? What is an antler? Stuff like that. And we have to work outside the tools. You're not just using one tool.

You're not just using one layer. You're not just using one composition. You're using several that are analyzing, doing different things to create a whole. Now, can software be written that does this? Absolutely. I'd love to see it. I'd rather, you know, I'd love to see a plug-in that would do all these steps. Define the grain.

Define the edge detail, that kind of thing. But there's always going to be a place where you get to a shot and you see, like I said before, the emulsion starting to separate. Those are pretty esoteric and bizarre things, and you have to use a procedural method to correct them.

Okay, the expense. Obviously, we're on the desktop, so the expense wasn't so horrendous, and that makes it pretty attractive to producers. If a client wanted us to do real-time 4K color grading digital intermediate work, you're talking, with the Cineon files, about terabytes of throughput. Is it possible? Yeah, actually, a couple of systems have been made that are able to do that. The cost for the hardware is around $300,000 or $400,000, and software's on top of that. You start getting a very big bill very quickly.

Input/output is critical, so you have to figure out how to move stuff around and work with different file formats. And then, as I mentioned before, understanding the medium: if you work smart, you can work faster; you're not just dealing with a problem. And part of the digital intermediate issue is who is owning it. Because before, it was different steps: labs doing this part, color timers doing this part, DPs doing this part, DPs used to using traditional chemical methodology. Who's really owning it? That's the big question right now, because right now DI is just a big shotgun.

We've got colorists who do nothing but color the film, create a color look; they don't stabilize the shot, they don't dustbust, though they could. But they're real-time. Two-camera. They're in the UK, usually. Playback and color correction while the client's there. The client's not going to sit there and watch you dustbust; it's just not going to happen. But that's part of it. So, the current technology that's out there in million-dollar suites is only doing half of what we had to do to create a digital intermediate.

Okay, so future, what we need, obviously large files require a large pipeline. You know, 4K is becoming a new standard in digital effects work. Spider-Man was, all the effects shots were done with 4K plates, not 2Ks. Right now, the architecture of file throughput is a threaded architecture. That means if you're using Cineon scans, if you want to stay with the full range, dynamic range of that film, you're working with Cineon scans. They are a sequence of frames.

They are not digital video format. They are not QuickTime. I would like to see QuickTime become threaded, which is basically why a sequence of frames is necessary. Because a sequence of frames can be addressed over the network in parallel. The more power you throw at it, the more real-time effects, the more throughput that you're getting.
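
A sketch of the point about frame sequences, with hypothetical filenames and a placeholder per-frame operation: because each frame is its own file, every frame can be handed to a different worker in parallel:

    # Because a frame sequence is just numbered files, the work parallelizes
    # trivially: every frame can be graded or filtered by a different process.
    from multiprocessing import Pool

    def process_frame(path):
        # placeholder for the real per-frame work (grade, de-grain, composite...)
        return path, "done"

    if __name__ == "__main__":
        frames = [f"shot042.{n:07d}.cin" for n in range(1, 25)]  # hypothetical names
        with Pool() as pool:
            results = pool.map(process_frame, frames)
        print(len(results), "frames processed")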

Possibly the new hardware, the new high-def standard that is merging with QuickTime; it seems like it can skip around quite a bit. I would love to see it used in production in a parallel pipeline, so you really are getting low data rate consumption with high throughput. You can get multiple effects, color gradings, applied to those things. That would be fantastic.

Standards: 3D color lookup tables. What's a 3D color lookup table? A color lookup table basically says, all right, this is the amount of colors you can use to display an image. There are software and hardware lookup tables. A 3D color lookup table says that all the values are interrelated, and that's how film works. I mean, getting it to a high-depth finishing station is great, but if you really want to work with the data the way that film works, you have to work with a 3D lookup table.

And it just means the red, green, and blue affect each other. If you affect the brightness of red, in a film 3D lookup table something's happening to green and blue, because that's how the film emulsion works. Kodak has created a 3D lookup table fairly recently, but it's only available for Kodak stocks. So there's no 3D lookup table that I know of for Fuji or other film stocks.
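
A rough sketch of what "all the values are interrelated" means in practice, using an assumed, tiny identity LUT and standard trilinear interpolation; real film LUTs are far denser and come from measured stock behavior:

    # A 3D LUT maps an (r, g, b) triple to a new (r, g, b) triple, so changing one
    # input channel can move all three outputs. lut[i][j][k] holds the output for
    # grid point (i, j, k); inputs in 0..1 are looked up with trilinear interpolation.
    def apply_3d_lut(rgb, lut):
        n = len(lut)  # grid size per axis
        def sample(i, j, k):
            return lut[min(i, n - 1)][min(j, n - 1)][min(k, n - 1)]
        out = [0.0, 0.0, 0.0]
        pos = [c * (n - 1) for c in rgb]
        base = [int(p) for p in pos]
        frac = [p - b for p, b in zip(pos, base)]
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((frac[0] if di else 1 - frac[0]) *
                         (frac[1] if dj else 1 - frac[1]) *
                         (frac[2] if dk else 1 - frac[2]))
                    corner = sample(base[0] + di, base[1] + dj, base[2] + dk)
                    out = [o + w * c for o, c in zip(out, corner)]
        return tuple(out)

    # Tiny 2x2x2 identity LUT purely for illustration.
    identity = [[[(i, j, k) for k in (0.0, 1.0)] for j in (0.0, 1.0)] for i in (0.0, 1.0)]
    print(apply_3d_lut((0.25, 0.5, 0.75), identity))  # -> (0.25, 0.5, 0.75)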

Also, because it's new, it's not a standard; it's not being adopted by a lot of color grading systems yet. And then you're faced with the arcane chemical technology, which is how I started. I started off in an optical lab. I know about optical printers and keeping the right temperature of film soups and all that stuff. And there are no direct correlations between traditional color correcting methods on film and digital.

Now, some software will say, we have printer lights, we can match what's going on in the lab. That's not necessarily true, because with traditional color correction, if you say you're adding 10 points of red or adding some red to it, you're not cranking the red; that's not how film works. But in most of the software that's available now, you say, well, the DP wants you to add red, so you add red to it. That's not correct. That's not how film works. You're overdriving the image.

You know, traditionally a DP will say, I want more red, I want you to add some neutral density, I want you to do this or that, and you can add a list of criteria for his recipe, and still the film will hold up; it'll still look good. But digitally, you can seriously overdrive the image. If the look that's being required is pretty severe, you can overdrive the image. There are a lot of things that need to be correlated. Film bias, you know, how do we do that? There are many issues.

So there are a few new technologies as I wrap up here that are desktop systems. They're not black boxes that just do colorist work. Lustre by Discrete, it's kind of a black box. It's proprietary. It's a licensed technology. They don't, you know, do they really own it? Only runs on Windows XP.

Lustre is sort of an all-in-one color grading system. You do your color grading and you get the results. So you twist the dials, you can get a result, but you can't get in backwards and see what the dials were set to. Final Touch is a new program; it's a couple of years old, runs on Mac G5s, incredible throughput. They're adding more and more tools to it. It renders in floating point.

The big key of it is the ability to create a script for what its color correction just did. Essentially, that's being used to send out to other speed-grade systems so they can go, well, I can help you render, I can help you make the color corrections. But I see a real advantage, once color standards are set, if scripts written from these color grading applications can be sent to other applications that do more effects kinds of work, such as Shake. That would be fantastic.

That way the effects people or the DP can decide where this happens: do we do effects first and then color grade, or vice versa? Which is really important. Because if you're doing an effects shot on something that's been color graded and made really spooky and dark, well, you got what you got. You don't necessarily have tracking points anymore because the image is too dark. And I think that's it for me.