
WWDC04 • Session 704

Preprocessing Principles

QuickTime • 1:12:36

Preprocessing is widely considered the key to making excellent video for digital delivery. This session teaches you the general principles for video preparation, including how to pick appropriate cropping, scaling, noise reduction, and image adjustment parameters for optimal quality.

Speakers: Dennis Backus, Ben Waggoner

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

And we're here today to talk about pre-processing principles. And we're very, very lucky to have, I think, one of the premier folks in this area, Ben Waggoner. He's from Ben Waggoner Digital. Everybody knows Ben. Ben's been around for a long, long time. We're going to go for about an hour. I'll hold all questions to the very end of the session. And then when we do the questions, please come up to the microphone so we can catch your voice on the recording. Thank you. OK. Let's dive in. So how many people saw this session at the 2003 show?

Not too many of you. OK, good. Because I only found out I was doing this about five days ago, so these are the exact same slides. But the content will be new; I'll bring my own spin to things. So it works now: it's you, and there's me, and it's all good. That's who I am. Wonderful. OK. So today we're talking about preprocessing. And you can think about preprocessing as everything you do between the source frame of video and the frame that actually goes into the codec. So, pretty simple definition here. And let me walk over here so I can see some stuff.

It's so big, this stage up here. The focus is going to be on web video; I'll mention MPEG-2 stuff here and there. Who cares about pre-processing for DVD? OK, they care. I'll be talking a fair amount about pre-processing for DVD, then. Who cares about pre-processing for web? For CD-ROM? Anyone doing pre-processing for high definition?

Oh, got one here. OK. That's cool stuff. We'll talk about that. I spent far too much time working with damaged D5 tapes in HD last fall. So the first half of the session is going to be me talking about slides and showing some screenshots and that kind of stuff. The second half is I'm going to be doing some demonstrations. So I got a lot of source clips here. I got a lot of tools here. So I'm going to let you guys pick the demonstrations you guys want to see. So the more extroverted among you, start planning what your questions are going to be. And the focus here is just hands-on stuff that's going to let you get better quality video up on the web or your DVD or however you're doing it. So we're trying to increase the bang for the bit out of our digital media.

So we're going to define preprocessing in a little more detail, explain why it matters, talk about some of the core techniques, and I'm going to do some demos of some cool stuff on the Mac, my PowerBook over there. Mainly web video, but some about DVD as well. And I'll do more of that than I was planning on, because you guys care about it a lot. There are actually some pretty cool tricks for DVD you can do these days, like 704-wide encoding; we'll talk about that. So preprocessing, as I mentioned before: you think of your video stream as a series of frames. You've got DV, you've got a whole bunch of 720 by 480 frames. Who here mainly works in PAL? Any of you guys? OK, so 720 wide by either 480 or 576 tall frames. Your web video, it's a 320 by 240. You've got a DVD, it's probably the same size as your video source. So you're doing everything to transform the source frame into the optimum output frame for the codec.

This is both the most artistic part of compression, because you have to make some stylistic decisions in the process, trade-offs, that kind of stuff, and it's often the hardest part. I mean, knowing what data rate you want is easy: you just type in the data rate value you want to have. Knowing what's the right luma level to make it look good, that requires some thought. On a typical project, where it's not kind of high-volume stuff, I probably spend maybe 90% of my time in the compression process on pre-processing for challenging content. Just because the other stuff's pretty easy. You know what codec you want, you know what data rate you want, you know what the audio should be, but you've got to kind of tweak and tweak and tweak pre-processing if you're incredibly quality obsessive, which I recommend you all become, because there's too much bad web video out there. So, why it matters. It's all about maximizing the bang for the bit. You want to get the maximum communication value per unit of data possible to your end users. And it really does matter a lot. I mean, correctly processed video at 200 kilobits can be way better than badly processed video at 1,000 kilobits. You can think of it as buying your customers more bandwidth by treating your bits better.

So you want to make sure every pixel has data that matters, every bit is something you care about, and you're not wasting bits and pixels on things that aren't actually communicating to the end user. So let me just cut to a couple frames here. Let's see if this is scaling correctly. OK, so this is just from a movie trailer, the Biker Boyz movie trailer. Anyone see Biker Boyz? Me neither, but I had the trailer. It's often good to work with not very interesting content when you're experimenting with things; you don't get distracted by plot and that kind of stuff. So, this is from Biker Boyz. Let's go with what the source frame looks like. Pretty typical interlaced frame. The projector kind of squishes it in a little bit, but you get the idea here. And because it's interlaced, you wind up having the two different fields where there's motion, all that kind of stuff. Now, if we pre-process it correctly, we'll go from that to this.

And the shape of the frame should be changing a little bit. If we take the source frame and don't modify it at all, just encode it, we get that. This is 800 kilobits in Sorenson Video 3 Pro. And that looks terrible. That's the word, terrible. Because modern codecs, based around DCT or something else like that, do really well with gradients, but sharp edges take a lot of bits to encode. So when you have, say, the thin horizontal lines of fields, you wind up having that be very hard to encode. And almost all your bits wind up trying to encode the lines you don't care about, as opposed to carrying the content you do. So, same data rate, pre-process the frame, and we get that instead. Is that perfect? No. But obviously, from an end-user perspective, that one is a lot worse than that one. Same bit rate. So the end user just gets a better experience, no sacrifice.

And that's not really an exaggerated case. I mean, I see a lot of people trying to do this on the web, and I wish they would not. One of the reasons QuickTime has such a reputation for having high-quality delivery technology is really because the people at Apple who do the QuickTime movie trailers are so much more competent than the people who do the movie trailers you find at windowsmedia.com and realmedia.com. It's much more about the pre-processing than the codecs; that's what gives you quality. And the most critical technique to mention here is deinterlacing.

You've seen interlaced video before: your even lines and your odd lines contain information that's temporally separated by half a frame duration. So if you're at 30 frames a second, the two fields will have images a 60th of a second apart if it's interlaced video. Obviously, if it's progressive video, you don't have that going on at all. Anyone here not grasp interlaced video? Who's mainly a video person here? Mainly a computer person? All right. Does anyone not grasp interlacing at this point? All right, okay. You'd be too embarrassed to tell me if you were, but at least I can now claim I asked. And what happens is, as in that one frame you saw, in areas where there's a lot of motion you wind up with this kind of crosshatch effect, and in areas where there isn't a lot of motion, it looks normal.

Computer display playback is always progressive. So if you're going to a computer device, it's always progressive. Projectors are pretty much always progressive. There really aren't very many truly native interlaced display devices being designed at this point. We have legacy televisions and that kind of stuff, but clearly, everything is turning into a progressive display device. And if it's interlaced content, it's just getting converted to progressive on playback. So the future is progressive, is my feeling. And if you're delivering for any kind of web codec, you need to deinterlace, because none of the web codecs we care about support an interlaced mode.

If you leave it interlaced, one, it just looks stupid. Even before you compress it, it looks stupid: if someone's throwing a baseball, instead of seeing one baseball, you see two baseballs that each have half the lines. That's very confusing. And because the codecs find those sharp lines on the edges of the moving objects so hard to encode, almost all your bits wind up getting spent on the stupid part of your image and very few are left for actually making the image look good. So, big degradation in quality. Of course, progressive content doesn't need to be de-interlaced, because it's already progressive. And if you're delivering on DVD with MPEG-2, because that's also a fielded medium, you're just going to keep the same field mode. So if you have interlaced source and you're making a DVD, you're going to keep it interlaced throughout. If you have a progressive source, you're making a progressive MPEG-2 file for the optimum results. And these days, on most modern Macs, the DVD Player will automatically de-interlace interlaced content on playback. On a lot of older systems, depending on your graphics card, it doesn't always do it; I'm not sure what the actual rules on that are. But it used to be that the Mac DVD Player couldn't play interlaced content very well at all, and it seems to be a lot better in more recent versions.

The most basic method of de-interlacing, if you will, is just: OK, I've got my even lines, and my odd lines have a different image in them. Well, I'll just throw out all my even lines, or I'll throw out all my odd lines, and then process the image from there. So if you have a 720 by 480 DV frame, essentially you're just throwing out half the lines and you're left with a 720 by 240 frame. And then it gets stretched or squished or whatever, processed like any other kind of normal Photoshop-style image processing.

And that works. The problem is you're throwing away half your image data. And if you're doing little small web video, that's not a problem; QuickTime Broadcaster does that internally, for example. You know, if you're doing 320 by 240 or less for broadcast, it's not a huge quality drop. But if you're trying to go to bigger frame sizes, you can actually wind up with a lot of compression artifacts, because the image has to get stretched back out. If you're going to a 480-line output from what's internally 240 lines, you wind up doubling the height, with the typical blocky scaling artifacts there. So if you're really doing deinterlacing, what you want to be doing is what's called adaptive deinterlacing. And all the tools we care about these days support some flavor of adaptive deinterlacing, under lots of different names. The basic idea of adaptive deinterlacing is to detect the parts of the frame that are moving,

deinterlace those, but leave alone the parts of the image that aren't moving, where there's no temporal difference. If something doesn't change from field to field in that time, it doesn't need to be deinterlaced, so the adaptive deinterlacer will leave those parts alone. So in the case where someone's throwing a baseball, yeah, you'll lose half the resolution of the baseball.

You'll have half the resolution in the baseball. But if someone's throwing a baseball and they've got a big static background, the fence or whatever the background is, the fence won't get deinterlaced, and it'll remain sharp. And that works well for our visual system: we can either detect motion or detail, but we can't really see fast-moving detail. So it's a nice setup. Occasionally it'll guess wrong, but most of the modern implementations are going to give you the right result 99% of the time.
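The motion-adaptive logic described above can be sketched in a few lines. This is not any particular tool's algorithm, just an illustrative version (the function name, field layout, and threshold value are my assumptions): where the two fields disagree by more than a threshold, the odd lines are replaced with an average of their neighbors; everywhere else, both fields are kept.

```python
import numpy as np

def adaptive_deinterlace(frame, motion_threshold=12):
    """Sketch of adaptive deinterlacing on a (H, W) luma frame.

    Where a bottom-field line differs from the interpolation of its
    top-field neighbors by more than `motion_threshold` (i.e. something
    moved between field times), it is replaced by that interpolation.
    Static regions keep both fields, preserving full vertical detail.
    """
    out = frame.astype(np.float32).copy()
    height = frame.shape[0]
    for y in range(1, height - 1, 2):      # odd (bottom-field) lines
        interp = (out[y - 1] + out[y + 1]) / 2.0   # spatial interpolation
        moving = np.abs(out[y] - interp) > motion_threshold
        out[y][moving] = interp[moving]    # deinterlace only moving pixels
    return out.astype(frame.dtype)
```

In the baseball example, only the pixels the ball passed through exceed the threshold and get interpolated; the static fence keeps both fields and stays sharp.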

Sometimes, with things like scrolling credits, you might see some artifacts of the adaptive de-interlacing; that's probably the worst case. So try not to have scrolling credits. I've actually sometimes just gone through and re-implemented the credits, just typed them in again and re-rendered them in progressive, to get around that problem.

Now, the most important thing for you NTSC folks, for film content (and you PAL people can happily and pridefully ignore this part, because you don't have this problem), is inverse telecine. So film runs at exactly 24 frames a second. Video runs at not exactly 30 frames a second; it really runs at 29.97, and each of those frames has two fields in it. So when film gets converted to NTSC video in a telecine machine, what happens is what's called 3:2 pulldown. The first frame of film becomes three fields of video, the next frame of film becomes two fields of video, 3-2, 3-2, and that's 3:2 pulldown. And that basically works: your 24 images become 60 fields per second, and you're good to go. And it works about as well as you'd expect. Of course, the motion is never quite smooth, because some source frames will last three fields of video, and other ones will last two fields of video.

So motion that would have been very smooth in the source winds up a little bit jerky, because the playback duration of each frame is a little bit off. And that's why if you watch a movie on PAL, with horizontal motion and pans and that kind of stuff, it always looks a little bit smoother than the same movie would look in NTSC. The way the PAL conversion is done is the 24 frames a second are just sped up 4% to 25 frames a second, and it remains 25 frames a second progressive. And that's so easy, I should move to London, because half of my life is dealing with NTSC weirdnesses like this. But we've got to do it right, and it's hard to do, and that's why we can make money charging for doing video work. So what you wind up with when a file has been created this way (and you've seen this a lot) is a file where you'll see three progressive frames and two interlaced frames, repeating. We'll see a sample of that in a few minutes. And that's an easy way to test: just go through frame by frame in QuickTime Player in a section with motion. You'll see three progressive frames, two interlaced frames, three progressive, two interlaced. That's what you'll see. And the nice thing is, if you have a tool which has an inverse telecine algorithm (and we have several on the Mac), it'll be able to reverse that process. Instead of having to deinterlace and throw image data out, it's just able to restitch the original source frames; it reassembles both fields into the original 24-frames-a-second video. And that's great for two reasons. One, we keep the full image data. And two, we're able to restore the original time base, so when we output, we can actually encode at 24 frames a second instead of doing the 3:2 pulldown thing. So we actually get smoother motion from the same source on computer playback than we would have had on video playback, because every frame will have the same duration of exactly 1/24th of a second.
Now, one complexity: when they do the transfer, the film is actually slowed down to 23.976 frames a second, to match the way that 60 compares to 59.94; the details don't really matter that much. But some tools, when you're encoding using inverse telecine, don't support changing the time base; you have to actually go to 23.976 frames a second. It's a magic number. Other tools will let you change the time base.
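The 3-2 field cadence described above is easy to see in code. A hypothetical sketch (the function name and the simplified bookkeeping are mine, not any real tool's; field parity and the 23.976/29.97 speed change are ignored for clarity):

```python
def three_two_pulldown(film_frames):
    """Simplified 3:2 pulldown: alternate film frames contribute
    three fields and two fields, so 24 film frames become 60 video
    fields (30 interlaced frames) per second."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

fields = three_two_pulldown(list(range(24)))
len(fields)   # 60 fields per second of film
fields[:10]   # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]: the 3-2-3-2 cadence
```

Because every source frame appears whole in the field stream, an inverse-telecine tool can recover all 24 original frames losslessly just by undoing this mapping; nothing has to be thrown away, unlike deinterlacing.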

You can easily switch that around. And if you have film kind of content, that's pretty much any music video, movie trailer, feature film, or prime-time drama; those are all going to be content that was created either on film or with a 24p high-definition camera. And if you have content like that, which winds up with 3:2 pulldown, you absolutely want to have inverse telecine available. It pays off hugely in terms of output quality.

One complexity is when someone has source that came out as 24p, but then it gets edited in a fashion where it's not keeping track of where the original frames were. Now, in Final Cut it's trivial: you just have a 24p project, take a 24p source, and it'll do it frame-accurately. And when you export it with 3:2 pulldown, it'll keep what's called the cadence of the 3 and the 2 consistent. However, if someone just takes the telecined video files and puts them in a Final Cut video-frame-rate project as an interlaced 29.97 file, when they do edits, they're not going to wind up putting the edits exactly where the edits would have been in the film originally. And then when you take that into a tool, you can wind up with issues where it just can't figure out where the frames were, because the cadence is broken. So instead of getting a clean 3-2-3-2, you get weird, irregular patterns where video edits got dropped in. And some tools, like After Effects, just completely fail when given that kind of content. Other tools, like Cleaner, deal with it pretty robustly. Ideally, you just have content where it's done correctly.

Cropping. Cropping is a place where I see people messing up a lot. Especially, you'll see a lot of web video out there where there's a few pixels of black on the left and the right, or on the top or the bottom, something like that. The reason for that is video monitors, televisions, don't show the edge of the video signal, by definition. I mean, a consumer TV does not have an underscan mode. You just wind up with a safe area around the edge that you know is going to get left out. Obviously, when you play back on a computer screen, it's going to give you every single pixel.

The upper left-hand corner pixel is going to be shown on the screen, or else it's horribly miscalibrated. Because of that, when you're converting from content composited for video, there may be junk around the edge of the screen that would never be seen on a television, but that would show up when you took the same frame on a computer. And the simple thing there is to remove that, crop it out.

And at the very minimum, you want to crop out edge blanking. Those are the thin black lines at the top or the bottom or the left or the right. Often, DV source has no blanking at all, but typically analog source is going to, and source captured at 486 lines tall is almost certainly going to have a few black lines at the top.

And the lower the resolution you're going to, the more you can crop, because of the safe area, which is the region that's known to work on all televisions. No cinematographer is going to put critical content within 10% of the edge of the screen, because they know consumers are going to have TVs that will either give you a distorted image or no image at all at the very edge of the screen. So they're not going to put anything critical there. Your lower-third text is not going to intrude into the 10% boundary around the screen, the actors' heads aren't going to be in there, all that kind of stuff. So the lower the resolution you're going to, the more aggressively you can crop into the safe area, making the foreground objects a little bit larger.
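As a rough illustration of how much a safe-area crop removes, here's a hypothetical helper (the function name and the choice to split the crop evenly between opposite edges are my assumptions for the sketch):

```python
def safe_area_crop(width, height, fraction=0.10):
    """Return (x, y, new_width, new_height) after cropping `fraction`
    of each dimension, split evenly between the two opposite edges.
    fraction=0.10 approximates cropping in to the safe-area boundary."""
    dx = int(width * fraction / 2)
    dy = int(height * fraction / 2)
    return dx, dy, width - 2 * dx, height - 2 * dy

safe_area_crop(720, 480)   # (36, 24, 648, 432)
```

For a small output size, raising `fraction` crops deeper into the safe area and makes the remaining foreground correspondingly larger in the output frame.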

Another thing you always want to do is crop out letterboxing. There's no point in sending black bars to a customer for most stuff, certainly not for web video, because you can make your web video any size. If you leave any kind of black bar in there from letterboxing, you're just spending bits and spending CPU cycles on playback on nothing. Much better to just crop those out and call it done. And also, because many codecs, especially at low bit rates, give you artifacts around sharp lines, the very sharp black line of the letterboxing winds up messing things up a little bit; you often get distortion around that top edge. When you go to DVD, of course, DVD only supports up to 16:9 anamorphic. So if you have any kind of film source that's wider than 16:9 in aspect ratio, like most films at 1.85:1 or 2.35:1, you're going to need to leave in some letterboxing; that's inevitable with DVD. But for web video, you never need to have letterboxing in your video.

Just a reference there. So this is the 5% boundary here, called action safe, and that's the 10%, called title safe. The general rule of thumb is that motion in the action-safe area should be able to be seen as motion. Anything outside it is fair game; it may or may not be presented. Anything in here, you should be able to see it moving, but if it's text, you might not be able to read it, because it'll be distorted around the edge. In the title-safe area, the image should be pretty clear; you're not going to get any distortion at all, theoretically, so text should be visible, all that kind of stuff. So there should never be anything critical outside action safe, and you should assume everything inside title safe is going to be critical. And, you know, depending on the content, somewhere in between is kind of the range where things get critical or not. And you can just kind of look at that. Let me turn the laser thing on. There we go. Ha-ha, I got a laser pointer. If you look at the bounding box there: if you're doing, like, a little 160 by 120 web movie for modem use or GVP kind of stuff, there's a difference between having a frame that shows all this other stuff (and the drum's kind of neat, and his head) and just cropping down to that, where you can see his hands better and his facial expression better. That can help out a lot.

Then scaling. Scaling is what happens after we've cropped our source. The crop says: don't take the pixels outside this box into consideration when you're doing your scaling. And then the bitmap gets changed in size to the output bitmap that actually gets handed off to the codec. Two things here. One is, especially for web video, you need to make the video smaller to play back on the web. And two, you're also going to be correcting for aspect ratio here. One thing that just typically freaks out web and computer people is when I start talking about non-square pixels, because in the computer world, the idea of a non-square pixel is like talking about a square wheel. But in the video world, all the professional video formats use pixels that are rectangular in shape. So 720 by 480 could be either a 4:3 aspect ratio or a 16:9 aspect ratio. But if you just looked at 720 by 480 as square pixels, that would actually be a 3:2 aspect ratio, and that never actually occurs. So any kind of DV file is always distorted, either squished or stretched horizontally, depending on what the format is. When you go to a web video format that's going to be square pixel, and almost all of them are going to be square pixel, you need to correct for the fact that the source is non-square pixel. You have to make it square pixel; you have to either stretch it or squish it to correct for that. The basic goal is: if on the video monitor there was a circle, on the computer monitor on playback after compression, you want there to be a circle as well.

And that's pretty straightforward; that's the goal. You see a lot of stuff on the web where things are stretched about 10% too wide. Quite common. People figure: OK, 720 by 480, I'll cut it in half, it'll be 360 by 240, and there we go, I'll put it up on the web. And you can assume any time you ever see a 360 by 240 web video file, someone did it totally wrong, because they did not get the aspect ratio correct. If you have 4:3 source, you want your output resolution in square pixels to also be 4:3. So for 4:3 source, 320 by 240 would be good, because 320 over 240 reduces to 4:3. 640 by 480, 512 by 384: anything where you have 4 units wide by 3 high is going to work.

If you do 360 by 240, everything's going to be 10% too wide. And if you deal with actors very much, when you make them about 10% fatter, they complain a lot. So no win there. And your circles are ovals, all that kind of stuff. So you just want to make sure that when you're going to a square-pixel output format, your output resolution matches your source frame aspect ratio. Two great examples for web video: you're coming from 720 by 480, typical DV content. If it's 4:3, you want 320 by 240. And if it's a 16:9 source, 432 by 240 works just fine. So you're picking the output.
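A tiny helper makes the size-picking rule concrete. This is just my sketch, not a real tool's API: it picks a square-pixel height for a chosen output width, rounded to a multiple of 16 so both dimensions stay codec-friendly.

```python
def square_pixel_size(width, aspect_w, aspect_h, multiple=16):
    """Pick a square-pixel output size matching the source's display
    aspect ratio, with the height rounded to a codec-friendly multiple.
    The storage resolution (720x480 for both NTSC DV flavors) doesn't
    enter into it; only the display aspect ratio does."""
    exact_height = width * aspect_h / aspect_w
    height = int(round(exact_height / multiple)) * multiple
    return width, height

square_pixel_size(320, 4, 3)    # (320, 240) for 4:3 DV source
square_pixel_size(432, 16, 9)   # (432, 240) for 16:9 DV source
```

Note the 16:9 case: the exact height would be 243, but rounding to a multiple of 16 lands on 240, matching the 432 by 240 size suggested above.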

The only difference between those two cases is the aspect ratio of the source file; the resolution is 720 by 480 in both. And if we're doing PAL, these are both good numbers as well, because PAL is 720 by 576, but it's also either 4:3 or 16:9, so you need a 4:3 or 16:9 output frame size. Some codecs don't like odd numbers and that kind of stuff. As a rule of thumb, as long as height and width are both divisible by 16, you're in pretty good shape with Sorenson and MPEG-4 and that kind of stuff. MPEG-2 has a few very specific frame sizes it supports for DVD; you just have to pick one of those. Typically, when you're doing a DVD encode, you don't have to worry about this at all; you're not going to do any scaling. If it's a 16:9, 720 by 480 source, you're going to go to a 16:9, 720 by 480 output. There are a few cases, like if you have a 720 by 486 source, where you need to crop from the 486 down to 480. It's really important when you're converting from a 486-line source to a 480-line output that you don't scale it, you crop it, because the 486 is actually grabbing six more lines out of the video signal than the 480 does. If you scale it, you'll get a little bit of distortion and a little bit of loss of image quality as well. So if I have a 486 source, like an SDI capture, and I want to make a DVD out of it, I want to crop four lines off the top and two lines off the bottom. You may be tempted to crop three and three because those sound like the right numbers, but if you crop an odd number of lines, depending on the tool, you may or may not wind up reversing your field order, as the odd lines become even lines, and then when you play back, things go higgledy-piggledy. So much better to just pick four at the top, two at the bottom; then you don't have to worry about those changes happening. And the quality of your scaling algorithm matters as well.
Professional encoding tools will use, you know, sinc or bicubic kind of scaling. If you just do a QuickTime export from QuickTime Player, going from a really big file to a really small file, you'll often get a little bit lower-quality scaling, just because QuickTime Player isn't meant as a professional encoding tool. Compressor will give you a much better result than QuickTime Player will with the exact same settings, because it uses a higher-quality scaling algorithm.

OK. So when we're authoring for web video, or anything really, our goal is: if we're going to scale, we only ever want to scale down. Because any time you scale up, you're interpolating. That's like going into Photoshop or After Effects and trying to make something bigger than it was: it always gets soft, and it will often get blocky. It's not a good experience. When you're shrinking down, you maybe lose some detail, but the resulting image will at least be sharp. So let me just walk through a scenario here. Say you have 720 by 480, and you're doing a non-adaptive deinterlace, or you have content where the entire frame is moving. (If the whole video frame is moving at once, adaptive deinterlacing doesn't pay off at all, because all adaptive deinterlacing does is skip the parts of the frame that aren't moving; if the entire frame moved, you're just going to have to eliminate one of the two fields.) That leaves you 720 by 240. If we then did a safe-area crop of 10%, that would take us down to 648 by 216. And then we would convert from there to 320 by 240. It seems like going from 720 by 480 to 320 by 240 should be scaling down by a lot, but after we've deinterlaced and cropped, we're actually scaling up vertically, even though we're scaling down horizontally,

which leads to one of the tricky things about pre-processing. When you're working with interlaced source, vertical resolution is vastly more important and trickier than horizontal, because you have all the horizontal resolution you could possibly want, but the vertical is really what you're trying to preserve. Typically, you want to preserve as much vertical detail as you can: you don't want to crop even one extra line of vertical you don't need to, and then you crop just the horizontal that's necessary. So when you're going to 320 by 240 or higher from NTSC, or 384 by 288 or higher from PAL, you want to crop as little as possible. You're definitely always going to crop out any edge blanking or letterboxing, because there's just no data there, but you don't want to crop any extra stuff in the safe area. And you want to use adaptive deinterlacing if it's a true interlaced source. If it was a film source with 3:2 pulldown, you definitely want to use inverse telecine and restore the true progressive mode. And with inverse telecine, even if you have a frame that's fully in motion, because it's just restitching the original source frames, that's going to work for you just fine.
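The scenario above can be checked with plain arithmetic. These are the numbers from the walk-through (non-adaptive deinterlace, then a 10% safe-area crop, then a 320 by 240 output), showing how an apparent 2x downscale quietly becomes a vertical upscale:

```python
# All sizes are width x height.
src_w, src_h = 720, 480

# Non-adaptive deinterlace keeps one field, i.e. half the lines.
field_w, field_h = src_w, src_h // 2                      # 720 x 240

# 10% safe-area crop, 5% off each edge.
crop_w, crop_h = int(field_w * 0.9), int(field_h * 0.9)   # 648 x 216

# Scale to the 320 x 240 output.
out_w, out_h = 320, 240
h_scale = out_w / crop_w   # ~0.49: scaling down horizontally
v_scale = out_h / crop_h   # ~1.11: scaling UP vertically
```

That vertical ratio above 1.0 is the quality trap: the 216 remaining lines get interpolated up to 240, which is exactly the soft, blocky upscaling the speaker warns against, and why vertical lines are the ones to hoard.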

Okay. Next is luma adjustment. Luma is basically, but not quite, the same thing as brightness, and I recommend you read Charles Poynton's book if you care about what the difference is. You can think about video filters in two classes: filters that affect luma, or brightness, and filters that affect chroma, or color. And it's best to think about them separately. Typically you're going to do a lot of work on luma, but you don't really mess with chroma very much, because it tends to survive the process a little bit better. And also, we mainly see luma, so that's where it pays off. The classic luma filters are contrast, brightness, and gamma; the levels filter in After Effects and tools like that is also just luma.

So this is a complex issue. If anyone's been doing QuickTime for a while, you probably have it in your head that you have to raise the gamma a bunch when you're encoding on a Mac for PC playback. Anyone doing that? Do you know that rule of thumb? You guys all new? All right. You know what I'm talking about. So if you were doing all that, or if you didn't know about it, you don't have to feel bad, because you largely don't have to do that anymore. There are two classes of codecs in QuickTime: the ones that will correct for gamma on the fly and the ones that won't. So, for example, if you use the Sorenson Video 3 codec and you encode the file on a Mac and play it back on a Windows box, it'll appear darker on the Windows box than it will on a Mac. If you use the MPEG-4 codec, it'll appear the same brightness on a Windows box as it will on a Mac, because that particular codec will automatically correct for the gamma.

This is confusing, unfortunately, so you have to know which codec you're going to use. That's one of the reasons I recommend that if you're doing QuickTime for Mac users, use Sorenson, because it's a better codec, all that kind of good stuff. But if you're really trying to make a QuickTime file for a cross-platform audience, the MPEG-4 codec has the advantage that it'll correct the gamma for you -- it won't appear too dark on the PC. It's also possible to use QuickTime movie alternates to make different versions of the file, with different gamma for Mac and Windows, and movie alternates will automatically switch between them on the web. But that's for a different class. So let me kind of walk through the filters. First, we have brightness and contrast.

And these are often grouped together. Brightness basically exaggerates how different the thing is from gray -- sorry, that was contrast. Brightness just shifts the entire luma range; it just adds x amount. If you do a brightness of plus 20, every pixel becomes 20 units brighter. So pixel value 0 will become 20, and pixel value 200 will become 220.

People get in trouble with brightness because they say, "I want my video to look brighter, so I'll turn the brightness up." And it's almost always the wrong thing to do. You virtually never want to raise brightness as a filter itself. If you want the video to seem perceptually brighter, you're going to use the gamma filter we'll talk about next. Because brightness just adds to the entire range, what was black can't stay black: if you add just one unit of brightness, what was black at zero becomes one, and all of a sudden your black becomes sort of a dirty gray.

So if you're using the brightness filter, you're almost only ever going to use it with negative values. The goal is to make elements that should be black -- like white text on a black background for a title card, or a fade to black, that kind of stuff -- go all the way down to zero, so you might turn the brightness down a little bit. Now, if you have rendered graphics, like you're rendering out from Final Cut with a black background, it's going to stay black throughout, and you don't have to worry about that. But if you have any kind of analog source, those luma values for each pixel will be a little bit randomized. So even if they were all zeros when it was rendered, it goes out to Beta SP and comes back again, and you get some zeros, some ones, some threes, some fives, that kind of stuff. So you can use just a little brightness -- say, brightness minus five -- and the fives go down to zero, the twos go down to zero, the zeros stay at zero, and all of a sudden a sort of noisy background becomes all the way black. A slight negative brightness can really help make it crisper and add some more vibrancy. And in a case where you have a black background that's really noisy, making it really black helps the codec too: a big rectangle with the number zero over and over again is very easy to compress, while totally random analog noise is actually kind of hard for a codec. So it will actually encode better by doing that.
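As a minimal sketch of what a brightness filter does to luma values -- a hypothetical helper, not Cleaner's actual code -- note how a small negative value crushes the randomized near-black analog noise back to true zero:

```python
def brightness(pixels, amount):
    # Brightness just adds a constant to every luma value,
    # clamping to the legal 0-255 range.
    return [max(0, min(255, p + amount)) for p in pixels]

# A noisy 'black' background off analog tape: zeros, ones, threes, fives...
noisy_black = [0, 1, 3, 5, 2, 0, 4]
print(brightness(noisy_black, -5))   # → [0, 0, 0, 0, 0, 0, 0]

# And the positive case from the talk: +20 shifts the whole range up,
# so true black becomes a dirty gray.
print(brightness([0, 200], 20))      # → [20, 220]
```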

Contrast basically exaggerates how different a pixel is from gray. At exact mid-gray, contrast has no effect; at absolute black and absolute white, it has the most effect, and the closer you get to either black or white, the more effect contrast is going to have. Now, a few years ago, when you were doing encoding with QuickTime for the web, you always had to add about a plus 27 contrast value in order to get your blacks from a video source to come out as black on output. The good news is that QuickTime now handles all that in the background. So I still see people who are still doing this, and they wind up getting a double contrast effect, with really crushed blacks and whites. Again, with really clean digital content, you normally don't need to add contrast anymore. But it is useful for analog source, because it helps push the blacks a little bit more toward black, and if you have a little bit of analog noise, it can flatten the whites into a cleaner white -- it'll just seem a little more vibrant and can encode better. So when I'm using these filters, it's almost always because I'm trying to get my blacks blacker. My general rule of thumb is to use one unit of brightness for every unit of contrast. So if I have a choice between using a minus 10 brightness to get my blacks black, or a minus 5 brightness plus a plus 5 contrast, the combination will give me the same black crush, but it will leave the whites about the same -- it won't make the image as dark overall as brightness alone would. I'll show some of that stuff later on. But the rule of thumb is: you don't want to use only brightness or only contrast to crush your blacks. You want to use a combination of them, and that'll leave the rest of the luma range a little more intact.
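A common way to express contrast is to scale each value's distance from mid-gray; the exact mapping and units vary per tool, so treat this as an assumed formulation rather than what Cleaner actually computes. It does, though, let you check the one-unit-of-brightness-per-unit-of-contrast rule of thumb numerically:

```python
def contrast(pixels, amount, gray=128):
    # Scale each value's distance from mid-gray: no effect at 128,
    # the most effect near absolute black and absolute white.
    scale = 1 + amount / 100.0
    return [max(0, min(255, round(gray + (p - gray) * scale))) for p in pixels]

def brightness(pixels, amount):
    # Brightness shifts the whole range by a constant, clamped to 0-255.
    return [max(0, min(255, p + amount)) for p in pixels]

near_black, near_white = [5], [250]

# Minus 10 brightness alone: blacks crushed, but whites pulled down too.
print(brightness(near_black, -10), brightness(near_white, -10))   # [0] [240]

# Minus 5 brightness plus plus 5 contrast: the same black crush,
# but the whites stay much closer to white.
print(contrast(brightness(near_black, -5), 5),
      contrast(brightness(near_white, -5), 5))                    # [0] [251]
```

Either recipe sends the noisy near-black value to zero, but the combination leaves the top of the luma range almost untouched, which is exactly the point of the rule of thumb.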

The other luma filter we care about is gamma. Now, I give them in this order because all truly virtuous processing tools put the gamma filter after brightness and contrast: once you've laboriously used brightness and contrast to get your blacks down to zero, you want them to stay at zero, and gamma applied afterward leaves zero at zero. If you have the gamma filter before brightness and contrast, the gamma filter changes where the zero lands, and it gets much more complex. So -- anyone out there making compression tools? Gamma is almost the inverse of contrast: gamma has the most effect at middle gray, and no effect at the extremes.

And when people say they want video to look brighter, gamma is really what they're talking about, because it makes mid-tones brighter but leaves blacks black and whites white. So if you're just trying to make a video clip look brighter or darker, the gamma filter is the place to start. Now, the complexity here, as I mentioned before: Macs, by default, use a gamma value of 1.8. Video, by default, uses a value of 2.2. Windows machines use something between 2.2 and 2.5, and it's not really defined. It used to be a big problem -- you had to make different files for Mac and Windows, all that kind of stuff. The codecs inside QuickTime that use what's called the 2vuy color space will automatically correct for the local gamma. The file is stored internally at 2.2; if you play it back on a Mac, it'll just assume the display is 1.8, and if you play it back on a Windows machine, it'll assume 2.5 and adjust correspondingly. The complexity is that it's not actually reading the real gamma value: if you've gone into your monitor's control panel and specified a different gamma value, it's going to ignore that. It just assumes every Mac in the world is at a gamma of 1.8; if you told the system differently, it doesn't care. Same thing with Windows -- it has no way to get the real value, so it just assumes 2.5. But still, it's a good thing in general. So the good news is: if you want to make a single file that looks the same on Mac and Windows, use a 2vuy codec, and MPEG-4 is the best distribution option right now. If you're using Sorenson and you want to make a Sorenson file on a Mac that will play back on Windows right, you do need to add about a plus 30 gamma for it to look identical across platforms.
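A gamma adjustment is just a power function on normalized luma. The "plus 30"-style units in encoding UIs vary by tool and aren't the raw exponent, so this sketch uses the exponent form directly -- an illustrative helper, not any tool's implementation. Note how the endpoints are pinned while the mid-tones move:

```python
def gamma_adjust(pixels, g):
    # Gamma bends the mid-tones: 0 stays 0 and 255 stays 255,
    # but values in between are raised (g > 1) or lowered (g < 1).
    return [round(255 * (p / 255) ** (1 / g)) for p in pixels]

# Black and white are untouched; middle gray gets brighter.
print(gamma_adjust([0, 128, 255], 1.1))   # → [0, 136, 255]
```

Compare that to brightness, which would have lifted the 0 as well -- this is why gamma is the filter to reach for when a clip just needs to "look brighter."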

The next kind of filter we look at is noise reduction. Noise reduction is kind of like a smart blur filter. The idea is to take out the parts of the image that aren't image but are noise -- grain, or random analog stuff -- and blur those out, while trying to keep the actual content, like sharp lines and text, intact. And these are hard things to do. Even the best algorithms will blur more than you want them to, but it's better than just throwing a Gaussian blur over the entire image.

The way different tools implement this varies a lot. Some have things called grain killer or grain suppress, which can work for some kinds of noise; there are also temporal noise reduction filters. It varies a lot. The thing is, if you have source that's got really bad analog noise, you're pretty much hosed. These filters can take you from bad to nearly mediocre in quality, but you're never going to get good quality output from noisy source by using noise reduction. It's better than nothing, but it's much better to have clean source to begin with, and you can't ever fake clean source. Kind of like you can never make real high definition out of standard def.
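The "smart blur" idea -- smooth where neighbors already agree, leave sharp edges alone -- can be shown on a single row of pixels. This one-dimensional toy is only meant to illustrate the concept; real filters work in 2-D (and, for temporal variants, across frames) with far more sophisticated edge detection:

```python
def smart_blur_1d(row, threshold=10):
    # Blur a pixel toward its neighbors only when they're already close
    # in value (likely grain); big differences are treated as real
    # edges and left alone.
    out = list(row)
    for i in range(1, len(row) - 1):
        left, cur, right = row[i - 1], row[i], row[i + 1]
        if abs(left - cur) < threshold and abs(right - cur) < threshold:
            out[i] = (left + cur + right) // 3
    return out

flat_noise = [100, 104, 98, 102, 100]    # low-level grain: gets smoothed
hard_edge = [0, 0, 0, 255, 255, 255]     # sharp line: preserved exactly
print(smart_blur_1d(flat_noise))         # → [100, 100, 101, 100, 100]
print(smart_blur_1d(hard_edge))          # → [0, 0, 0, 255, 255, 255]
```

A plain Gaussian blur would have softened the hard edge too, which is exactly the artifact smart blur tries to avoid.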

Next up is audio normalization. For the most part, audio does not require a lot of preprocessing, especially if it's something that's mixed for TV or DVD, something like that. That's going to encode pretty well for the web at reasonable bit rates. The one thing I often want to do is audio normalization. If you have a clip where the overall level is just too low, what normalization will do is it will find the loudest single sample, and then it'll either raise or lower the overall volume, keeping the dynamic range intact, but just changing the overall amplitude to a specified value.

And typically, minus 3 dB is good for most modern codecs. Some QuickTime tools used to default to about minus 6 dB, because the old QDesign 1 codec had trouble with things that were near peak. But you don't have to care about that anymore, so minus 3 is just fine these days. You don't want to go all the way to 0 dB peak, because codecs are approximations: you can wind up asking for a value that's not possible to express exactly in the codec, which can give you a little audible distortion. So it's a good rule of thumb to go to minus 3 dB.
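Normalization as Ben describes it -- find the loudest single sample, then apply one gain to everything -- is a few lines. A minimal sketch on floating-point samples in the -1.0 to 1.0 range (the function name and signature are illustrative):

```python
def normalize(samples, target_db=-3.0):
    # Find the loudest single sample, then scale everything so the
    # peak lands at the target level, keeping the dynamics intact.
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)                # silence: nothing to do
    target = 10 ** (target_db / 20)         # -3 dBFS is about 0.708 of full scale
    gain = target / peak
    return [s * gain for s in samples]

quiet = [0.1, -0.25, 0.2]
loud = normalize(quiet)
print(round(max(abs(s) for s in loud), 3))  # → 0.708
```

Because every sample gets the same gain, the ratios between samples -- the dynamic range -- are untouched; only the overall amplitude changes.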

You might use some other filtering. Typically this goes beyond the realm of compression preprocessing, off to where you're doing some kind of serious audio work to clean up bad source -- audio restoration stuff. Dynamic range compression -- basically making the quieter parts of the audio louder relative to the loud parts -- helps if you're targeting a 3GPP kind of playback device. On playback on a cell phone, subtle dialogue is not going to be audible for most people, so limiting the dynamic range there can help a lot. Notch filters, if you've got hum and that kind of stuff in the background, can work quite well. Same with noise removal. Your compression tools aren't going to have this kind of filtering in them for the most part, so if you have clips you need to apply this kind of stuff to, you're diving into some kind of pro audio tool.
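As a rough sketch of the dynamic range compression idea -- this is a textbook hard-knee compressor on sample magnitudes, not what any particular pro audio tool does (real compressors work on an envelope with attack and release times):

```python
def compress_dynamics(samples, threshold=0.5, ratio=4.0):
    # Above the threshold, gain is reduced by the ratio, so loud peaks
    # come down; the whole track can then be normalized back up, which
    # makes quiet dialogue louder relative to the peaks -- useful for
    # tiny cell-phone speakers.
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# Quiet sample untouched; the two loud peaks are pulled down.
print(compress_dynamics([0.1, 0.9, -1.0]))
```

Running the result through the normalize step above would then raise everything back toward full scale, completing the "quiet parts louder" effect.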

I got a cube thing. I like that. So that's kind of the wordy overview. Now let's go over to the laptop here. Can we do that? That didn't look promising. Oh, OK. So this is the interactive portion. We've got about 36 minutes left here. I've got film source, I've got NTSC, I might even have a couple of PAL clips here. I've got every compression tool known to humanity.

Anyone have a preprocessing kind of project they're working on, with a particular tool they want to see how to do it in, that kind of stuff? Someone got a question for me? Want to see one of these techniques demonstrated? We've got microphones -- let's go to the microphone back there so we can get it recorded in the audio.

Hi, I'm Scott Thompson from NewTek. And I'd just like to see some DV source recompressed down to maybe 320 by 240. Okay. Do you have a particular tool you use for doing that kind of processing? Nothing in particular. I use Media Cleaner once in a while if I do that. Okay. Something like that.

Sure, we can dive into Cleaner. This is Cleaner 6. I don't know if anyone from Discreet has been around at the show at all. We haven't had a new release -- there's a beta of Cleaner 6.0.2 that's about a year old now, but that's the last thing we've seen of it. We haven't actually had a real release of Cleaner, even a point release, for over a year now. I fear Cleaner may be done, but it's still useful for a lot of stuff, and it actually has a pretty good design for preprocessing. This clip here is called NASA.mov. It's a pretty interesting one. So, a couple of things to point out here. This is a DV file.

It has blanking on the edges here -- see the little black line there? And it's also letterboxed. So the first thing we want to do is crop out that stuff. This is a 4x3 source; if we just showed it as raw 720x480, we'd get an image like that. One thing I want to say about Cleaner: we can tell it, okay, show it to me as a 4x3 source clip, and it'll display it correctly. If it's 16x9, you can flag it like that as well. Then I'll assign a preset to it.

First thing I'm going to do is crop it. This little crop filter here. All I need to do here is grab it and draw a box. And it's good to go. Now one thing you need to watch out for in clips is if they come from a lot of different source files, you often need to wind up scrubbing through it and looking to see if there's any other frames that wind up not matching there.

OK, looks good there. Especially if it's archival footage, often you'll have things where the framing will vary a little bit. That was good there. But see right here in this frame -- even though all the other frames were good, there's actually a little bit more letterboxing in this one. So I have to go in a little bit tighter.

One thing you have to watch out for -- and you can see it on this monitor -- is that off at the very edge there's a little bit of distortion. You can see that the first line is a little bit wider than the line below it. So you want to crop at least one pixel from the top and bottom just to get outside the distortion range, or it'll look a little bit funky; the top and bottom lines are often a little bit off. So here we've cropped in about one or two lines from the start, and that's grabbed our whole image. All right, so.

I'm cropping unconstrained because I didn't-- yeah, you can also just do a 16 by 9. If I was going to output to DVD, I would definitely do a 16 by 9, but I don't know if this is quite-- yeah, it's about 16 by 9 there. You can do it either way. So in this case, if you know you want output at exactly 16 by 9, you can set that value, or you can even do a custom aspect ratio if you happen to know it's a particular thing. In this case, we know it's 16 by 9, and we can just draw a box like that.

And in this case, we'd wind up cropping out a lot on other frames that we don't need to. Most compression tools only let you do global settings -- one setting overall. If you really need to do a lot of tweaking per shot, I tend to dive into After Effects and do all those filters there, doing different processing on a per-frame basis, which winds up being pretty labor-intensive and expensive. So that's how we set the crop up. The processing side is pretty straightforward. I would say, of course, deinterlace -- I want to make sure I have the adaptive mode on; that gives me the adaptive deinterlace. Image size, 320 by 240, like that. Cleaner, by default, has a sharpen filter turned on. You do not want to use a sharpen filter for most preprocessing. Sharpen makes it a little crisper before you encode it, but sharpen adds noise as well, so it'll typically give you more artifacts on playback, because it'll exaggerate any noise as well as making the edges sharper. So always leave sharpen off. The adaptive noise reduction in Cleaner is pretty good, so you can leave that on. Also, you have these filters here -- look at gamma, brightness, and contrast. The default settings are somewhat random: they were designed for Cleaner 5, and Cleaner 6 has a different processing mode, so you actually wind up needing different values. One thing I can do -- Digital Color Meter, okay. There's a great little utility you get with Mac OS X, in the Utilities folder.

I use it all the time for this kind of stuff. It's called Digital Color Meter. It's been in Mac OS X since, I think, 10.1; it comes free with the OS. What it does is, you just point at a place on the screen and it tells you the RGB value of that point. Cleaner really ought to incorporate this built in -- it would be very useful, but it doesn't. So what we can do here is just go over and look at our values and see how they look, all that kind of stuff. And we can pick a frame like this.

Get a preview window. Hold on a second. Did I not change it to-- Let's just preview under here. Hello, did I not? Where are we? What's our problem here? QuickTime. Oh, I didn't apply it. Yes. I've not been using Cleaner as much lately. It was the dominant tool of my career for a long time, but it's getting so buggy these days that I wind up not being as quick with it as I used to be. Well, I had applied it, didn't I? Maybe it was one of the bugs I was talking about. Yeah. Setting default. It's still showing me a-- eh, all right, OK. It's just not keeping that setting for some reason. Oh, I know. It's just unconstrained, because it's not keeping that piece of the crop setting.

Do it now. There we go, okay. So this is where you can pick up some of these black level issues. If I look at this here, you can see these values actually aren't all the way down to zero, and our goal is to get the black levels down to zero, as I mentioned before. I'll show how we do that in a little bit.

These are the default, somewhat random settings; I'll turn those off. So by default, we get 8's and 9's and that kind of stuff. A good starting point I tend to use is minus 5 brightness and plus 5 contrast -- that's actually not a bad starting point. Those values -- 0's and 2's -- are quite a lot better. Maybe I'll target a minus 8, and a contrast of 8.

Ones and zeros all around. OK, let's go to 10. That should get us pretty much there. A couple of green values poking out, but it's close enough to live with, I suspect. You're doing that kind of processing to get your blacks good. You also want to make sure that you're-- whoops, thank you -- going back and checking how it looks on further frames, because Cleaner and other tools like Compressor only give you a global setting. Did you forget my crop box now? You're just killing me here.

Oh, right. We're not going to do 320 by 240 at all, of course, because we're doing 16 by 9 -- it's actually 320 by 180. Yeah, I confused myself there. OK. All right, that's pretty good. And the before-and-after here -- the A/B slider shows the effects of the image processing. So right there we're seeing the effect of having thrown in that brightness and contrast; you can see it makes it a little bit darker overall. Ideally, if you're doing image processing, it shouldn't feel like you're modifying the video. It should feel like you're peeling a layer of grime off it. That's the effect you're trying to go for -- it shouldn't seem overly dark, that kind of stuff. So, before and after: it's a little bit darker, and it gives a little more richness to it. The original video had kind of weird black levels. All right. Looks pretty good like that.

And then, so I'm pretty happy with the image settings there. On the audio side, I can just do a normalize -- 90 in Cleaner is about minus 3 dB -- and we're good to go. That's pretty much all we need to do for preprocessing in Cleaner. So, did that look like what you were talking about? Any questions about that, or specifics? All right, cool. You got a question over there? Yeah. We are aware of the normalize function in Cleaner. We're also aware of it in Logic, for example. But we have a lot of video coming in to Final Cut Pro, and our users do not really know how to handle normalization. I'm getting some echo right there -- can we get a little closer to your microphone? Yeah. Are you aware of a plug-in available, an audio plug-in for normalization, that we could plug into Final Cut Pro? Hmm. Because we have a lot of dirty audio coming in with the video, and our users don't really know how to handle normalization manually. I can't imagine that someone hasn't done one of those, but I can't, off the top of my head, name one. Does anyone know of a normalize plugin for Final Cut? Say that again? Oh, Waves. Yeah, Waves. But is it really a normalize plugin, you know, as simple as this? Because Waves is usually too good -- too many variables. Yeah.

Mm-hmm. Yeah, you can definitely do a pure-- let's say, when you're doing a normalization, it's like a compression where you're not touching the dynamic range at all, effectively. So if you have a compression filter where you can adjust a slider for how much to compress the dynamic range, and tell it to leave the dynamics alone, that'll give you the same effect. Yeah, and Waves does good stuff -- they've made great stuff for years. Which one was it? The L1? L1, okay.

It's also pretty trivial to just take the audio into another audio tool, process it there, and then import it into Final Cut at that point -- that's typically what I'll do. Something simple like the version of Peak that's bundled with Final Cut can certainly do a normalize just fine. So with Final Cut, you have that at least. Hi, my name is Daniel Benner from the University of Texas. I recently had a project where we had a bunch of source video that was shot in 1985 on S-video, and I wanted to know if you could suggest some best practices for importing that in a way that it can be edited in Final Cut without having to render all the time. OK, so you've got S-VHS tape.

I mean, it's pretty straightforward. I'm in love with the AJA Io systems. There's a full one that does SDI and analog, and you can buy a $1,000 one that's smaller, that's just analog. So you can get an AJA Io that'll take your S-video and your XLR audio. If you have a professional or industrial S-VHS deck -- like I've got an AJ555, something like that, Panasonic, a device-controllable S-VHS deck with XLR audio out -- plug that straight into the AJA, and it'll send FireWire into your Mac running Final Cut, giving you device control and 10-bit uncompressed capture with balanced audio off your S-VHS tape into Final Cut. And then, on a G5, you'll have multiple real-time effects just in software from that capture, using the uncompressed codec. Would it be bad to take the S-video from an S-video deck, go into the back of a DV deck, and then capture it via FireWire that way? Yes, yeah. Because the DV codec is only 25 megabits, and it uses a very limited 4:1:1 color space. DV is a fine acquisition format -- if you're just shooting things in the world with DV, it's fine. But when you try to convert anything to it that's already got analog noise in it, or that's got motion graphics, that kind of stuff, the DV codec's really not robust enough to be the second generation of anything. So, yeah, if you do it, it would work -- you'd get video out of it. But the route I'm talking about will give you substantially higher quality. And also, if you have a good VHS tape, you'll want a good time-base corrector, and all those analog things most of us have fortunately been able to forget about come back into play, right?

A good analog chain, which means you spend a bundle on cables and proc amps and all that kind of stuff. But it's not too bad. The AJA system, actually -- I'm very pleased that it's a straightforward, cheap setup; you can plug it into a laptop, all that kind of stuff, for doing that kind of work. Does that convert it to DV? It can if you want, but it can also leave it uncompressed, 8-bit or 10-bit, or convert to DV50, or convert to DV25. Whatever you want -- and Motion JPEG, I assume. Thanks. Great. Thank you.

Hi, I'm Francois. I've been using Cleaner for a long time, but I'm a bit concerned about its future. Yeah. What kind of equivalent product would you advise? The products that have active engineering on them for Mac encoding tools: obviously, you have Apple's Compressor. Compressor's pretty good for some stuff, but it's got some limitations -- it can't do two-pass VBR with Sorenson encoding, that kind of stuff. As the H.264 codec becomes dominant in QuickTime over the next year, I'd expect Compressor to become relatively more useful, because it'll have access to a codec that's a lot more competitive than what it has right now. Sorenson Squeeze has a major, major new version that's been announced -- Squeeze 4 -- which just went into beta and should be out in a few months. It's pretty much a whole new tool, aimed squarely at the Cleaner space, and Sorenson is working hard to make that work. They've got H.264 and all kinds of stuff in there, and it looks quite promising. And there's Popwire's Compression Master, which is out now and is really good. It's mainly an MPEG-4 tool: it can make great MPEG-4 inside .mov files, MPEG-4 files, and also 3GPP files, doing two-pass and all that kind of stuff. So if you want to use the MPEG-4 codec in almost any flavor, Compression Master is my favorite tool right now. But if you want to make really good .mov files, Cleaner is still the best thing out there. It's got some unique features only it has, like peak data rate limiting for two-pass VBR, and it can automatically do an audio sync fix for B-frame content, that kind of stuff.
So as long as I need to create legacy Cleaner content -- I mean, legacy QuickTime content -- I expect I'm going to keep Cleaner around on the hard drive for years to come, even though it seems unlikely at this point that it'll ever see any more releases, or even bug fixes; I don't know. I mean, Discreet says they've got some engineers working on something related to Cleaner, but they won't say what, or how, or even which version, that kind of stuff. And the beta of 6.0.2 came out almost a year ago, and they haven't even released it for real yet -- it's been in beta all this time, and that's what's required for Panther compatibility. We're talking about Tiger now, so that's not a very good sign. Thank you. Yeah, so we're getting there. I just had a comment about the VHS-to-DV business. There are also the early Sony decks and things that can do direct transfers from analog tape, and they have a TBC and some other things in them too, so they actually do some sweetening of the signal as well. Yeah. So there are ways you can make it work with DV. It's just the DV bit stream itself -- I mean, if you're on a budget and you just need to get the content in in any form, it can work. But if what you care about is really high quality, that's a limitation. I've actually got a project here called VHS Ugly File. I decided: what is the worst, nastiest analog garbage you can imagine? Which is hard. So I had some guys make, like, a fourth-generation EP-mode VHS dub.

About as bad as you can possibly get, as you can imagine. Which was a problem, too, because I have a professional S-VHS deck, and it can only play SP-mode tapes -- professional decks never do the EP modes. I had to go find an actual consumer VHS deck, which I hadn't used for years, drag it out of my basement, and capture from it -- and capture off composite, just to make it all extra special. I can talk a little bit about what we can do to make this thing better, which is not a whole lot. I think here... yeah, this is a bad clip, isn't it? It's got some good music with it, too.

One thing about video that's important to realize is that no matter how bad the frames are, the motion's always really good -- even on a really bad, multi-generation VHS thing like this. So when you have bad quality video like this, what you really want to do is make the frame size small and keep the frame rate high, because you've got 60 fields a second of motion here. Make a 320 by 240, 60 frames a second video out of it. When you shrink it down a lot, it'll help average out some of that noise. So let me drop into After Effects, and I'll show you what you could possibly do with this interesting clip.

After Effects is overkill for a lot of compression stuff, but if you need to do weird kinds of video processing, it's still the Swiss Army knife tool. And the new version's pretty good -- it actually comes with the Synthetic Aperture Color Finesse plugin, which is kind of beyond preprocessing, but if you need to take stuff that was shot badly and really clean it up, it's a really wonderful plugin. It's also available for Final Cut, I believe -- they're not bundled there. Let me just open up this horrible piece of tripe here.

Is that a question in this room, or no? I think this is lower field first. When you're working with content inside After Effects, "Preserve edges (best quality only)" means adaptive deinterlace. It's not a clearly labeled thing, but if you are using After Effects for preprocessing, you definitely want that turned on. You can also do 3:2 pulldown removal, in a very nasty UI where you have to guess the cadence. I've guessed it better in the past, but After Effects totally cannot deal with anything that has a cadence break. If you have a two-hour movie and there's one field that's out of order in the middle of it, it can't do the inverse telecine -- the entire clip you work with has to have a perfectly consistent cadence throughout. Did it guess it? No, this clip doesn't have any pulldown. OK. So let's say I was going to make a really small version of this. The 640 by 480 timeline's not a bad place to start.

Now, one neat thing you can do is make a composition that is 60 frames a second. If I have interlaced source video at 29.97 interlaced, that's actually 60 fields a second of data, and I can make 60 frames a second progressive out of that on output -- or really 59.94.
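The simplest version of this fields-to-frames trick is bob deinterlacing: split each interlaced frame into its two fields and line-double each field back to full height, yielding two progressive frames per interlaced frame. After Effects does this far more cleverly, so treat the following as a toy sketch of the concept only; a "frame" here is just a list of rows, with even rows belonging to one field and odd rows to the other:

```python
def split_fields(frame):
    # Separate a frame (a list of rows) into its two interlaced fields:
    # even-numbered rows are one moment in time, odd rows the next.
    return frame[0::2], frame[1::2]

def bob(frame):
    # Turn one interlaced frame into two progressive frames by
    # line-doubling each field back to full height.
    top, bottom = split_fields(frame)
    double = lambda field: [row for row in field for _ in (0, 1)]
    return double(top), double(bottom)

frame = ["t0", "b0", "t1", "b1"]   # 4 'rows', alternating between fields
first, second = bob(frame)
print(first)    # → ['t0', 't0', 't1', 't1']
print(second)   # → ['b0', 'b0', 'b1', 'b1']
```

Each output frame has only half the true vertical detail, but at small web frame sizes that's exactly the detail you were going to throw away anyway -- while the full 59.94 moments of motion per second are preserved.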

So when I've done that, I'm actually able to produce an output that uses everything that's there. There are two ways you can handle this in After Effects: you can work at full res in the comp and then nest that comp at the final output resolution, or set it on output. It's often most informative to actually make your comp the size you want to deliver at. So if you're making a little web video like this, work at the smaller size. You just need to go through and deal with the scaling right, obviously. And scaling in After Effects is kind of weird, because you can't really crop per se -- you just scale and position the layer to give you the crop you want. So... well, 50 is not quite enough. I'll do 51.

Another thing about this clip is that it's got what's called tearing -- a VHS thing. All this stuff at the bottom is skewed off at an angle, and when you play it back it just looks stupid. With VHS you want to crop all that stuff off the bottom.

So I'm going to zoom in a little bit here. So 50... let's go 52. So we'll just go down a little bit. Pretty straightforward. And we'll get something like that. The feature that people don't use enough in After Effects is the all-important Levels filter. This is probably the best visual processing filter in the history of humanity, because it gives you an integrated histogram along with black and white points. Say you have a video clip that's not all the way to black or all the way to white: you're actually able to see the histogram -- like, OK, this clip here is full range, it's not a problem. And you can set, OK, my mid-tone, you know, like that, all that.

Okay, well, maybe I want it to be there; or if there's analog noise, maybe I want it to be there, all that kind of stuff. It gives you some nice feedback. This particular clip here is full range, so I wouldn't necessarily need to use it much. But generally for this kind of video, often a little extra gamma can do you really good. So just a little 1.1 gamma.
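
The Levels operation he's describing (input black point, input white point, and a midtone gamma) can be written out directly. This is a generic sketch of how Levels-style filters work, with parameter names and defaults of my choosing:

```python
# Sketch of a Levels filter: stretch an input black/white point to full
# range, with a gamma (midtone) adjustment. gamma > 1 brightens midtones.

def levels(v, in_black=16, in_white=235, gamma=1.1):
    """Apply levels to one 8-bit value; return an 8-bit value."""
    x = (v - in_black) / (in_white - in_black)  # normalize to 0..1
    x = min(max(x, 0.0), 1.0)                   # clip out-of-range values
    x = x ** (1.0 / gamma)                      # midtone gamma adjustment
    return round(x * 255)

levels(16)    # input black point maps to 0
levels(235)   # input white point maps to 255
```

With a gamma of 1.1, values in the middle of the range come out brighter than a straight-line stretch, which is the "little more presence" effect described above.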

gives a little more presence. It depends on the clip here. So we got what we can there. Now, there's really little you can do on this to make it look good. But actually, at 100% size there, scaled down, it isn't that horrible. It really isn't as horrible as you might imagine.

I just want to do a little preview there. And one problem you have is a lot of difference per frame in terms of temporal noise, because it varies from frame to frame. You can sometimes get better results by using-- if you've got the Pro version, the Remove Grain filter; some kinds of video noise it can actually do a semi-decent job with. I won't belabor the incredibly complex set of things you can do with it.

You can see inside the preview box there, you get a little bit less effect there. And you can tune it in more. And where that really comes in is because grain is random for every frame. By using the grain suppression filters, it's going to try to find errors that are totally different frame to frame and suppress those, leaving the actual underlying motion. And again, it's not going to be perfect, but it'll give you a little bit better quality and take out some of those errors there.
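
The frame-to-frame idea can be sketched as a per-pixel temporal median: a value that spikes in only one frame gets rejected, while stable detail survives. Real grain filters like Remove Grain are motion-compensated and far more sophisticated; this fixed three-frame median is only the core idea:

```python
# Sketch of temporal grain suppression: grain is random per frame, so a
# per-pixel median across neighboring frames keeps stable detail and
# rejects single-frame spikes. No motion compensation here, so moving
# detail would smear; this is only the underlying principle.

def temporal_median(prev, cur, nxt):
    """Per-pixel median of three same-sized frames (lists of rows)."""
    out = []
    for rp, rc, rn in zip(prev, cur, nxt):
        out.append([sorted(px)[1] for px in zip(rp, rc, rn)])
    return out

# A pixel that spikes to 200 in one frame only gets pulled back down:
frames = [[[50, 50]], [[50, 200]], [[50, 52]]]
clean = temporal_median(*frames)
```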

And this actually will give me a full 60 frames a second output. So if you do have that old content you're trying to put on the web, emphasize the frame rate, because you've got so much frame rate, and just shrink it down to the point where the artifacts kind of disappear where you can. But also, it's always going to be garbage in, garbage out.

For a really bad source, you could, again, just make it less bad. Even mediocrity is typically out of reach, unless you're going to go in and rotoscope the whole thing. Treat it as a source to paint over, effectively; that's the only way to do it in a lot of cases. Okay, next question. Is it up there? Someone-- Yeah, come to the microphone.

Hi, Ben. Cliff Wooden here. Just a quick question. You mentioned MPEG-4 a couple of times, but were you meaning MPEG-4 Part 2 there, and specifically using H.264 to refer to Part 10? Yeah. My parlance right now is if I say MPEG-4, I mean MPEG-4 Part 2, and if I mean MPEG-4 Part 10, I'll say 264. That'll probably change over time as kind of MPEG-4 becomes... Five years from now, when we say MPEG-4, we're going to mean 264, because Part 2 never really got all that much traction. So clearly, the entire MPEG-4 industry is waiting for 264, and that's going to be the mainstream implementation of it, because it's just so much better. But today, QuickTime is only Part 2, Simple Profile. So that's what we're using for stuff right now.

I didn't get the fields right there. Let me turn this off, because it's distracting when I play that video and I'm trying to talk. Now, one thing to bear in mind-- nothing personal to the Apple people-- is that QuickTime's built-in MPEG-4 encoder is, to put it charitably, more speed-optimized than quality-optimized. So even if you want to make a .mp4 file, or a .mov file with the MPEG-4 codec, there are other tools like Squeeze and Compression Master that'll give you a lot better quality at lower bit rates than Apple's encoders. Even if people have had kind of a question about MPEG-4's compression quality, there are other tools making compatible bitstreams that can give you better results. Both Squeeze and Compression Master have a two-pass encoding mode. You can tell them to go really slow and really high quality, while the QuickTime encoder is really tuned for real-time, real-time broadcast kinds of applications, and it works great for that. But it doesn't have a slow and sweet mode, which some third-party products do. So you can make a QuickTime-compatible MPEG-4 file with a lot better quality than a lot of the stuff we see out there right now that just uses the Apple exporter. Thank you. OK, who's next? I'll just have to do some demos otherwise. Anyone want to see some stuff in Compressor? Oh, here we go.

Can you show an example of using a mask to do some specific compression, where you're masking out-- A mask? Like-- Give me an example. Yeah, maybe I'm not saying that correctly. But to soften a portion of the image field and compress it differently. That almost never winds up being worth it in the end. Because you're trying to do compression, you have multiple frames; if you have a moving image, I mean, you can use motion tracking to like-- I honestly haven't had a case of a video where that was worth doing for years. Because typically, any kind of error in it is going to be overall. Let me think. Sometimes I'll do per-frame processing, where I'm actually doing masking for it.

A good example might be if you had a video of a talking head and they shot it against a moving background, like leaves blowing in the wind or something. It's a constant shot. You might use After Effects to mask the person out and blur the background so it looked like it had a shallower depth of field or something. Yeah, you'd do that like you would do it in After Effects by-- Well, actually, with After Effects, I did have a case last year where I was doing this high-def project where there actually was a damaged D5 tape. So there were some-- it actually had macroblock errors sometimes. So you have like a 16 by 16 block of the frame where the video wasn't-- where basically only one field would be intact, that kind of stuff. And in 6.5 I actually got to rotoscope. So, I mean, you can go in and hand paint and use the clone stamp tool on a per-frame basis. And it's surprisingly powerful. I mean, 6.5 on a dual G5-- you can do really good real-time rotoscoping kind of stuff with it. So that's definitely in there, for sure.

I use 5% of After Effects a lot, so I don't really do much mask stuff. I have a question. I was wondering if you could pontificate on this. As content creators transition to using HD and much higher quality cameras and acquisition, like the Varicam working in 24-progressive square pixels, giving us much higher quality content to start with, will we be out of a job?

Pre-processing in HD is much easier than for standard definition, because HD is always analog-- I mean, sorry, always digital. Analog is half the problem with pre-processing, and non-square pixels are a big issue as well. And HD is almost always square-pixel digital, which makes it a lot easier. But you still get some complexities. You know, there's converting from 720 to 1080.

Computer playback-- I mean, everything that's shown up of high-def computer-based playback today, it's all 24p. So if you have 60i source, typically it has to get converted or something like that. Today's codecs don't do a great job-- computer codecs don't do a great job with interlaced content. And MPEG-2 will work, but it takes a pretty beefy machine to do real-time MPEG-2 deinterlacing on computer screens to 60p playback, and it's a nightmare.

So there's still a fair amount you need to do with it. Oftentimes you get a little bit of letterboxing, you know, if it's more than a 16 by 9 aspect ratio. It's just occurred to me that the difference between what a less experienced user might get from compressing a really high quality HD source down to web with default settings in some of these tools, versus what we might be able to get-- that difference is much less with much higher quality source. There are subtleties.

There are also HD and SD's different color spaces to bear in mind. HD uses the 709 color space, and SD, in almost all computer playback, uses 601. So you actually have to do a transform in there to get the colors to come out matching perfectly accurately. It's handled transparently in a lot of tools, but not always, so getting that right is an issue. Another thing that happens is typically a lot of HD for computer playback gets compressed horizontally-- so you can encode at 1440 by 1080, just to make it a little bit easier to do, and some stuff going on there. But yeah, I mean, HD is massively easier to compress than standard def in the real world. You just need a lot more computer for it, but, you know, that's coming along as well. But yeah, it's amazing to me. The thing is, I only work on hard stuff pretty much, so I've had some HD projects require a lot of preprocessing, but in general, yeah, you just kind of drag it in and say go, and you're done. And it's almost always progressive, and it's almost always full frame and square pixel, and all that kind of stuff.
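
The 601-to-709 transform he mentions amounts to decoding Y'CbCr with one set of luma coefficients and re-encoding with the other. A sketch using the standard Rec.601 (Kr=0.299, Kb=0.114) and Rec.709 (Kr=0.2126, Kb=0.0722) coefficients, on normalized values, ignoring the fixed-point 8-bit video ranges:

```python
# Sketch of a 601 -> 709 color matrix conversion: decode Y'CbCr to R'G'B'
# with the Rec.601 coefficients, then re-encode with Rec.709 ones.
# Y' is 0..1 and Cb/Cr are -0.5..0.5 here; quantized ranges are omitted.

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

def rgb_to_ycbcr(r, g, b, kr, kb):
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

def convert_601_to_709(y, cb, cr):
    rgb = ycbcr_to_rgb(y, cb, cr, 0.299, 0.114)   # decode as Rec.601
    return rgb_to_ycbcr(*rgb, 0.2126, 0.0722)     # re-encode as Rec.709

y709 = convert_601_to_709(0.5, 0.1, -0.1)   # sample 601 triple, re-matrixed
```

Skipping this step leaves the R'G'B' primaries slightly wrong, which is the subtle color mismatch he's warning about when tools don't handle it transparently.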