
WWDC09 • Session 623

The Technologies that Empower Podcast Producer

Mac • 1:13:33

Speakers: David Kramer, Nathan Spindel

Unlisted on Apple Developer site

Downloads from Apple

SD Video (266 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Hello everybody and thank you. I'm David Kramer, and welcome to the final session of the Podcast Producer track at WWDC 2009. I hope you all had fun last night at the Beer Bash; I'm glad you all made it here this morning. We've got a really cool session here today, because I'm going to get to talk about the technologies that empower Podcast Producer. Usually most of these sessions are about features, and even if a session is about some new technology, some API, it's talking about the features of the API.

And all the other sessions you've heard this week about Podcast Producer have shown you the new features of Composer, of Capture, of the server, of the library. What we're going to talk about here today are the technologies that we used to create Podcast Producer. This is my passion; what I do for a living is work with these technologies to create Podcast Producer.

I don't actually use Podcast Producer in my day-to-day life, but I do use these technologies every day, and so I'm just really excited to be here to tell you about them. So, it's me, David, and the description for the session was that Podcast Producer leverages several industry-leading Snow Leopard technologies.

Which to me sounds like something someone from marketing wrote. So how do I internalize that to mean something to me? Well, it means that Snow Leopard rocks. You've seen Podcast Producer this week, hopefully; you've seen what it does. The fact that we can use Podcast Producer to do all that is amazing, and Snow Leopard enables that for Podcast Producer. The other side of that statement is that by understanding Snow Leopard, you're going to understand Podcast Producer.

So some of the session is going to be about development techniques, but some of it's also going to be about administration techniques, to show you that by understanding the technologies behind Podcast Producer, you can see how to deal with issues in Podcast Producer. Podcast Producer uses a lot of technologies, and I'm not going to have time to talk about all of them today; even this list is not complete at all. One that's notably missing here is GCD. I don't know if any of you had a chance to see the Grand Central Dispatch talks this week.

At the overview session, he talked about how everyone at the company who's been starting to use it has reported back how much fun the API is to use, and I will admit I'm one of those people who thinks this API is incredibly fun. So I'm not going to talk about GCD today other than just to tell you that it's really neat technology. I'm not sure the overview really does it justice.

Once you start using it and programming in it and really getting your mind around the difference in programming style between what you're used to and GCD, it just opens up a whole new set of possibilities for you. So that's just my little pitch for GCD. We're going to talk about just a few technologies today.

On the Podcast Capture side we're going to talk about Core Animation and using QTKit to capture video. In the middle, in Podcast Producer Server, we're going to take a look at Ruby on Rails and at Xgrid. And in the third section we're going to look at RubyCocoa and Quartz Composer, and we'll also take a look at QTKit in code.

Podcast Producer is like a factory. I'm sure Podcast Producer has been described to a lot of you many times this week, but I want to describe it a little differently: it's a mass-production factory, and what's going on is that you get to define the sequence of operations once, and then use purpose-built components to repeatedly apply that sequence of operations to each raw input to produce finished goods.
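The mass-production idea, define a sequence of operations once and apply it to every raw input, can be sketched in a few lines of plain Ruby. The step names here are invented stand-ins for illustration, not actual Podcast Producer workflow stages:

```ruby
# A "workflow" is just an ordered list of operations (lambdas here).
# Each raw input is pushed through the same sequence to yield a finished good.
workflow = [
  ->(clip) { clip.merge(encoded: true) },     # stand-in for an encode step
  ->(clip) { clip.merge(watermarked: true) }, # stand-in for a branding step
  ->(clip) { clip.merge(published: true) }    # stand-in for a publish step
]

raw_inputs = [{ name: "lecture1.mov" }, { name: "lecture2.mov" }]

# Apply the fixed sequence of operations to each raw input.
finished_goods = raw_inputs.map do |input|
  workflow.reduce(input) { |clip, step| step.call(clip) }
end
```

The key property is that the workflow is defined once and the inputs vary, which is exactly the factory analogy.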

And yes, I did have to go to Wikipedia to figure all that out. So that's the factory. Podcast Producer Server is here in the center, and it's going to need some sequence of operations. Podcast Composer comes along, lets you create the workflow, and installs it in the factory.

Podcast Capture is the provider of the raw inputs, and that's where those submitted input videos come from, either recorded from a camera or just submitted as a file. And finally, once the workflow munches on it for a while, out comes the finished podcast, your finished goods, and they get put into the library or sent off to other locations. In a real factory you don't ever really see what's going on inside, especially in a contract factory.

You're going to send your spec to them and they'll make the stuff. They're probably going to figure out where to get the goods; maybe you'll tell them where to get the goods, but they're actually going to receive them and put them together. The widgets, the machines, all the different purpose-built components: you're not going to see them.

They're behind the factory doors, and Podcast Producer is sort of like that: we're hiding a lot of complexity from you. But that's not always what you want. So I'm going to contrast mass production with craft work, and the message that drives that home for me is that I like homemade buns; I assume most of you do too.

They're better than store-bought in almost 100% of cases, because you can taste the love, you can taste the time, and they're fresh. In the same vein, I think crafted podcasts might taste better than mass-produced podcasts. If you take the time to really set up all of your intro videos and your credits and your titles, and when a new person shows up you put a little badge over it with their name, and you have multiple camera angles and switch between all of them, great: that's going to be high production quality, really crafted. It's going to take a lot of time, and your customers are really going to like it.

But that's a lot of work, and so manufactured podcasts are going to be a lot easier for you. That's why we made Podcast Producer: to make it easy. But the point of this session is that maybe Podcast Producer isn't always right for you. Maybe you do want to craft your podcasts, but you'd like to use some of the same techniques that Podcast Producer is using.

So we're going to show you some. The two main things you're going to learn today are how to leverage the same technologies that we're using in Podcast Producer, and how to be more effective when using Podcast Producer. It's not going to be as convenient to leverage these technologies, since you're going to have to write your own code, but it's going to be a lot more flexible.

We've chosen a specific path for Podcast Producer, which might not be the path that you want to go down. The other side, being more effective, is that I'm going to show you some debugging techniques and also talk about some places where you can start customizing your workflows beyond even what Kjell and Eric were showing you with Podcast Composer.

Let's step back a second. There might be one or two people here who didn't go to any other Podcast Producer sessions, so I want to describe the system real quickly. You've got the capture side, where using Podcast Capture you can submit files from other sources or use a camera to record data. It then gets sent up to the server, which processes it through a workflow, and finally distributes the processed information out to the library, where it can be consumed by your clients: your mobile devices, computers, what have you.

You've probably seen a few demos of Podcast Producer this week, and I'm going to show you my own version of the demo using the command line, because that's where I live. It turns out I can type faster than I can use the mouse most of the time.

So I like to just sit down here. We've got this wonderful podcast command line tool, and in its simplest form you can just ask it to connect to the localhost computer using your current credentials and get info from the server. So let me make this bigger for all of you.

There's a lot of data here, and it's all in XML. We don't need to spend a lot of time on it, but we have a list of cluster members, and you can see what authentication types are supported on this server, what its Kerberos service principal is, a unique identifier, and the version number. So if you ever need to interoperate with Podcast Producer using the command line tool and need to know what version it is, this is a great way to get that information.

You might, instead of listing the info, want to see a list of the installed workflows. In this case this is an authenticated operation, so you're going to need to enter a password, and you get a list of all the workflows. Again, you can see there is a lot of description here.

Here's the montage workflow. Here's the dual source workflow. Every workflow has a unique identifier. It specifies the kinds of input types it can take and who created it. There's a lot of information here, and we don't really need to look at it right now. It's also going to tell you who has access to those workflows. As an admin, I have access to all workflows right now.

So that is listing the workflows. We can also list the cameras and I don't have any cameras bound to this machine, so we get an empty array back. You'll notice this is all XML property lists coming back from Podcast Producer and you can even do things like start recordings or bind cameras or submit files from the command line. So let's take a look at the man page.

This man page is really complete. We've put a lot of work into making sure all of the options are documented with examples where appropriate. The server is optional. So is the username and password but you can pass them all on the command line. You can also specify the authentication type. It's often more convenient to use Kerberos authentication. And then there's various commands. We saw list workflows, list cameras, feeds, catalogs, servers, info.

And then for the agent, you can bind the camera, unbind the camera, get its config, and set some preference values, for instance the recording quality or which device to use. You can then get status about the camera, start and stop it, and cancel a recording. And then there are some details here about doing a multi-source recording from the command line, which is a little complicated.

But I urge you to read the man page for information. And then there's submit file, and this is the one that I'm using all the time, because like I said, I can type faster than I click. So rather than using the Podcast Capture application to submit files when I want to test the system or test changes that I've made, I just use the command line, and it's very easy.

You submit, you specify the file path and which workflow you want it to go to, and you can even specify the title and description here, making it really easy to get content into your system from a command line tool. I want to encourage you to think about how you could integrate this command line tool into your own software to interoperate with Podcast Producer. So let's see if I can show you Podcast Capture now. We were having a few hardware issues earlier with this demo, but it may be working now.
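One way to integrate the tool from your own scripts is to build the argument list programmatically and hand it off for execution. A minimal Ruby sketch follows; the flag names below are assumptions based on the demo, so check `man podcast` for the real option spellings before using them:

```ruby
# Build (but don't execute) a hypothetical `podcast` submission command.
# Flag names are illustrative; consult the real man page for exact options.
def podcast_submit_command(server:, file:, workflow:, title:, description:)
  ["podcast",
   "--server", server,
   "--submit_file", file,
   "--workflow_name", workflow,
   "--title", title,
   "--description", description]
end

cmd = podcast_submit_command(
  server: "podcast.example.com",
  file: "/tmp/lecture.mov",
  workflow: "Montage",
  title: "Week 3 Lecture",
  description: "Submitted from a script"
)
# In real use you would run it with: system(*cmd)
```

Building the command as an array (rather than a single interpolated string) avoids shell-quoting problems with titles and descriptions that contain spaces.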

So all those things I showed you in the command line you can see here once you log in. We don't have any cameras, so it won't let me record video, but I could do a screen recording. Or if I do a file submission, this list of workflows is the same list that came back from the command line tool, and again this title and description are the same things that got passed on the command line for the submit tool. And this isn't just, oh well, it looks the same, it has the same name: what we're actually doing here is using the command line tool from Podcast Capture.

So all of the functionality in Podcast Capture is in the podcast command line tool. The cheat here is that one of the technologies behind Podcast Capture is the podcast command line tool; we sort of designed our own technologies to support ourselves. That is Podcast Producer from the perspective of the command line.

I did want to show you a couple more things, but like I said, we had a few hardware issues so I wasn't able to get a camera down to this machine and show you that. So you can clap if you want but I didn't really show anything impressive.

But that was the demo.

[ applause ]

[ no speaking ]

Like I said, Podcast Capture harnesses the command line functionality and puts it into a simple interface, and that's really the key: the interface has been made simple. Most people don't like using the command line. I know I'm in a very, very small minority of people who prefer to work down there, and so we created Podcast Capture to really be the face of Podcast Producer.

And because it's the face, we want it to be pretty and we want to make a good impression on users. So my segue here is that we're using Core Animation to do a lot of really nice effects and make the user interface look good. I was going to show you some of those in the demo but I was unable to, so instead I'll bring Nathan up and he will show you Podcast Capture as well as how to use Core Animation.

So, Nathan, please.

Hey so as Dave said, I'm a Cocoa developer on Podcast Capture and Server Admin Podcast Producer type stuff. And so we wanted to show you a bit behind the scenes of how Podcast Capture uses some Snow Leopard technologies. As Dave mentioned, Podcast Capture is just a GUI wrapper around the Podcast command line tool.

Everything that Podcast Capture does, it harnesses the power of Unix to execute the podcast tool, look at its output, and present an intuitive interface to the user. And so you too can build your own custom interfaces, as you've done with the Leopard Podcast Producer and can continue to do with the Snow Leopard Podcast Producer. So what I wanted to talk about today was a little bit of Core Animation and QTKit capture. Those are two very cool technologies that Podcast Capture uses.

How many of you have used Core Animation? All right, a handful. So Core Animation is a technology for graphics and animation programming. It was new in Leopard and it's available in Leopard and Snow Leopard, and also as a central component of the iPhone. And you can do similar types of animations across all three platforms.

Some basic concepts: a layer is the basic building block of Core Animation. It's really similar to views in AppKit, although layers also have 3D and 2D transformations that you can apply. Another really cool and powerful thing, probably the most powerful thing about Core Animation, is its animation engine.

Whenever you change a property on a layer, you automatically get an implicit animation: if you move it, change the opacity, or change its rotation, you automatically get a quick, short animation. You can also do more complicated animations, called explicit animations, where you set up a bunch of steps to do in a row. Another really powerful aspect of Core Animation, much more powerful than the similar technology in Cocoa, is the concept of layout managers.

In Cocoa, the way you set up how a view moves or resizes as its superview changes is called springs and struts; Core Animation's layout managers let you do much more complicated, flexible, and powerful layouts. There are two things to note when you use Core Animation on the Mac and on the iPhone. First, use restraint: you don't want every single part of your UI animating for three or four seconds, because it just becomes annoying to the user.

Second, make it meaningful. Where something slides in to attract the attention of a user, or fades out to make them aware it's no longer there, those are the types of places where you should use it. Basically, if you study the iPhone or how the animation technologies in the Mac are built, you should model your use of Core Animation off that.

And you should refer to the Apple Human Interface Guidelines; they have a great section on animation, where it should and shouldn't be used. So what I want to show you today is how we built the reflection in the dual source view, the new feature of Podcast Capture. The reflection is also seen in Cover Flow in iTunes and the Finder.

And so the way we built it is like this. So we have our root layer which will be black and then we have a content layer which is the image. We'll create another layer and flip it upside down, called the reflection layer. Then finally we'll add a gradient layer on top to make it look shiny.

And then we'll put them all together in a container layer, so that we can do rotations. And I want to show you a quick demo of that. So I have a sample up here where I haven't done any code yet; I've just set up a NIB, which I'll show you.

[ no speaking ]

And it's building. So this is the window. In the main area we'll have the black area and show the reflection, but you'll see we have three controls to turn on and off certain aspects of the animation. So, let's get started. First we're going to create some instance variables and properties for the four layers that I showed you on that slide. So we create instance variables and properties, and then we'll synthesize them.

This is all standard Objective-C 2.0 technology. Then in our awakeFromNib, we'll set up the root layer. We're going to set up the entire UI programmatically; that's generally how it's done in Core Animation. So we'll have our black root layer, then we'll create the content layer. This is the layer with the static image.

So we'll just use the lotus image. Then we'll create another layer, the reflection layer, and as you can see we copy the bounds and the contents from the layer we just created above, but note we put a transformation on it which flips it upside down. And then finally we'll create the gradient layer, which as you can see goes from 70% black to 0% black. CAGradientLayer is actually new API in Snow Leopard and iPhone OS 3.0. Then finally we'll set up the layout constraints so that the content layer is on top and the other one's on the bottom, and we'll put them in a parent layer.

And then we'll set up our three actions that I mentioned for those three controls. As you can see, these actions are really simple. Enabling the reflection layer just changes the opacity between 1 and 0, same with the gradient layer, and changing the layer rotation is just one line of code to apply a rotation, getting the value from the sender. And this EnableSlowMo is a macro for slowing the animation when you hold down the Shift key, like you've seen in parts of OS X.

So now I'll build and run. And here we go. With just a little bit of code we've set up our black layer with the image, and if we check this box we'll see the one upside down. If you noticed, it had a really quick implicit animation; there was no code to do animation, that was the implicit animation I mentioned before.

If we hold down the Shift key, it takes two seconds. And if we enable the gradient layer, there we go: that's the standard Apple colorful, shiny look. And with that one line of rotation code, we can rotate. If you notice, it can go from any point to any point and it takes the same amount of time. If I hold down the Shift key, it will do it slowly.

And another cool aspect of Core Animation which is pretty unique among animation frameworks is that, if you change the value mid-flight in the animation, it automatically knows what to do. So I can go back and forth and it doesn't get all choppy. So that's pretty cool, but that's not what Podcast Capture does.

I'll show you what Podcast Capture does. You've all seen this before, and in Snow Leopard we now have the dual mode, and the dual mode, as you can see, has a QuickTime view on the left. So you see I have live preview; it's a little choppy because this is a MacBook Air, which as you know is not our fastest Mac.

You can see there's a reflection below me, and there's also a reflection below the screen capture preview, and they all update live. You can also see that there's a slight rotation on it. So how do we build that? Well, all we're going to do to get the live video feed is to change that one content layer from an image to a QuickTime capture layer.

In order to do so, I'll use QTKit. QTKit is a framework that was new in Leopard, and it's been enhanced significantly in Snow Leopard with the QuickTime X stack. It's a Cocoa API to do audio and video recording, and it's really simple.

All you do is add inputs and outputs. So we'll set up a QTCaptureSession and some outlets to start and stop recording. We have a couple more instance variables and properties, and now I'll hook them up in the NIB file. It's really a small amount of code to do this; if you ever used the QuickTime APIs before the Cocoa framework, it took hundreds if not thousands of lines of code to do similar functionality. So I'll unhide these buttons and hook them up.

[ no speaking ]

And all the start and stop recording buttons are going to do is start and stop the movie recording, which will go to a QuickTime movie file. So we'll hook them up to our record and stop actions. Then back in awakeFromNib, we'll set up a QTCaptureSession. This is not much code at all: create the capture session, find the default video device, which is going to be the built-in iSight, and add a movie file output.

And then we'll change the content layer, which before had that static image: we're going to comment that out and add a QTKit capture layer, and set its bounds to the size of the iSight. And that's all you have to do to get live preview. Then finally, we'll add the actions for starting and stopping. As you can see, starting is one line of code, and stopping is also one line of code. We'll build and run.

And we'll see that now we have live video feed. Notice that I didn't have to change the reflection code at all, but it still works live. Which is really cool and I can rotate it as I speak. And we can start recording, rotate if we want to, you know do all this live and then stop recording and we'll get our movie file. This is the same exact stuff that Podcast Capture does.

It uses Core Animation and it uses QTKit capture to do all this kind of stuff. And so really the point to drive home is that we've created Podcast Capture as what we saw as the ideal experience for Podcast Producer, but as we've heard from you, there are a lot of custom installations and custom deployments where people want to do a web app or they want a more streamlined interface. And you can build the same thing on the Mac, or you can do it on the Web.

And now I'll hand it back to Dave.

[ applause ]

Thank you very much, Nathan. That stuff is really cool. So what we saw was Core Animation, and I don't think we mentioned it, but there's also the little highlight at the beginning of Podcast Capture: when you hover your mouse over one of those squares that doesn't quite look like a button, it puts a blue glow around the button so that you realize it's a clickable area. And the dual source picker, as you saw, is implemented using Core Animation. We've just given away all of our secrets, so please go ahead and write your own version of Podcast Capture that meets exactly your site's needs and still looks just as great as ours.

And you saw QTKit capture. It's a really simple API; they've done a great job of simplifying all of the details and making it really easy to just start capturing from a built-in device to a file right away, with very little code at all. We talked about Podcast Capture; now, what does Podcast Capture talk to? It talks to Podcast Producer Server. And the two aspects of Podcast Producer Server that I would like to talk about today are the HTTP application server side and the Xgrid client side, and these both live inside Podcast Producer Server.

[ no speaking ]

The way Podcast Producer Server works is that it's running an HTTP application server, so it accepts requests coming in on the network, and Podcast Capture is one of the clients that makes these web requests; actually it's the podcast command line tool that I showed you that is the process making these web requests. When a request comes in, for instance a submit file, what ends up happening is that the server saves some data into the database, and then the Xgrid client aspect of Podcast Producer Server notices, hey, there's some new video that needs to be processed through a workflow.

So it generates an Xgrid job and submits that off to Xgrid. Xgrid runs this work and the workflow as it's running, will want to publish data back to the library and to notify the Podcast Producer Server that the library is ready to be updated, it also makes a web request back in to the application server. And finally the application server is also serving up the Podcast Producer library, so when Safari or iTunes comes along and requests the content, it's also hitting the exact same application server.

The application server itself is built on an open source web application stack. We start with Apache as our web server and then we use mod_proxy_balancer, to distribute over single threaded application servers running in the back end. Mongrel is our application container process and it loads the Ruby on Rails environment.

Podcast Producer Server is written almost entirely in Ruby on Rails. And then finally, we're using the SQLite 3 database adapter to save all of our data into a SQLite database on disk. So the web request comes in and it hits Apache. Apache then hands it off to the mod_proxy_balancer plugin; mod_proxy_balancer has been configured with multiple Mongrel instance addresses and, depending on which one is available, hands off the request. If one of the Mongrels is busy, it'll hand it off to another one. If one of them crashes, it'll move on to the next one. So we get a little bit of redundancy here, and it's all hidden behind mod_proxy_balancer.
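The proxying setup described here corresponds to a mod_proxy_balancer configuration along these lines. The ports and balancer name are illustrative, not the actual values Podcast Producer uses:

```apache
# Illustrative Apache configuration: a balancer over two local Mongrels.
# Requests are distributed across the members; a dead member is skipped.
<Proxy balancer://mongrel_cluster>
    BalancerMember http://127.0.0.1:3000
    BalancerMember http://127.0.0.1:3001
</Proxy>

ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/
```

Because clients only ever see Apache's address, the backend instances can come and go without the published URL changing.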

So clients just see a single website. The Mongrels are actually listening on a local port at a local IP address that is not advertised on the network. So people coming in remotely have to go through Apache, and so we can use Apache to do certain kinds of access control or logging, because it's our single funnel point.

Finally, as the request comes through, the Mongrel immediately passes the request off to Ruby on Rails, which then executes our application code, and ultimately most requests to our server end up talking to the database, if only to verify your cookie. The part of that stack that I want to talk about, that's interesting to me, is Ruby on Rails. This has been included with Mac OS X since Leopard, I think.

It's still there in Snow Leopard, with an updated version, and the special sauce with Mac OS X Server is that we've added a user interface in Server Admin to make it easy to deploy a Ruby on Rails application using exactly the same stack that I just described Podcast Producer using.

So we haven't actually used Server Admin to deploy Podcast Producer but all of the same techniques that we're using are being used in Server Admin. The details of how to set all of that up are in the Server Admin Help, so you can just go to the Help menu. But you know, why read it when I can show it to you. So let's do a Ruby on Rails demo.

[ no speaking ]

All right, so we've got the web server here, and let's make a Rails directory, and then we'll run the rails command to create a new application. That creates the basic Ruby on Rails structure of an application, and we can cd in there. If we run the server script right off the bat, it tells us that it's listening on this local port and local IP address using Mongrel, and if we come in here and try to hit it, we actually get to our web server running on that port.

And so this is a static web page that we served up, but if we click this link it's going to make an AJAX request and we say hey, Rails is working and we're in the development environment, we have the SQLite 3 database adapter and we're running Ruby 1.8.7, super. But I'm pretty sure you don't want people to see that page when they go to your Ruby on Rails site. We certainly didn't want that for Podcast Producer.

So it's actually pretty simple to do stuff. My preferred environment for coding Ruby on Rails is TextMate, and to get rid of that main page, that welcome page we saw, we're going to edit the routes file. There's a bunch of comments in here describing what you can do, and the default routes match your URL to the classes and methods that you've defined.

So based on the URL, it'll go to the controller with that name, it'll go to the method with the action name, and then it'll pass the final part of the URL as a parameter to that method. That's the default way Rails works, but you can customize it however you want. What we're going to do is set up a welcome controller, so we're going to map the root of our site to the welcome controller, and Rails helpfully reminds us that we need to remove the static file.
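The default route he describes, controller then action then an id parameter, can be mimicked in plain Ruby to make the mapping concrete. This is a toy dispatcher for illustration, not Rails internals:

```ruby
# Toy version of the Rails default route ":controller/:action/:id".
# "/welcome/index/5" -> controller "welcome", action "index", id "5".
def recognize(path)
  controller, action, id = path.sub(%r{\A/}, "").split("/")
  { controller: controller,
    action: action || "index", # Rails defaults the action to "index"
    id: id }                   # nil when the URL has no trailing id
end

route = recognize("/welcome/index/5")
```

In real Rails the recognized action name is then dispatched as a method call on an instance of WelcomeController; the toy stops at the lookup.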

So now I've set that up and I can start my server running again, and if I come back here I get an error, because I haven't actually implemented the controller yet, and it spits out a huge Ruby exception. So no problem: what we're going to do is run a script to generate a controller. We're going to generate a controller called Welcome with a single action called Index. Let's pop back to TextMate and take a look at what it created.

So it created for us a welcome controller with an index action, and more importantly it created a view. To step back for a second: Ruby on Rails is designed around a version of the Model-View-Controller paradigm. You're going to have model classes which map your model to your database, view classes which are actually HTML templates, and controller classes which handle all the input parameters and massage them into a format that's ready for templating into your HTML.

So if we go here we see that it created an HTML file. It's actually a template, because it's a .erb file, so you can put Ruby code in here. It's very similar to Active Server Pages or JSP or even ColdFusion, where you can template in a little bit of code within your HTML.

So if we now just pop back and run the server, after creating this stunning design, we see that the page it created now shows up when we go to the root. So we're almost there; I just want to show you that we can make this a little dynamic. So I say, well, what time is it now, when I make the request? And that's the little escape sequence to tell Rails, hey, I want to execute some Ruby code in here.

And so we're just going to say, take the instance variable now and HTML-escape it. How about that? Well, where does this instance variable come from? All views in Rails inherit, sort of have access to, the instance variables of their controller class. So over here, every time an index request comes in, I can just set the now variable to the current time.

And if I just refresh this, I don't even need to restart the server, because when we're running in development, Rails reloads all the classes every time, and I get the time. And if I refresh, I get four seconds later. So that's a simple little Rails demo. But not really what I wanted to show you. What I wanted to show you was the Server Admin integration.

So let's take a look at how that looks.

[ no speaking ]

Great, all right. So we don't have Web set up yet, so let's enable that. And the way this is going to work is we're going to take our default site and I'm going to turn off the other web services just because we don't need them for this demo.

And we're going to use this proxy panel to set up the proxy. So actually, before I do this, I need to set up my servers to be running Mongrel instances persistently, so that whenever Apache is running, mod_proxy_balancer has access to these Mongrel instances. Normally, how you would do that on another system is you might use mongrel_rails, and let's see if we can get some help on this.

It's a pretty simple tool. Let's see what's up. Yes, so: start, stop, restart. What we've done with Mac OS X Server is create a tool called mongrel_rails_persist that creates a launchd plist for you containing the correct commands to use mongrel_rails, so that if your server crashes or if you restart your server, the Mongrels come back immediately and are always accessible.
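I haven't inspected the exact plist mongrel_rails_persist writes, but a launchd job for one persistent Mongrel instance would look something like this; the label and paths are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label; the real tool picks its own naming -->
    <key>Label</key>
    <string>com.example.mongrel.myapp.3000</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/mongrel_rails</string>
        <string>start</string>
        <string>-e</string>
        <string>production</string>
        <string>-p</string>
        <string>3000</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/Library/WebServer/myapp</string>
    <!-- Relaunch the Mongrel if it dies, and start it at boot -->
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```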

So the command to deploy is mongrel_rails_persist, and we're going to run in the production environment instead of the development environment in this case, and we're going to use port 3000 for the first instance, and we're going to start this, and, huzzah. All right, so that created one instance, and I want to create another instance too, because the point of using this balancer is to cluster and have both fault tolerance and load balancing.

So now I have two instances running on port 3000 and port 3001. So come on over to Server Admin. I enabled reverse proxy, and then I'm going to add some workers here, and we've actually done something to make it really easy to add the workers: mongrel_rails_persist does a Bonjour registration for you. So I can just pick the ones that are already here. Very easy.
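I don't know the exact directives Server Admin emits, but conceptually the reverse proxy panel is writing a mod_proxy_balancer configuration along these lines; the balancer name is assumed:

```apache
# Roughly what the reverse-proxy panel generates (names assumed):
# one balancer with a member per Mongrel instance, fronted at /
<Proxy balancer://myapp>
    BalancerMember http://127.0.0.1:3000
    BalancerMember http://127.0.0.1:3001
</Proxy>
ProxyPass        / balancer://myapp/
ProxyPassReverse / balancer://myapp/
```

With two members, Apache both load-balances requests across the Mongrels and keeps serving if one of them goes away, which is the fault tolerance mentioned above.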

[ no speaking ]

All right, now let's see if that all worked. So whereas before we were going to the site at 0.0.0.0, colon, 3000, now if I just go to localhost, to the default web port, which is port 80, I come back and I've got my Rails app.

And so that's how easy it is to deploy a Rails app using Server Admin, and that's almost exactly how we did it with Podcast Producer. Finally, what else do I want to tell you about? So that was pretty cool, and the one last thing to show you is how Podcast Producer is using Ruby on Rails.

[ no speaking ]

So you may have seen the web application that we showed off; it looks almost identical to the desktop application. This is an HTML web app, written in Ruby on Rails. And because it's written in Ruby on Rails, the source code is there; you can peek into it.

I don't recommend you read too far into it, or make too many changes and expect them to keep working over time, but definitely look at it if you want to be inspired about how to use Ruby on Rails and how to do things similar to what we've done. Especially some of these CSS effects are pretty impressive; considering we don't have Core Animation in the browser, it's still pretty nice. So you can do a lot with Ruby on Rails on Mac OS X Server to make a very rich application. I encourage you all to look more into Ruby on Rails for your own personal use. Thank you.

[ applause ]

On the other side of things, after Ruby on Rails has accepted these requests, in the case of a job submission or a file submission, it's going to then create an Xgrid job and send it off to Xgrid. What is Xgrid? If any of you have been coming to WWDC for a few years, you'll know that I've given sessions about Xgrid in the past. We haven't had any recently because we've been focusing on Podcast Producer, and the information hasn't changed a lot.

But let me just give a quick refresher for people who aren't aware of what Xgrid is. It's a general purpose distributed computing system. In some ways it's about parallel computing, but not exactly. It's more about doing computing somewhere else, and if you have a lot of somewhere elses, then you'll be doing it in parallel.

But in its basic form, it's really just about moving the data processing that you want done somewhere else, so that your client can be turned off, can go to sleep, or whatever. And so there's a three-tier architecture here. We've got the clients, who have the work that needs to be done. They submit jobs to the controller. The controller breaks these jobs up into tasks, and a task in Xgrid is just a command line utility invocation.

So it's just what I've been doing in terminal, a command, some argument, hit Return. That's basically what Xgrid's going to run for you out on the grid. So anything, any computing project that you have that could be expressed as a command line tool that takes some inputs, you can adapt to use with Xgrid pretty easily.

Once the controller has split these jobs up into tasks, it distributes the tasks out to the available agents, in this case a rack of Xserves, and the tasks run out there, and when they're done they send their results back to the controller. The controller notifies the client that the data is available, and the client can retrieve the output.

If the agents go offline, the controller detects this and reschedules the work somewhere else. So we're using this in Podcast Producer to give us a sort of reliability, because we can just send the jobs out to Xgrid and then we don't need to worry about who's going to do it or when it's going to get done.

We know that the Xgrid controller is going to make sure it gets done eventually, as soon as possible. And the Podcast Producer workflows that Podcast Composer makes, or that you can make yourself, the .pwf bundle format, actually contain a file called template.plist, which is an Xgrid job specification template that contains substitution variables. So if you understand how to set up an Xgrid job, you're about 80% of the way to setting up a Podcast Producer workflow.

Because all you need to do is add in the substitution variables that you want to vary depending on the input. Substitutions can include the title, or how long the recording is, or the description. Rather than talk more about Xgrid, let's just take a look at how it works.
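The substitution idea itself is easy to sketch in plain Ruby. Note that the @@NAME@@ token syntax below is invented for this example; the real template.plist uses Podcast Producer's own variable syntax:

```ruby
# Sketch of template.plist-style variable substitution. The @@NAME@@
# token syntax here is made up for illustration; Podcast Producer's
# actual template syntax differs.
def substitute_variables(template, values)
  template.gsub(/@@([A-Z_]+)@@/) do
    values.fetch($1) { |key| raise KeyError, "no value for #{key}" }
  end
end

template = '<string>--title "@@TITLE@@" --input @@INPUT_FILE@@</string>'
puts substitute_variables(template,
  'TITLE'      => 'French 4: Paris',
  'INPUT_FILE' => '/tmp/french4paris.pdf')
```

Every time a job is submitted, the server fills the template with that submission's values and hands the resulting specification to Xgrid.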

And again, we're going to take a look at the command line, because that is my preferred environment. All right, so the actual command line tool has some options; there's a man page, great. OK, how do you use it? You specify your host name, which is really long on this computer for some reason, and you give it an authentication type. I like using Kerberos, because then I don't have to keep typing my password.

And at the simplest level you can just say, hey, what jobs do I have? Well, we don't have any jobs. So another simple thing you can do with Xgrid on the command line is just run a command, and so if I were to just run the cal command here, it prints out the current month's calendar. Great. So if I just prefix that whole bit with xgrid and job run, it's going to do exactly the same thing, but out on the grid.

So, you know, it may not look like it did anything different, but it actually submitted this job off to the server, which then submitted it to the agent; the agent ran the task, captured the output, and sent it back to the controller, and it came back to the client. But rather than just running it directly, we can submit it, which is the asynchronous way of using Xgrid.

So now we've gotten a job identifier back and if I take a look at the job list now, sure enough, there's the job. So I can take a look at the job attributes and see that it was submitted with a command line tool, when it was submitted, when it was finished, that it actually finished, it didn't fail. This might also be pending. For instance if there's a lot of jobs in the queue that were already running, then this would still be pending. And we can see the name of it and how done it is.

There was only one task and it's done, so it's 100% done. I talked about the specification for a job. It's an XML property list, and we can actually retrieve the specification that the command line tool created for us. In this case I printed it out in a text format rather than XML, but it's still a property list, and what you can see here is that it's going to run a task with a single command and no arguments.
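For reference, a minimal one-task specification, roughly what running cal through the grid generates, looks something like this; the key names are from the Xgrid documentation as I recall them, so treat this as a sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>name</key>
    <string>cal</string>
    <key>taskSpecifications</key>
    <dict>
        <!-- A single task: one command invocation, no arguments -->
        <key>0</key>
        <dict>
            <key>command</key>
            <string>/usr/bin/cal</string>
        </dict>
    </dict>
</dict>
</plist>
```

A dependent task would add a dependsOnTasks array listing the identifiers of the tasks it waits on, which is exactly the dependency mechanism the Podcast Producer template uses.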

So we can actually crack open a, where did I keep that, we can crack open a workflow here; let's take a look at this one. Here in the Resources there's this template.plist, and this is an Xgrid job specification. It's a lot more complicated than the one we just created, but this is what Podcast Producer's using to generate the jobs for Xgrid. So you can see here we're substituting in some global variables. We also have dependencies in Xgrid, so in this case the edit core master task is dependent on the import plugin movie generate task, and it won't get started until that task finishes.

Anything else I want to show you? So a lot of these are just static, because the tasks are taking input from a previous task and then generating output for the next task. And because we've created all of these tasks at the same time, we've coordinated the output file names and the input file names, so we don't need to substitute those.

But back here at the beginning, as far as the edit core annotator was concerned, well, somewhere here we've got our input file getting passed in. So here we're actually substituting in the file name, and so every time this job gets submitted to Xgrid from Podcast Producer, a different value is going to be put into that substitution variable. So it's pretty complicated.

We don't need to understand all this right now. But I just wanted to show you that Podcast Producer is in fact using Xgrid at a very basic level and by understanding how to create an Xgrid job, you're really well on your way to understanding how to create a Podcast Producer workflow. You can dig through those PWF bundles to see what other details are in there. One of the easiest things to do is just to take an existing PWF that works and then start modifying it by hand and hopefully it keeps working.

You won't be able to open these workflows with Podcast Composer once you've edited them by hand, so make sure you've done everything you want in Podcast Composer before you start doing that. Let's see, so that's the Xgrid client and the command line tool, but there's also the Xgrid Admin application, and we've improved this for Snow Leopard as well. It supports Kerberos.

It gives you a list of our agents and a list of the jobs, and we've added autosaving here, so if I quit and open it back up, it remembers the servers I have and what's available. Thank you. You can add more controllers, clearly; I've got Bonjour browsing, and you can add grids. My favorite second grid is Ygrid.

And so well what else can we do here? We can see a little bit of information here. These are the job attributes we saw in the command line tool. But another feature that we've added is the job log. So if I double-click on this job here or show the log, I get a list of everything that happened and when it happened for this job. So we see when it was created, when it started getting scheduled, when it was submitted, who it was submitted to, when it started running, when it finished, and when the job finished.

If you had a lot of tasks here, you'd see all of them. They'll all have their identifiers based on what's in the job specification. So in the case of Podcast Producer, if you looked at the job log for a failed job, you might see edit core master as one of the tasks, and that it had failed. And so then you might say, hmm, there's something wrong with encoding today. More likely, though, the kinds of things that you'll see fail in a Podcast Producer job involve external services.

So you might have a mail action, and maybe you typed in the mail server wrong, or the credentials are wrong, or maybe the mail server is down. Whatever the case, though, if that task is set up to be required as part of the job, then if it fails, the whole job's going to fail, and you're going to want to know: why isn't my Podcast Producer content getting posted to the library? So if you come into Xgrid Admin and take a look at the job log here, you can see, oh, it was the mail task that failed, and so you might guess that, well, maybe I typed in something wrong. So you could go start checking.

But you don't have to guess; you can actually get more information about what happened. Unfortunately, I don't have that demo set up either, due to our hardware failure. But what I did want to show you was that on the command line, if you get the job results for a job, so back here, if I get the results for job one, which you remember was cal, you get the standard output and standard error. And what we've done with pcastaction for Snow Leopard is add a whole bunch of logging to describe exactly what's going on and what's failing.

So in the case of a mail failure, you'll definitely see that it tried to contact the server, and either the connection failed or the authentication failed, and you'll have a way of diagnosing what's going on. So the workflow here really is: look in Podcast Producer; if your jobs are failing, come into Xgrid Admin and look for the job. You'll see a little red dot for the failed jobs. Take a look at the job log, figure out which task failed, and then come back to the command line tool and ask it for the results.

And so that's actually a lot of work. We've been telling people, hey, this is how you debug Podcast Producer, and nobody really wants to do all that. So we're like, OK, fine, we can do it ourselves, and what we've done is create, in Library/Logs, a new location under pcastserverd called Diagnostic Reports. Now, I haven't had any Podcast Producer jobs fail here today, but whenever Podcast Producer notices that a job has failed, it will actually retrieve all of the results for you and put them into a file in this folder.

So using Console.app, you can browse in here and see exactly why your jobs are failing, and hopefully be able to solve them. So that is it for the Xgrid demo. One more point about Xgrid: if you have Podcast Producer running on your system, you also have Xgrid running and available to you. So you can start thinking about how to use Xgrid independently of Podcast Producer, or how to use your knowledge of Podcast Producer to help how you use Xgrid.

So the two things we saw here were Ruby on Rails and Xgrid. Podcast Capture Web is an HTML-based web application, and this is the standard use of Ruby on Rails. The 37signals guys have their websites all running on Rails, and there are a lot of sites out there using Rails to serve up really rich user experiences through the web. And Ruby on Rails makes it really easy to do AJAX and modern-style web programming.

But the rest of Podcast Producer Server is also Ruby on Rails, and we're using it in this case as an XML-based web service. So the podcast command line tool is making web requests to the server and getting XML back, not HTML. It's not rendering the stuff on a web page; it's using this information to make choices. And likewise, the Podcast Capture application gets these XML plists back from the server and then converts them into a rich user experience in the UI.

And this is a really powerful technique; it can also be used for iPhone applications. Ruby on Rails is perfectly suited for setting up the server side of an iPhone application that's network based. And as I said, if you're using Podcast Producer, you have a grid; go ahead and start using it, and think about what other things at your organization could benefit from a batch queue system where you can send it a bunch of work and know that it will all get done eventually on the available resources. And finally, now that you've seen how to use Xgrid, you have a better idea of how to dig into failed jobs when your Podcast Producer workflows aren't working as you expected.

So I've been talking about workflows sort of from the perspective of submitting to them, or how the factory uses them, or how Capture or Composer creates them, but this session is about the technology, so let's really dig into what these workflows do. On the right there you see the screenshot from Composer, and the five middle stages in Composer map directly onto what the workflow is doing.

So in the generic sense, what does a Podcast Producer workflow do? It's going to import the source materials into a single video; in the dual source case, that's a picture-in-picture effect, and in the case of the montage workflow, that's a number of documents, or multi-page documents, that you're then going to convert into a single video.

You then edit the video: you add titles, transitions, overlays, credits. Then, once you've got your edited master, you're going to encode the video to your target devices and formats: iPod, iPhone, audio only, Apple TV. And then you distribute the video: send it to the library, send it to the Wiki Server, send it to iTunes U. And finally you want to notify people: send an iChat message or an email letting them know that new content is available.

I want to talk about the montage workflow specifically here, because it's pretty interesting in what it does. We're using the Quick Look technology to convert documents into images. You've seen that the Wiki Server also does this, and of course the Finder does it any time you press the spacebar on a document.

The montage workflow then, once it's gotten its images out of the documents, uses a Quartz Composer composition to render these images into a movie, and the movie that we get out of that is a reference movie that only works on the machine that we're running on. So what we then want to do is use QTKit to encode, to flatten and transcode, the movies to device formats. This is exactly what the montage workflow that is installed by default with Podcast Producer does. All of our workflows, and the pcastaction tool, and the podcast tool, and pcastserverd, are all written in Ruby, and we want to access some Objective-C APIs from them.

So we're using RubyCocoa almost everywhere. And what's really cool about RubyCocoa is that it allows you to take a Ruby shell script and use Objective-C APIs. And why does this matter? Well, because almost every technology that's interesting in Mac OS X is exposed through an Objective-C API, and even the ones that are only exposed through C are actually still available through the Objective-C APIs, because C is just a subset of Objective-C. So you get access to a whole bunch of stuff from Ruby once you start using RubyCocoa.

So Podcast Producer is using Ruby and RubyCocoa to glue together all of these technologies that we've talked about into a solution, and in the case of the montage workflow, that's what we've done: we're calling into Quick Look, we're calling into QTKit, we're calling into various APIs to make everything happen in the end. Someone once said that Perl is the duct tape of the Internet.

I don't know what that makes Ruby, but it's pretty cool. So the demo I'm going to do for you now is to replicate the functionality of the montage workflow without using Podcast Producer. And the basic idea here is to start with a multi-page PDF, then generate a series of TIFF images, and also an XML file describing the locations of these images.

We're then going to take a Quartz Composer composition, this XML, and these images, and run them all through qc2movie, which will generate a reference movie file, and then we're going to use the QTKit encoding APIs to convert this movie into an MPEG-4, .m4v iPod format movie that we can then transfer to any mobile device. And we're going to do all this using RubyCocoa.

So I'm not going to open up Xcode for this demo; I'm going to use TextMate, and I'm going to show you how to use all of these APIs yourself, just the way we're using them in Podcast Producer. All right, so here's my empty file. We're going to call this script montage.rb; let's save it someplace.

[ no speaking ]

All right. And I've got my EZ-Bake recipe over here where I've already created it, so I'm just going to copy this stuff over and talk about it as I do. We're going to require some Ruby libraries here, and then, by doing require 'osx/cocoa', we have loaded RubyCocoa, and we can then use the OSX module to require frameworks; in this case we're going to load PDFKit and QTKit, because we're going to use PDFKit to convert this PDF into images, and we're going to use QTKit to then encode the videos. I've chosen to use PDFKit here as opposed to Quick Look just for simplicity, to make it a little bit clearer for you all. So the next thing this script is going to do is read some input.

Either we're going to take the first argument, or we're going to exit. Then we're going to set some variables: given the input path, we're going to figure out the name of the file without the extension; we're going to create a temporary directory where we can put our files, our intermediate steps; and then we're going to output the file, the movie that we generate, to the same folder that the input came from. And then this line here just tells the Finder to open up the temporary directory that we just created; I'm doing this for the demo so that we can see the files as they get created.

Let's see, so I mentioned we're going to create an XML document to contain a list of all the paths to pass to Quartz Composer, so here we use the REXML library from Ruby, and then, well, we don't have any XML to write yet, so we'll just leave this element alone.

We're then going to read in the input path to create a URL, and once we have the URL for the input path, we're going to read in the PDF document and then get the page count from that class. So these are just the Objective-C classes that we're using directly from Ruby.

It's very easy. So here's the real meat of this script. What we're going to do is extract TIFF images from each page of the document. So we're going to go through this loop page-count times; for each page index, we're going to find the document's page at that index, and we're going to ask the page for its data representation, and what this does is basically create a PDF document of just the page that we're looking at.

So now we have this page data, and we can ask NSImage to turn this PDF data into an image, and we can ask the image for its TIFF representation. So this is sort of a roundabout way of doing this, but it's very easy to do from RubyCocoa.

We're going to need to save this image out, and since we're saving pages out, we're going to need unique file names. Here the index is 0-based, so we're going to add 1 to it and just save out the file names page1, page2, page3.

We're going to write a little message out so we can see what's going on, and then finally, once we have the TIFF data, we're going to write it to the file path that we just generated. These are all just the methods that you would see in Objective-C, but used from RubyCocoa. And now we're going to use the XML: now that we have a path to write to, we're going to add an image path element to this XML and set the text of that element to be the path.
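The XML-building half of this can be reproduced with pure Ruby and no Cocoa at all, using the same REXML library; the element names and page paths here are hypothetical stand-ins for the extracted TIFFs:

```ruby
require 'rexml/document'

# Build the same kind of image-path manifest the montage script
# writes, entirely in Ruby's bundled REXML. Element names and paths
# are invented for this sketch.
doc = REXML::Document.new
doc << REXML::XMLDecl.new
root = doc.add_element('imagePaths')

['/tmp/montage/page1.tiff', '/tmp/montage/page2.tiff'].each do |path|
  root.add_element('imagePath').text = path
end

xml = String.new
doc.write(xml) # serialize without pretty-printing; text stays inline
puts xml
```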

All right, next, we're going to save the XML document. Pretty simple. Again, this is a little bit roundabout; there are Ruby methods for saving strings, but when you're living in a RubyCocoa world, it's usually easier just to stay there. So here I take the XML document object from Ruby and ask it to turn itself into a Ruby string, and then I pass that into NSString, so that now I have an NSString containing the text for this XML document, and I just use NSString's writeToFile:atomically: method to write it out. So let's stop there and see what happens when we run this script.

So, montage. OK, it doesn't do anything, because I didn't give it a file name. I have a file in here called french4paris.pdf, so let's try passing that one. That was quick; this machine's a lot faster than the one I tested on. So what did we get? We got some images from the PDF; we can Quick Look them. So this is the multi-page PDF you may have seen in a previous session, and then we've got this XML file.

And, well, it doesn't look too nice there; maybe we can... this bundle, XML... Tidy, maybe? Yes, there we go. So as you can see, we saved these images out to some temporary location, and we've got an array of image paths, and each path is a string. We're going to use this from Quartz Composer.

So how am I doing on time? All right, I better speed this up. We are going to create a Quartz Composition now and this is always fun to do. Because you get to stop thinking about text and start thinking about rounded rectangles. So, let's take a look, here's where I've sort of set myself up.

So we're going to have input and then we're going to determine the path of the current image to display, then we're going to load that image from the path and then we're going to display the image. It's actually easier to start, well we'll start with the inputs first.

So the easiest way to set up an input is with an Input Splitter, so I'm going to call this one XML location, and I'm going to publish its input, except I can't do that until I set it to be a string. All right, input is published: XML location. And I'm going to have another one, which is the image duration. So here the idea is that we're going to do a slideshow, right, and so we want each page of this PDF to be displayed for some period of time.

So I'm going to take this as a number, and we'll call this guy image duration, and again, we're going to publish it the same way: image duration. OK, great. So those are the inputs. Now let's head on over to the end here, because it's a little bit easier for me to think about this from that side.

So we're going to create a Billboard, because Billboards are how you display things here, and let's take a look at the settings and parameters. I'm going to set the width to 2, so it fills up the whole area: the way the Quartz Composer coordinate space works is that it goes from 0 at the center to 1 at the edge, so -1 to 1 is the full screen, and a width of 2 gives you full-screen wide. That's what we're going to do with the Billboard. So the other thing, then, is we're going to work backwards from here.

Well, OK. I have the XML location, and we're going to load the XML using the XML Importer. Pretty simple; we just hook up the location to the location. All right, and then what are we going to do with that? Well, we need to get the element out of it, and the way to get elements out of this structure is using the Structure Index Member patch.

So the parsed XML is going to be a structure that we can pass in here and well what index do we want, well we want all of them but we want the first one for a few seconds and then after image duration we want the second one and then we want the third one. So let's come back to that in a second. Once we've gotten the image path out of this XML, we're going to want to load it.

So we use the Image Importer patch, and we just pass in the image location, and then the image we're going to pass on to the Billboard. So that's pretty simple. But what about this index, all right? Well, like I said, we're going to base this on the time and the duration, but let's work backwards here and start with a mathematical expression that we're going to use to determine the current one to show.

So in this case the expression I'm going to use is: well, I'm going to take the current patch time, whatever it is, 0 seconds, 10 seconds, however long has elapsed, and then divide that by the image duration, and we're going to just floor that so that I get an integer value back. So this is going to be 0 until the image duration has elapsed, and then it's going to be 1, and then it's going to be 2, right?

Real simple. And then, because the total duration of the movie might be longer than the number of images I have to show, I'm going to do this all modulo the image count, and I apologize if you thought there would be no math.
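That whole Math Expression patch boils down to one line, which is easy to sanity-check in plain Ruby:

```ruby
# The Math Expression patch in one line:
#   index = floor(patch_time / image_duration) mod image_count
def current_image_index(patch_time, image_duration, image_count)
  (patch_time / image_duration).floor % image_count
end

# Three pages shown for five seconds each:
[0.0, 4.9, 5.0, 10.0, 15.0].each do |t|
  puts "t=#{t}s -> page index #{current_image_index(t, 5.0, 3)}"
end
```

So at t = 0 through just under 5 seconds you get page 0, then page 1, then page 2, and at t = 15 the modulo wraps you back around to page 0.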

So QC has already detected all of these input variables that I gave it: the patch time comes from the patch time (speaking of time, I'm running out of it), the image duration comes from the input, and the image count we actually get from that structure, with Structure Count.

So we're going to take the same XML structure, pass it in, take out the count, stick it there, good to go. All right, so that is the entire composition, which should do everything we want: load the XML, and depending on the time, display a different image on the Billboard.

So let's see what we can do with that. Heading back here, what we're going to do is use qc2movie to generate the movie file, and so, going real quick here, we're going to use that montage.qtz I just created, and we're going to create a qc.mov reference movie. We're going to hard-code some width, height, and duration, and for the movie duration we're just going to say the page count times the image duration, so we don't do any wrapping; the modulo wasn't necessary, but I just did it for completeness.

Now we're going to run qc2movie using Ruby's system command. We pass in the XML location and the image duration as inputs (those are the ones we published), and then the standard qc2movie arguments are the composition path, the movie path, the dimensions you want to use, and the total duration of the movie. OK? And then, finally, once we have that qc.mov file, we're going to export it, and this is where we use QTKit.

I happen to know that the iPod export type is 'M4V ', that's m4v plus a trailing space, so I just use QTOSTypeForString to get that as an OSType. We then create an export attributes dictionary: QTMovieExport is true, and QTMovieExportType is the iPod export type. We give it a file name, and then it's really easy to export this movie.
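For the curious, an OSType is just the four characters packed big-endian into 32 bits, which is all QTOSTypeForString is doing with 'M4V ' (the trailing space pads the code out to four characters). Here's a plain-Ruby illustration of the packing, not the QTKit call itself:

```ruby
# An OSType is four 8-bit characters packed big-endian into a 32-bit
# integer. This mirrors what QTOSTypeForString does; it is an
# illustration, not the QTKit function.
def os_type(four_char_code)
  unless four_char_code.bytesize == 4
    raise ArgumentError, 'an OSType is exactly four characters'
  end
  four_char_code.bytes.inject(0) { |acc, byte| (acc << 8) | byte }
end

printf("'M4V ' -> 0x%08X\n", os_type('M4V '))
```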

You just load the composition movie, movie with file, and then you take that same movie and write it back to another file using those export attributes. Couldn't be simpler. And then, finally, what our script's going to do is open up QuickTime Player to show us the movie that we generated. So, fingers crossed, let's see what happens. Why don't I try saving the script first.

And let's see what happens. All right, generating the intermediate movie, exporting the final movie, so here you can see we got it. Yes, that was quick. Wow, this is a great machine; you guys all should get 16-core machines, I tell you. Watch it, all right, a few seconds here and, oh, I did, all right. Thanks, guys. You could have told me a little earlier, I think, but all right. So actually what I'm going to do here is just cheat, as I will show you the final one, just do... and, no, no such luck.

[ no speaking ]

All right, hopefully it'll switch, yes, hooray. So that's it: that's seventy-five lines of Ruby code to replicate almost completely what we've done in Podcast Producer with the montage workflow. You'll notice we didn't get the page curl transitions, or the intros or the exits, but if you need to do something like this, you can, and you don't need to use Podcast Producer.

So for the stuff I've been talking about, there are various sources of information. You can get that Quartz composition sample code, not the one that we saw, but basically what we based it off of, from developer.apple.com; it's also in the Developer folder on your computer. The QTKit Capture sample code is also very similar to what we showed you and is available on the website.

The deployment documentation for Ruby on Rails is available from support.apple.com, as well as the Server Admin help. I recommend reading the Xgrid Administration and High Performance Computing guide, which is a PDF available from support.apple.com, and finally, for RubyCocoa, there's an external website you can take a look at that's got a whole bunch of links to resources about how to use RubyCocoa.

I have never found a problem that wasn't more fun to solve with Ruby. So I love using Ruby. Any time I have a problem to solve, if I'm typing the same thing more than twice in the Terminal, even if it would have taken me three minutes to do in the Terminal, I'll spend ten minutes writing a Ruby script to do it, just so I don't have to do it that third time. That's just how I roll. I love using Ruby, so I recommend it. For more information, we've got our IT evangelist, Mark, and the training and certification website.