Information Technologies • 59:32
Automator, AppleScript Studio, and UNIX shell scripts can make managing your Mac OS X servers and clients a breeze. Learn how to create your own time-saving utilities for repetitive tasks using Automator, AppleScript Studio, Perl, shell scripts, and other languages accessible from the command line.
Speakers: Joel Rennich, Brian James, Nigel Kersten, Timothy Perfitt
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Scripting for admins. Thanks for coming. We got a lot of good stuff, a lot of good people, a lot of good projects to hopefully get you excited about doing some scripting things and going out there. So I'm Joel Rennich. I'm a dude from middle of nowhere, Illinois. I've done some, woo, Illinois!
Savoy! Woo! I've done some scripting in my time, but we got some other great people that we're bringing up today to talk about some very cool projects. Cool projects to get you motivated, cool projects that will show you a little bit how a little bit of work can go a long way. This is the third year we've done this session, and that's what we like to show, that scripting is very easy to get into, and hopefully we show you a bit of some sexy stuff that you can get excited about and hopefully bring back to where you were.
What you won't learn, though, is how to script. So if you came here to learn how to script, just leave. It might be easier at that point. We thought about how we could do it in 45 minutes to an hour, and we just couldn't come up with anything. So they wouldn't give us any more space. Anyway, so a little bit of recap. In 2004, we talked about AppleScript, shell, and Perl. There's the camel.
Then in 2005, we came back to this session, we talked about AppleScript, we talked about Shell, and we also had this thing, which I didn't realize was the Python logo. It looks kind of snaky, I guess is what they're going for. And Python, so a couple things added Python in the new hotness at the time, and good stuff.
Well, this year: AppleScript, Shell, Perl, Java, PHP, Python, and Expect. Expect didn't have a logo. It hasn't changed in a little while, and so we did the best we could. But so a lot more things coming together here. We're talking a lot today about how we get multiple scripting languages to work together. Dogs and cats living in harmony.
Python and Perl and camels and coffee cups all working together. So hopefully some good stuff. Sometimes we pick on Perl, but there's a lot of Perl coders out there, and they're usually bigger than I am. So today we'll pick on assembly coders, all right, because I don't think too many people code in assembly anymore. They're usually older and can't run as fast as I can.
So hopefully this session, we'll get to that. So this session is more fun than coding in assembly, all right, so that you get an idea of how to do some scripting, some cool stuff, using technologies that we already have in place. So we got three different kind of projects that we're talking about. The first one is going to be an automated self-service method for imaging a large group of machines, all right. This leverages Netboot, ASR, and some other stuff with a very, very cool web front end.
All right, and this is Brian James that's going to be doing this. It's something that we use internally at Apple. It's not really a product. It's not something we can give you. But certainly we can give you an idea of what you could do by gluing all these things together because none of the individual pieces are proprietary or secret or anything else like that.
Then we've got Nigel Kersten coming up. He's going to talk about log-in hook magic, all right. A lot of times when we talk about different administration topics, we talk about fixing things that are in there or customizing your environment or doing some stuff that's a little bit beyond what anybody ever thought you should be doing.
Sometimes good, sometimes bad. Those are typically done with login hooks. Nigel's going to talk about that. And then finally, Tim Perfitt's going to come up and talk about manipulating Boot Camp. So you can actually go ahead and clone your images and do some fun stuff with that. So with that aside, let me introduce Brian James, a dude from Cupertino, who's going to come up here and talk to you about some projects.
Hi, everybody. My name is Brian James, and I'm going to be talking to you about autorestore.apple.com. It's a web application that I wrote to simplify OS installation. So everyone out here has probably installed Mac OS X more times than they care to remember. You know doing so involves making choices, making a couple selections, you then wait, wait, wait, wait, wait. You hear our little jingle and then you have to answer more questions. It's a pretty manual process, but if you're only doing one machine, it's not a big deal.
If you're doing a couple machines every once in a while, it's not a big deal. But if you're like me and you're stuck doing it to hundreds of machines almost every other day, sticking CDs in drives gets a little bit old. So after doing that a couple times, I couldn't help but feel that There had to be an easier way to install an OS on a whole bunch of machines.
The very first optimization, the obvious one, is to use this tool called Apple Software Restore, or ASR. It's a command line tool that lets you efficiently clone one volume to another. I'm sure everybody's probably heard of it and used it. So the very first optimization I did was to install and configure once, set up one machine just how I wanted it, and create disk images.
You can do that using Disk Utility, and you can basically preserve one user. So I stuck all those disk images on a FireWire hard drive, and I would walk from machine to machine doing the restore. So it was a good optimization, but I still had to pick a new startup disk.
I still had to come back and reboot the machine. So it was still a manual process that involved me walking up to each machine. So I figured there had to be an even easier way. So enter scripting. Scripting is a great way to perform repetitive operations. I'm sure everybody knows that. So I made my situation a little bit better. I took all my ASR images, and I put them up on a server. And I wrote a simple script which mounted that server, performed the restore, blessed, and rebooted.
A script might look like this to do that. Basically, it would take the volume and the image, and it would mount an AFP volume, do the restore, bless the volume, and reboot. Pretty basic. So, it saved me lots of time. I could walk up to each machine, run a single script, and the whole process would be done. When I came back, the machine would be rebooted from the new image.
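The slide itself isn't captured in the transcript, but a minimal sketch of that kind of restore script, with placeholder server, share, and image names (exact asr and bless flags varied a bit between OS releases), might look like:

    #!/bin/bash
    # restore.sh <target-volume> <image-name> -- minimal ASR restore sketch
    TARGET_VOL="$1"      # e.g. "/Volumes/Macintosh HD"
    IMAGE="$2"           # e.g. lab_10.4.7.dmg

    # Mount the AFP share that holds the ASR images (placeholder credentials)
    mkdir -p /Volumes/images
    mount_afp "afp://restore:password@server.example.com/images" /Volumes/images

    # Block-copy the image onto the target, then bless it and reboot.
    # (In practice you'd re-resolve the target path after the erase, since the
    # restored volume can come back with a different name.)
    asr restore --source "/Volumes/images/$IMAGE" --target "$TARGET_VOL" --erase --noprompt
    bless --mount "$TARGET_VOL" --setBoot
    reboot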
So, better but still annoying, because I really prefer sitting in my office to the lab. So, SSH. Why not sit in my office and SSH to the machine and run the exact same script? So, I took the script, and I put it up on the server, and I stuck it in the document root, the web document root, and I turned on the web server. I would then SSH in to the remote machines, and I would download the script and run it that way, just like I was sitting at it.
So, I could now sit at my office. An example SSH session might look something like this. SSH in as a local user to the remote host, curl down the script from my server, save it in /tmp, make it executable, and run it just like I did before, passing volume and image arguments.
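Spelled out, the session he's describing would be roughly this (hostnames and the script name are placeholders):

    # On my machine: connect to the target as a local admin user
    ssh admin@target-mac.example.com

    # Then, on the remote machine: pull the script down, make it executable, run it
    # (the restore script itself needs root privileges, so use a root-capable account or sudo)
    curl -o /tmp/restore.sh http://server.example.com/restore.sh
    chmod +x /tmp/restore.sh
    /tmp/restore.sh "/Volumes/Macintosh HD" lab_10.4.7.dmg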
That made things better. One shot per machine, and they were done. I could do it remotely. But what about passwords? It was kind of annoying that I had to enter a password each time. It kind of prevented me from getting any more efficient. Well, lucky for me, I'm not the first one that's had this problem. So all of your Macs out there come with this really cool tool called Expect. Expect is a program that talks to other interactive programs according to a script. So when SSH expects you to be sitting there entering passwords, Expect can be there in your place.
Expect has a man page. You can read it right now. O'Reilly even wrote a book on the topic. Good news for everybody here is that the Expect script to talk to SSH is really simple. This is actually the only working piece of code, and it's the actual Expect script I use for my tool.
You pass it the host, user, password, and the command you want to run on the remote machine. It does the SSH, and it even checks for potential error cases, like wrong username and password. So don't try to write it down. A quick Google search for Expect and SSH will yield something very similar.
So given this Expect script, I can make my life even easier. I write another shell script that takes the same volume and image arguments, plus a host, user, and password. I wrap the three commands I would do interactively, the curl, making it executable, running it, and I pass those to the Expect script.
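As a sketch, that wrapper might look like the following; the helper name ssh_expect.exp and its argument order are made up for illustration, and his real version also handles the error cases:

    #!/bin/bash
    # remote_restore.sh <host> <user> <password> <target-volume> <image>
    HOST="$1"; USER="$2"; PASS="$3"; VOL="$4"; IMAGE="$5"

    # The three commands we would otherwise type interactively over SSH
    CMD="curl -o /tmp/restore.sh http://server.example.com/restore.sh && chmod +x /tmp/restore.sh && /tmp/restore.sh '$VOL' '$IMAGE'"

    # Hand host, user, password, and command to the Expect script,
    # which does the SSH and supplies the password for us
    ./ssh_expect.exp "$HOST" "$USER" "$PASS" "$CMD"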
With that script, I can now just write a simple Perl looping script. Assume that you're past arrays of machines and all the associated information. You can now run a single command, which can loop and fork off one of those processes each time. So, I'm now able to SSH into my server and run one command, and I can restore a whole bunch of machines simultaneously.
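His looping script is Perl and forks a child per machine; a shell analogue of the same idea, reusing the hypothetical remote_restore.sh wrapper sketched above, would be:

    #!/bin/bash
    # restore_many.sh -- kick off one restore per target in parallel, then wait.
    # machines.txt is a hypothetical file with one "host user password" per line.
    while read -r HOST USER PASS; do
        # </dev/null keeps the background job from eating the loop's input
        ./remote_restore.sh "$HOST" "$USER" "$PASS" "/Volumes/Macintosh HD" lab_10.4.7.dmg < /dev/null &
    done < machines.txt
    wait
    echo "All restores finished."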
So, I was really proud of myself, and I walked into my boss's office and I said, hey, I've totally solved the problem. I can now restore one machine, five, ten, a hundred in the time it basically takes to do one. And he said, oh, great, you know, let me see how it works, because if it's good enough, I'm going to fire all of you and just do it myself.
So, I opened up Terminal and I SSH'd into my server and I said, "Oh, you run this command, you pass it these 5, these 10, these 15, these 25 arguments," you know, and his eyes just kind of rolled back in his head. And he said, "That works for you, but I can't use this. I feel much better with text fields, buttons, pull-down menus." And really, this worked for me, but it would have been too complicated for anyone else to use.
It took way too many arguments and it required Terminal. Well, wouldn't it be great if it had a nice interface? So I sort of stood back and looked at what I'd done so far. So I'd written all these great scripts. I had totally solved the problem from the server standpoint, but I just had a tool that was really too hard to use.
So embracing scripting, I decided to keep what I had, keep this core part the same, and just figure out a nice way to put a clever interface on top of it. And what I really wanted was something easy enough for anyone to use so that you didn't have to read a manual. You could just go to this website, and it would be intuitive.
I wanted it to work from anywhere, and I wanted it to scale just as well as the server version did. And in addition, I needed a management interface because we were making tons of these ASR images, and I wanted other teams to be able to -- I didn't want to have to get involved if anyone else wanted to use it. So I built a little interface so that people could keep track of their ASR images.
So the complete solution that I ended up with has an interface in HTML and JavaScript. The server runs PHP, which kind of acts as the glue between the web and the shell commands. MySQL is used to store data about the images. And the SSH, Expect, and shell script pieces pretty much remain the same.
So this is sort of what I ended up with, or this is the current working version. And I'm going to take you through a basic usage scenario of how someone would use it to put an OS on a machine. So first thing you do is type a host name. In this case, I'm going to do bjames8, which is a machine in my office.
And press the Add Target button. It's going to send that stuff to the server. And the server's going to send me back a little piece of UI with an image selection, a volume, username, and password. The same fields, or the same arguments that the shell script took. You'll notice the red dot. And if you can read that text, the red dot says that that machine is sleeping. So you might ask, well, how does this website know that machine's sleeping? There's the red dot, sorry. So how do I know that? Pretty easy. PHP can easily execute shell scripts.
So web browser submits the data to the server. PHP gets it, executes a simple ping command. Back and forth, we have the state of the machine. In PHP, this code is really simple. You've got your host, you've got your command. You just executed three lines of code. You know the state.
Well, in addition to ping, we also need to know whether SSH is turned on, because that's a requirement for this tool. So also, there's this tool called nc. It's just like ping, but instead, you can ask about a specific port. SSH runs on port 22. So I'm going to run nc against port 22, and if I get a response, SSH is enabled.
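For example, something along these lines; the -z flag just tests whether the port accepts a connection without sending any data:

    # Exit status 0 means something is listening on port 22, so SSH is reachable
    if nc -z bjames8.example.com 22 >/dev/null 2>&1; then
        echo "SSH is enabled"
    else
        echo "SSH is not reachable"
    fi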
That kind of sucks that the machine's asleep, because the whole point of this tool is I don't want to walk up to those machines. Well, using the existing infrastructure that I have, why not try to wake the machine? Waking a machine that's connected to an Ethernet port is pretty easy.
If you do a Google search for wake on LAN, you'll get 50 Perl scripts that all send the magic packet to wake the machine. So why not offer the user a wake service? Easy enough? So, same process. Send the data back to the server, this time execute the wake on LAN script. So, what do you know? It responded: bjames8 is now awake and ready to be restored.
Next, image selection. So my database on my server keeps track of the URL to the image. It's completely distributed. The images can live on an NFS server, AFP server, wherever you want them. And my database just keeps track of that URL, along with a nice user-readable string, so you can decide which image you'd like.
So you pick your image, gonna pick a volume. Well, how do you know valid volumes if you're physically nowhere near the machine? So again, using the same infrastructure, let's give the user some UI to make volume selection. So I've got a Collect Volume Information button. To do this, we're actually gonna have to SSH to the machine using our expect script. So you need a user and a password. So fill in those fields, press the button. I'm gonna show the user a nice little Aqua progress bar, compliments of the Apple online store.
And so that process is running and the browser's waiting. Exact same procedure. Submit the values, execute the shell script. The shell script SSHs to the machine. And a script is run locally on the target machine to collect volumes. Volume information is easy to get: the mount command, the hdiutil command, System Profiler, there's any number of commands. That data's collected, back to the server, stuck in my database, and a response is sent.
Again, PHP, executing shell scripts. This is an example of how you would run that command on the remote machine. You have your host, user, password. The command curls the script down, makes it executable, runs it. We do a shell exec, passing those four variables to our Expect script. And that's really all it takes. So we return a pull-down menu with a nice list of volumes. The user can select their volume.
Basically, the sky's the limit. On that target machine, I can run whatever commands I want. So as another convenience, why not collect hardware information? So a System Profiler dump lets me know that bjames8 happens to be an iMac Core Duo. So you can do all sorts of fun stuff.
[Transcript missing]
And when it's done, the machine tells the server, I'm done. The next time the browser updates the status for bjames8, you know it's done. So a word about Ajax, which is asynchronous JavaScript and XML. I'm sure everybody's heard it, heard the hype, heard the buzz.
I didn't pick it for that reason. Way back when, this was a standard submit-based page. Ajax actually made my life a lot easier. And if you have web-based tools that you'd like to put nice interfaces on, Ajax is a great way to do that. It's really easy to learn. There are great, powerful frameworks: Prototype, Dojo.
They wrap some of the basic functionality and make it even more powerful. Most importantly for me was the fact that it was asynchronous, which means I could submit a request independently of any other. And all these machines, I might have a G3 900 megahertz iMac or a quad G5. They're going to respond at different times. So Ajax allowed me to basically thread everything starting at the browser.
And everything can respond independently. So my server code is really simple. It only needs to know how to perform those actions on one machine. And when all these web requests, web submits hit the server, these are all executed basically simultaneously. So JavaScript sort of made my, or Ajax made my life really easy.
That's auto restore, and that's basically what it looks like to restore one. And because of Ajax, The work it takes to restore one is the same amount of work it takes to restore as many as I want. Code on the server doesn't need to know anything about many. It knows just about one.
You may look at this and say, whoa, but I still have to fill in all these text fields, pull down menus. Well, because it's HTML, JavaScript, I can do all this cool UI stuff. Like, for instance, if you have host names that have numeric ranges, which is typical if you have a large group of test machines, you can enter a prefix followed by a numeric range, and it'll add all the numeric values in between.
That's one example. Another example is what I call a default row, and that basically means you can give an image value, a volume value, user password, and any targets that are added will automatically assume those values. So you can do all these tricky things to reduce the amount of typing. So that's my tool, and that's a basic recap. It does lots of other stuff because the basic infrastructure of submitting things from a browser, executing shell scripts, you can do all kinds of really cool stuff. Sorry.
So in closing, it was really important for me to pick the right language for the job. So I let shells do what shells do best, and I let JavaScript do what JavaScript does best. And I was guaranteed that, like, you know, shell scripts can execute scripts in other languages and vice versa.
So do the pieces in the languages that make sense, and you're guaranteed they're going to integrate well down the road. And just in closing, for me, this project started three and a half years ago and really isn't something I've done full time. It's just something that evolved over time based on need.
So if I tried to think of the problem, like, all at once, I probably would have never, it would have just overwhelmed me. But I started small, and I optimized the pieces that I could, and it just, my solution just progressed over time. And that's kind of, you know, people call it the Unix philosophy.
Small pieces solve one small problem at a time, and you chain those pieces together to solve more complex problems. So it's not that I read this before I started. It just ended up that way. And now, to talk about login hooks, here's Nigel Kersten.
Okay, so I'm a sysadmin at a university in Australia, as you might be able to tell. And I'm going to be talking about login hooks. Login hooks are really, really useful to kind of fill in the gaps that we might have from MCX management or whatever we want to put in our directory service. They're really useful for managing parts of our SOE, and I kind of feel that they don't get the publicity that they really deserve.
So what I'm going to cover is what login hooks are, how they work, how we implement them, and then I'm going to go through some of the login hooks that I use day to day. I was sort of thinking about doing a demo, but a login hook being an invisible script that runs when someone logs in at the login window isn't really compelling.
They're pretty simple. They're a script that runs whenever a user logs in at the login window. So remember, it doesn't happen over SSH. It doesn't happen over AFP, SMB, any other method. Whenever any user logs in at the login window, if a login hook has been defined for that computer, it'll run. They get executed after authentication is successful, but before the finder appears to the user.
So they're pretty much transparent as far as the user's concerned. And they run as the local root user. So this allows you to do local privileged operations. And you can also use sudo to run commands as the user. You can write them in any scripting language. I don't really care what you use. I'm pretty language agnostic. But whatever you're happiest with, write your login hooks in that.
Technical details are pretty minimal. The first argument passed to the script is the short name of the user who's logging in. If you're using bash, it's $1. If you're using Perl or PHP, it's the first item of the argv array. One thing a few people used to do was use return codes to deny login, but that kind of stopped working, I think, around 10.2 or something. So you can't actually do that. As we'll see, though, there's a few ways you can be a little bit more brutal and just kill the whole login window session.
I should mention that logout hooks exist. They work pretty much exactly the same way. The only difference is when a user logs out, the script runs, and when it finishes, then the login window appears for the next user. They're simple. If you can enter the terminal, type a couple of commands, you can write a login hook.
What are they good for? Well, a word that seems to have come into pretty much semi-official terminology is crappy apps. I think that's the Mac Enterprise crowd laughing in the crowd there. I don't know about you guys, but I have a lot of applications that are per-seat licensed. But they put all this crap in Application Support in the user's folder, in Preferences. And if that stuff's not there, the app doesn't run. So login hooks are pretty good for copying all that stuff in.
They're good for enforcing templates or preferences that might not fit into the MCX model. Maybe you don't have a directory service. You can work that way. They're good for enforcing stuff at log-in. And I find they're really useful for modifying the SOE for a certain conjunction of this kind of user on this kind of machine. Stuff that the MCX user group model doesn't always fit in nicely, too.
And finally, what I use them for quite a lot is redirecting parts of a network home directory to the local machine. I know there's a bunch of people out there who do it the other way as well: they run as a local user, they mount a network drive on login, and they might redirect parts of the network drive to the local account.
So, setting them up. You've got three main methods: two modern and one kind of legacy. We can set them up locally using the defaults domain, we can put them in the directory using MCX, or we can go and edit the /etc/ttys file.
Setting them up locally is pretty simple. We just use the defaults command to write to the loginwindow domain for the root user. And we just specify a login hook and the path to the script. So long as it's executable and it's a valid script, that'll run whenever a user logs in. Those of you doing image maintenance: it modifies the loginwindow plist in the root user's home folder and just puts that stuff in. So if you just need to edit it by hand, you can define them that way as well.
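Concretely, that's a one-liner; the script path here is just an example:

    # Registers a login hook for this machine; the target must be executable.
    # Run with sudo so it lands in the root user's loginwindow preferences.
    sudo defaults write com.apple.loginwindow LoginHook /Library/Management/loginhook.sh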
You can also put them in MCX, which is something I've only just sort of started doing recently. You can only define them for your computer lists, so not for your user groups. But one thing I was kind of thinking was that in the last couple of days, we found out about how we can now have nested groups for computer lists. And there's sort of no reason why we shouldn't be able to, not that I've tried this, but have multiple nested groups of computer lists defining different login hooks.
So you might have a generic lab login hook. You might have a specific one for these kinds of lab machines. You might have a specific one for the teacher's machine in the lab. So we should be able to get a hierarchy of login hooks going. And if we can't, and it's not working in 10.5, we should probably test that and file some bugs.
We define them in Workgroup Manager, in the Scripts tab, in Login Preferences. And as you see, we just fill in where the script is. It's actually pretty good. It'll warn you. It'll go, hey, this isn't executable. And we can also choose to execute the computer's login hook script. So that was the first method. So really, the first two methods that I'm showing aren't a case of either/or. You can define a local login hook. You can define a network login hook. And they will both run quite happily.
You probably can't read that from here, huh, maybe? There's a couple of caveats. If we're defining them in MCX, we need to tell the client machine that it's allowed to run MCX login scripts, which is pretty simple. We just write to the loginwindow domain again and say, yes, we want to enable login scripts.
Then we need to do one of two other things. We can either bind the client to the directory using trusted binding, so not just setting it up in directory access, but actually binding with a username and password. Or alternatively, we can modify the client so that it doesn't actually require that sort of level of trust.
So what we're doing here is we're changing the MCX script trust level so that it'll trust anything. Obviously, this isn't quite as secure. You might have clients connect to other directories. You might not want other stuff to run. So binding's probably the way to go, but depending on your environment, that might not be the case.
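From the description, the two client-side commands are along these lines, assuming the Tiger-era key names EnableMCXLoginScripts and MCXScriptTrust:

    # Allow MCX-defined login/logout scripts to run on this client at all
    sudo defaults write /Library/Preferences/com.apple.loginwindow EnableMCXLoginScripts -bool TRUE

    # Option two: lower the required trust level instead of using trusted binding
    sudo defaults write /Library/Preferences/com.apple.loginwindow MCXScriptTrust -string Anonymous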
Finally, we've got the old-school method. The first two methods, using defaults and using MCX, only work in 10.3 and 10.4, and 10.5 I assume. This method, however, works in 10.2, but it didn't work in 10.4.0 or 10.4.1, and then Apple fixed it again for 10.4.2. So I can't actually think of many good reasons you'd want to do it this way.
Maybe if you're supporting 10.2, you poor, poor, poor people, you might still have to do this. Edit /etc/ttys, change the console line, and just add the login hook. Again, we're just adding the path to the hook. If you need more info about doing things this way, have a look at this Apple Tech Info article.
So what kind of stuff do I do in Login Hooks? Well, this is what we're going to run through today. We want to work out a way to tell whether a user who's logging in is local, whether they're mobile, whether they're a network user. We want to set up an alert for local non-mobile users, because in my environment, we try to avoid them completely. And if someone's logging in that way, we want to try and work it out and go and migrate them to something more modern.
For the mobile users, we set up our SOE so that all user home directories are sitting on a secondary partition, because we like to be able to just blow away the whole system drive and not worry about backing up user data. The trouble with new mobile accounts is they get created in /Users. So I'm going to go through how we have a login hook that, on first login for a mobile account, moves the home directory to the second partition.
And I'm going to go through modifying Kerberos environment for specific users. That'll make a bit more sense when we get to it. For network users, the only thing I'm going to cover is redirecting library caches to the local hard drive, because we found that makes a really, really big difference for performance on network home directories. And we're going to set up some preferences for all of the user accounts.
So, you know, I wrote this in the last week or two, and it's already legacy code. This is how we tell whether someone's a local user or not. And if you guys were at Dave O'Rourke's talk yesterday, not that I'm unhappy about NetInfo going, you know, I wouldn't want to give that impression. So we're going to have to rewrite this script.
This will have to use DSCL. The syntax is the same. Open up both main pages, look at them. It'll be simple. So we just have a look. Does the user exist in the local domain? What now gets called DSLocal. If they do, we're setting the variable to say they're a local user.
If they're a local user, we go and see if they're a mobile user. Remember that mobile users are just a specific type of local user. So again, we're using niutil, which is legacy. We're reading their authentication authority. If you've ever looked at a mobile user record, they have LocalCachedUser in their authentication authority so that the OS can treat them differently. So we're just looking for that string. If they're a mobile user, the grep -c command's going to return 1. If they're not, it'll return 0. And so now we've worked out whether someone's local or mobile. If they're not local, they're a network user.
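The dscl version he's alluding to would be roughly this sketch:

    #!/bin/bash
    # $1 is the short name of the user logging in (login hooks get it as the first argument)
    LOCAL_USER=0
    MOBILE_USER=0

    # Does a record for this user exist in the local directory node?
    if dscl . -read /Users/"$1" RecordName >/dev/null 2>&1; then
        LOCAL_USER=1
        # Mobile accounts are local accounts whose AuthenticationAuthority
        # carries the LocalCachedUser marker; grep -c gives us 1 or 0
        MOBILE_USER=$(dscl . -read /Users/"$1" AuthenticationAuthority 2>/dev/null \
                      | grep -c "LocalCachedUser")
    fi
    # If LOCAL_USER is still 0 at this point, it's a network user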
So, setting up an alert for local, non-mobile logins. It's pretty simple. We're just setting up an email script so that we can alert the IT support staff: this is the user, this is the machine, this was the date, and we can go and fix it. You'll see that I've commented out a line there, killall loginwindow.
That's kind of the brutal approach I was talking about earlier. So as soon as you hit that in your script, the login hook stops, everything stops, you get a blue screen for five, ten seconds, and the login window appears. You might want to actually let the user know something's going on if you're going to go killing their login window sessions. And I'll mention it at the end, but have a look at Growl, have a look at iHook; both of them are really excellent products. Pop up some kind of dialogue and just go, no.
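The shape of that alert, as a sketch (the address is a placeholder, and mail assumes the box can actually send mail); the commented-out last line is the brutal option:

    # Local, non-mobile login: tell IT support who, where, and when
    echo "Local non-mobile login: $1 on $(hostname) at $(date)" \
        | mail -s "Local account alert" itsupport@example.edu

    # killall loginwindow    # uncomment to throw them straight back to the login window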
So this is really dense looking, but you can stop copying it down about now. It's pretty simple. If you go through it line by line, it mostly consists of logger statements. Logger's a really useful tool for scripting. If you guys aren't using it: just by default, pass it a string and it logs it to the system log. You can set the different log levels you want to log at, so you can redirect things to different log files. Incredibly useful. Again, we're using niutil, and we'll have to change that.
So what we're doing is we're going through mobile account. User logs in, prompts them if you've set it that way. Do you want to create a mobile account? It logs in. The login hook goes and looks and says, okay, so what's their home directory? It's set to users, whatever.
We don't actually want it to be there. So we set up the system log to say we're moving their home directory, make the new home directory, set the permissions. All the magic kind of happens here. And apparently this might be going too. But at least we'll have it for all of 10.4.
And we're going to have to be able to do this somehow. This is kind of an undocumented tool from Apple. If you Google around for it, you'll come across a couple of posts by people explaining what all the different commands do. What we're using it for here is that it'll go create a mobile user with $1, which, remember, is the username of the user logging in, with this home directory, with the mobile home's location, again $1.
You might wonder why I didn't just use niutil to modify the home directory in place. I kind of found I had really mixed results. Some applications would pick up the new home directory. Some applications would pick up the old one. It seemed a bit flaky. And this seemed to work a lot better: just overwrite the whole record with a new home directory. And then flush the lookupd cache, which is now another dead technology. But we're going to have to flush caches somehow. Bye.
So that's kind of pretty simple. We're just checking for the disk, checking whether the directory is already there. In my environment where I run this, I kind of have a few more dialogues. I use iHook. I sort of pop up a few things saying, hey, I'm moving the home directory here. I might have a staff member with 8 gig on the server. And they might be going, hey, so this login's taking a while because it'll sync it all down, copy it all to another partition.
And we might want to get alerted. So say we do wipe the system drive, the local mobile user account disappears. We want to make sure we don't create a blank user folder and then overwrite all 8 gig of their data with it. We run Pro Tools in a lab. There must be some of you who do this.
Apparently Digidesign have now fixed it; the bug we reported has been patched in a 7.0 update. It wouldn't launch if your Library/Preferences was on a network volume. So we kind of had this big problem deploying Pro Tools 7 in our labs, and we decided that we were just going to create a dedicated mobile account and forget about the whole thing.
Then we realized, you know, we're in this really nice, Kerberized OS X server environment. My users log in and they never put a password in again as far as file servers go. But they'd log in as Pro Tools and the students would go and connect to their home directory server and suddenly be, "Hey, why can't I get to my home folder?"
And we were thinking of just sort of getting rid of the whole Kerberos ticket for the Pro Tools user, but it ended up being easiest to just do this as part of our login hook. If the user's Pro Tools, we trash their Kerberos ticket as them. So remember, this is how you execute a command as the user. Remember, everything's running with the privileges of the local root user. If you actually want to execute stuff as the user, sudo -u $1, because the root user may have no privileges whatsoever over your network file shares.
So we use kdestroy, trash their Kerberos ticket, and then we rewrite their Kerberos preferences. So the Kerberos dialog that pops up, rather than saying Pro Tools where the username is, says "your student number here". So the students log in, they get a Kerberos ticket, it trashes the Kerberos ticket, rewrites their preferences, and then we have their home directory share point and their classwork resources set to automount on the desktop.
And as soon as that happens, up pops the Kerberos dialog, they put in their student number and their password, so they're logged into the local machine as the Pro Tools user, but they kind of get all the advantages of Kerberos. Everything from that point on, they are their student identity, which has worked pretty well for us. I'm not even actually sure if we'll go back to network homes for Pro Tools, seeing as every major point release seems to break it.
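The core of that step, as a sketch; the shared account's short name here is hypothetical, and the rewriting of the Kerberos preferences is site-specific so it's only hinted at:

    # Only for the shared Pro Tools login
    if [ "$1" = "protools" ]; then
        # Throw away the Kerberos ticket obtained as the lab account.
        # sudo -u runs the command as the logging-in user, not as root.
        sudo -u "$1" kdestroy
        # ...then rewrite the user's Kerberos preferences so the ticket
        # dialog defaults to a student number instead of the lab account.
    fi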
This makes a really big difference. If you're not doing this with network homes, you should try it out. If you can't go to mobile home directories, this gives you a really big boost in performance. Move your whole caches folder from the network onto the local machine. It makes such a difference.
It's all these tiny little reads and writes that are going on. Well, all we're doing is, again, using DSCL here, we're reading their home directory path. We're making a local directory, /Library/Caches/username, for their caches, getting rid of the one on the network, and making a symbolic link between them.
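As a sketch, assuming home directory paths without spaces:

    # $1 is the user logging in; read their home directory path from the directory service
    HOMEDIR=$(dscl /Search -read /Users/"$1" NFSHomeDirectory | awk '{print $2}')

    # Make a per-user caches folder on the local disk
    mkdir -p /Library/Caches/"$1"
    chown "$1" /Library/Caches/"$1"

    # Replace the caches folder in the network home with a symlink to the local one,
    # doing it as the user since root may have no rights on the network share
    sudo -u "$1" rm -rf "$HOMEDIR/Library/Caches"
    sudo -u "$1" ln -s /Library/Caches/"$1" "$HOMEDIR/Library/Caches"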
And then finally, the last thing that we really do is put in all the preferences for all those crappy apps. We keep a folder, /Library/Login Data, on all our machines. We have preferences in there, Application Support in there. And every time anyone logs in, it copies all the stuff from there into their home folder.
These tend to be pretty specific apps that people aren't often running on their own machines. Or if they are, we're managing that as part of our imaging system. Because this just copies over them, you might want to actually think about backing up this stuff. And this is where a logout hook would come into play.
You'd log in, back up any preferences that exist inside the Login Data folder that are in their home folder, move them somewhere, copy the login data ones in, and then on logout, restore their preferences. We tend to not do that, because we kind of like managing the preferences for all these apps. Well, maybe like is the wrong word.
So why do I use Bash? I don't love Bash. So it's installed on all Macs. It's the default shell. It's good enough for these kind of tasks. Login hooks, like if you've looked at this code, this isn't complicated stuff. This is the sort of stuff you'd launch the terminal, you'd type a couple of commands.
All you're doing is just whacking those together in a shell script, putting the shebang at the top so that you know what kind of script it is, and making it executable. This really isn't rocket science. So Joel said we weren't going to teach you how to script, but that's it. Hash, bang, /bin/bash, put some commands in, make it executable. You said we couldn't do it.
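Literally that: a complete, if not very useful, login hook looks like this, which you'd then chmod 700, leave owned by root, and register with the defaults command from earlier:

    #!/bin/bash
    # Runs as root at the login window; $1 is the short name of the user logging in
    logger -t loginhook "User $1 logged in at the login window"
    exit 0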
I wouldn't use it for everything. The language you choose is pretty unimportant, but I wouldn't sit there executing SQL queries with MySQL from the command line and parsing it with sed and grep and filtering it all out. If I was having to do SQL stuff, raw LDAP stuff, anything like that, I'd move to PHP, Perl, or Python. Pick something with a rich API in the stuff that you do. I use PHP a lot for that sort of command line scripting these days. We have a Perl proponent coming up soon. But whatever you use, whatever you're comfortable with, use it for login hooks.
So as I said, you only need a really basic knowledge. We're talking about the simplest commands-- copy, move, link, sudo, defaults. These are not complicated things. You'll have a couple of Apple-specific things you want to use. You're just gluing them all together, and you're making a script. If you're performing repetitive tasks to set up your environment, just use your login hooks. They really make things a lot simpler. I really think of them as they fill in the gaps that the MCX model doesn't cover. So they're always there in the background.
As far as final tips: remember your login hooks run as the local root user. If you start trying to do stuff on your network home directories, that user has no privileges there. Use sudo -u $1 to perform stuff as the user who's logging in. Use logger a lot.
Even use logger to redirect all output from your scripts when you're developing them. Just dump it all somewhere. It'll make your life a lot easier. Learn how to use just DSCL these days to retrieve information from your directory. Remember you can always use the search domain with DSCL if you don't know where a user exists.
It's really not that hard to use DSCL. Protect your scripts. This stuff runs as root. If you leave these scripts open with privileges for anyone to edit, they can just add stuff to it every time someone logs in that executes with root privileges. Give read-write execute privileges to root. Don't let anyone else touch it.
GUI Dialogues, I really suggest you guys look at iHook and Growl if that's what you want to do. Growl's really, really good for just notifications. They're nice and pretty. If you keep changing the styles, your students actually might keep reading them. We find we have to rotate styles, maybe every day, once a week. You need to move them around the screen, move them when they click on them.
iHook is really cool. If you want a little bit more interactivity, iHook is really, really simple. Have a look at how simple it is. You just echo text to it. It lets you create buttons, timers, progress bars. Our labs are moving to 24 hours in the next couple of months, so we're going to be using iHook so that we can tell users, hey, get out of the lab now, it's imaging. But that's kind of it. Use whatever language you're most comfortable with. Login hooks are really easy to do. And now I'm going to introduce Tim from SoCal.
All right, I'm Tim Perfitt. I'm an SE in Southern California, and I've made the career-limiting decision of talking about how to make getting Windows onto your Macintosh really easy. So I kind of have this sickness: whenever I see a new technology, I think not only how do I use it, but how do I script it, and then how do I deploy it? So when I saw Boot Camp, I thought, wow, I can play Microsoft Flight Simulator. And then I thought, how does that work in the Terminal?
And then I thought, how do I put it on a lab of 30 machines? So that's kind of what we're going to walk through with this. That's what I spend my nights doing, and I have a two-and-a-half-year-old. Okay, and we've already covered it a little bit, but I want to talk about what this session will not make you into when you leave. You will not become Larry Wall, the creator of Perl. You'll be able to buy his shirt; I have one kind of like that.
So don't expect to actually be him when you leave here. But what we will talk about is actually how to go ahead and look at the interesting pieces to integrate between. Perl and AppleScript Studio. So we'll talk about a couple of technologies. I use signal catching and PIDs to be able to communicate between AppleScript Studio and Perl.
AppleScript Studio doesn't have, how do you say it, a rich API for talking with shell scripts, so we have to do some contortions to do it. We're talking about some parsing options to be able to go through and get the output, as well as some wrapping in AppleScript Studio, so we can make a nice GUI app at the end of all this.
Alright, so let's first talk about why Perl. I know that Nigel has talked about how you should love all languages equally. Well, I chose Perl for some specific reasons. I want to parse things. It has great regular expressions. I don't do a ton of that, but I knew that going forward, if I'm going to run a command, I want to be able to pull the pieces out. It has signal catching and allows me to be able to, well, we'll talk more about signal catching later, but it's important that it's part of the language.
Modules, I don't necessarily like to code a lot. I like to just have a module that does stuff for me. I didn't know what modules I was going to use from the get-go, but I knew that Perl pretty much has a gazillion modules and I could do pretty much everything that I wanted to. And it's installed on Mac OS X, so I didn't want to have to bundle Perl inside an AppleScript Studio bundle, which would make my application pretty huge. So it's installed on every Mac OS X deployment. I don't really want to get into religious wars about whether what's better.
I don't want to get into this PHP or Ruby or whatever, but I think the one thing we can agree on is that every other scripting language is worse than Perl, so we should go ahead and... All right, so before we actually look at the script, let's talk about what happens when you install Boot Camp.
So when you have this nice journaled HFS+ volume, represented by this upstanding young citizen that's sitting there computing very nicely, and you go ahead and run BootCamp, and BootCamp partitions off your drive. It puts an MS-DOS partition on it, which is represented, and then NTFS is put on there, represented by this kind of shady looking, geeky sort of guy.
And then NTFS has this nice thing where it has to write down where it's located on the drive in its partition. And that's not just fun to know, but it is required to actually boot Windows, because otherwise it won't start up, and you get this black screen with a little flashing cursor.
And you have to do this whole master boot record junk. Because BootCamp puts the whole BIOS emulation, and so you've got to replicate that in order to boot up Windows. So that's what we've got to kind of replicate, and that's what we've got to kind of understand to be able to do this. So before we actually go ahead and script it, let's actually see how do we do it from the command line. How do we actually script it?
Or how do we actually run it? I'm going to do this really fast, I'm going to do it in 10 seconds. All right, I don't want to, it's not important. So the first thing that we do is we use diskutil to resize the volume, we back up the master boot record, we clone it.
Then we restore it by unzipping it, reversing some crazy hex numbers, putting it to NPS, and then marking the partition as bootable. Not important, I'm going to post the script, you can see it. But this is kind of the lines that if you wanted to go up, and instead of running that really difficult BootCamp application, you could run these easy commands on the command line, and it would be much easier.
In fact, Brian James probably could write a nice Ajax wrapper around it to be able to do these scripts. But instead, I chose Perl. Before we get that, I want to get to the next part. I want to give a warning for my mom. So she told me when I got here that anytime you're programming in Perl, make sure you use the -w in the shebang line. I'm being a little flippant, but it's important because Perl will ignore things like if you're using a variable that has not been defined yet, or not been populated yet, and it'll cause you some debugging nightmares.
So anytime you use this first line of your script that tells where the interpreter is, just put -w on there, and you'll make my mom happy. So that's important. All right, so let's get to the script. Let's see what actually happens. So the first part is when we actually run it. So this is the completed script. I called it WinClone. And if you do a Google search, you'll actually find it; there's a wrapper around it, as well as the Perl scripts contained within it.
And the first thing we want to do in the Perl script is go through and parse out the command line options. So the worst way to do this, I mean, I've seen this, and a lot of computer science students will do this, is they'll try and parse out all your different options, and what they'll do is they'll say, "Okay, the first thing is a space. We don't need that. Then a dash. Oh, it's an option." And then it gets the letter. It's a lot of work. If you don't want to do it all yourself, there's a getopt library function that allows you to do this in a loop.
But Perl, one of the cool things about all these modules is they're constantly trying to find easier ways to do it. So within this, I can specify -- so for this example, I have -v. So that's a common one for verboseness. And what this does is just basically say if the user has put -v on it, set this flag in the verbose variable. Okay, so it's very simple to do.
I don't have to loop around it. It's basically one line, and it's done. We can pass it strings as well. So this one example is the source partition. So with one line, I can basically take what the source partition is specified on the command line, populate this variable, and move on.
Now, the other important thing after this is to populate your defaults afterwards because it doesn't touch any of the variables that it doesn't know about, obviously. So you want to set that. Bootcamp's really nice because it always does it in kind of the same place, and things are pretty standard.
So you can see that it's kind of in the same place as the source partition. So that's kind of when we start processing it. So let's actually talk about when we have a GUI wrapper and how to integrate between it. So we have an AppleScript app, or an AppleScript Studio app.
That's the button I'm not supposed to press? Okay. AppleScript Studio app. There's another one on here I'm not supposed to press, so I won't do that either. It calls a Perl script, and this Perl script goes ahead and it spawns off all these command line utilities: diskutil, and I use a bunch of open source tools, ntfsclone, ntfslabel, those kinds of things.
And then those do funky things to your hard drive, like reformat or partitioning it and copying file systems. And that's great. I mean, that's like, you saw the lines of code. There's eight lines of code. So the Perl script is trivial. But then what happens if the user clicks cancel in the middle of this?
What do we do? Well, okay, do we stop the Perl script? That sounds like a good thing. We just kill out the Perl script. But what happens if it's in the middle of repartitioning your disk? Your disk is unmounted. It's moving data around. You don't really want to kill off the Perl script. So that's fine.
Let's leave it alone. Well, if we leave it alone, then it'll continue cloning your drive, and your CPU usage will go up to 100%, 200%, 400%. And then you'll clone it again, and you've got two clones.
So we want to be able to do some sanity with this. So be able to have the user feedback and do things that make sense. But as you can see, these arrows are shaded. That means there's no direct connection between them. We're not controlling them. AppleScript Studio has this kind of laissez-faire attitude towards dealing with command line scripts. So let's actually look how we do it. So the way I did it is with signal catching.
So let's talk a little bit about death and dying, which you didn't think today you'd call a little counseling session. So we have this Perl script, and the user clicks cancel. We wanted to have some way for the AppleScript Studio to communicate and say, OK, I want you to go away. I want you to die. I want the parent to kill off the child, the grandparents to kill off the parents. Let's have a little nasty death and dying type thing.
So the simple way to do it is just have the AppleScript Studio app send it a signal. Signals are all the rage in Unix. You send it a termination signal. You hit Control-C. You do an interrupt, those kind of things. But in Perl, normally what it does is it just dies.
But we want to catch that. And so these first two lines, basically, we have SIGTERM. It'll catch the termination signal, which is, I'm sorry, both of the termination signals as well as the interrupt signal, and it'll set those to call the catch function. So anytime, no matter what you're doing, when the signal gets caught, it'll go ahead and run the subroutine.
So term is usually sent when you do a kill command, but then if somebody runs this from a command line, they'll do a control C, that's the interrupt signal. I don't deal with HUP, so there is a way to get around my script, but if you're doing that, then you already know about it and you can do something else.
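His code is Perl, wiring %SIG entries to a handler; the same pattern in a shell sketch, just to make the idea concrete:

    #!/bin/bash
    # Run cleanup instead of dying mid-operation when we get TERM or INT
    cleanup() {
        echo "caught a signal, cleaning up" >&2
        # ...remove the PID file, kill off child processes, tidy up, etc...
        exit 1
    }
    trap cleanup TERM INT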
The next one is a really important piece, and this I thought was really interesting, because you don't want to have to keep track of which child scripts are currently running, right? The Perl script is running other scripts, and a good way to handle that is to put them in their own process group.
So Perl has this great command, setpgrp, set process group. I don't know the cool UNIX way to say that. But anyways, Joel's probably sitting there correcting me on all the different UNIX commands. What this does, and I love saying this because it sounds so cool at dinner parties:
When Perl spawns off another UNIX process, it will set the process group of that process to the process ID of the Perl script. Okay, I didn't even understand that. So anything it spawns off, let's say it spawns off diskutil, it'll set the process group to the process ID of the Perl script that's its parent.
And so that's kind of cool. Anything I spawn off, I can now easily keep track of, and I can kill off. So in my catch, there's two different ways. If I'm doing a critical operation, I have $CS, a critical section flag, and I don't do anything. I basically say, "Oh, okay, let's just set a global variable and move along." And whenever this critical operation is done, it can go ahead and clean up and figure out what to do.
If we're not in a critical section, it can go ahead and just kill off it. And I could keep track of all these little UNIX utilities, but the better way to do is now that since the process group is set, I can just send... I can send the kill signal to the negative of the process ID of the Perl script. Process ID of the Perl script is $$. If you put a negative sign in front of it, it makes it to the process group. And that's what's great about Perl. It's simple. I mean, you can just read it. It's a very legible language.
Alright, so now we actually want to be able to communicate back to the AppleScript. So the AppleScript communicates to us by sending us kill signals. We want to talk back to our parents. And the way we do that is I use PID files. So if you ever go into /var/run on your system, you'll see all these little PID files.
And all they are is it's saying that this daemon's running with this process ID. And a lot of Unix daemons use it to communicate. So we communicate with it. And you can see here it has the echo dollar sign dollar sign. Again, that's the process ID of the parent. And it shoves that into the file.
And I put it in /var/run because that's where it's supposed to be. And I called it winclone.pid. The cool thing about this now is that the AppleScript Studio app can just look in that directory and say, is it running? And if it is running, it can find out what the process ID is. So it doesn't have to go through and, like, look at the process table and figure out which process ID it is. It communicates that back just through this one file.
And so once we actually want to clean all this mess up, we basically do some operations, then we remove that PID file. And as soon as that PID file gets removed, the AppleScript Studio app goes, oh, it must be done, and acts on that. The nice thing about /var/run is that when you reboot, it clears it out.
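In shell terms, the handshake he's describing boils down to this sketch; the file name and location are from the talk, the surrounding logic is illustrative:

    # At startup: record our process ID where the GUI can find it
    echo $$ > /var/run/winclone.pid

    # ...do the long-running partitioning and cloning work here...

    # When we're finished (or cleaning up after a signal), remove the file;
    # the AppleScript Studio side polls for it and treats "gone" as "done"
    rm -f /var/run/winclone.pid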
So you don't have to worry about stale PID files. And so that's probably why, if you've ever used an app before that wasn't written as awesomely as my app, and it leaves this PID file when it's not supposed to, like somebody force quits it, and it doesn't work, and you reboot, then it all works great. Well, that's one of the reasons: it clears the PID files out. So now let's talk about wrapping the Perl script in AppleScript Studio. All right. So GarageBand is a great program, but it's not for everybody, as you can see.
[Transcript missing]
And finally, if we want to actually run this, we use the do shell script with administrator privileges. And this is really cool because if you do this, it will prompt the user for their username and password, and then it'll cache it for whatever is set in your /etc/authorization, which is usually, I believe, five minutes. So that means if you're running a series of, like I am, six or seven commands, the user puts their username and password in once, and then it'll go ahead and just be able to run these scripts as root, basically, or with administrator privileges.
All right, so let's look at how do we get out from Apple Script Studio. So now we have this Perl script running, and I talked about on the other end when the Perl script gets killed, it goes to this catch routine. What do we do in Apple Script Studio when the user clicks cancel? Well, we just send it the unix command kill, and we send it the dash term signal. Not necessarily needed, but it's kind of explicit, so it's nice to see. And we have to know the process ID of the Perl script.
Well, you could do all this sed and grep and awk of the process table, or you could remember that in /var/run we had that PID file, and all we need to do is find out what the process ID is in there. Of course, you want to wrap this code with "does the PID file exist". If the PID file exists, you just grab it; the backticks basically take the result and put it in this command, so we have kill -TERM some number. And we run it with administrative privileges because our Perl scripts can be running as root.
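Stripped of the AppleScript around it, the shell command that ends up being run is essentially:

    # Read the Perl script's PID out of the file and ask it, politely, to stop
    kill -TERM "$(cat /var/run/winclone.pid)"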
So this will do that, and then the Perl script won't necessarily die right away, because it might be in the middle of a critical operation that might take 30 seconds to 5 minutes to clean up. So what we want to do is set a state, set some global variable so we know that it's quitting.
Then we have to do something with the UI, because how many people have had an app where they have cancel, and they click cancel, and nothing happens? They click cancel, cancel, cancel, cancel. What's that, mail app? Okay, thanks. No comment, no comment. It's all fixed in Leopard. Everything's fixed in Leopard.
And then you want to set the title of the button, change it to show the state, and then make it disabled, so you don't sit there clicking on it and get your credit card charged 15 times because you pressed the button more than once. So give the user feedback. But then you're left with this state that's kind of stopping, kind of starting, and what we want to do is be able to go back and figure out when do we clean up. And the way we do that is an idle handler in AppleScript Studio. So there's a great handler in AppleScript Studio called idle that gets called whenever it's idle, whenever it's not doing anything. And you return at the end of it how many seconds before you want it to be called again, or, that's not technically true.
Don't call me before this many seconds have passed. So you can't guarantee it'll be exactly that amount of time, but it'll get called periodically. So it's great for cleanup tasks like this. So what we do is we ask System Events, does this file exist? This is our PID file. If the file exists, we know that our Perl script is still running.