
WWDC07 • Session 607

Development Methods for WebKit AJAX Applications

Content and Media • 49:08

Innovative developers today can build a variety of applications on the Web accessed from varied platforms. Join a Web 2.0 industry leader to see their design and development techniques for creating optimized AJAX applications for WebKit on Mac OS X. Learn the latest and best industry techniques for handling the ever-changing landscape of AJAX development.

Speakers: Jason Fields, Michael Agostino

Unlisted on Apple Developer site

Downloads from Apple

SD Video (150.7 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

[Jason Fields]

Hi, my name's Jason Fields. I'm Product Evangelist for Emerging Technology for Snap.com, a next-generation visual search experience and content distribution network. I'd like to first say thanks to George and his associates at Apple for inviting us here to speak on some of the challenges that we faced in regards to developing a cross-platform AJAX app. Most of you probably know about Snap from our content distribution platform, Snap Shots.

You've probably seen it on TechCrunch, many WordPress blogs, many Xanga sites, Guy Kawasaki's blog, Truemors, etcetera. For those of you that are not familiar with the product, it's a publisher product for bloggers and site owners. It's a one-line snippet of JavaScript code that you can add to your page, and for any hyperlink that you've created in your page, if you hover over it, you can see a preview, or more, of the destination website.

So, this is our first release. About six months ago we launched a product line called Snap Preview Anywhere. At that time it was just a high-resolution preview of the destination site. But at Web 2.0 we re-launched the product line, calling it Snap Shots, and utilizing web services and APIs, we now provide a whole host of other URL and content types that we recognize. This is IMDB. For an Amazon product, you get product information, reviews, what the price is.

This is our stock ticker. We recognize a whole slew of finance site URLs and stock ticker symbols and provide an interactive stock chart and a Reuters stock feed. This is a custom shot, actually. There are content providers that might want to provide additional nuggets of data within the Snap Shots platform, and this is our first commercial partnership with Reuters. Anyone that links to a financial ticker on the Reuters news site will get an RSS feed of stories relating to that company.

This is an example of our photo shot. We currently support PhotoBucket, Flickr, and Picasa photo streams. So anyone that links to any photo stream within a page can now interact directly with the photos in the bubble. Audio Shot: this is streaming audio. We collect ID3 data and display cover artwork and title and track information.

And an example of our Video Shot, which actually enables you to play a host of video formats, YouTube, Google Video, Revver, within the bubble itself as well. So what exactly is Snap Shots? Snap Shots is a product that we developed to create brand and viral awareness for Snap.com, which is our search experience.

It's actually the opposite of search. What it does is provide relevant nuggets of content to end users that are browsing the web in general, whether it's a website or a blog. For any of the links that people create in these blog posts to add supporting information to their posts, now you can hover over and see the destination sites, or get those nuggets of content directly in the Snap Shot itself.

Our product has experienced exponential growth. Within 7 months, we're probably around 13 to 14 million Snap Shot views or impressions per day. And we're now on over one million publisher sites, growing at a rate of about 20 percent per month as we launch new Snap Shots.

So what exactly is Snap.com? I might sound biased because I work there, but it's actually been documented in a number of blogs that it's the largest and fastest growing Web 2.0 search engine currently. It features some of the most innovative uses of AJAX, Prototype scripting, and progressive user interface design that currently exist online. Clearly, while we're not beating the big boys of search, as this chart shows, we are beating many other ones, and in a very short period of time, noted here, we've gone from an Alexa ranking of 18,000 to 1,800.

So the big question that we get asked is how we plan to compete with the big boys of search. And the answer was inspired by, let's see, there it is: these are some still frames of a commercial that I'm sure is near and dear to all of you here at the developer conference. It is to me. It's by one of my favorite directors.

These are some still frames of the video that illustrate our approach to competing in the big search space. Utilizing AJAX and web technologies clearly was the angle that we wanted to take. So let me put it all in perspective. Current search, outside of Snap, is kind of like DOS.

You invert it, you add some blue links, throw in some ads to monetize it, maybe slap a logo and a search box on top, and that's search today. So, being inspired by Apple and how they launched by smashing the status quo, we felt that in order to launch a product we had to go back in time and take a look at exactly what the motivations were. Do we want to copy what exists now, or do we want to invent something different? And so we looked at the desktop application experience.

Our approach was to take a look at the OS X Finder and tweak it, add a couple of little logos, and this is an example of our search experience as it is today: traditional text results on the left, and a high-resolution Snap Shot of the web result on the right-hand side. A little bit later in this presentation we'll actually do a live demo of both Snap and Snap Shots, so you can get a first-hand example.

So, clearly there were many challenges that we faced in developing this product. We're using cutting-edge design and cutting-edge technology. At the time that we launched Snap, which is going on a year and a half ago, JSON and Prototype were still very new to the scene, and I don't think they had been used in a commercial product, so getting those to work together, both on the progressive UI side and on the programming side, was extremely challenging. And I'd like to introduce Mike Agostino, our CTO, to get into more technical detail on some of those challenges. Thanks a lot.

( Applause )

Mike: Thank you, Jason. My name is Michael Agostino, I'm the CTO of Snap. And what I'm going to talk to you about today is our experience in building Snap.com and the Snap Shots product. So I'd like to maybe, at the beginning of the presentation, just take a little bit of a step back and talk about the search experience in general. As, sort of, Jason illustrated in somewhat of a funny fashion with the 1984 commercial, we don't believe that it's about the algorithm today. It's not about the ranking algorithm, it's really about redefining the user experience.

Early on in Snap's existence, we actually did some double-blind tests where we took the leading search engine's search results, placed them under another lesser search engine's logo, and vice versa. And we found that even with quote-unquote inferior search results, people still favored whatever their favorite search engine was. They really did not differentiate on the basis of search results, because all of the search engines at the time were the same in terms of the user experience.

The other thing that we discovered early on, and this is according to the Pew studies of the internet and its use, is that about 50 percent of all search missions actually fail. That is, people go to a search engine, they type something into the box, and they either don't find what they're looking for, they run out of time, or they just get frustrated and give up. That told us something else: there was really an opportunity here.

The third thing that we thought back on, and this goes back to the DOS analogy, is that a lot has changed in the world of search in terms of what machines are operating with, we all have broadband now, a lot has changed in 10 years, but the search experience of typing into an empty box and getting back blue links really hasn't.

I mean, we said hey, there's really an experience that we can leverage all that great hardware, the fact that everybody's broadband connected, that there's a lot more content in the world and do something interesting with all of that hardware. So we basically said there's a radical opportunity here to just revolutionize what's going on.

So this is what we came up with. Jason showed it to you earlier. On the left you see a traditional experience, on the right you see our experience. So going from 10 blue links to visual previews of results. Going from a page-loading paradigm, where you load pages, to one that's much more interactive. We call it channel surfing.

This is similar to how some people interact with their television where they just click, click, click, click, click and see different channels very quickly. We thought that same approach could be applied to search, allowing people to visualize the results in a very rapid fashion and decide for themselves what's good.

Allow them to more directly find the page they're looking for, rather than a process of going forward and then backwards and forwards and backwards; really, directly find what they're looking for faster. And of course, the technology that we used for this was AJAX. So now we're going to get into the main part of the conversation, which is really about the rules that we developed over time for optimizing a cross-platform AJAX application.

So these are rules that we developed just from going through the process. We hope they apply to other applications. They're not just search specific. Some of them are processes, as we've labeled here. Some are much more in depth about the technology. So the first process related rule that we really came up with was about getting everybody involved in the process.

So you can get down to the tail end of a product release, and we've got weeks or days until the thing has to go live, and there are concerns about performance. We had this issue. One thing that we learned was that you have to break out of the traditional way of thinking about performance being an engineering concern and only an engineering concern.

We actually got our product management involved, and it actually helped us quite a bit, because they were able to illustrate and help us make the appropriate trade-offs between what users believed was important and what maybe was in the product. The second thing we discovered was getting every developer's brain into the process and leveraging QA.

Another main rule that we set up, we call it the "no sacred cows" rule here, but it can probably be stated as the "sacred cows make great cheeseburgers" rule. And the thought behind that is you have to be willing to sacrifice any product feature at any time if performance is not adequate.

If you can't find a way to optimize that algorithm to get it to the point where people actually find the product usable, why have that feature in there? And the third thing we did was we actually published our performance every single day. For us, that was sending out an email that said how many search transactions met our SLA. It was helpful in two ways. The first way was it really engendered a lot of spirit.

You know, people saw the numbers getting better and better every single day, they got behind the efforts to improve performance. The second was when we dealt with setbacks people were able to sort of immediately scratch their head and go what was that code I checked in yesterday, and is there something wrong, maybe I should rethink that and go back and reexamine what they had done.

The second sort of theme that's more process related was really about the frame of mind with which we approached this problem. Many times people look at this as more of a backend problem. But we really wanted to be guided by user experience research. So one of the experts we called upon was Jakob Nielsen.

And he has a few rules that he's created over time that talk about how people interact with applications and what's the necessary time for them to feel satisfied and to continue to use the product. So we developed what we call the .2-to-10 rule. The .2 is: for an application to feel highly interactive, small things have to happen in less than 0.2 seconds.

If they happen in 0.1 seconds, that's nice, but people don't really see any difference between 0.1 and 0.2 for the most part. If it happens in 0.3, people start to feel a little bad. Similarly, if you have something that runs a little bit longer, it has to happen in about 2 seconds. Now, if it happens in one and a half seconds, that's good; people don't differentiate a lot.

If it happens in more than two seconds, people start to see friction and their mind wanders and maybe they start thinking about, you know, maybe what am I going to have for dinner tonight and, you know, what time do I have to go home, or you know, what's that other bug I'm thinking about. So we've really established a couple of rules here.

One was that all of our search results would have to come back in one and a half seconds and that our first preview would have to be loaded within two seconds. So that is a total time of two seconds from the time people submit the query: that first preview on the right that Jason showed you in the intro section has to be down in two seconds.

Now that's a pretty big challenge, even with broadband. Now this wasn't a problem for us, but if your application does happen to take more than 10 seconds, people are going to assume it's broken unless you have some kind of progress indicator, and generally, in today's world, people are still frustrated if things are taking 10 seconds.

Another thing that we looked at is really taking a look at the end-to-end performance. So many web applications that you're probably familiar with manage performance on a data center perspective. That is how long did the servers take while computing the response? We didn't think that was really a good way to look at it. We thought the more true way to look at it was to look at it from the end user's perspective.

So from the time that they initiated an action until their browser, in this case, was ready for the next action. That includes all the wait time on the server side, the download of any data associated with it, and any active elements such as JavaScript executed; all of that had to be complete. And that's how we looked at the world.

So we ended up monitoring every significant end user action in Snap.com so we could find out how we were doing. This is really easy to do in AJAX, but you could also do it without AJAX. There are some other techniques that you can use to get this information.
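A sketch of that kind of end-to-end instrumentation in JavaScript (helper names are illustrative, not Snap's actual code): time from the moment the user acts to the moment the UI is ready again, then report that number.

```javascript
// Minimal sketch of end-to-end timing; names are hypothetical.
function createTimer() {
  const marks = {};
  return {
    start(name) { marks[name] = Date.now(); },
    // Returns elapsed milliseconds; a real app would beacon this
    // value to a logging endpoint for aggregation.
    stop(name) {
      const elapsed = Date.now() - marks[name];
      delete marks[name];
      return elapsed;
    },
  };
}

// Wrap the entire interaction: request, parse, render, re-attach handlers.
const timer = createTimer();
timer.start('search');
// ... issue the XHR, parse the response, update the DOM ...
const elapsedMs = timer.stop('search');
// elapsedMs reflects what the user actually waited, not just server time.
```

The key point is where the `stop` call goes: after the DOM is updated and handlers are attached, not after the HTTP response arrives.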

To talk about this is Brian.

Measuring rendering, rather than just the speed at which things are returned by the servers from the backend, gives a more accurate idea of what users are actually experiencing, whether they're on any type of platform or a dial-up connection or that kind of thing. We track the performance numbers on the front end instead of what the servers tell us, because that way we're able to gauge what the users are actually experiencing. What we discovered from this is that there were a few key areas that we had to improve.

For example, we know what portion of our audience is using dial up and we had to speed up the experience for those users. As well, we knew what browsers and platforms, what operating systems people were using. And with all that information we knew where we could take advantage of some of the latest technologies so we could give a better user experience, more Web 2.0 type of interaction.

Okay. So what I'm going to show you, probably for the first time outside of Snap, is actually our first performance graph. We instrumented all of our code, and you can see from this line here, the top blue line is actually our 80th percentile, so that was our SLA, so to speak: we wanted 80 percent of all of our searches to actually execute within 1.5 seconds. The solid red line there is the target, or the goal, so to speak. And what we found was that on many days, in the days leading up to when we launched our optimization effort, we were 2 seconds too slow.

So this was a real aw-shucks moment for us, and we knew it. We really had to dig deep. We knew that we had performance problems and we wanted to address them, but we didn't realize the full extent. What this is showing is that when you really do measure end-to-end, up to 50 percent of our search experiences were inadequate in terms of performance.

So one of the first things we did was really build a user interaction timeline. This is a simplified version, but what it's meant to illustrate is a typical user session: you see the user typing in a query, getting images back, starting to interact with the various listings. Now, the reason why this is important is because we wanted to optimize based on the end user's expectations of how things should function. So this is the first thing we built.

The next thing we did is we actually went one level deeper, which was to look at the machine perspective. So if you think of the green as being the end user's machine and the blue being our website, there are a number of internal steps to each one of those arrows. In this case, the green, again, is the end user's machine, the blue is the server, and the white arrows between them are actually packets flowing back and forth. So what this graph is showing you is that up to 8 packets can be sent for even the simplest request.

So to download a one-byte transparent GIF, you can send 8 packets back and forth. It turns out that if you have users across the country or around the world, as we do, that can take quite a long time; all of those round trips add up. So what we're trying to illustrate here is that HTTP transactions are a significant source of slowness in a typical AJAX-based application. The first step, if you really want to optimize your application, is to count the HTTP transactions.

So a simple test you can do: clear your cache, load your website up, and really see what's going on. It sounds simple, very simple-minded in some ways, but that's pretty much the most basic way to start. Maybe the next-level step, if you want to look at that machine view, is to load up your favorite network analyzer and actually look at all those packets that are going back and forth, seeing which things are being re-downloaded when you re-access them.

Count the connections that are being created. Really start to form a much more detailed timeline. One thing that we found that we could optimize after we did this was to really start to use a palette. This is an optimization technique that tries to combine images into one large image.

So Snap, because it's very rich and we're trying to emulate a desktop application, actually uses, as it says here, 130 independent images. It sounds like a lot. It sounds crazy. It sounds wasteful. But if you really want to have the richness of the experience that we're going after, you really do need lots of images. They may not all be used at once. They may be used in different sections of the site. But they eventually all have to be there. We have 59 on our search engine results page.

So if you load each of those, if you think about the last slide, each of them could potentially be 8 packets going back and forth. So we created a palette. These palettes are essentially a very large concatenation of images, all in one giant image file. They're sorted by color and then by height, and then we use CSS to actually extract them.

So what this allowed us to do is go from 130 independent images that could have been loaded down to a small number of palettes. We had one for orange, one for yellow, and one for some other colors. We then wrote some tools that allowed our web developers to interact and write code as though it were a single image, rather than having to understand what was going on with the CSS and such.

  • We were looking at front-end issues, analyzing what the total download speed for the search results page was. One of the things we noticed was there was a huge number of graphics, and every graphic had to be individually fetched with HTTP headers. So one of the ways we increased the load speed was to take similar graphics, we basically matched graphics up by the color palettes that they used, and organize them into a huge mosaic of images that were pieced together into one huge image. Then the page could load that one image, and we wrote some scripts that automatically generated the CSS to cut up the palette images into the individual pieces and place them where they belong on the page. And I think that the speed improvement was half a second just from that one.
  • So as Tim said, we got about half a second out of just that one transformation. And if you think about it, we're trying to get to 1.5 seconds total time, so chopping out half a second doesn't sound like much, but it's pretty significant. So taking that as sort of our mantra, eliminating these transactions, we kept going. We looked for other places. Here are some rules of thumb that we came up with that might be applicable to your AJAX applications. Have your style sheets inline. Pretty simple to do.
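The extraction the talk describes is done with plain CSS background positioning. A minimal sketch of the idea follows; the file name, class names, and pixel offsets are all illustrative, not Snap's actual palette:

```css
/* All icons live in one palette image; each class crops out its region
   by offsetting the shared background. One HTTP request serves them all. */
.icon { background-image: url("palette-orange.png"); width: 16px; height: 16px; }
.icon-search { background-position: 0 0; }
.icon-star   { background-position: -16px 0; }
.icon-close  { background-position: -32px 0; }
```

This is the technique now commonly called CSS sprites; the tooling the speakers mention would generate these offset rules from the concatenated image automatically.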

Move your images inline. Not many people know about this, but it's kind of a wacky thing you can do where you can actually place the source data of the image right there in the HTML. Obviously this eliminates one additional request to your server. This actually was invented, I believe, by Larry Masinter of Xerox PARC fame. It doesn't work in IE. Maybe that's not a big concern. But if you want to know more, RFC 2397 has all the details on it.
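As a sketch, an inline image per RFC 2397 looks like this in HTML; the base64 payload here is the well-known one-pixel transparent GIF, not anything from Snap's pages:

```html
<!-- The image bytes are base64-encoded directly into the src attribute,
     so no extra HTTP request is made for this image. -->
<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" alt="">
```

The trade-off is that the data can't be cached separately from the page, so it fits small, rarely-changing images best.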

A really big win that we got was actually using persistent connections. So if you think back to the chart that I showed you before, where there are those 8 packets going back and forth for every single request, if you use persistent connections, it can really lower that quite a bit. Here's an illustration of that. On the top we have traditional non-persistent connections, and on the bottom you can see persistent connections: there are only two packets going back and forth, because the connection is maintained from request to request.
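In Apache, persistent connections are controlled by the keep-alive directives. A minimal sketch (the values shown are illustrative defaults, not Snap's configuration):

```apacheconf
# One TCP connection serves many requests, avoiding repeated handshakes.
KeepAlive On
# How many requests one connection may carry before being closed.
MaxKeepAliveRequests 100
# Seconds to hold an idle connection open for the next request.
KeepAliveTimeout 15
```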

So after we did that, this is about where we were. This is a little bit more complicated graph; it has multiple different bands in there showing where each one fits, but you can see with the orange line, we're just about at our target of 1.5 seconds. The green line, which was the 80th percentile, was still quite high. We had about another half a second or so to trim out. So we kept going.

The next hint that we came up with was really about exploring parallelism. Using persistent connections, we actually started to use multiple different hosts. We broke out all of our resources across the site, the palettes, the CSS, other things, and placed them under different hosts. So we'd have, of course, www.Snap.com, but we'd have other hosts like i.Snap.com for static images.

Now, the trade-off when you do this: you obviously get more persistent connections, but you use more DNS time. We did some experiments, and we've read in other places, that the best case seems to be two to three hosts; it might be four for some particularly rich applications. This sort of shows visually what it looks like.

If you do everything in series, if you just use a single host, obviously everything's going to go one after another. If you use two hosts, you get a little bit better. It effectively makes your site feel a lot more responsive if you get all the way to four. The thing you want to be careful of, of course, is starting to strain the end user's network connection.
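In markup, this host splitting just means referencing static assets from the extra hosts. A sketch using the i.Snap.com host named above (the paths and a hypothetical c.Snap.com host are made up for illustration):

```html
<!-- Assets spread across hosts so the browser opens parallel
     persistent connections instead of queueing on one host. -->
<link rel="stylesheet" href="http://c.snap.com/css/main.css">
<img src="http://i.snap.com/palettes/orange.png" alt="">
```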

The other thing that we did was really look at our cacheability. We adopted a rule that we're still using today, which is to consider all static files immutable. That is, they don't ever change; if they change, you change the version number. We did this on images, we did it on our CSS, and then we set our Expires headers so that those would be cached forever, out to about a year.
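With Apache's mod_expires, the "immutable, cached for a year" policy can be sketched like this; the exact type list and lifetime are illustrative, not Snap's configuration:

```apacheconf
# Versioned static files never change at a given URL,
# so browsers and CDNs may cache them for a full year.
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType image/gif "access plus 1 year"
ExpiresByType text/css "access plus 1 year"
ExpiresByType application/x-javascript "access plus 1 year"
```

The version number lives in the filename (e.g. `main.3.css`), so shipping a change means shipping a new URL rather than invalidating a cache.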

The other thing: we were using a CDN before this, but you should use a CDN to really push that out. One gotcha that we did find is that the CDN doesn't always respect Expires headers. There are bugs in CDNs. And it seems funny that there would be, after five plus years, maybe seven years, of CDNs being mainstream, but there are still various bugs. The other thing that we did discover is to be aware of your use of cookies with your static elements.

You probably want to move your static files to another domain so that cookies don't interfere. This has two benefits. One, your cache hit rate will be preserved at the CDN; but secondarily, if you have a very long cookie that's being sent, you'll preserve upstream bandwidth. And upstream bandwidth, as many of you know, is much more precious than downstream bandwidth. If a user has to send a 1K cookie all the way across the internet, that may break a request that's normally one packet into multiple packets, which means that the overall application is less responsive.

The next rule that we came up with is: squeeze bytes. Now, this sounds a little bit 1995-ish, you know, Web 1.0, but it still matters. Packets are small on the wild internet, and congestion is everywhere. Right? Your end user's neighbor might be running BitTorrent out of his house, and it's all going to conflict, and if you can make your site just a little bit smaller, you might just get lucky.

So one thing we did is we actually went through and audited all of our old images and style sheets. This is particularly important for people who are developing sites in a rapid fashion; it's very easy for things to get left behind. The cruft builds up very quickly and is very easily forgotten. The other thing you can consider is using a compressing obfuscator. We did configure our web server to use GZIP for all of the non-image elements. And that's how you do that right there, at least in Apache.
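The slide isn't reproduced in this transcript, but with Apache's mod_deflate a sketch of compressing only the non-image types might look like this (the exact type list is illustrative):

```apacheconf
# Compress text responses on the fly; images are skipped because
# GIF/PNG/JPEG are already compressed formats.
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript
```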

The last thing we discovered is really using a format called JSON. Now, it's not related to Jason Fields, even though he'd like to think it is. It stands for JavaScript Object Notation. This is a way of sending your objects down as sort of precompiled JavaScript objects, so all that has to be done is to call the eval function. Now, it turns out the bytes are about the same between XML and JSON, typically, but the parsing results are quite different. What we found is that on a development machine, there was about a 65 millisecond difference.

Now, that doesn't sound like a lot, but another thing we found out is that the typical end user's machine is 3 to 5 times slower than our developers' machines, so you're starting to talk about a lot of time: 300 milliseconds. And trimming out 300 milliseconds in the middle of a transaction that's supposed to last 1.5 seconds is pretty significant.

So this is a little visual of what that looks like. On the left you'll see what our XML looks like. On the right you can see a snippet of the JSON. And if you look at that, you can see that it's a JavaScript hash. Right? Each of the elements in XML has now become a key.
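A minimal sketch of the difference (the field names are made up, not Snap's payload; in 2007 the idiom was eval(), while modern code uses the safe equivalent JSON.parse):

```javascript
// The same payload as a JavaScript literal instead of XML:
// each XML element has become a key in a plain object.
const response = '{"results":[{"title":"WebKit","url":"http://webkit.org/"}]}';

// "Parsing" is a single call -- no XML DOM tree to build and walk.
const data = JSON.parse(response);
const first = data.results[0];
// first.title and first.url are immediately usable JavaScript values.
```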

Another thing we did to make the site even more responsive was to enable pre-fetching. So if you have that really detailed timeline that we talked about in the beginning, look through that entire timeline and assess whether there are places where certain actions appear to require a previous action to end before they can start. And really challenge yourself: is that true? What we found is that in Snap there were a lot of places where we had end-to-start dependencies, but they weren't really necessary.

So we put in a pre-fetch on the first preview. We also started pre-fetching the images as people go through the listings, and you'll see this when we do the demo. The thing to be careful of is that if you arbitrarily pre-fetch everywhere, you're just going to waste resources on your own servers, of course, and you're going to tie up resources on your end user's machine.

So we came up with a rule of thumb, which is: if it's 80 percent likely, go ahead and do it. Your end users are probably grateful for the fact that your application's smooth and responsive, and your business model's probably not going to break down because you're using 20 percent more servers.
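The 80-percent rule can be sketched as a tiny gate; the function name, URLs, and likelihood numbers below are all illustrative, not Snap's code:

```javascript
// Only pre-fetch when the user is likely enough to need the resource.
function maybePrefetch(url, likelihood, threshold) {
  if (likelihood < (threshold === undefined ? 0.8 : threshold)) {
    return false;  // too unlikely: don't tie up servers or the user's machine
  }
  // In a browser this would warm the cache, e.g.: new Image().src = url;
  return true;
}

// The next preview in a listing is almost always viewed: fetch it early.
const fetchNext = maybePrefetch('http://i.snap.com/previews/next.jpg', 0.9);
// A result far down the page is rarely reached: skip it.
const fetchFar = maybePrefetch('http://i.snap.com/previews/far.jpg', 0.3);
```

The likelihood itself would come from usage data, like the interaction timeline described earlier.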

One of the things we were looking at while we were testing the Snap.com website was how do we test performance, end-to-end performance, from the user's perspective all the way to the backend of our system. The complexity of the system is that it is highly integrated and comprises multiple segments, so one thing we had to look at was how to break that system down. And what we did was we took the entire system and broke it into pieces.

How do these transactions work from one segment of the system, and how does it communicate with another piece of the system? We also looked at transfers across the different segments, and we tried to eliminate as many hops between the systems as possible. While testing performance, we also came across a very big issue: recursive DIVs.

We have a lot of DIVs across our site, and we noticed that when we have recursive DIVs, it eats up a lot of the browser's memory, especially in Firefox and in IE, which causes a lot of users' computers to slow down. Most users don't have computers that are extremely fast with high volumes of memory.

So we had to test across multiple platforms, and we realized that over time the recursive DIVs were basically draining memory, and we were noticing a lot of memory leaks across the system. That's something that we had to pay a lot of attention to, and we had to redesign our site according to how we used DIVs across the site, basically.

Another thing that we looked at was image caching, because we wanted to eliminate as many hops across the system to the users as possible. We needed to cache as much content as possible. That included JavaScript files, CSS files, and images, so we created a lot of palettes to cache all of the images, and that way we cached static transfers to the user's system, thereby improving overall performance.

So one thing that Mike highlighted there was really profiling on every major browser. This is easy for us at Snap; we pretty much have every platform known to man in a developer's hands or in a QA's hands or in a web designer's hands. We did find some huge performance differences in some of the functions that are essential to AJAX-style programming. As Mike highlighted, we did use DIVs extensively, and we did find a couple of browsers that were not so good at that. It turns out Safari is okay.

Here are some performance graphs. We'll leave the actual functions out of the equation here, but what you can see is that the blue line and the pinkish line show Firefox and IE relative to Safari, so they're taking anywhere from two to four times as long to execute some very common functions.

And again, this doesn't sound like a lot, but when you consider that the average end user's computer is three to five times slower than our developer machines, it all really adds up. So we actually went through and, in certain cases, wrote platform-dependent code where we had to.
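Selecting a platform-dependent code path usually starts with classifying the browser. A minimal sketch, with the caveat that feature detection is generally preferable to user-agent sniffing where it's possible (the function and the dispatch names in the comment are illustrative, not Snap's code):

```javascript
// Classify a user-agent string into a coarse browser family so that
// platform-dependent code paths can be selected. Order matters: a
// Safari UA also contains "AppleWebKit", so the more specific tokens
// (MSIE, Firefox) are checked first.
function browserFamily(ua) {
  if (ua.indexOf('MSIE') !== -1) return 'ie';
  if (ua.indexOf('Firefox') !== -1) return 'firefox';
  if (ua.indexOf('AppleWebKit') !== -1) return 'webkit';
  return 'other';
}

// In the page you would dispatch on the live value, e.g.:
//   var family = browserFamily(navigator.userAgent);
//   var setText = (family === 'ie') ? setTextForIE : setTextGeneric;
```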

So after we went through all of this, this is where our performance ended up, with the green line finally moved below our 1.5-second target line. We've gone from 25 percent of our end users having a successful experience to 81 percent. It took us a lot of work, but in the end we were very successful.

Now what I'd like to talk about is how you can apply these same lessons, how we applied these same lessons, to Snap Shots. I want to demonstrate to you that these are not just search-specific optimizations; they're cross-platform, cross-application optimizations that you can apply in your own AJAX applications.

So all the same technical issues that we saw in Snap remained in Snap Shots. In fact, some of them were even more difficult. In Snap, we said we could respond within 1.5 seconds. But with Snap Shots, as Jason showed, it's actually triggered by a mouseover. So if someone moves from one link on the page to another, they don't want to wait a long time. They want that to be a very smooth, flowing experience. So it had to be really instantaneously responsive, not just fast.
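One way to make mouseover previews feel instantaneous is to memoize the lookup, so a second hover over the same link never waits on the network again. A hedged sketch of the caching wrapper (the names are illustrative, and the real loader would be an asynchronous XMLHttpRequest; it's injected here so the caching logic is testable on its own):

```javascript
// Wrap a loader so each URL is fetched at most once; repeat hovers
// hit the cache instead of triggering another request.
function memoizeLoader(load) {
  var cache = {};
  return function (url) {
    if (!(url in cache)) cache[url] = load(url);
    return cache[url];
  };
}

// Usage with a stand-in loader:
var calls = 0;
var getPreview = memoizeLoader(function (url) {
  calls++;
  return 'preview-of:' + url;   // stand-in for the fetched markup
});
getPreview('http://example.com/');   // loads
getPreview('http://example.com/');   // served from cache
// calls === 1
```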

The other thing we have with Snap Shots is that it's, again, a very visually rich experience, so there were lots of graphics in there, and we had to do interesting things. And then, even more important than in Snap, was that it perform identically across all the different browsers and across all platforms, because the people that were going to use this were going to use it on their blogs, right? They can't choose what browser their end users are using.

In search we could say, well, maybe there's some smaller browser that has 1 or 2 percent market share and maybe it's really not part of our business model to serve that user. But with Snap Shots, since we wanted to allow the publisher to really control that and have a good experience for their users, we had to be much more cross-platform. Like Jason said, we conceived it offsite in October and launched it six weeks later in November.

We've been growing really rapidly. I think we're actually available in ten different languages now, and we're expanding to 30. Actually, if anyone here speaks a language other than English, Spanish, German, Russian, Portuguese, or the two different forms of Chinese, and I'm probably forgetting one, please come up and see me afterward and we can talk about how you can get Snap Shots in your language.

So here are the specific issues that we faced. Scalability: we're rapidly expanding our search footprint, but Snap Shots is growing even faster. We're up over 1 million sites that have it active today. It's used 14 million times per day. We're serving about 160 shots per second, and we peak at 275.

It has to work, like I said, across any website. And we have many more content types than we ever did in the search experience. We have 10 different shots and many different content providers. So what we did in optimizing Snap Shots was really to build a timeline specific to the type of shot. The Wikipedia Shot had a different timeline than, say, a YouTube Shot. So understand what the issues are in each of those; understand where they can break down, where they can fail, where there might be end-to-start dependencies that we need to eliminate.

So what are the things we did? Well, they're the same things we did for Snap. A single palette for all the images. We used persistent connections everywhere, which is especially important when you want instantaneous response from link to link. We streamlined our JavaScript. And finally, we used multiple different IFRAMEs to allow parallelism.

So we took a look at some of our requests that could take a little bit longer and actually placed an IFRAME in there to allow the bubble itself to be responsive while the transaction that was going to fill in that bubble went on in parallel.
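The IFRAME trick lets the bubble stay responsive while slower requests load elsewhere on the page. A hedged sketch of the dispatch side, with the DOM call injected so the logic runs anywhere (`createFrame` and `loadInParallel` are illustrative names, not Snap's API; in a page the factory would append a hidden `<iframe>` as the comment shows):

```javascript
// Kick off several slow requests in parallel by handing each URL to a
// frame factory; the caller's code continues immediately rather than
// waiting on any of them.
function loadInParallel(urls, createFrame) {
  var frames = [];
  for (var i = 0; i < urls.length; i++) {
    frames.push(createFrame(urls[i]));
  }
  return frames;   // handles, e.g. for removing the frames when done
}

// In a browser, createFrame might look like:
//   function (src) {
//     var f = document.createElement('iframe');
//     f.style.display = 'none';
//     f.src = src;
//     document.body.appendChild(f);
//     return f;
//   }
```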

So in conclusion, developing a cross-platform AJAX app is really hard. The tools are starting to get better, but it's still really difficult. If you really want to reinvent an experience and make it seem like a desktop application, it's going to be a lot of work.

You're going to have to do probably 200 really small things, and if you foul up on any one of them, you might not get the experience that you want. But what we found is that with a talented and empowered team, it's not impossible, and you could probably have massive growth yourself.

So I'd like to invite Jason back up to the stage, and we're going to do a demo, as promised, of both Snap, the search experience, and Snap Shots. Once that's completed we're going to have time for Q and A if people have questions. So what Jason is showing you here, or was showing you... there it goes.

Okay. So this is the Snap search engine. We actually offer multiple types of search. We showed you web search, but we have image search as well. The first thing you see that shows you this is different is the interactivity that you get in the search box. Rather than typing into a blank box and getting no response, you actually get some interactivity: it suggests potential queries that you may want to type.

So the next thing you get is the search engine results page. On the left are our traditional listings, or text, and on the right is the preview. What Jason is doing here is scrolling through those listings, and as you see him scrolling through them, the image on the right is changing. That's an image of the destination site.

It's providing you with information about it, helping you decide: is this an academic site, is this a commercial site, is this pay-to-use, is it rich with visual images, is it a site that's written in my language? It helps you judge more about the site so that you can make an informed decision before you go. Look before you leap is what we call it.

So this is all AJAX-based. As we showed, you can click on things, you can use your keyboard, you can use your mouse, all of that to activate it. In fact, since it's AJAX, you can do all sorts of things. You can actually change up the UI and make the image smaller or larger, depending on the particular search mission that you're on.

Now we're going to show you Snap Shots. This is a page on our site called Snap Shots Central, where we list all the different shots that are available. We started with the preview shot, which is, as we said, just a high-resolution JPEG of the destination of a link. That's what we launched back in November.

You can see the multiple different sizes, but we actually went beyond that into what we call rich shots, bringing information directly into the bubble. This is an example of the Wikipedia Shot; we're actually bringing in the first paragraph or so of an article about Picasso. Here's a movie shot where we have the bio of Scarlett Johansson.

All this information is being pulled from IMDB. Now you can see, as Jason's interacting with it, how responsive this actually is. It's showing you all this information that normally you'd have to click for, go to a website, wait five seconds, find your way around there, and then maybe go back. We have some other ones on here. This is an example of our PhotoShot; we support a number of different providers. You can see that you can interact with an entire photo album right there in the bubble.

We have a video shot, which allows you to look at various videos, so you can see here it's actually playing the video directly in the site. And of course, you can move it so that maybe it's out of your way while you're watching a long video, if you want to continue working on something else.

We just recently announced the RSS Shot; it actually went live today, just in time for this conference. What this enables you to do is, if you're looking at a link to any destination page that happens to have an RSS feed associated with it, you can get a summary of that feed right there in the bubble.

So you get the top four or five articles, along with a short excerpt, and it helps you decide: is this something I want to go visit? What are the hot stories that this blog is talking about? Something that's not shown here, where we're taking the next step with RSS, is that if you actually link to an article, you can read the permalinked article directly inside the bubble.

We also support the Profile Shot. Do you want to show that, Jason? This is for social networks. If you're browsing around the internet and you want to find out a little bit more about the people that are there, on MySpace and others who will be supporting this soon, you can actually get some biographical details about them just by using Snap Shots.

We plan to support new shots going forward. Actually, if you're interested in talking to us about a particular content type that you have that you want supported in Snap Shots, come talk to Jason or me afterward. There's another thing that we did not highlight, which is another technology we have called Snap Shots Markup Language, which actually uses microformats to give you direct control over the Snap Shots that appear.

Not only that, you can mark up any form of content, an image, a plain piece of text, basically removing the Snap Shots restriction from links and putting it anywhere. So you can say, hey, I want the Wikipedia Shot here, and I want the bio of Henry Ford. You can get more details about that in Snap Shots Central as well. Can we go back to the slides, please? George is going to come back up.

George: Pretty incredible technology they've been working on. I think a lot of their methodologies are extremely applicable to the applications that you're currently working on, or will be working on in the very near future. There's one last thing I wanted to add, which is more or less a recap of some of the other things that you've probably heard during this week. One more thing: when you're designing AJAX applications in particular, as you start thinking about them for the iPhone, we just wanted to reiterate a couple of really good design disciplines and leave those with the audience.

Good design practices are extremely important, whether you're designing AJAX applications for the desktop or, even more so, for the iPhone, as you start thinking about what you heard in the session yesterday. That session will also be rebroadcast in this room tomorrow at 3:30 p.m. Column layout: when you're designing the actual grid structure for your page, consider columns and DIVs, basically laying out the columns and the constructs of your page. Size does matter: we talked about the EDGE network as well as the Wi-Fi network, and being able to strategically plan out how your content will scale and have progressive enhancement based on the network protocol and the network connectivity at that given time.

Media queries: CSS3 media queries are extremely important. Use media queries to do device detection and intelligent automatic layout when you design your content. And then also take into consideration, as you're designing your content, how you can best optimize your content for the iPhone.
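A media query makes this layout decision declaratively in the stylesheet, e.g. `@media screen and (max-width: 480px) { ... }`. The same decision, sketched in JavaScript purely for illustration (the 480-pixel breakpoint and the function name are assumptions, not Apple's guidance):

```javascript
// Pick a layout name from the viewport width, mirroring what a
// max-width media query would select in CSS.
function layoutFor(width) {
  return width <= 480 ? 'phone' : 'desktop';
}

// In a page you would feed in the live viewport width, e.g.
// layoutFor(window.innerWidth), and swap a class on <body> accordingly.
```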

The viewport tag: consider what the height and width will actually be for that content, depending on whether it's in portrait or landscape mode. Double tap: the access that we provide through JavaScript, and how you'll actually interact with that content. Text size adjustments: the additional tags that we mentioned in that session.

And then lastly, the DOM events that are specific to, and supported on, the actual device. Please come to tomorrow's session. Listen to it. See it all over again. We're going to get that content up on the website as soon as possible, so as you're working on your current and future AJAX applications you can take all these best design practices into consideration.

And then media: when you're creating your media for the iPhone, or for the desktop, be sure to also take those bandwidth constraints into consideration. For more information through Worldwide Developer Relations, your contact for anything related to AJAX for the desktop or AJAX applications for the iPhone is Mark Malone. You have the contact information here. There's useful information around user agent and object detection listed here as well.

This will also be posted on the Apple Developer Connection website, along with guidance from the W3C and the WHAT Working Group around using CSS3 media queries for dynamic layout and design of those objects, and also the image and audio tags that we at Apple are contributing to the WHAT Working Group, as well as the HTML 5 specification within the W3C. Please be sure to take a look at that spec. It's an open forum; if you have advice, if you want to contribute your own two bits, be a participant in helping define what HTML 5 is.

For the media side, Allan Schaffer is your WWDR contact for anything related to QuickTime and QuickTime optimization, for either the desktop Safari experience or the iPhone Safari experience. A few housekeeping reminders before we open it up to Q and A. We have a widget design techniques session in this room a little bit later this evening, and then over in the main keynote hall, Presidio, we have Vector Graphics for WebKit, where we will be going in depth over canvas, CSS3, and Scalable Vector Graphics, SVG, which is new to Leopard.

For tomorrow's sessions, we're going to be hosting the Apple.com design team. You've probably already seen the latest redesign of Apple.com; they'll be coming in and speaking about the really interesting uses of AJAX that they've applied to this new redesign, actually dissecting it and giving the optimization approaches that we here at Apple are using as best practices. We have an AJAX Methodologies for QuickTime Development session tomorrow that I think is going to be really interesting. And again, the rebroadcast of the Developing Websites for iPhone session here in this room.

And then we are also very fortunate to have with us this week two very esteemed individuals who are contributing to making web applications a reality on the web. We'll have Dylan and Alex Russell from SitePen and the Dojo Foundation giving a session on the Dojo library set. And then on Friday, we're also going to have a separate session with the creator of Prototype, Sam Stephenson from 37signals, who will be here as well. So please be sure to check out these sessions. We also have some labs.

There's an AJAX web development lab for two hours in the graphics and media section downstairs tomorrow. Then on Friday, we'll have a blog development lab covering MAMP and WordPress as well as our own Mac OS X Server's new feature sets. And we have a Hybrid Web/Cocoa Applications Development lab on Friday. For those of you thinking, "I'm going to take the best of the knowledge that I have inside of AppKit and apply that within a Cocoa context with a WebKit view," we're going to be showing some really compelling demos of what that means in that session on Friday, and we have a lab for you as well.