
WWDC08 • Session 380

iPhone Multi-Touch Events and Gestures

Essentials • 39:42

iPhone's ability to handle multiple touches simultaneously is central to its unique usability. Learn how touches and events are represented to your application and how it can respond to a user's gestures to provide an intuitive, easy-to-use interface.

Speaker: Jason Beaver

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good afternoon. Welcome to Multi-Touch Events and Gestures. My name is Jason Beaver. I am an engineer on the UIKit team. And I really hope today that we can show you just how easy it is to use the multi-touch interface to build really rich, powerful, intuitive applications for your users.

As you can see in the interface, or the video on the screen, the touch interface provides a really natural, intuitive way for your users to interact with elements on the screen. They can touch on items, scroll around, zoom in and out. All directly with the elements on the screen instead of indirectly through touchpads and physical buttons and things like that.

This provides an experience that's much closer to what users experience on the desktop, where they interact directly with the items on the screen using the mouse. But the finger isn't a mouse. You can't click or touch on the screen with anywhere near the level of precision you can with a mouse on the desktop.

In fact, when the user touches on the screen, the contact patch is actually elliptical. And the shape and size of this contact patch varies based on which finger the user uses, how hard they press, what angle they're holding their hand, et cetera. In fact, even as they drag their finger over the screen, the size and shape of this ellipsoid changes.

As a developer, though, you really don't want to think about this ellipsoid. You just want to think about a single touch point, the point where the user was intending to touch. And you might think, well, that's just the center of this ellipse, right? But it's not. When we've done studies and tested where users touch, they think where they're touching is very close to the tip of their finger. It's actually offset somewhat below that. So the system sort of takes all this information, as well as the orientation of the screen, and computes a single touch point for you.

So what are we going to cover today? We're going to cover a number of different topics. The first is: what is a touch sequence, how is it represented to your application, and how is it delivered to your application? Then we'll talk about single touches, multiple touches in the same view, and multiple touches in different views. Then we'll talk a little bit about how the touches actually get routed to those views and how you can control that, as well as how to use UIControl when your touch-handling needs are simpler.

So let's start with what a touch sequence is. A touch sequence starts when the user's finger contacts the surface of the screen, continues as the finger tracks along the surface of the screen, and ends when the finger lifts from the screen. So let's talk now about how this is represented to you. Well, a finger is represented by an instance of the UITouch class.

There are three properties on UITouch. The first is the timestamp, which represents the last time the state of that finger changed in some way. The phase indicates what the finger is currently doing: did it just contact the screen, is it moving around on the screen or stationary, or did it just lift from the screen? And finally, tap count.

Now, a tap is a lot like a click with a mouse on the desktop. If the user touches the screen and touches just briefly enough and then lifts, and there's relatively little motion, that's considered a tap. And if the user taps two or three times, we increment tap count for you so that you can use that to perform certain actions.

There are a couple of additional properties. The touch is associated with a window and a view. This happens when the touch first contacts the screen. The system determines which window it's associated with. And then the window further figures out which view it's associated with. And from that point forward, the touch is directly associated with that view, and the event is forwarded directly into that view as the finger moves around on the screen.

There are also a couple of methods to figure out where the finger currently is on the screen: locationInView: and previousLocationInView:. locationInView: tells you where the touch is in the coordinate system of the view you pass in as the argument. And if the finger's in motion, previousLocationInView: tells you where it was right before this.
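
To make that concrete, here's a minimal sketch (not from the session) of reading these UITouch properties inside a UIView subclass's handler:

```objc
// Sketch: reading UITouch properties in a custom UIView subclass.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint current  = [touch locationInView:self];
    CGPoint previous = [touch previousLocationInView:self];
    // How far the finger moved since the last update, in this
    // view's coordinate system.
    NSLog(@"phase %d, taps %d, moved (%g, %g) at t=%f",
          touch.phase, touch.tapCount,
          current.x - previous.x, current.y - previous.y,
          touch.timestamp);
}
```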

Now, UIEvent is a container for all of the touches on the screen. It also has a timestamp indicating the last time the event was updated. And there are three methods to access the touches inside this event. There's allTouches, which gives you a set containing all the touches currently on the screen.

And there's touchesForWindow: and touchesForView: to find out the touches that started in a given window or view. So let's look at this a little graphically. We have an event, and this event has four touches associated with it. Those touches are associated with different windows: the first three with window A, and the last one with window B.

Furthermore, those touches are associated with various views within those windows. If we ask the event for allTouches, obviously we'll get back a set containing all of the touches. If we ask touchesForWindow: with window A, we'll get back a set with the first three; for window B, we'll get back a set containing the last one. Same with touchesForView:. If we ask for the touches for the first view, we'll get back the first two touches, the third touch for view B, and the last touch for view C.
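
As a quick sketch of those three accessors in use, inside a UIView subclass's handler:

```objc
// Sketch: the three ways to pull touches out of a UIEvent.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSSet *all      = [event allTouches];                    // every touch on screen
    NSSet *inWindow = [event touchesForWindow:self.window];  // touches in this window
    NSSet *mine     = [event touchesForView:self];           // touches in this view
    NSLog(@"%u touches total, %u in my window, %u in me",
          [all count], [inWindow count], [mine count]);
}
```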

So let's now talk about how those touches are delivered to your application. To receive touches, you need to be a subclass of the UIResponder class, and there are four methods you need to handle touches. The first is touchesBegan:withEvent:. There are a couple of things to notice about this method. The first is that the first argument is a set: if multiple fingers contact your view simultaneously, you'll be passed a set of all of those fingers.

And the second is that there's an event argument at the end; as we've talked about, the event argument can be used to access other fingers that may already be in contact with the screen. Now, as fingers move within your view, you're sent a touchesMoved:withEvent: message. And you're only told about the touches that are in motion: if there are several fingers in contact with your view but only one moves, that set will just contain the one touch that's in motion.

Next, touchesEnded:withEvent:. This is sent when a finger lifts from the screen. And again, the set will only contain the fingers that have lifted; there may still be other touches in contact with your view, and you can use the event object to find out about those. And finally, touchesCancelled:withEvent:.

When certain system events occur, such as an incoming phone call or an alert sheet popping up, any fingers that are currently tracking on the screen will get cancelled. And one of the most common problems we've seen in some of the apps we've encountered is people not implementing touchesCancelled:. Anytime your view changes its internal state in any way while it's tracking a finger, whether that's visually or just in your model, you need to make sure you implement touchesCancelled: to reset that state, in exactly the same way you would in your touchesEnded:.
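
Putting the four handlers together, a skeleton might look like the following sketch; the key point from the session is that touchesCancelled: must clean up the same state touchesEnded: does:

```objc
// Skeleton of the four UIResponder touch handlers in a UIView subclass.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Record starting state for the new touches.
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Only the touches that actually moved are in the set.
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Commit the interaction and forget the lifted touches.
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // A phone call or alert interrupted us; reset exactly as if
    // the touches had ended normally.
    [self touchesEnded:touches withEvent:event];
}
```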

So we've mentioned you have to be a subclass of UIResponder. So what sorts of things are subclasses of responder and can handle touches? Well, the application object is, as well as UIView and UIViewController. And since UIView is a responder, this means that pretty much everything you see on the screen is a responder and can handle touch events: controls and labels, image views, buttons, things like that.

So let's see how this all works now for a single touch. When the finger contacts the screen, we'll figure out which window and view are associated with that touch and send that view touchesBegan:withEvent:, and we'll pass along a single touch instance, represented by the box in the upper right-hand corner.

There are a couple of things to notice here. The first is that the address of that touch, here represented as 123, will remain constant for the lifetime of that touch. When the finger actually contacts the screen, we allocate a touch instance; then, as it moves around on the screen, we simply update that instance and tell you about the changes, and it's not deallocated until the finger lifts. So you can actually use the uniqueness of that instance address as a key to keep track of touches over time.
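
One way to exploit that stable address, assuming a hypothetical beginPoints NSMutableDictionary ivar, is to wrap the touch's pointer in an NSValue and use it as a dictionary key:

```objc
// Sketch: keying per-touch state off the touch's stable identity.
// NSValue wraps the pointer without retaining the UITouch, giving a
// stable dictionary key for the touch's lifetime.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        CGPoint point = [touch locationInView:self];
        [beginPoints setObject:[NSValue valueWithCGPoint:point] forKey:key];
    }
}
```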

The next thing to notice is that the phase of the touch is Began, indicating that it just contacted the screen, and the location indicates where on the screen it contacted. As it moves, you're going to start to receive a series of touchesMoved: messages. Notice the address is the same, the phase is now Moved, and the location is being updated each time you receive the message.

And when the finger finally lifts from the screen, you're sent touchesEnded:withEvent:, the phase is now Ended, and the location represents the last point where the finger was in contact with the screen. So we're going to look at a simple demo now. We have a canvas, which is the white area in the background, and a single image view in the middle, and we want to allow the user to touch within that image view and drag that image around on the screen. Basically, we want to achieve an effect kind of like this. So let's go over to the demos and see how we do this.

So I have a basic app structure here created from one of the templates. There are only a couple of things we've added. We've added a TouchImageView, which I'll get into in a second. So this has a top-level application delegate and a top-level view controller. We're going to go into the view controller, and in our viewDidLoad method, we're going to add that image view. So the first thing we'll do is load our image, the flowing rock image.

Then we'll create a rectangle and a TouchImageView from that rectangle. Now, a TouchImageView is just a subclass of UIImageView, and we'll get into that in just a second. We need to assign the TouchImageView's image property to the image we loaded, and we'll just center that view in the window. We'll then add it as a subview of our view controller's view and release the TouchImageView.
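
A sketch of that viewDidLoad as narrated; the image filename is illustrative:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *image = [UIImage imageNamed:@"FlowingRock.jpg"];  // name is illustrative
    CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    TouchImageView *imageView = [[TouchImageView alloc] initWithFrame:frame];
    imageView.image = image;
    imageView.center = self.view.center;  // center it in the window
    [self.view addSubview:imageView];
    [imageView release];                  // the view hierarchy retains it
}
```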

So let's go over to the header now of our TouchImageView. As I said, it is a subclass of UIImageView, but we need a couple of additional instance variables here. We're going to need an affine transform for how we'll manipulate this image. Now, for this first demo, really, we're just allowing you to move it around.

We could do this by just positioning the view directly, but we're going to be expanding this demo over the course of this session to allow multiple fingers for scaling and rotating, so we'll do it with a transform. And we need a dictionary to record the begin points for all these fingers.

Now, in our initWithFrame: for our TouchImageView, we need to set the original transform to be just the identity transform, indicating we don't want this transformed at all initially, and we'll create that dictionary that we're going to use for the begin points. We need to set one more thing, because this is a subclass of UIImageView: image views by default have user interaction disabled, so we need to turn that back on to allow us to receive the touch events.
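
Here's roughly what that header and initializer might look like, reconstructed from the narration; details of the real sample may differ:

```objc
@interface TouchImageView : UIImageView {
    CGAffineTransform originalTransform;  // the transform before the current gesture
    NSMutableDictionary *beginPoints;     // begin point per touch
}
@end

@implementation TouchImageView
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        originalTransform = CGAffineTransformIdentity;  // no transform initially
        beginPoints = [[NSMutableDictionary alloc] init];
        self.userInteractionEnabled = YES;  // UIImageView disables this by default
    }
    return self;
}

- (void)dealloc {
    [beginPoints release];
    [super dealloc];
}
@end
```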

So now we need to provide our implementations for the various touch handlers. The first we'll start out with is touchesBegan:. In this case, all we need to do is cache the begin points for the touches. We're not going to get into the specifics of some of the methods that we'll see here; they're a little too complex to go into in this session, especially the ones that do the transforms. But this demo is available for download afterwards, and I'll talk about that in a little bit.

So now, in our touchesMoved:, we need to compute an incremental transform. Basically, this is going to take that cached start point and the current point and figure out a transform. In the case of a single finger, it'll just be a translate transform; when we get into multiple fingers, it will automatically handle rotate and scale for us. And then all we need to do is take our original transform, concatenate it with that new transform, and set that as our view's transform.

Now, when a touch ends, when a finger lifts, we need to update that original transform so the image stays where it was, and then just remove those touches from the cache. Finally, we need to implement our touchesCancelled:, and we have a couple of choices here.

If we want that image to snap back to where it was, we could just reset our state and clean up. In this case, though, we probably want the image to just stay wherever it was under the finger, so we want to do the same thing we do when touches end. And so we can simply call touchesEnded: from our touchesCancelled: in this case. So now let's build and run this.
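
Reconstructed as a sketch, those handlers might look like this; the helpers for caching points and computing the incremental transform are hypothetical stand-ins for the sample's methods (the NSValue keying shown earlier could back them):

```objc
// Inside TouchImageView. -cacheBeginPointsForTouches:,
// -removeCachedPointsForTouches:, and -incrementalTransformWithTouches:
// are hypothetical helpers standing in for the sample's transform math.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self cacheBeginPointsForTouches:touches];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGAffineTransform incremental =
        [self incrementalTransformWithTouches:[event touchesForView:self]];
    // Apply the gesture on top of where the image already was.
    self.transform = CGAffineTransformConcat(originalTransform, incremental);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    originalTransform = self.transform;  // keep the image where it ended up
    [self removeCachedPointsForTouches:touches];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Same cleanup as a normal lift, so the image stays under the finger.
    [self touchesEnded:touches withEvent:event];
}
```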

[Transcript missing]

Okay, if we can go back to slides, that's really all we need to do to handle a single touch. So now let's talk about how we would handle multiple touches. Most views really only want to handle a single touch. In fact, it would be extra work for most views to deal with the fact that there may be multiple fingers contacting that view at the same time.

So by default, if a finger is tracking in a view and another finger contacts that view, it's simply ignored by the system; the view is never told about it. But if your view expects and knows how to handle multiple touches, all you have to do to enable that is set the UIView property multipleTouchEnabled to YES.

So let's now look at a multiple touch sequence. Obviously, this is not the way you normally interact with your phone with two hands, but... When the first finger contacts the screen, that view will be sent a touchesBegan:withEvent: with a touch instance, and as it moves, it will receive touchesMoved:. So far, this is exactly like the single-touch case.

When the next hand contacts, or next finger contacts, you'll be sent another touchesBegan:withEvent:, and that set will only contain that new touch, the touch on the right here, address ABC. Notice that the other touch is grayed out there. It's still in the event: if you ask the event for all the touches for the view, you will see that touch, and you'll notice that its phase is Stationary.

Now, if both fingers move at the same time, your view will receive a touchesMoved:withEvent: message, and because both fingers moved, the set will contain both touches. Notice the phase for both touches is now Moved, and the locations have changed. And if just one moves, again you'll receive a touchesMoved:, but the set will only contain the finger that moved; the other touch is still there, marked Stationary.

And when the fingers both lift, you're sent a touchesEnded:withEvent:, and the set contains both touches, with the phase in the Ended state. So we're going to modify our demo now to allow the user to put multiple fingers down in this view and rotate and scale and move that image view. This is the effect we're trying to achieve. So let's look now at how to do that.

So the first thing we need to do is tell the system that our view can handle multiple touches. We'll do that by setting the multipleTouchEnabled property to YES. And then we need to do a little more work in our touchesBegan: and touchesEnded:, because we may receive these with other fingers still in contact with the screen.

So in our touchesBegan:, we need to figure out what touches are already in contact with our view. We do this by asking the event for all the touches for our view, and then simply subtracting off the touches that just came in. This gives us the set of touches that were in contact before this touchesBegan:.

And if there are any touches in that set, we simply need to update our original transform and reset the begin points for those touches; then we can cache the begin points for the new touches and move on. Now, touchesMoved: actually doesn't have to change at all, because the method that computes the incremental transform already knows how to deal with multiple fingers. We do need to do a little more work in our touchesEnded:, and it's somewhat analogous to what we did in touchesBegan:.

Basically, we need to compute the remaining touches, and we do that in exactly the same way: we ask for the touches for view self and subtract off the touches that are ending, which leaves the remaining touches. Then we just need to cache their begin points. We've already updated the original transform here, so we don't need to do that. Now, let's build and run this.
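
As a rough sketch of that multi-touch bookkeeping, with multipleTouchEnabled set to YES in initWithFrame: and the same hypothetical helpers as before:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSMutableSet *existing = [[[event touchesForView:self] mutableCopy] autorelease];
    [existing minusSet:touches];            // fingers down before this call
    if ([existing count] > 0) {
        originalTransform = self.transform; // freeze the gesture so far
        [self cacheBeginPointsForTouches:existing];
    }
    [self cacheBeginPointsForTouches:touches];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    originalTransform = self.transform;
    NSMutableSet *remaining = [[[event touchesForView:self] mutableCopy] autorelease];
    [remaining minusSet:touches];           // fingers still on the view
    [self removeCachedPointsForTouches:touches];
    [self cacheBeginPointsForTouches:remaining];  // restart from the new layout
}
```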

[Transcript missing]

You can go back to slides.

So that was how to track in a single view. But what if those fingers are tracking in different views? How do we handle that? In this case, we have a view on the left and right of the screen. When the finger contacts the left view, it will receive a touchesBegan:withEvent:, and as it moves, it will receive a touchesMoved:withEvent:, exactly like in the single-touch case.

When a finger touches the other view, that view will receive a touchesBegan:withEvent: with the touch that landed in it. The other touch is still there, marked Stationary, but the view associated with it doesn't receive any notification, because the touch in that view didn't move.

Now, if they both move, we'll send two separate touchesMoved:withEvent: messages to the two views, and each will contain the one touch associated with that view. If just one moves, again, we'll only send touchesMoved: to the view that actually has a finger in motion. And if they both lift, we'll send touchesEnded:withEvent: to each of those views separately.

Now, you may want some control over how we deliver events to multiple views. It's not always appropriate for multiple views to be tracking fingers. For example, if you have a table view with some sort of row that's highlighted, and you have two different buttons, one to edit that row and one to delete it, you probably don't want the user touching both of those at the same time.

So in that case, we provide a flag that lets you control this: the exclusiveTouch flag. If that is set, then if there are fingers tracking in any other views within that window and the user tries to touch on your exclusive-touch view, the touch is simply ignored; your view is never told about it.

And conversely, if there's a finger already in your exclusive-touch view and the user tries to touch some other view in that window, that touch is ignored. Basically, this guarantees that if your exclusive-touch view is tracking any touches, it's the only view within your window that's tracking touches.

So now we're going to extend our demo to have several image views. And we're first going to show how we can move and scale all of these separately. And then we'll turn on the exclusive touch flag and show how that limits us to only interacting with one at a time. So this is the effect we're trying to achieve here. We're only showing movement here in the video, but we'll be able to do rotation and scale as well.

So in our view controller, we already have a block of code to load one image. We'll add two more blocks of code that are very similar, one to load the clownfish image and one to load the stones image. And that's actually the only change we need to make. If we run our demo now, you'll see we have multiple image views: I can touch and move these, and I can scale and rotate each of them independently.

Now let's limit that so we can only interact with one at a time. All I need to do is go back to our TouchImageView, and in our initWithFrame:, set the exclusiveTouch flag to YES. While we're here, I'm going to make one other little change. Notice these views can now overlap; I want to allow the user to reorder them.

So I'm going to enable them to double-tap on one of those images to pop it to the front. All we need to do is go down to our touchesEnded:, iterate through the touches that are ending, check whether their tap count is greater than or equal to 2, and if so, move the view to the front of the view hierarchy.
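
As a sketch, those two changes might look like this (setup details from earlier are elided):

```objc
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // ... the setup shown earlier ...
        self.exclusiveTouch = YES;  // only one image view tracks at a time
    }
    return self;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        if ([touch tapCount] >= 2) {
            [self.superview bringSubviewToFront:self];  // double-tap pops to front
        }
    }
    // ... the transform bookkeeping shown earlier ...
}
```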

[Transcript missing]

As I mentioned, this sample is available up on the attendee site; the URL is here for that. I urge you to take a look at it. So let's now talk about how touches are routed to your views. There are two parts to this I want to talk about. The first is the responder chain. Now, all along I've mentioned that a touch is associated with a window and a view.

Once we make that association, we'll send touchesBegan: to that view. But what happens if that view doesn't implement touchesBegan:? Well, the system will try all of the objects in the responder chain to find some object that knows how to handle this. We'll start with the view and, if it doesn't implement it, see if that view has a view controller. If it does, we'll ask that view controller to process the touchesBegan:.

If it doesn't implement it, we'll go to that view's superview and try it, and again its view controller, and we'll work our way up the superview chain until we reach the window. The window is a subclass of view, so obviously it can handle touches, and we'll see if the window wants to handle it. If not, we'll forward it on to the application; as I mentioned, the application is one of those subclasses of UIResponder.

Now, if no object handles the touch, that's fine; we won't try to send it on any further. But as the finger continues to move, we'll still send touchesMoved: in exactly the same way: we'll start back at the view and work our way up the responder chain, trying to find somebody that handles the move.

So what does this mean to you if all you care about is when the finger lifts? You can simply implement touchesEnded:withEvent:. The touchesBegan: and touchesMoved: will work their way up the responder chain with nobody handling them, and you'll still be sent the touchesEnded:. Now, this is what the full responder chain looks like. In most applications, only the root-most view will have a view controller, so this is a more typical picture of what the responder chain looks like.
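
So a view that only cares about lifts can be as small as this sketch; the class name is hypothetical:

```objc
// Began and moved travel up the responder chain unhandled; ended
// still arrives here.
@interface TapOnlyView : UIView
@end

@implementation TapOnlyView
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    NSLog(@"finger lifted at %@", NSStringFromCGPoint(p));
}
@end
```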

The other part of touch routing is hit testing. Hit testing is how we actually determine which view within a window is the one that should receive the events. So when a touch contacts someplace on the screen, the system will determine which window that is and then ask the window to figure out which view the touch should be routed to.

It does this by starting at the back of the view hierarchy and sending a hitTest:withEvent: message. Now, that view will check several properties of itself. The first is: is user interaction enabled? This is the property we had to set on the image view to enable it to receive touch events.

The next is: is the view hidden? Hidden views can't process touches, so if the view is hidden, it'll return nil from hitTest: and the window will go on to find some other view capable of handling the touch. We'll also check the alpha: if the alpha is zero, the view is completely transparent, and that's not a valid view for hit testing or tracking. And last, we'll call pointInside:withEvent:, which checks whether the touch is within the bounds of the view. If it's outside the view, that's not a valid view for tracking.

Then that view will go on and recursively call hitTest:withEvent: for all of its subviews, and that works its way down the view hierarchy until it finds a view that meets these criteria and doesn't have any subviews. So in this case, it's just our subview, which doesn't have any further children. The subview will check all the same things and then return itself. At that point, we create the touch instance and assign the view property to that view and the window property to the window.
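
Written out as an override sketch, those checks look roughly like this; the real default implementation may differ in details such as the exact alpha cutoff:

```objc
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (!self.userInteractionEnabled || self.hidden || self.alpha == 0.0 ||
        ![self pointInside:point withEvent:event]) {
        return nil;  // not a valid target; the search moves on
    }
    // Give front-most subviews the first chance to claim the touch.
    NSArray *frontToBack = [[self.subviews reverseObjectEnumerator] allObjects];
    for (UIView *subview in frontToBack) {
        UIView *hit = [subview hitTest:[self convertPoint:point toView:subview]
                             withEvent:event];
        if (hit) return hit;
    }
    return self;  // no subview claimed it; this view is the target
}
```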

Now, not all views use the default hit testing, and you can override it to change how hit testing works in your application. A good example is the scroll view. In a scroll view, you want to be able to have a lot of content inside the scroll view, but wherever you touch, you still want to be able to scroll; if there's a button in the middle and you touch the button, you don't want that to block scrolling.

So here's the way scroll view implements this. When the finger contacts the screen, the window gets that and calls hit testing just like it always does. The scroll view receives that and checks the exact same properties to make sure it's a valid hit target, and at that point, the scroll view simply returns itself. We create a touch instance whose view is the scroll view.

This means that as the finger moves around on the screen, we're going to send the updates for that touch directly to the scroll view, not to the view inside it that the user thinks they touched. Now, the scroll view still needs to know which object the user actually touched, so it will internally go through the same process: it'll call hitTest:withEvent: on down the view hierarchy until it finds the actual object that was hit. But it'll just cache that internally so that it can forward those events on.

So if your object is inside a scroll view, you may be sent touchesBegan: and touchesMoved: to track a touch. But what happens if you call touchesForView: and pass yourself in? In that case, you're actually going to get back nil or an empty set.

There are no touches actually associated with your view. So you need to be a little careful in certain circumstances: if you implement touch forwarding in your application, you need to recognize that sometimes views will be asked to process touches that aren't directly associated with them.

So, up to this point, we've talked entirely about handling the touches yourself. But if all you need from the touch system is a simple notification when, say, the user touches somewhere or lifts their finger, you can avoid some of this complexity by using the UIControl class. UIControl defines several events with which you can associate actions.

It has events for when the finger contacts the screen, or, in the case of double or triple tapping, a touch-down repeat. If the finger is dragging around inside your control, or started in your control and is now dragging around outside it, there are events posted for that.

While it's dragging, if it re-enters or exits your control, there are events associated with that. And when the finger lifts, there are separate events for lifting inside or outside your control. We also mentioned touches being cancelled; there's a touch-cancel event that you can associate an action with as well.

So how do you associate those actions with the control? There's a method on UIControl called addTarget:action:forControlEvents:. The target is the object you want to be notified. The action is the method selector that will be invoked on your object. And the control events are the set of control events you want to trigger those actions; that's a bitmask, so you can specify as many as you'd like.
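
For example (the button ivar and touchDown: selector are illustrative names), a call in viewDidLoad might look like:

```objc
// UIControlEvents values are a bitmask, so several can be OR'd together.
[button addTarget:self
           action:@selector(touchDown:)
 forControlEvents:UIControlEventTouchDown | UIControlEventTouchDownRepeat];
```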

I should mention that this is a little different from the way target-action works on the desktop, for those of you who are familiar with Cocoa. In Cocoa, there's only a single target and action associated with the entire control. In UIKit, notice it's addTarget:, not setTarget:. You can have as many as you'd like, and you can associate multiple targets and actions even with the exact same control event.

So let's look at the action signatures. This is also different from the desktop, where the action method has to have a single signature that takes a single argument. On the phone, there are three possible signatures you can use. The first is a method that doesn't take any arguments; here it's called performAction, but you can give it any name you like.

The second is one that takes a single argument. In this case, it's the control that is generating the control event. We're calling it performAction: in this case, but again, you can use any name you'd like. And the last is a method that takes two arguments: the control that's generating the control event and the event object.
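
Collected as declarations, the three signatures look like this sketch; the names are your choice, only the selector you register has to match:

```objc
- (void)performAction;                                        // no arguments
- (void)performAction:(id)sender;                             // the sending control
- (void)performAction:(id)sender withEvent:(UIEvent *)event;  // control and event
```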

So that's how you associate actions with control events, but you still might want to do your own tracking in a control. Since a control is optimized to handle single touches, there are some simplified methods to do that. The first is beginTrackingWithTouch:withEvent:. The first argument here is a single touch instance, not a set of touches, because again, the control is optimized to deal with a single touch. And there's a Boolean return value, which is different from touchesBegan:withEvent:, which has a void return type.

The return value indicates whether you want to continue processing events. When the finger contacts the control and you receive the beginTrackingWithTouch:withEvent: message, if you determine at that point that you don't really care about this touch, you can simply return NO and you won't receive any future updates about it. Now, compare this to handling the touches more directly yourself.

If, in your touchesBegan:, you didn't want to handle a touch, you'd have to keep some state around, because you're going to continue to receive touchesMoved: and touchesEnded:, and you'd have to check that state each time to see whether you really want to process that touch.

As the finger moves, you'll receive a continueTrackingWithTouch:withEvent: message. Again, it contains a single touch, and there's a Boolean to indicate whether you want to continue receiving updates about this touch. When the finger lifts, there's an endTrackingWithTouch:withEvent: message. And if the touch is cancelled, there's a corresponding cancelTrackingWithEvent: message.
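
A sketch of a hypothetical UIControl subclass using those four methods; the class name and the left-half condition are illustrative:

```objc
@interface KnobControl : UIControl
@end

@implementation KnobControl
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    CGPoint p = [touch locationInView:self];
    return p.x < self.bounds.size.width / 2.0;  // e.g. only track the left half
}
- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    // Update state from [touch locationInView:self] here.
    return YES;
}
- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    // Finger lifted; commit the final value.
}
- (void)cancelTrackingWithEvent:(UIEvent *)event {
    // Reset as if tracking had ended normally.
}
@end
```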

So we're going to look at a simple little demo here that has a set of labels up at the top.

[Transcript missing]

Let's see. So the first thing we want to do: we've got a new project here with a similar structure. It's got an application delegate and a view controller. And in the view controller, we want to add a couple of instance variables. Oops, excuse me, I need to go switch which project I'm running.

So we have a couple of instance variables. The first is a rectangle that we'll use to position each of those labels down the screen. We'll have an instance variable for each of those labels. And finally, the button at the bottom; the button is just a subclass of control, and we're using it here just for visual appearance. Now, in our implementation file, we need to add a couple of methods.

We'll add a convenience method that will create a label for us, called addLabelWithText:. It'll create a label, set its font and text color, align the text to the center, and set the alpha to a default value, which in this case is 0.3.

Then we'll assign the text, add the label as a subview of our view controller's view, and release it. We also need to move that rectangle down so we have a place to put the next label; we'll do that here, and then simply return the label. We also need another method that will highlight a label for us and then unhighlight it in an animated fashion, so I have a method here called highlightLabel:.

Here we're just going to set the alpha to 1, which makes it fully opaque and fully blue. Then we'll begin an animation context, set the animation duration to a second and a half, and reset the alpha back to the default value of 0.3. And then we'll just commit that animation to get it started.
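
As a sketch, that highlight method might look like this, using the UIView animation API of the era:

```objc
- (void)highlightLabel:(UILabel *)label {
    label.alpha = 1.0;  // fully opaque, fully blue
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:1.5];
    label.alpha = 0.3;  // fade back to the default over 1.5 seconds
    [UIView commitAnimations];
}
```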

In our viewDidLoad, we need to set the default label rectangle, and then we're going to add a series of labels. We'll just look at the first one, but they're all roughly the same: the text is the text associated with that event, so in this case it's the touch-down label, and we'll assign it to the touch-down label ivar. Now we'll create our button.

We'll set its title to "UIControl" for the control state Normal; because we're not setting it for any other control state, it'll have this title for all possible control states. We'll add that as a subview. And now we need to associate all the targets and actions. Again, we've got several of these; we'll just look at the first one. We're going to add ourself as the target, with the selector touchDown:, again something we defined, for the control event touch down. And the rest are similar.

Now we need to provide implementations for all those actions. Again, we've got a set here; we'll just look at the first one. For touch down, we'll simply call highlightLabel: and pass in the touch-down label. Finally, we just need to make sure we're cleaning up those ivars properly.

If we look at the demo, we notice we have the labels at the top and the control at the bottom. If I just touch in the control, notice I get a touch down. As I move around a little bit, we're getting a drag inside. If I stop, notice it fades out, meaning we're no longer getting that event.

But if I start moving my finger again, we'll start receiving it again; the fact that it stays blue indicates we're receiving it continually. And when I finally lift, we get a touch up inside. If I touch and drag outside, notice we get a drag exit, and now we're dragging outside the control. When I move back in, we get a drag enter.

You may notice it doesn't happen directly on the boundary; the boundary is actually a little farther out. Because of the imprecision of the touch system, there's a buffer built around the control. You have to touch in the control to start, but once you do, you can actually move a little bit outside it and it will still track as inside. So as you can see, we're getting all the various kinds of events as we touch up inside or even outside. And that's it.

So we covered a lot of stuff today. We talked about how touches are represented and how they're delivered to you, how to handle single touches and multiple touches, what happens if those touches are in multiple views, how to control how those touches are routed to those views, and how to use the UIControl class to do some simpler touch handling.

I really hope you guys will go out and take a look at this touch demo that's available for download, as well as some of the other ones that are up on developer.apple.com. And we're really looking forward to seeing the kind of great stuff you guys can do with this. I know you can build some pretty powerful interfaces with this multi-touch screen.

If you want more information, you can contact Derek Horn. There's also documentation up on the Apple site. There's a lab Thursday morning, tomorrow morning, in iPhone Lab B; I'll be there along with some of the other UIKit engineers. There's also a related session Friday morning at 10:30 called "How Do I Do That? Tips and Tricks for iPhone Application Development." They'll get into a little more detail there about some of the scroll view stuff and how to use and interact with it.