Hardware • 1:06:02
This session is for developers of FireWire-based embedded systems who are planning to use Apple's FireWire Reference Platform. Topics include a software architecture overview, sub-unit description, embedded operating system support, and hardware drivers.
Speaker: Colin Whitby-Strevens
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Well, good afternoon, ladies and gentlemen. Welcome to the QuickTime feedback session. If you really want the QuickTime feedback session, though, you shouldn't be in here. You should be in the room right behind us, the marina. So does anybody want to take this opportunity to get up and leave? Okay, so welcome to the FireWire Reference Platform session.
My name is Colin Whitby-Strevens, and I speak with an English accent. So this is really just a few words for you to get yourself calibrated to listening to an English accent. The other thing I should perhaps warn you about is that when I get excited about technology, I tend to speak faster and faster, so you may have to slow me down as well, or else it'll be totally impossible to understand what I'm saying.
So what I'm intending to do over the next hour or so is to give you an overview of what the FireWire Reference Platform is, and then to tell you in some detail about two or three of its components. In particular, I'm going to tell you about something called TNF Kernel, which is the heart of it, and about TNF Sys and the TNF link drivers.
This is the heart of the platform. I'm going to be giving two examples of the code which is included in the platform for supporting FireWire protocols. There are many protocols supported in the platform, and these are really just two examples. And TNF gets used very frequently in this presentation. What it means is just a little acronym for 1394.
So we talk about TNF this, TNF that all the time. And you'll find that all the APIs have got the letters TNF at the beginning. And that really was just a little attempt to make sure we didn't get nasty name clashes and the like. But just to say that really the objective I have this afternoon is not to train you in every fine detail of the reference platform, but to give you a flavor of how it is constructed, how it is architected, and how you can use it, and how you might go about building your applications using the TNF reference platform.
So, what is it? Well, in a single sentence, the reference platform is the source code and documentation for a pile of software which enables you to develop what we might call end-of-the-wire FireWire devices. So these are products that you would like to build which are complementary to the main Mac products. This also means that this software is not related to other Apple operating system FireWire stack software. It is independent of that, and it's intended to be freestanding in end-of-the-wire products.
Indeed, it's been designed very much with the embedded system in mind. It needs some form of real-time operating system, but really its requirements on real-time operating system are pretty small, and a lightweight real-time operating system will do. There are a variety of RTOSs that are supported, and there's a very easy way of porting it to other RTOSs as well.
It also supports a variety of 1394 link chips. You may be aware that in a computer like a Mac, you find a link chip which is called OHCI, Open Host Controller Interface. And that is very much a chip which is optimized to use in that sort of device, in a Mac of some sort.
But there are many other link chips around which are, again, optimized towards embedded applications. And the FireWire Reference Platform really is designed to be equally suitable for use in an OHCI environment or in other link chip environments. So there are several drivers provided, and in essence there's a kit provided which you can use in order to drive other 1394 link chips.
But the great thing about it is that it supports multiple FireWire application protocols, and it supports them simultaneously, in that you can build an application which requires, in order to work properly, several different FireWire application protocols, and you can have those all running simultaneously within your application using the reference platform.
So, what actually is in the box, or when you go and download from the web, what do you find? Well, first of all, you find some software for a number of modules which we call the class services. The heart of the whole system is a thing called the kernel, TNF kernel. There are then some services for supporting different classes of protocols.
The main one, the first one we mention here, is AVC General. The audio-visual control protocols are a large family of protocols that have been standardized by the 1394 Trade Association. We provide the general underlying support for all of those, and then we provide specific protocols for disk, DVD, panel, and tape sub-units.
We provide software for asynchronous connections. That's a method of streaming data from one device to another, but by using asynchronous operations. That again is a standardized protocol. We provide support for something called EIA-775, which is an EIA protocol for on-screen display for digital TVs. We provide IP over 1394, in particular RFC 2734, which is IPv4. PPDT, the point-to-point data transport protocol, and last but not least, SBP-2, which is a protocol specifically for devices like disks and printers. You'll find all of those in the FireWire Reference Platform.
For RTOS interfaces, there's a module called TNF Sys, which is the single module which provides the RTOS independence. And we provide TNF Sys implementations for VxWorks, µITRON, Nucleus, a rather early version of µC/OS, and there's also a generic template so that you can port to your own favorite RTOS.
We provide a board support package so that you can bring up the whole of the FireWire Reference Platform and actually make it tick and do something, even though it may not be working on your particular hardware. And that's been engineered around a standard PC x86 platform with VxWorks.
For link drivers, you find link drivers for OHCI, for the Texas Instruments CELynx and GPLynx parts, for the MD8412, and also a generic link driver toolkit. The link driver also features some plug-ins, because quite often you have some specialized hardware that you also want to be able to access at the driver level. And in particular, there's a couple of examples. One is for the global unique identifier, which will be implemented in a system-dependent way in your hardware. And the other one is for DMA features.
In order to exercise this pile of software, we have a generic command line interface, which is called ZShell. Using ZShell, we then provide a number of little command-line applications, which essentially implement various 1394 transactions, exercise various protocols, and so on. There's a generic one called TMA, and when you get to use the FireWire Reference Platform, you probably find yourself using TMA really quite a lot.
It's a rather low-level thing, but it allows you to directly generate 1394 transactions on a bus so that you can do reads and writes. You can go and view devices. You can find out what's on the bus. You can find out which device is root, and so on and so forth.
You can generate bus resets by typing appropriate command-line commands at it. There are then command-line applications for an AVC tape controller and target. In other words, you can pretend to be an AVC tape controller and give the sorts of commands that an AVC tape controller would give, or you can pretend to be an AVC tape target.
There's a somewhat embryonic, but it's there, 1394 bridge manager. That's in anticipation of the 1394.1 standard, which is still not yet finished. AVC disk controller and target, DVD, PPDT, SBP-2 disk, et cetera. There are a few others as well. All of these are rather low-level. There are other low-level ways of exercising these functions from the FireWire Reference Platform.
We also have a number of demonstration applications built into the reference platform. There's one for the on-screen display features of EIA-775 for digital televisions. There's actually a couple of applications for personal video recorders, essentially emulating a tape recorder using a disk. There's some demonstration of IP over 1394, and of AVC panel.
There's another PVR application which is really using AVC disk directly, SBP-2 disk, SBP-2 initiator, where you're on the host side controlling a disk, and some more. So a whole range of demonstration applications which, with a bit of luck, you ought to be able to get up and running really quite quickly. Again, to sort of get you started, to give you the right sort of feel for what it's like to either drive or to implement a FireWire end-of-the-wire device.
And of course, there's a ton of documentation. I actually went through and counted that there's a total of 20 PDF manuals in this thing, and also various release notes with the various modules. And then there's a whole lot of assorted other things which are helpful and useful, like make files and various other little tools all bound up in it.
And it all comes in a tar file. Now, the architecture of all of this is given in this slide. The heart of it all, as I said, is TNF Kernel. TNF Kernel provides transaction services for the 1394 bus. It operates on a client-server model, so it can support a number of clients.
And the client, if you like, logs into TNF Kernel, gets some IDs, and then does what it wants to do. And so examples of clients are our protocols to support SBP-2, or our protocols to support AVC, IP 1394, or whatever. TNF Kernel also includes some bus management utilities, which in fact operate as clients as well. So you can have any number of clients, and this is extensible. You can invent your own clients if you like.
Then each of our protocols actually operates on the same sort of model. Each protocol itself can have one or more clients. So we have some software to support disks and printers, for example, which runs on top of SBP-2. For AVC, we've got some software modules which support very specific AVC devices.
On top of all of this, you build then your application. And in general, you will use the interfaces sort of up here to the particular protocols that you want to support.
[Transcript missing]
So let's start peeling the onion a little bit, tell you a little bit more about two or three of these modules. As I said, really, the objective here is not to give you every last detail, but to give you a flavor of how these things are constructed, how they're used, and how you can build an application with them. And we start off with TNF Kernel.
So the main features of TNF Kernel. First of all, it provides transaction support for 1394 operations. And a FireWire transaction is something like a read or a write. So you get quadlet reads, quadlet writes, block reads, block writes to other devices on the bus. And this is actually both incoming and outgoing, so other devices on the bus can send reads and writes to your device. And Kernel will pick up those reads and writes, and will decide what to do with them.
It has command and indication queues. So the model, as far as the clients of Kernel are concerned, is that several transactions can be outstanding at the same time. That's a nice feature of FireWire. It's very good for bus efficiency that, with split transactions, you don't have to wait for one transaction to finish before another one can start. And Kernel supports that.
Indication queues are essentially the queues for the request transactions that are coming into this device from across the bus. And of course, if you get a request coming in, like a read request, you need to generate a response. So there's the response generation. And you can also sign up for the handlers so that you can see the requests and generate the appropriate responses.
So essentially, everything you need in order to be able to do normal 1394 bus transactions is in TNF Kernel. And of course, it does a whole lot of nice things to make sure that these transactions are properly formatted and you get them right and they are dealt with correctly across bus resets and a few things like that. It sort of deals with some nitty-gritty issues of that sort.
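As a very rough sketch of what that looks like from a client's point of view, something along these lines; every tnf* name, type, and signature below is an assumption made purely for illustration, not the documented TNF Kernel API.

```c
/* Minimal sketch of issuing outbound transactions and catching inbound
 * requests through TNF Kernel. All tnf* names, types, and signatures are
 * illustrative assumptions, not the platform's documented API. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef uint32_t TNFClientID;
typedef uint32_t TNFDeviceRef;

extern int tnfQuadletRead(TNFClientID c, TNFDeviceRef dev, uint64_t addr,
                          void (*done)(TNFDeviceRef, uint32_t, int));
extern int tnfAddRequestHandler(TNFClientID c, uint64_t base, size_t length,
                                int (*handler)(uint64_t, const void *, size_t));

/* Completion callback: the kernel matched the split-transaction response
 * back to our outstanding request and formatted it for us. */
static void readDone(TNFDeviceRef dev, uint32_t quadlet, int status)
{
    if (status == 0)
        printf("device %u returned 0x%08x\n", (unsigned)dev, (unsigned)quadlet);
}

/* Handler for write requests other nodes send into our address space; the
 * return value tells the kernel which 1394 response to generate for us. */
static int writeHandler(uint64_t offset, const void *data, size_t len)
{
    (void)offset; (void)data; (void)len;
    return 0;   /* e.g. resp_complete */
}

void transactionExample(TNFClientID me, TNFDeviceRef dev)
{
    /* Several requests can be outstanding at once; the kernel queues them. */
    tnfQuadletRead(me, dev, 0xFFFFF0000400ULL, readDone);
    tnfQuadletRead(me, dev, 0xFFFFF0000404ULL, readDone);
    tnfAddRequestHandler(me, 0xFFFFF0010000ULL, 0x1000, writeHandler);
}
```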
Second feature in TNF Kernel is bus management functions. Main function of bus management is to do things like optimize the bus by optimizing gap counts, making sure that the most capable node on the bus is indeed the bus manager, and so on. So we provide serial bus management. And by and large, that's autonomous. You just say to the kernel, I want you to be bus manager capable. And it'll go and do it. And you don't have to worry about it again.
CSR services. There's this wonderful misnomer in 1394 which is called configuration ROM, which is always changing. Essentially, the configuration ROM is an area in your address space where you advertise to the outside world what you're capable of doing. This can be built up dynamically using kernel services. You can say, I want a configuration ROM that says I'm an SBP-2 target, and kernel will go away and build the right sort of configuration ROM, taking into account the fact that some other client of kernel has also asked for something else to be advertised in configuration ROM. The configuration ROM specification has got a complicated structure of pointers and the like, and the kernel will build the right structure given all the requests that have been made of it. And you can go and add capabilities dynamically if you wish to.
Sort of complementary to that are reset services. A bus reset on FireWire is essentially the hint that something's changed. There's a new device on the bus. Perhaps the device has gone away that used to be there. Or perhaps a device that's on the bus, was there before, has actually changed what it can and can't do. It's got an extra capability or maybe a capability that's gone away.
So there's a bus reset on the bus, and that's sort of a hint for everybody else to go out and look at the configuration ROMs, see what's changed, and then understand what the bus now looks like. And there's a whole pile of reset services which deal with that for you in TNF Kernel.
Now, one of the ways that Kernel helps the application is that it provides a device reference, which is not the PHY ID. It's not the node number on the bus, because actually the node number changes, or can change, every time you have a bus reset. What Kernel has is a device reference, which it holds constant.
So by and large, when you want to go and talk to a device across the bus, you don't say, I want to send it to node number five. You say, I want to send it to this device reference. And Kernel maintains track of what node number is the correct node number for this particular device reference at this moment in time. So it looks after all of that sort of stuff for you as well.
What I've talked about so far has basically been looking after the aspects of asynchronous transactions, the ones that use the memory address space. But there's a whole lot of isochronous services as well provided by Kernel. Again, as part of serial bus management, there's a thing called the isochronous resource manager, which tracks what channels are being used, what bandwidth has been used, what's available to be allocated. And TNF Kernel provides the isochronous resource manager function. It provides support for something called function control protocol. This is a protocol defined in IEC 61883, which is the basis for all of the AVC protocols and some others as well.
So the basic function control protocol support is implemented in TNF Kernel. There are isochronous services for doing isochronous transactions. So depending on the nature of your link chip, you can send and receive isochronous packets. Some link chips, this may sort of go completely through hardware to a side port and kernel needn't get in the way.
For other link chips, it may well be that software gets involved at a fairly low level with sending and receiving of isochronous packets. And basically, the isochronous services that you need are provided in TNF Kernel. Then there are wonderful things called plugs, which are a way of managing the isochronous channels on FireWire. And we provide input and output plug handlers and input and output plug management for FireWire. That again is defined in IEC 61883.
So, looking at how all this fits in with the rest of the system, here's kernel. We basically have three main APIs. We have a protocol client API, which goes to these clients, mainly the ones implementing specific protocols. We have a system services API, and we have a kernel services link driver API.
So just a moment or two sort of on the marketing. When are you going to use TNF kernel? When is this sort of thing going to be useful? Well, what does it do for you? The main thing it does for you is indeed to abstract a whole lot of nitty-gritty details of what it is to be a good citizen on 1394. And it abstracts that from the clients. In other words, it looks after that for you.
So there's actually quite a lot in FireWire which you have to get right. And it's great just to go with TNF kernel because then you know there's a pile of well-proven software which is going to get it right for you. You don't have to worry too much. We isolate all the hardware-specific code to the particular link driver.
So all the APIs that are at the top of TNF kernel, all the client APIs, are essentially independent of whatever link chip might be underneath. So you may well be able to develop some software which is generic across a range of products, or across an evolving sequence of products, which may well have new and different and improved link chips. And your application software is protected against those changes. It'll stay the same.
And of course, the same thing goes for different RTOSes and different hardware. You can recompile and go to a different CPU. You can move all your software onto a different RTOS. So again, you may well have a family of products of different complexities, which for good reasons have different RTOSes. And still, your application can look the same. It can look and feel the same. So there's both a support advantage and a usability advantage.
So going into the architecture in a little bit more detail, we find ourselves with initialization services, reset services, isochronous services, CSR services, and then various queues and clients, which essentially take you directly from the client APIs down to the link driver, or in the other direction, as the case may be.
So first of all, a word on the link driver. I've only got one slide on this. But as I said before, it does support a variety of link devices. And one of the ways it does this is that it has a capability call. And so a link driver essentially reports what it's capable of doing. This link can transmit isochronous, or perhaps this link can't transmit isochronous. This link does isochronous completely independently. This link has various levels of queuing or whatever.
So the link capability is derived from the link driver itself saying what it can do, and then there are various features in kernel which essentially use that information to make sure that bus utilization is as good as the link chip allows. And it is very much the case that some link chips allow much more efficient use of the bus than others. That may or may not be an issue, but it's good to know that you actually are going to use the bus as well as a particular link chip allows.
Then, in fact, essentially it's got one API, one main API, which is a single command for all of the bus sub-actions. By the time you get down to the link driver, you're talking in terms of sub-actions, not transactions. A transaction is something like a read request and then the response coming back. At this level in the link driver, you're talking about the read request going out or a response going out. And the link driver has some specific plugins for specific hardware support, which you may well have.
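To picture the capability-reporting idea, a link driver might hand the kernel something like the following; the structure layout and the tnfld* names are assumptions for illustration, not the platform's actual definitions.

```c
/* Hypothetical link-driver capability report and single sub-action entry
 * point. All field and function names are illustrative assumptions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    bool     isoTransmit;     /* link can transmit isochronous packets          */
    bool     isoReceive;      /* link can receive isochronous packets           */
    bool     isoIndependent;  /* iso handled entirely in hardware / side port   */
    bool     hwRequestQueue;  /* chip queues outbound requests itself (e.g. OHCI) */
    unsigned maxOutstanding;  /* how many sub-actions the chip can queue        */
} TNFLinkCaps;

typedef struct {
    uint8_t   tCode;          /* 1394 transaction code: read/write/lock/response */
    uint16_t  destID;         /* destination node ID                             */
    uint64_t  addr;           /* 48-bit destination offset                       */
    void     *payload;
    size_t    length;
} TNFSubaction;

/* The kernel asks the driver what it is capable of... */
extern void tnfldGetCaps(TNFLinkCaps *caps);

/* ...and then feeds it individual sub-actions (a read request going out,
 * a response going out) through one main entry point. */
extern int tnfldSendSubaction(const TNFSubaction *sa);
```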
The system interface API is the place at which we try to isolate everything which is going to be dependent on the system and the real-time operating system. There is a module which is called TNF Sys. And if you want to port to a different operating system, basically you go and pull out TNF Sys and you go through it API by API and work out exactly how the APIs are going to be implemented for this particular RTOS. And Kernel is a real-time system. It's a concurrent system. So it does need support for concurrency, but really there's not very much in the way of APIs that it needs, and they're pretty conventional, pretty familiar. We have a system initialize and a create task.
We use semaphores. So you can create a semaphore, post to a semaphore, pend on a semaphore. You can create a queue, delete a queue, you can send a message on a queue, you can receive a message from a queue, pend on a queue for a message. And that essentially is the main interface that we expect an RTOS to provide for us. There are memory allocation operations as well.
Basically, there's a TNF malloc, which in many systems just gets implemented straightforwardly as malloc. There is a facility for doing a DMA-safe memory allocation. In some cases, you need to make sure that the memory is not going to be taken away from you by some all-powerful memory management system, which really has got no place in an embedded system anyway. And those sorts of things. There are also some system calls for atomic operations. But essentially, it's a pretty simple and straightforward set of APIs, easily portable to a new RTOS.
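To give a feel for how thin this porting layer is, here is a sketch of a TNF Sys style shim mapped onto POSIX primitives; the tnfSys* spellings are assumptions, and a real port would target your RTOS's native task, semaphore, and queue calls instead.

```c
/* Sketch of a TNF Sys style porting layer mapped onto POSIX, purely to show
 * the shape of the work; the tnfSys* / tnfMalloc names are assumptions. */
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

typedef struct { sem_t sem; } TNFSemaphore;

int tnfSysSemCreate(TNFSemaphore *s, unsigned initial)
{
    return sem_init(&s->sem, 0, initial);
}

int tnfSysSemPost(TNFSemaphore *s) { return sem_post(&s->sem); }
int tnfSysSemPend(TNFSemaphore *s) { return sem_wait(&s->sem); }

/* Task creation: a real RTOS port would pass priority and stack size too. */
int tnfSysTaskCreate(void *(*entry)(void *), void *arg)
{
    pthread_t tid;
    return pthread_create(&tid, NULL, entry, arg);
}

/* In many systems TNF malloc is just malloc... */
void *tnfMalloc(size_t n) { return malloc(n); }

/* ...and DMA-safe allocation may need memory the link chip can reach and
 * that no memory manager will page away; on a flat embedded memory map,
 * plain malloc is often sufficient. Message queues would be mapped to the
 * RTOS's native queue calls in the same style. */
void *tnfMallocDMASafe(size_t n) { return malloc(n); }
```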
Moving on to the client services, the client services support an arbitrary number of clients. There's no particular limit built into the software. In fact, a thread that runs through much of TNF Kernel and the TNF software is that, by and large, we haven't built in artificial limits. Instead, we've gone for things which allow configuration. So you can set limits which are relevant to your particular application in its particular circumstances. And by and large, those limits are set dynamically rather than statically. They are set in initialization calls rather than hash defines.
So the typical clients are the transport protocols for applications. They operate independently. They operate in parallel. And also some of the features that Kernel itself provides you, such as the bus manager and the FCP, function control protocol, implementation. Those actually are implemented, again, as clients, because what they want to do is to do ordinary bus transactions.
So what happens is that a client, which is one of these protocols maybe, or maybe your own application, a task in your own application, will introduce itself to Kernel, it'll log in, and it'll get a client ID. And then it'll use that in future calls, so Kernel can keep track of which clients are asking for which features and which capabilities, which callbacks, and so on and so forth.
So the sorts of things that a client can do, having got its ID, is that it can sign up for unit notification. A unit is a capability in a device somewhere on the bus. And when there's a bus reset and some new device joins the bus, a client of TNF kernel can say, I want to be told when an SBP-2 target turns up on the bus. I want to be told when an AVC tape recorder turns up on the bus. So you can sign up for unit notification.
It can sign up for cable event notification. For some purposes, things like bus reset are important. For many applications, it doesn't matter. And you can often get away with your application not even knowing about bus resets on FireWire. But for some applications, it is important. And so you can sign up for that.
You can read CSR locations on 1394 nodes. You can actually go and read the configuration ROM on some other node. You can go and search the ROMs for specific entries. Essentially, it's the same sort of thing as client notification. You can say, find me a tape recorder if there is one. And there's a whole pile of other functions similarly.
Now, if a client wants to access a particular device that's on the bus, what it does is to say to the kernel, I want to be able to do this. Give me a device reference. So it gets a device reference ID from kernel. I mean, essentially that's a pointer to an internal data structure of kernel. And within kernel, kernel tracks particular devices by global unique identifier.
The device reference remains the same for all time. And then the sorts of services that Kernel provides to a device with such a device reference are all the asynchronous request types, and create isochronous stream descriptors. It can also do things like allocate isochronous channels and bandwidth, sign up to receive isochronous events, and bus reset notifications and the like.
There are, in fact, two sorts of device references. One is the standard way, the way I've been talking about so far, where kernel tracks it all for you and you just use an abstract device reference. Alternatively, there's a type called unspecified, which provides you maximum control. In other words, you can actually then go and access a device by its specific device address on the bus.
But that, of course, requires you to go and track the changes to that. So if for some reason you don't want to use the kernel services for doing this, we're not sort of blocking you off from a lower level access to the bus. You can do that if you really wish to. In general, you won't need to.
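Pulling those pieces together, a hypothetical client might look something like this; all the tnf* names and signatures are illustrative assumptions, though the SBP-2 spec ID and software version values quoted are the ones the SBP-2 standard itself uses.

```c
/* Hypothetical client flow: log in to the kernel, sign up to be told when an
 * SBP-2 unit appears, then obtain a device reference that stays stable across
 * bus resets. All names and signatures are illustrative assumptions. */
#include <stdint.h>

typedef uint32_t TNFClientID;
typedef uint32_t TNFDeviceRef;

extern TNFClientID  tnfClientLogin(const char *name);
extern int          tnfRegisterUnitNotification(TNFClientID c,
                        uint32_t unitSpecID, uint32_t unitSwVersion,
                        void (*found)(uint64_t guid));
extern TNFDeviceRef tnfGetDeviceReference(TNFClientID c, uint64_t guid);

static TNFClientID  me;
static TNFDeviceRef disk;

/* Called after a bus reset when a matching unit directory turns up in some
 * node's configuration ROM. The GUID identifies the device itself. */
static void sbp2UnitFound(uint64_t guid)
{
    /* The reference stays valid even though the node number may change on
     * every later bus reset; the kernel tracks the GUID for us. */
    disk = tnfGetDeviceReference(me, guid);
}

void clientStart(void)
{
    me = tnfClientLogin("example-sbp2-initiator");
    /* 0x00609E / 0x010483 are the unit_spec_ID / unit_sw_version values from
     * the SBP-2 standard; matching on them is how "find me a disk" works. */
    tnfRegisterUnitNotification(me, 0x00609E, 0x010483, sbp2UnitFound);
}
```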
The reset services, I said you can sign up for bus reset. Well, you can sign up for all bus resets. You can sign up for a bus reset notification when a particular device or unit joins the bus, or if it leaves the bus. There's some initialization you do. You can set the number of outstanding requests in the outbound request queue. You can set the number of incoming requests that can be queued. You can set the number of client procedures. And these basically just limit the internal data structures that the kernel will build for itself.
One minor point I should make when we're talking about request queues. The kernel model to its clients is that outbound requests are queued. And yes, they are queued. But in fact, it turns out that they're not queued actually in the kernel. The kernel goes and passes on the queuing responsibilities to the link driver. And that might seem a rather odd thing to do, because it makes link drivers more difficult to write.
But it turns out that there are some link chips, and more and more link chips, which basically support queuing in hardware. OHCI is a very good example, but there are others as well. So by passing on the queuing responsibilities down to the link driver, this allows the link driver in turn to take advantage of any queuing capabilities that might be in the hardware of the particular chip that it's a driver for.
You want to be able to configure the node as well. What's this node doing? You can set its OUI. You can set busy retry codes, ack-pending controls and things, say whether you want to do ack pendings, ack busies, or whatever. And then there's a whole pile of optional features, which you can decide whether they're going to go into your device or not. And of course, these are optional features which are dynamic.
So you can build bus registries, you can contend for being bus manager, you can decide that this device is going to be a bus manager, or potentially, you can optimize gap count, decide that it's going to be isochronous capable, power manager capable, and so on and so forth. You can decide it's going to be cycle master.
As I said, these are dynamic, which means that you can build software which maybe itself goes and loads capabilities dynamically. And so, you know, for some time you may decide that you don't want to be isochronous resource manager capable or something until some other software has actually managed to get loaded, or something of that sort. So you can decide as the capabilities come and go whether you wish to have these features advertised and these capabilities supported.
Another class of services that are provided are to be able to access the PHY chips that are across the bus, your local chip or a remote chip. So you can do things like set the force-root bit to persuade a particular device that it wants to be root. You can send a link-on, which goes and turns on the link layer device, the link layer hardware, in a remote device. You can set gap counts. You can ping nodes. You can engage in suspend and resume power management stuff, enable PHY ports.
And you can read and write remote PHY registers. So if you have a need for being able to access PHY chips across the bus, you can do that. Now, I have to admit, the main user of this is going to be a bus manager, which is in kernel anyway. So it's unlikely that you will actually need these services, but they're there if you do. And they're the ones that are going to be used by some of the other features that kernel provides anyway.
Looking at this sort of from the other way around, you can be a target device advertising a CSR ROM, and so we provide the capabilities to build the CSR ROM and to modify or add units to the CSR ROM dynamically. So essentially, if a read to CSR ROM comes into the node, then Kernel will actually provide the response based on what you've asked Kernel to set up in CSR ROM. And your application doesn't actually directly get involved in servicing those CSR read and write commands, unless for some reason it wants to. It can choose to, but in general it won't want to.
There's a direct memory client. This allows you to designate an area of your memory and hand it over to kernel, and say, well, what I want you to do is to allow access to this area of memory from remote devices across the bus. So reads and writes to this area of memory will go straight to that memory. And by and large, your application needn't get involved. Now, sometimes this can be directly supported by hardware. OHCI has a facility for supporting this. Some other link chips can support this sort of thing as well. Otherwise, it's supported by software.
But then the question is, okay, so there are other devices reading and writing from this lump of memory. There are going to be times when I do actually want to be able to know what's going on, put things in that memory, get things out of that memory. So what I'd like is, I'd like to be told when other devices do access that memory. I can be told, I can say, I want to be told every time. I want to be told never.
Or perhaps I want to be told every N packets. So if there are N packets that are read from memory, then I'm going to go back and I'm going to go and fill it up with some more stuff. Or perhaps every N bytes that are read from that memory, or written to that memory, I want to be told.
And there's optional support for posted writes. In other words, a posted write is one where the transaction comes in and it's acknowledged with an ack_complete. Even before the data that's in that write transaction has been physically placed in the memory, the device at the far end thinks it's got there. And that's a good optimization which is used quite a lot in FireWire. So the kernel handles the write completion, whatever may be needed to finish that.
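A sketch of how a direct memory client might be set up, including the every-N-packets notification and posted writes just described; the function name and parameters are assumptions for illustration only.

```c
/* Sketch of handing a buffer to the kernel as a direct memory client and
 * asking to be notified every N packets; names and signatures are assumed. */
#include <stdint.h>
#include <string.h>

typedef uint32_t TNFClientID;

extern int tnfDirectMemoryPublish(TNFClientID c, void *buf, uint32_t len,
                                  uint64_t busAddr,       /* 1394 offset to expose */
                                  unsigned notifyEveryNPackets,
                                  void (*notify)(void *buf, uint32_t len),
                                  int allowPostedWrites);

static uint8_t mediaBuffer[64 * 1024];

/* Called back after every N write packets have landed in the buffer; with
 * posted writes enabled, the remote node already saw ack_complete. */
static void bufferActivity(void *buf, uint32_t len)
{
    /* consume the data, then recycle the buffer */
    memset(buf, 0, len);
}

void exportBuffer(TNFClientID me)
{
    tnfDirectMemoryPublish(me, mediaBuffer, sizeof mediaBuffer,
                           0xFFFFF0020000ULL, /* hypothetical exposed offset */
                           8,                 /* tell me every 8 packets     */
                           bufferActivity,
                           1 /* posted writes OK */);
}
```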
For isochronous services, you can allocate and deallocate channels and bandwidth. You can initialize an isochronous stream, you can start an isochronous stream, you can stop one. And then you can sign up for isochronous event notification. And the sorts of events are start of isochronous data, end of isochronous data, and then some odd things to do with channels and managing what happens with cycle starts.
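In sketch form, grabbing isochronous resources and starting a stream might look like this; the tnfIso* names and parameters are assumptions, not the documented calls.

```c
/* Hypothetical isochronous flow: claim a channel and bandwidth from the IRM,
 * initialize a stream, and start it. Names are illustrative only. */
#include <stdint.h>

typedef uint32_t TNFClientID;
typedef uint32_t TNFIsoStreamRef;

extern int  tnfIsoAllocateChannel(TNFClientID c, int preferred, int *channel);
extern int  tnfIsoAllocateBandwidth(TNFClientID c, unsigned bwUnits);
extern TNFIsoStreamRef tnfIsoStreamInit(TNFClientID c, int channel, int speed);
extern int  tnfIsoStreamStart(TNFIsoStreamRef s);
extern int  tnfIsoStreamStop(TNFIsoStreamRef s);

int startTalking(TNFClientID me)
{
    int channel = -1;

    /* Ask the isochronous resource manager for a channel and bandwidth. */
    if (tnfIsoAllocateChannel(me, 63, &channel) != 0)
        return -1;
    if (tnfIsoAllocateBandwidth(me, 1000 /* assumed allocation units */) != 0)
        return -1;

    TNFIsoStreamRef s = tnfIsoStreamInit(me, channel, 2 /* e.g. S400 */);
    return tnfIsoStreamStart(s);
}
```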
The FCP services, those are the underpinning services for AVC, and the functions that are provided there are to send an FCP command or an FCP response. You have to be careful. An FCP response is not a serial bus response. FCP uses directed writes in its protocol, and one FCP initiator will send an FCP directed write to the target, and then the target will send an FCP response, which is an FCP directed write, back to the initiator. So you can send FCP commands or responses. You can add an FCP command handler for incoming commands. You can add a response handler. And then there are plugs which you can deal with as well.
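As an illustration, sending an AV/C PLAY command over FCP and catching the FCP response might look roughly like this; the tnfFCP* names and signatures are assumptions, though the four-byte frame is the standard AV/C tape PLAY example.

```c
/* Hypothetical FCP usage: send an AV/C command frame via FCP and register a
 * handler for the FCP response write that comes back. Names are assumed. */
#include <stdint.h>
#include <stddef.h>

typedef uint32_t TNFClientID;
typedef uint32_t TNFDeviceRef;

extern int tnfFCPSendCommand(TNFClientID c, TNFDeviceRef dev,
                             const void *frame, size_t len);
extern int tnfFCPAddResponseHandler(TNFClientID c,
                                    void (*h)(TNFDeviceRef, const void *, size_t));

static void fcpResponse(TNFDeviceRef dev, const void *frame, size_t len)
{
    /* Note: this is an FCP response (a directed write back to us),
     * not a serial bus response packet. */
    (void)dev; (void)frame; (void)len;
}

int playTape(TNFClientID me, TNFDeviceRef vcr)
{
    /* AV/C PLAY (FORWARD) to tape subunit 0: ctype=CONTROL (0x00),
     * subunit type 4 / ID 0 (0x20), opcode PLAY (0xC3), operand FORWARD (0x75). */
    static const uint8_t avcPlay[4] = { 0x00, 0x20, 0xC3, 0x75 };

    tnfFCPAddResponseHandler(me, fcpResponse);
    return tnfFCPSendCommand(me, vcr, avcPlay, sizeof avcPlay);
}
```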
Odd little features of the way that kernel's been put together: little-endian and big-endian support, whatever processor you happen to be using, sort of whatever reasonable size of processor you happen to be using. There are some compile-time optimizations. So, for example, if you decide that you're never, never, ever going to be bus manager, you needn't compile it in. And a few things of that sort in order to get the footprint down. And there is this multi-threaded, multi-tasking support.
So, how do you use Kernel? Well, the steps are that first of all, you configure it. You configure kernel; your application, sort of at the master application level, goes and sets the various queue sizes, the various parameters that are needed. You then complete the node configuration. You say whether you want the node to be bus manager capable, IRM, whatever. Then you create CSR ROM, ready for the various clients that are going to want to advertise their capabilities in CSR ROM.
Around about that time, TNF goes and gets capabilities of the link driver and does a link driver configuration. And then your various clients, protocol clients, will individually register for required services. So basically you go and set the clients going as independent tasks in your application, and they'll go and register for their services. And then you just sit back and enjoy it, watch it all work.
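Those bring-up steps might look roughly like this in the master application; every function name and parameter below is a hypothetical stand-in for the real configuration calls.

```c
/* Rough sketch of the bring-up order described above; all tnf* names and
 * parameters are hypothetical stand-ins for the actual initialization API. */
#include <stdint.h>

extern int  tnfKernelConfigure(unsigned outboundQueueDepth,
                               unsigned inboundQueueDepth,
                               unsigned maxClients);
extern int  tnfNodeConfigure(int busManagerCapable, int irmCapable,
                             int cycleMasterCapable);
extern int  tnfCSRROMCreate(uint32_t vendorOUI);
extern int  tnfLinkDriverConfigure(void);
extern void protocolClientsStart(void);  /* your protocol clients register here */

int firewireBringUp(void)
{
    /* 1. configure kernel queue sizes and limits (set dynamically, not #defined) */
    if (tnfKernelConfigure(16, 16, 8) != 0)
        return -1;

    /* 2. complete the node configuration */
    tnfNodeConfigure(1 /* bus manager capable */, 1 /* IRM */, 1 /* cycle master */);

    /* 3. create CSR ROM, ready for clients to add their unit directories */
    tnfCSRROMCreate(0x000000 /* your OUI goes here */);

    /* 4. the kernel queries link-driver capabilities and configures the driver */
    tnfLinkDriverConfigure();

    /* 5. protocol clients register for the services they need, then it runs */
    protocolClientsStart();
    return 0;
}
```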
So that's Kernel. Second module that I'd like to introduce you to is TNF AVC. And this provides an API for AVC controllers and AVC targets. And actually, this presentation, this part of the presentation feels just like the previous part of the presentation. It's the same model all over again. Same sorts of things, same feel all over again. First of all, we're supporting multiple sub-units, so your AVC device can have several personalities simultaneously. There are notification mechanisms for AVC-related bus and remote unit events.
You can construct appropriate unit and sub-unit descriptors if you're an AVC target to go into your configuration ROM. You publicize your AVC unit directory. You can be a controller, and AVC provides support for parsing the descriptors that you'll find in some other target somewhere else on the bus.
And again, the API provides references to units and sub-units and plugs and so on and so forth by logical references which will remain unchanged through bus resets and through things which might actually change the physical representation of these as known on FireWire. So again, it insulates the application or the higher-level protocol from those changes.
AVC supports AVC 3.0, the 1394 Trade Association specification. It provides a complete set of APIs for both controllers and targets, and multiple sub-units. It does automatic mapping of addresses and plugs and so on and so forth after bus reset. It provides dynamic construction of descriptors. And there are also some interesting rules for open and closed descriptors, and again, it ensures that those rules are complied with.
It provides IEC 61883 connection management. It provides support for FCP or asynchronous connections for command transport. Both of those are used in different circumstances. You can have multiples of those command transports, multiple outstanding transactions, and a target can service multiple controllers. That's quite an important feature: multiple devices out on the bus can find that this is a target, and it won't fall over if two or three of them start trying to poke at it.
So the block diagram amazingly looks just like the previous block diagram. AVC sitting down here, it's got its interface to kernel using those kernel APIs that I was talking about a moment ago. It's got its interface to TNF Sys, and its interface then provides the APIs to the particular protocols, particular AVC protocols that you want to support.
And again, the block diagram looks similar. We have our various services essentially going between the AVC device API and the transport controller, which goes down to kernel. The system API is actually hardly used at all. The only thing that it does is to provide memory allocation; it goes to TNF malloc for memory allocation.
Incidentally, I should have mentioned that the fact that TNF malloc is provided, and that everything that we provide in the FireWire Reference Platform goes through TNF malloc, means that you can choose just to go and call malloc if you want to. But if there's some good reason why you want to control the memory areas that are used for all of the FireWire Reference Platform data structures and buffers and so on and so forth, you can go and implement your own special version of TNF malloc with your own specific system-dependent capabilities. And the rest of the FireWire Reference Platform will just go and use that all automatically.

We support multiple transports, and we support them simultaneously. You don't get any deadlocks or anything horrible like that as a result of doing this.
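Coming back to the TNF malloc point for a moment, here is a sketch of the sort of replacement you might supply, carving every platform allocation out of a pool you control; the tnfMalloc/tnfFree spellings are assumptions about the hook's name.

```c
/* Sketch of a replacement TNF malloc backed by a dedicated static pool, so
 * all reference-platform buffers live in memory you control. The names
 * tnfMalloc/tnfFree are assumptions about the porting hook's spelling. */
#include <stddef.h>
#include <stdint.h>

#define TNF_POOL_SIZE (64 * 1024)

static uint8_t tnfPool[TNF_POOL_SIZE] __attribute__((aligned(8))); /* GCC-style alignment */
static size_t  tnfPoolUsed;

/* A trivial bump allocator: adequate for a bring-up where the platform's
 * allocations are made once at initialization time and never freed. */
void *tnfMalloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;            /* keep 8-byte alignment */
    if (tnfPoolUsed + n > TNF_POOL_SIZE)
        return NULL;
    void *p = &tnfPool[tnfPoolUsed];
    tnfPoolUsed += n;
    return p;
}

void tnfFree(void *p)
{
    (void)p;  /* bump allocators don't free; a real port might use a heap here */
}
```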
You want to access a particular target device, a particular target unit that's somewhere else on the bus, you go and get an AVC reference ID. And then you'll use that in all future calls to go and make accesses to that target device, getting its unit information, sub-unit information, plug information, and so on and so forth.
Now, if you look at the layers of connection management protocols that are defined for FireWire, there's a set of protocols called connection management protocols, which uses this model of a plug. So two devices sort of share a plug if they're going to send asynchronous data to each other.
And that's supported in kernel. Then what AVC does is that it extends that model so that if you push into a device and you get to some sub-units, essentially you put a plug on a sub-unit and you plug that to some other sub-unit in some other device. And in fact, you can even go and plug one sub-unit to another sub-unit on the same device if you really want to. So AVC, in general, goes and extends this model to sub-units from just units. And so our AVC supports all of that.
So we find ourselves being able to do things like configure the output plugs and master plugs, configure the input plugs, and connect and disconnect plugs, the serial bus plugs, if you like, at the device level. And most important of all, perhaps, is that we can connect and disconnect between sub-unit plugs and other sub-unit plugs and/or unit plugs. So you can sort of plug your system together whatever way around is appropriate.
All references to a device are made by a device reference ID, and you can then go and get the various units and information from an AVC reference. We have, basically, again, references for units and for sub-units and for plugs. So all of that, again, is handled in a way which won't change over bus resets, won't change over 1394 topology changes or reconfigurations.
So essentially, what you do is to initialize an AVC device. You say it's going to be a controller, it's going to be a target, or perhaps it's going to be both. You register for the unit event notifications that you're interested in. You deal with sub-units. You can construct AVC command headers for sending commands to other devices. And you can get plug references.
If you're a target, then you can create a target unit with a reference ID for yourself. You can create sub-units within your target. You can create AVC plugs, and you can register to be a command handler. And you'd want to do this because somebody else on the bus is going to send you, as a target, an AVC command. So you can have a command handler which goes and deals with that AVC command when it comes across.
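In outline, becoming an AVC tape target might look something like this; all the tnfAVC* names and signatures are assumptions for illustration, not the module's real API.

```c
/* Hypothetical AV/C target setup: create a target unit, add a tape subunit,
 * create plugs, and register a command handler. Names are assumptions. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef uint32_t TNFAVCUnitRef;
typedef uint32_t TNFAVCSubunitRef;

extern TNFAVCUnitRef    tnfAVCCreateTargetUnit(void);
extern TNFAVCSubunitRef tnfAVCCreateSubunit(TNFAVCUnitRef u,
                                            unsigned subunitType); /* 4 = tape */
extern int tnfAVCCreatePlugs(TNFAVCSubunitRef su, unsigned nIn, unsigned nOut);
extern int tnfAVCRegisterCommandHandler(TNFAVCUnitRef u,
        int (*handler)(const uint8_t *cmd, size_t len,
                       uint8_t *resp, size_t *respLen));

/* Called when a controller somewhere on the bus sends us an AV/C command. */
static int tapeCommandHandler(const uint8_t *cmd, size_t len,
                              uint8_t *resp, size_t *respLen)
{
    /* Decode the opcode, act on it, and build the AV/C response frame. */
    resp[0] = 0x09;                     /* ACCEPTED response code, for example */
    memcpy(resp + 1, cmd + 1, len - 1); /* echo subunit/opcode/operands back   */
    *respLen = len;
    return 0;
}

void becomeTapeTarget(void)
{
    TNFAVCUnitRef    unit = tnfAVCCreateTargetUnit();
    TNFAVCSubunitRef tape = tnfAVCCreateSubunit(unit, 4 /* tape subunit */);
    tnfAVCCreatePlugs(tape, 1 /* input */, 1 /* output */);
    tnfAVCRegisterCommandHandler(unit, tapeCommandHandler);
}
```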
So that's a look at AVC. And finally, another protocol is SBP-2. This is just slightly different from the other two, but it gives you, again, a feel for the sorts of software and the sorts of ways we've gone about constructing the support for various protocols in the FireWire Reference Platform. I do want to emphasize that in this presentation I'm talking about the SBP-2 target. We provide excellent support for both SBP-2 initiators and for SBP-2 targets, and the fact that I'm picking out the target for this presentation is simply a matter of my time and your patience.
So, the SBP-2 target. Well, we provide full target support. It's a generic API for all types of SBP-2 targets: disks, printers, whatever you like, plotters. SBP-2 is defined in ANSI NCITS 325, and it's fully compliant with that. And of course, it operates as a client of TNF kernel. So in this particular case, it only uses the TNF kernel APIs. It's operating system independent. It uses TNF malloc, and I think it uses TNF DMA-safe malloc. So it's using the main APIs that we provide, no others. It'll run in any environment in which TNF kernel runs, then.
It provides callback functions for CSR event handling. So in fact, the application does, in this case, have to get involved. And this is one of those cases where actually a bus reset has to be significant. Unfortunately, a bus reset is a significant event as far as a client of SBP-2 is concerned.
We can't hide the bus resets. There's a full range of configuration functions, and buffers are allocated dynamically as needed. That's whenever some initiator across the bus goes and initiates a session with this target: at that point, we then go and allocate all the necessary buffers for that particular login that the initiator has made.
[Transcript missing]
The system task deals with CSR accesses, control and status register accesses. So this is where an initiator is finding out about this SBP-2 target, what it's capable of, what it can do and what it can't do. There's a management agent, and that deals with initiator login. A device across the bus logs into an SBP-2 target, logs into the disk or whatever, before it starts to do reads and writes to that disk.
So this management agent task deals with those logins. Then the application tasks deal with ORBS, Operation Request Block Processing. So the actual requests to go and read blocks from disk or whatever, for disks it would tend to be a command protocol such as the... I've forgotten its name. There's a specific command protocol for disks. That interpretation would have to be done by the application task, which you, the application writer, would write.
And we have to issue a sort of little health warning here as well. There are some aspects of SBP-2 compliance which actually depend on appropriate application behavior. We're not able to shield you entirely from these. And so, for example, every time the initiator sends you an operation request block, you are required to return status for that block. And we rely on the application doing that correctly.
But the reason why we've done this is to allow application flexibility. But we do have a sample target application. You can actually go and run the application code and do accesses to a real disk if you wish to, so that you can see how to implement a compliant application.
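To illustrate the "status for every ORB" rule in a compliant application, a command handler might be shaped like this; the types and the platform call names are assumptions, not the sample application's actual code.

```c
/* Hypothetical command-ORB handler illustrating the rule that the target
 * application must return status for every ORB it is given. All names and
 * structure layouts are illustrative assumptions. */
#include <stdint.h>

typedef struct {
    uint64_t dataDescriptor;   /* where the initiator's buffer / page table is */
    uint16_t dataSize;
    uint8_t  cdb[12];          /* the command block carried in the ORB         */
} CommandORB;

typedef uint32_t SessionRef;

extern int  runCommand(const CommandORB *orb);              /* your device logic */
extern void sbp2TargetSendOrbStatus(SessionRef s, const CommandORB *orb,
                                    int sbpStatus);          /* platform-supplied */

void onCommandOrb(SessionRef session, const CommandORB *orb)
{
    int result = runCommand(orb);

    /* Whatever happened, success, command error, or transport error,
     * status must go back for this ORB, or the initiator will sit waiting
     * and the target will look non-compliant. */
    sbp2TargetSendOrbStatus(session, orb, result);
}
```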
[Transcript missing]
Although this is sitting on top of kernel, there's nothing in this which prevents the application from getting directly to the kernel APIs if it wishes to.
For the data transfer APIs, one of the things we do provide is automatic support for initiator page tables. In other words, in the SBP-2 model, there are various models of, when you're sending writes across the bus, exactly where they go to and how you deal with the fact that the initiator might have page tables or might have linear memory.
And the code that we provide actually deals with that. Essentially, the API to the application is a start data transfer, and then our code determines whether a page table or direct memory is being used and translates that into the appropriate bus transactions to be able to enable the right thing to occur.
So the application will make repeated calls to read buffer or write buffer, and then call stop data transfer. Then there are status APIs. There are various times when an application needs to send status. There's a call for sending unsolicited status. You need to send status in response to the ORBs that come in, and there are command error and ORB error statuses.
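Putting the data transfer and status calls together, servicing a data-out ORB might go roughly like this; all the names are stand-ins, and the decision between page tables and direct memory is assumed to be hidden behind the start-transfer call as described above.

```c
/* Hypothetical servicing of a data-out ORB: start the transfer (the platform
 * works out page table vs. direct memory), pull the data across in chunks,
 * stop the transfer, then return status. Names are stand-ins. */
#include <stdint.h>
#include <stddef.h>

typedef uint32_t SessionRef;
typedef uint32_t TransferRef;

extern TransferRef sbp2TargetStartDataTransfer(SessionRef s, const void *orb);
extern int  sbp2TargetReadBuffer(TransferRef t, void *buf, size_t len); /* initiator -> target */
extern void sbp2TargetStopDataTransfer(TransferRef t);
extern void sbp2TargetSendStatus(SessionRef s, const void *orb, int status);

void serviceWriteOrb(SessionRef session, const void *orb, size_t totalLen)
{
    uint8_t chunk[2048];
    TransferRef xfer = sbp2TargetStartDataTransfer(session, orb);

    for (size_t done = 0; done < totalLen; ) {
        size_t n = totalLen - done < sizeof chunk ? totalLen - done : sizeof chunk;
        if (sbp2TargetReadBuffer(xfer, chunk, n) != 0)
            break;
        /* ...hand chunk to the disk / printer / whatever this target is... */
        done += n;
    }

    sbp2TargetStopDataTransfer(xfer);
    sbp2TargetSendStatus(session, orb, 0 /* good status */);
}
```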
In order to achieve these functions, the application provides some event handlers, and we'll have a quick look at the management and command agent event handlers. An event handler is provided for when there's real work to be done, for example operation request blocks, and there are event handlers for things like bus reset. And some of the event handlers are optional.
So the configuration of these callbacks works by the application calling the SBP-2 target initializer, target init, with one callback parameter, which is called onOpenSession. So the idea here is that whenever an initiator initiates a new session, then this callback will be called, and the login descriptor is provided as a parameter to that callback.
Entries for all the other callbacks that you might need are provided within that data structure. So in the onOpenSession callback, you initialize those entries, which are then set up according to however your application wants to configure all those other callbacks for this particular session. So you can actually use different callbacks for different sessions if you wish to.
So the callback functions are on open session, on end session, set password, abort task set, reset target, all the 'here's stuff to do, get on with it, please' sorts of callbacks. There's Doorbell, which basically says go and read the host descriptors for more commands. You needn't actually do anything about Doorbell in the application if you don't want to. You can rely on the SBP-2 code doing that for you.
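As a sketch of that wiring, with onOpenSession filling in the per-session callback entries; the structure members and function names below are assumptions, not the platform's actual declarations.

```c
/* Hypothetical wiring of the per-session callbacks: the target is initialized
 * with one onOpenSession callback, which fills in the rest of the callbacks
 * in the login descriptor for that session. All names are assumptions. */
#include <stddef.h>

typedef struct LoginDescriptor LoginDescriptor;
struct LoginDescriptor {
    void (*onEndSession)(LoginDescriptor *login);
    void (*onDoorbell)(LoginDescriptor *login);    /* optional: platform can handle it */
    void (*onCommandOrb)(LoginDescriptor *login, const void *orb);
    void (*onBusReset)(LoginDescriptor *login);
    void  *appContext;
};

extern int sbp2TargetInit(void (*onOpenSession)(LoginDescriptor *login));

static void myEndSession(LoginDescriptor *l) { (void)l; }
static void myCommandOrb(LoginDescriptor *l, const void *orb) { (void)l; (void)orb; }

/* Called once per initiator login; different sessions could install
 * different callbacks here if the application wanted to. */
static void myOpenSession(LoginDescriptor *login)
{
    login->onEndSession = myEndSession;
    login->onCommandOrb = myCommandOrb;
    login->onDoorbell   = NULL;   /* let the SBP-2 code fetch ORBs itself */
    login->onBusReset   = NULL;
    login->appContext   = NULL;
}

int startTarget(void)
{
    return sbp2TargetInit(myOpenSession);
}
```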
There are also some callbacks that may be called as a result of internal conditions like agent dead, bus reset, unit attention. So that's a quick look at SPP2. And just as a final slide, the FireWire Reference Platform is available on the web. You can go download it. There are some mailing lists where people who are using this are putting all their experience, questions, and indeed swapping their own good hints as to how to make best use of this. So that's the end of a quick gander through all the facilities that are provided in the FireWire Reference Platform.