Networking and Security • 1:03:52
Mac OS X offers the most powerful and flexible networking technology in the industry. Learn about networking APIs and protocols such as IPsec, IPv6, and PPPoE.
Speaker: Vincent Lubet
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it may contain transcription errors.
Good morning. My name is Tom Weier. I'm the Network and Communications Technology Manager in Developer Relations. I want to welcome you to session 300. This is a networking overview. As you heard yesterday, the networking in Mac OS X is pretty much the core of a variety of services that are layered on top of it. It's an extremely high performance subsystem. With that, I'd like to introduce the manager of Mac OS X Core OS networking, Vincent Lubet.
Thank you. So here is what we are going to go through today. Obviously, an overview of the different components of Mac OS X networking. We're going to talk also a great deal about APIs, because I guess some of you are more familiar with the classic Mac OS networking, and we're going to talk about some of the new stuff that you will discover on Mac OS X. A number of hints and tips, especially for developers who come from Mac OS 9: the architecture of the system is quite different, and there are many things that work well on Mac OS 9 that don't work, or have a severe impact on performance, for example. We'll go over that. And we'll talk also briefly about future directions.
So maybe you've already seen this graph, this picture. The core of the networking lies in the kernel; networking really is a subsystem of the BSD kernel. The Darwin kernel, if you will, has three main components. The BSD kernel provides a lot of the APIs for the core services and the upper layers. The I/O Kit provides access to the hardware; for example, that's where you will find the Ethernet drivers. And the Mach kernel is the basic core that provides the basic services, like scheduling, memory management, and things like that.
So some of the features of Mac OS X. It comes with the TCP/IP protocol stack. It comes also with link layers: PPP built in with serial support and PPPoE, especially important today for DSL access; and the other type of link layer, of course, is Ethernet. Another important feature of Mac OS X is the dynamic configuration, and that's at the core of the ease of use we want to bring to a Unix-based system. One of the problems that people see in BSD or Unix-like implementations is that people have to type a lot of commands; it's command-line oriented. With Mac OS X, we did away with that. And it's dynamic: you don't need to restart to change the configuration. That's very important.
Another thing: it also provides for an extensible architecture, which again ties into the ease of use, especially in the kernel. We have the ability to add functionality to the kernel without having to recompile the kernel, which is usually the model that you have on many BSD systems.
So the ongoing goals we set for ourselves for the networking experience in Mac OS X. First, of course, is ease of use: users coming from the classic Mac OS often don't have to think about configuring the network, and that's one of our goals for Mac OS X as well. Performance: because of the underlying networking stack, I think we're pretty well off already today. Better extensibility. And standards compliance, of course, which means TCP/IP, and we'll see that we have plans to get new protocols. So the kernel part, especially, is based on FreeBSD 3.2. It's a robust and proven implementation. Typically, it's used by large companies for servers, so it shows that it can stand a lot of abuse and heavy loads. It has a popular API, which is the Sockets API. One of the important points is that it's easy to port Unix-like applications: there's a lot of code out there, and, provided that you don't violate any license, you can reuse available code and even learn by example by looking at various open source implementations.
So we brought some enhancements to the FreeBSD implementation. First of all, the kernel is multithreaded and MP-safe. Multithreaded means that we can take advantage, for example, of multiprocessor architectures. That's a totally different model from Mac OS 9, especially for the networking, and we'll see that some of the assumptions you could make on Mac OS 9 are no longer true on 10. It's not only the kernel pieces; it also has an impact on applications. We have also tuned the network buffer allocation so that on a typical client system we don't use too much wired memory, which is a very critical resource. Of course, the buffer allocation can stretch and grow, for example if you have a configuration like a server. Part of the enhancement we brought to the FreeBSD kernel is this extensibility, which means that you don't need to recompile the kernel. Session 304 will go into more detail; it's just after this one, in room J if I remember well, which is at the opposite end.
So dynamic configuration, very important for us. That means you don't need to restart when you reconfigure your system. But we went even further than that: since the kernel supports multi-link and multi-homing out of the box, we came up with this idea of automatic configuration. We don't really use multi-homing in the usual sense, by default, on a client system; we don't use multi-homing to do routing or things like that.
But one of the important features we have, for example, the classic example we come up with, is that you can have several network interfaces active at the same time. The typical use: you come to your office and you plug in the Ethernet, and Ethernet becomes your main interface.
If you have AirPort and you unplug the Ethernet cable, the AirPort interface will pick up automatically. And that's without any user interaction. So that's very, very important for us. I think that's something that really shows the benefit of multi-homing. And this afternoon, session 303 is going to talk a lot more about network configuration and mobility.
PPP. So Mac OS X comes with built-in PPP. It's based on, let's say, the Unix reference implementation called pppd, which has been ported to many, many platforms. That's also the one that NeXTSTEP, for example, was using. It supports out of the box internal modems, external modems, and PPPoE. We've added some enhancements to it which come from our Mac OS roots, which are CCL scripts and also the OT/PPP control API. The CCL scripts are obviously for our users, and the control API is there because many applications want to dial from within the application without user interaction.
Another important component of Mac OS X, of course, is the classic networking. The important thing is that Classic and Darwin share the same IP address. Without that, it would be almost impossible: if Classic, for example, had a separate address, it would become a nightmare. You couldn't, for example, usually connect to your ISP, because usually you have only one address assigned by the server.
So that means that the TCP and UDP port spaces are shared between those two environments. And something we've done also: we took a great deal of effort to make sure that when you ping a system, we reply with only one ICMP reply. That was something that was brought to our attention a few years ago at one of these sessions, so we took your feedback and made sure it's right. In Classic, there's no user configuration: the configuration of TCP/IP in OpenTransport in Classic is done automatically behind the scenes, so the TCP/IP control panel is read-only. One important thing to note is that Classic and Darwin do not share the AppleTalk address, and usually that's not really a problem.
So one of the important cores of the classic networking implementation is this global UDP and TCP port space. Basically, how it works is that when a classic application does a bind to grab a port, it does a special call into Darwin. And the same for Mac OS X applications: when they bind, they call into this global port space. It's similar to NAT in some sense, except that the clients are on the same machine; it's not really NAT, but it's similar. And of course, we have this kernel extension, maybe you've seen it on your system, which is called SharedIP. It's a filter, really, at the lower level of the stack that handles incoming and outgoing traffic, and for an incoming packet, it calls into this global port space to see to which environment to send the packet.
So now we're going to talk a little bit more about the APIs for the different application environments. I forgot to put Java there, but of course we support Java. If you use Classic, you don't have to do anything: if your plan is to continue to use Classic, it works, with 100% fidelity. So I'm going to talk more about Carbon and Cocoa. We see three different types of APIs in Mac OS X, and we're going through all of them. The first kind are the URL-oriented APIs, easy to use. The next are the Open Transport APIs, really for Carbon. And the last one is the BSD Sockets API, which is the native API for networking on Mac OS X.
The URL-oriented APIs are mainly for what we call non-networking-centric applications. One of the great benefits of those APIs is that it's easy to download some URL: basically, you have a URL, you pass it to the API, and it's going to download it locally on the system and tell you when it's done. So you don't have to know about the protocols themselves; you don't have to deal with the details of FTP or HTTP. It's all done for you. There are basically three kinds of APIs: the NSURL API for Cocoa applications, URL Access for Carbon and Classic of course, and a new one that's kind of bridging those two application environments, CFURLAccess, which can be used by both Carbon and Cocoa applications. You will learn more about those if you go to session 311, so I won't go into any more details, especially because I don't know them very well.
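As a rough illustration of that download model (sketched here with Python's urllib standing in for URL Access and CFURLAccess, which are the actual APIs discussed; a file:// URL keeps the sketch self-contained, with no network needed):

```python
import pathlib
import tempfile
import urllib.request

# Create a small local file and a file:// URL pointing at it.
src = pathlib.Path(tempfile.mkdtemp()) / "hello.txt"
src.write_text("hello from a URL")
url = src.as_uri()

# One call fetches the URL to a local file and returns its path:
# the same "hand me a URL, tell me when it's done" model, with no
# FTP or HTTP protocol details exposed to the caller.
dest, headers = urllib.request.urlretrieve(url)
print(pathlib.Path(dest).read_text())
```

The point is only the shape of the API: the caller supplies a URL and gets back a local copy, regardless of the scheme.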
The next set is Open Transport for Carbon. The goal here is to provide an easy migration of your application from Classic to Mac OS X. It provides only API-level compatibility: that means that if you have a solution on Classic that relies, for example, on a STREAMS module, it cannot be ported right away to 10.
The OT implementation on Mac OS X is layered on top of sockets. It's a framework, part of the Core Services framework, and it uses threads to emulate asynchronous mode. Which also means that the implementation of Open Transport can take advantage of multiprocessors: some of the background work can be done on one processor while your application is calling APIs from another processor.
Something very important to note is that Open Transport on 10 is layered on top of sockets, which means that you're going to incur overhead compared to similar code that uses straight sockets. It really depends: roughly, it's five to ten percent overhead depending on what the code is doing. It can be far worse sometimes, and it can even be better, because sometimes you can take advantage of the fact that some of the worker threads in the background run on another CPU. But for a typical application, 10% is common.
So here it starts to go into more details, but important gotchas. When we designed Carbon, we didn't think of Carbon as just a compatibility layer; the goal was not to be just compatible. We were thinking that for developers bringing their application to Mac OS X, moving from the Mac OS toolbox to Carbon is sometimes already a big step, and we weren't assuming that going the further step, which would be to use Cocoa, would always be easy. So we thought it was very important to provide for performance: the Carbon OT implementation on Mac OS X, the OT framework, was meant to perform well. There's a flip side to that: it's not a high-fidelity implementation, especially because of the preemptive nature of Mac OS X and the multithreading. For example, in Carbon there's no interrupt level. Part of Open Transport is using notifiers and deferred tasks that on Classic run in the background at software interrupt level. That does not really exist on Mac OS X; you only have threads.
There's no interrupt level. And one of the things we decided, for the performance implications, is that a notifier runs just as another thread. A notifier has no higher priority, if you want, than the main event loop or other cooperative threads. The only other solution to work around this problem would have been to serialize notifiers with cooperative threads, meaning notifiers would be just another cooperative thread. That would make porting parts of applications very easy, but performance would greatly suffer, because you don't want the networking activity to be dependent on, for example, some UI or some other lengthy task. So it's very, very important when porting your application that you think about protecting your data: if you're using asynchronous mode and notifiers, you have to be sure that you use the right primitives to serialize access between the cooperative threads and the notifier.
Luckily, in the Open Transport API, there are already a bunch of primitives that can help with that. The main ones are really OTEnterNotifier and OTLeaveNotifier, which define critical sections: when you call OTEnterNotifier from your main event loop, for example, that prevents the notifier from running until you call OTLeaveNotifier. That's really the most useful of the synchronization primitives we have in Carbon for Open Transport. We also have a set of atomic operations, but those should really be used to update a single datum, because they're atomic; that means a single shot.
For example, from your notifier you can just set a flag to signal the main event loop that something happened on the networking side; that's fine to do with the OT atomic operations. And finally, one set of primitives I really like are the LIFOs. They are pretty powerful for handling sets of queries and queues. They are sometimes a little bit obscure, but can be very, very useful.
So other gotchas, or at least recommendations. If you bring over your Open Transport application, what are the best ways to use the Open Transport API? You're certainly aware there's a new kind of library that has been developed by DTS, Quinn especially, which is OTMP. It provides a framework, or a library, that allows you to use Open Transport from MP threads. The only downside is that it's not available on Mac OS 8. But otherwise, it's very powerful, and it runs great on both 9 and 10.
Otherwise, especially for Mac OS X, it's very important not to poll. If you're using synchronous endpoints, you should use synchronous blocking mode, so that your cooperative threads can be called back. And lastly, the asynchronous mode with notifiers is certainly the most efficient Open Transport mode. So if your application relies heavily on the networking and gets a lot of data, or is a server kind of application, it's certainly worth the effort of using asynchronous mode. And it's available.
So if you're doing that on 9, it will run just great on 10, provided that you make sure you serialize access to the data that's shared between cooperative threads and the notifier. I will repeat that several times here. But what's important on Mac OS X is not to poll. If some of you went to the Darwin overview yesterday, there was a demonstration of a simple Carbon application that was using different well-known pitfalls. One of them is to use, for example, WaitNextEvent with a timeout value of 0.
And you could see, with the tools available from the BSD layer, that the CPU is used at 100%. If it's the only application and the only thread that runs, you say, oh, what's the point? We'll see that it also has implications for when the system can go into light sleep, like doze and nap modes. And it uses more power, which is important for PowerBooks, for example, and battery life.
Another thing is that Open Transport has this call named OTIdle that the documentation says shouldn't be used. And really, on 10, if you use OTIdle in a tight loop, saying, OK, I will be good, I will call OTIdle, that's going to add a great deal of latency. We took really great care on 10 to make it very, very inefficient. So it will slow down your application, and it will block it; if you want to block, it's a way for you to block your application, or your thread at least, for a while.
So there are other differences in the semantics that can come as a surprise to your code if you ported it straight from 9. For example, flow control is much more common on Mac OS X. The reason is mainly that on Mac OS X you have a split address space, so data has to be copied from user space to the kernel, and in the kernel, memory is very expensive. It's a resource that we don't want to be abused, because it impacts all the different subsystems in the kernel; everything is shared. By default, the socket buffer sizes, which are equivalent to the sndbuf and rcvbuf in the STREAMS world, are not really enforced on Mac OS 9. On Mac OS X, the equivalents are strictly enforced.
That means that if you set a socket buffer size of 4K, that's just the amount of data that you can copy at once. So if you send data, an OTSnd call, for example, will return the kOTFlowErr error much more frequently, and your application should be ready to handle that. A way to work around that is to increase the buffer with the XTI_SNDBUF option, but it has an impact on the overall system.
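On the sockets side, the analogue of adjusting the XTI send buffer is the SO_SNDBUF socket option, sketched here with Python's socket module (the 64 KB figure is just an illustration, and the kernel is free to round the value you ask for):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the kernel for a 64 KB send buffer (the SO_SNDBUF analogue
# of the XTI sndbuf option discussed above).
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

# The kernel may round the value (Linux, for example, doubles it to
# leave room for bookkeeping), so read it back rather than assuming
# the exact number stuck.
size = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("effective send buffer:", size)
s.close()
```

The read-back step matters: code that assumes the requested size was honored exactly tends to mis-size its own send loop.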
Also, differences in the implementation of the Open Transport framework concern the timer tasks and deferred tasks. Again, to be performance sensitive, they are not implemented with the Deferred Task Manager and the Time Manager that are provided by the Carbon layer; instead, they are serialized with the notifier.
And we've done that because we noticed why many applications would use the Open Transport timer tasks and deferred tasks; that's what is important. It's not because they were better than the Time Manager or the Deferred Task Manager. It's just because applications were relying on the fact that on 9 they are also serialized with the notifier callbacks. So that's the main reason. But it might have some subtle implications for some applications, so you should review that.
So another thing is that for AppleTalk, Carbon is the only API that gives access to AppleTalk. And it supports only a subset of AppleTalk: the protocols DDP, ZIP, and NBP. We know that for some developers it's not enough, but that's what we were able to do. One of the reasons we got requests for NBP, for example, is that many developers still use NBP to register services and to provide maybe for copy protection and things like that. And frankly, that was relatively easy to do, so that's why we've done it. As for supporting higher-level protocols, we got a lot of requests, many, many requests, and we still don't have that.
So now I'm going to talk briefly about the Sockets API. That's the native API for Mac OS X, which means that's the one where you're closest to the system, and if you can, I would encourage you to use this API. One of the benefits is that there's a lot of publicly available code out there, open source, so there's a lot of code to learn from. There are also a lot of books; one of my favorites is Unix Network Programming by Richard Stevens. It's really kind of the bible; it's a great book. And something very important, especially if you're porting your application from Mac OS 9: because you have sockets in Mac OS X, maybe you should also review your code and, if you have a sockets-like layer in your application, maybe it would be time to get rid of it. Instead of keeping a sockets emulation layer on top of Open Transport, if you go directly to the native sockets, you will see a lot of performance benefits. For example, one of the requests we got from developers in past years was to add sockets support into Open Transport on Mac OS 9. That's not going to happen, but now, on Mac OS X, you have the Sockets API available.
So I'm going to talk briefly, for the people who know the Open Transport API, to show that it's very easy: there's almost a one-to-one correspondence between the Open Transport calls and the sockets calls. For example, the endpoints, the ones that you use for your TCP and UDP access, are called endpoint providers in the Open Transport documentation, and they are socket file descriptors in BSD. To make an active connection, like a client would do, you call OTBind with a qlen of zero and then you call OTConnect to the distant site. In BSD, the bind is really optional: you don't need to call bind, but you can bind to get an ephemeral port, and then you just call connect. Very simple. For passive connections, like for servers, with Open Transport you bind with a qlen greater than 0.
In BSD, the queue length is called the backlog, and it's passed in the listen call. So that's where there's a little mismatch, but usually those operations are done in series, so it's really easy to do. With Open Transport, you call OTListen to set the endpoint in listening mode, and when an incoming connection comes, you're notified and you call OTAccept. With BSD sockets, it's roughly the same except for the accept call: the qlen becomes the backlog in the listen call, and accept returns a new file descriptor.
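The passive-open sequence just described can be sketched with Python's socket module, which is a thin wrapper over the same BSD calls; everything runs over loopback so the sketch is self-contained:

```python
import socket

# Passive side: bind to an ephemeral port (port 0), then listen().
# The qlen you would pass to OTBind becomes the backlog argument here.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)            # backlog of 5 pending connections
server.settimeout(5.0)      # keep the sketch from hanging forever

# Active side: bind is optional for a client; connect() alone grabs
# an ephemeral port and connects to the listener over loopback.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(5.0)
client.connect(server.getsockname())

# accept() returns a brand-new socket (file descriptor) for the
# connection, unlike OTAccept, where you pass the acceptor endpoint in.
conn, peer = server.accept()
print("accepted from", peer)
conn.close(); client.close(); server.close()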
With OTAccept you pass the listener endpoint and the acceptor endpoint. But it's very, very similar. Sending and receiving data: with Open Transport, depending on the type of endpoint, whether it's connection-oriented or datagram-oriented, you have different APIs to send and receive data. So OTSnd would be for TCP, for example, and OTSndUData would be for UDP, to send datagrams. On BSD, it's a little bit different: there's no real strong affinity between the type of socket and the API. But usually, what you see is that sendmsg is the catch-all; that's the most powerful API, but also the most complex. So usually, if you have a TCP socket, you just use send, and you call sendto for UDP, because that's where you can specify the address for each packet you send. Receiving data, it's the same: you can use recv or recvmsg for a TCP endpoint, and for UDP you call recvfrom, because you get the address of the peer, or recvmsg, because it covers all types of socket file descriptors.
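The datagram side of that mapping can be sketched the same way, with sendto and recvfrom on a pair of loopback UDP sockets:

```python
import socket

# Two datagram sockets on loopback stand in for two hosts.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# sendto() carries the destination address with every packet,
# much as OTSndUData does with its address field.
tx.sendto(b"hello", rx.getsockname())

# recvfrom() hands back both the data and the sender's address,
# the counterpart of OTRcvUData.
data, sender = rx.recvfrom(1024)
print(data, sender)
tx.close(); rx.close()
```

Because each sendto names its destination, the same unconnected socket can talk to any number of peers, which is the datagram model's whole point.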
So, disconnecting and closing. With Open Transport, you have OTSndDisconnect, which for TCP is going to send a reset. If you have to do that on BSD, you use the linger option, specifying an abort. For an orderly disconnect, depending on your protocol, the order may vary, but usually you call OTSndOrderlyDisconnect and then OTRcvOrderlyDisconnect if you want to wait for notification that the other side has disconnected. For BSD, it's pretty much the same: it's called shutdown for write and shutdown for read. And then finally, you call OTCloseProvider when you're done with your endpoint; when you're done with a socket file descriptor, you just call close.
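A minimal sketch of the orderly disconnect on the sockets side, using a connected socket pair in place of a real TCP connection (the commented-out SO_LINGER line shows where the abortive variant would go):

```python
import socket
import struct

# A connected pair of stream sockets stands in for the two ends of a
# TCP connection; socketpair keeps the example self-contained.
a, b = socket.socketpair()
b.settimeout(5.0)

# The abortive disconnect (what OTSndDisconnect does for TCP) would be
# SO_LINGER with l_onoff=1, l_linger=0 before close; shown, not used:
# a.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

# Orderly disconnect: "I'm done sending" on the write side.
a.sendall(b"bye")
a.shutdown(socket.SHUT_WR)

# The peer drains the remaining data, then recv() returns b"" (EOF),
# which is how it learns of the orderly disconnect.
last = b.recv(16)
eof = b.recv(16)
print(last, eof == b"")
a.close(); b.close()
```

The key difference from close is that shutdown(SHUT_WR) lets the peer finish reading queued data before seeing end-of-file.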
In many cases, if you look at simple sample code from Unix, you see that they don't call close, but that's because file descriptors are closed automatically when the process exits, similar to what happens on Open Transport. But close is the one. So again, if you bring an application from Open Transport and you want to use sockets, you will certainly have to do some name-to-address resolution to use DNS. The APIs we have in BSD are called gethostbyname for name-to-address resolution
and gethostbyaddr for the reverse. One important thing to note is that those are non-reentrant APIs; that's a well-known limitation. They return a pointer to a structure that lives inside the resolver library.
So that's why, if you have two threads calling those APIs at once, you may end up with inconsistent results. Another limitation to know about is that those are blocking calls, and they are not cancelable. That's quite different from what you may be used to with Open Transport, where you could cancel calls to the Internet services provider. That's not possible with BSD.
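The lookup itself can be sketched with Python's socket.gethostbyname, which wraps the same C resolver call (CPython serializes it internally on platforms where the underlying function isn't thread-safe); "localhost" is used so the sketch needs no real DNS:

```python
import socket

# Name-to-address resolution; "localhost" resolves through the local
# hosts file, so this works even without a DNS server reachable.
addr = socket.gethostbyname("localhost")
print(addr)

# The reverse direction (gethostbyaddr) maps an address back to names.
# Like the C call, this one blocks until the lookup completes.
```

In C, the non-reentrancy means the returned hostent pointer is overwritten by the next call; the Python wrapper copies the result out, which is exactly the defensive pattern C callers have to apply by hand.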
Which also means that Open Transport on 10 has similar limitations. We try to work around them, but still, the fact that it's a blocking call you cannot cancel means that the framework thread doing the work in the background is going to sit there until the call completes. So here we're going to talk about the different tasking models; that's the terminology used inside Macintosh networking. If you're using synchronous blocking endpoints in cooperative threads or MP tasks, for example, it's very simple to convert that code to use the sockets equivalent; we saw there's almost a one-to-one matching. There are a few things to know, but it's very simple. If you're using asynchronous endpoints, there's no callback in BSD, so you don't have notifiers. But what you usually do is use the select system call, which allows you to multiplex, to wait for events on several endpoints at once. So usually you have a thread blocked on the select call, and it blocks on several endpoints. It's not a callback; it's more multiplexing. But it achieves the same results. For example, the Open Transport framework is using that to wait for events, and when the select call returns, that's what triggers the call to the notifier.
So there are three kinds of events that the select call can handle. Read events: if a socket file descriptor is ready for read, data is available. Write events: if you are in a flow control situation, you can put the file descriptor in the write file descriptor set (there are sets), and when the flow control situation is lifted, select will wake up and the bit will be set for that file descriptor, saying: now you can send data. And finally, the exception set is used for out-of-band data: if your protocol is using TCP expedited data, you will be notified that expedited data is available when this bit is set.
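The read-event case can be sketched with select on a connected socket pair; the write-set and exception-set cases follow the same pattern with the other two descriptor lists:

```python
import select
import socket

a, b = socket.socketpair()

# Before any data is sent, b is not readable: the read set comes back
# empty once the short timeout expires.
r, _, _ = select.select([b], [], [], 0.1)
empty = (r == [])

a.sendall(b"ping")

# After the write, select reports b in the read set: data is available
# and a recv() would not block.
r, _, _ = select.select([b], [], [], 1.0)
print(empty, r == [b])
a.close(); b.close()
```

A real server would pass many sockets in each list and service whichever ones come back ready, which is the multiplexing model described above.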
So, important: select is going to block the thread that's making the select system call, and it's going to block until one of the file descriptors you pass in those sets is ready, or until the timeout, so you can also have a heartbeat if you want. The Stevens book I mentioned has a great deal of explanation. One important thing to know is that select is used by many Unix derivatives, but on BSD-like systems, select also handles non-blocking connect. So when you're making a connection, you don't have to block your thread until the connection is complete: you can just set the file descriptor to non-blocking, and select will wake up when the connection is complete. That is not obvious, because not all Unixes support that, but Darwin has it, and it's very useful. And again, the Open Transport framework is using that; that's how OTConnect is non-blocking in Carbon.
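The non-blocking connect just described can be sketched like this, with a loopback listener standing in for the remote peer:

```python
import select
import socket

# A loopback listener stands in for the remote peer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# Non-blocking connect: connect_ex() returns immediately, with 0 if
# the connection completed at once or EINPROGRESS if it's pending.
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.setblocking(False)
err = c.connect_ex(server.getsockname())

# select() reports the socket writable once the connect completes;
# SO_ERROR then tells you whether it succeeded (0) or failed.
r, w, x = select.select([], [c], [], 5.0)
done = (w == [c]) and c.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0
print("connected:", done)
c.close(); server.close()
```

Checking SO_ERROR after writability is the part people forget: a refused connection also makes the socket "writable", and only SO_ERROR distinguishes the two outcomes.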
So now we're going to go through another list: the hints and the tips. Especially if you are performance conscious, and we would like you to be performance conscious. The most important thing that's going to kill performance on Mac OS X is polling. Polling uses 100% of the CPU. If you run the top command from a shell, you can see whether your application's process is using 100% of the CPU; if so, it means it's polling. It's very common to poll, and there are many different ways to do it. For example, using WaitNextEvent with a timeout of 0 is a way to poll, because the thread is going to run constantly. But many, many times when you're doing I/O, you tend to poll, and something you could get away with on the classic Mac OS is not going to work well on 10. It hurts other processes because it steals CPU cycles away. And it uses more power. We are energy conscious in California, so it's not only the raw power draw; it's also very important for the notebooks many of you have. If there's no activity on the CPU, a thread runs in the kernel called the idle thread, and that's the one that triggers the power conservation modes: it puts the chip in doze or nap mode, according to the level of inactivity. That's not the real sleep, like the sleep command, but still, it's going to affect the battery life of notebooks. So the model for Mac OS X is blocking: you block your threads and wait for an event.
So use blocking threads. What we've seen sometimes is that people use, for example, cooperative threads or MP threads, but they still have some polling loop: instead of blocking in an OTRcv, they have a little while loop there to check for some flag. And that's not going to do any good, because if you're polling in an MP thread or in a regular pthread, you're going to use 100% of the CPU.
And if your application is using many, many endpoints, it won't be good to use many, many threads either. A thread is a resource: it has a stack, and it gets into the scheduler. Instead, if your application uses many endpoints, like maybe a server, you should multiplex: use the select routine if you're using BSD, and otherwise use notifiers for Carbon.
So again, buffer sizes, and this really applies both to Carbon and to sockets. That's a very important aspect of the performance of networking applications on Mac OS X. For example, we saw some Carbon applications that have been ported and use buffer sizes that are wrong at opposite ends. Some of them are using too-small buffers. The pathological case is to call OT receive for one character at a time: you're going to have a lot of switches between user space and the kernel, and that's an expensive operation. Frankly, if you're doing something telnet-like at very, very low throughput, you can get away with it, because maybe it simplifies your state machine. But usually that would really be the only case.
Otherwise, I would really encourage you to use larger buffers. On the other end, if you pass too-large buffers, not only are you going to get into flow control situations that are sometimes difficult to handle, but you're also going to stress the VM for buffers. So you should pay attention to that: not only to the size of the buffer you pass to the send and receive calls, but also to setting the appropriate size for the socket buffer, the buffer that the kernel uses when it enqueues data. So the question is, what is the correct buffer size? Unfortunately, there's no simple reply to that question. It really depends on many factors.
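On the sockets side, the kernel's socket buffer size mentioned above is set with the standard `setsockopt` options `SO_SNDBUF` and `SO_RCVBUF`. A minimal sketch; the 64 KB figure in the usage note below is purely illustrative, not a recommendation from the session:

```c
#include <sys/socket.h>

/* Ask the kernel to use a specific socket buffer size for both
 * directions.  The right value depends on bandwidth, protocol, and
 * client count, as discussed in the surrounding text; there is no
 * single correct number. */
int set_socket_buffers(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
        return -1;
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) < 0)
        return -1;
    return 0;
}
```

For example, a LAN-oriented bulk-transfer tool might call `set_socket_buffers(sock, 64 * 1024)`, while a client expected to run over PPP would want something much smaller.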
It depends on the bandwidth to the destination. If your application is really going to be used mainly on the LAN, I would recommend that you increase the socket buffer size, because over gigabit Ethernet, for example, you have a lot of bandwidth, and a large buffer size is going to decrease the number of context switches; overall, the Ethernet driver will be able to pump the data very, very fast. But if you think your application is going to be used over the internet by clients on PPP, using too-large socket buffers means you're just going to have data sitting there in the kernel for no good use, stealing wired memory away from other processes or applications. It also depends on the protocol.
For bulk data transfer, of course, larger is normally better. But for transactions, maybe you're more interested in responsiveness, and too-large buffers may hurt the interactivity there. It also depends on the number of clients. If you're a server serving a very large number of clients, I would recommend that you don't use too large a socket buffer size, because with many, many clients, a large buffer size is going to increase the amount of wired memory in use, and overall there will be more paging and less wired memory in the kernel available for the drivers or other layers of the system. So the important thing is that you have to analyze the needs specific to your application. It's not always easy; we all recognize that. And there are many applications that, for example, download files and then save them to a cache or to disk. We've seen many networking applications that feel sluggish on Mac OS X, and it's not because they don't use the right parameters on the networking side; it's just because they write to disk in too-small chunks. Typically we recommend, and if you talk to the file system guys, they will tell you it's more complicated than that, but a good rule of thumb is to buffer something like 32K in your application before calling write to write to disk.
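One simple way to get the 32K rule of thumb above is to let stdio do the buffering. This is my own sketch, not code from the session; `open_buffered` is a hypothetical helper name:

```c
#include <stdio.h>

/* Open a file for writing with a 32 KB stdio buffer, so that many
 * small fwrite() calls coalesce into a few large write() system
 * calls to the disk.  32 KB is the rule-of-thumb size mentioned in
 * the talk; the file system folks would say it's more subtle. */
FILE *open_buffered(const char *path)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL)
        return NULL;
    /* NULL buffer: let stdio allocate the 32 KB itself. */
    if (setvbuf(fp, NULL, _IOFBF, 32 * 1024) != 0) {
        fclose(fp);
        return NULL;
    }
    return fp;
}
```

A download loop can then `fwrite()` each received chunk, however small, and the actual disk writes go out in 32 KB pieces.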
Also, an important aspect of Mac OS X is multihoming. It has several implications, and it's kind of a FAQ on the Carbon list. The system does not have a single IP address, so the presence of an IP address doesn't tell you whether you are, for example, able to connect to the internet. And IP addresses change over time, so do not cache IP addresses over a long period of time. For a typical client, the application may be running most of the time, but the IP address may change, so don't cache it. Just call the available API: getsockname for BSD, or the corresponding get-address call in Open Transport for Carbon.
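On the BSD side, the look-it-up-when-you-need-it approach means calling `getsockname` at the moment you need the local address. A small sketch, assuming IPv4 and a hypothetical helper name `local_address`:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Ask the kernel for the local IP address of a socket right now,
 * instead of caching an address that may change on a multihomed or
 * mobile machine.  Writes a dotted-quad string into buf and returns
 * 0 on success, -1 on failure. */
int local_address(int sock, char *buf, socklen_t buflen)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);

    memset(&addr, 0, sizeof(addr));
    if (getsockname(sock, (struct sockaddr *)&addr, &len) < 0)
        return -1;
    if (inet_ntop(AF_INET, &addr.sin_addr, buf, buflen) == NULL)
        return -1;
    return 0;
}
```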
Another thing with multihoming is that if you bind to a specific address, you will only be able to send and receive data for that address. In a multihomed environment, certainly for servers, many configurations have several interfaces active at a time on purpose: multiport Ethernet cards, each with a different IP address. Usually you don't want to limit access to just one interface, so servers should bind to the "any" IP address; the corresponding constants are available in both BSD and Open Transport. And clients really should bind to nil. There are very, very few exceptions; it may depend on the protocol you're relying on. The typical call for Open Transport is OTBind with nil, nil. And for sockets, you don't have to call bind at all: after you create your socket, you can just call connect, and the connect code will automatically pick an ephemeral port, whether you use TCP or UDP.
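The sockets-side client pattern above, no `bind()` at all, looks like this. A sketch with a hypothetical helper name `connect_to`; the kernel picks the local interface and an ephemeral port when `connect()` runs:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Typical TCP client connect: note there is no bind() call.  The
 * kernel chooses the right local interface and an ephemeral port at
 * connect() time, which is what you want on a multihomed machine.
 * host must already be in network byte order.  Returns a connected
 * socket, or -1 on error. */
int connect_to(in_addr_t host, unsigned short port)
{
    struct sockaddr_in peer;
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    if (sock < 0)
        return -1;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_addr.s_addr = host;
    peer.sin_port = htons(port);
    /* No bind(): connect() picks local address and port for us. */
    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0)
        return -1;
    return sock;
}
```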
OK. Whoops. OK, that way. The last of the gotchas we saw, especially in some Carbon applications, is that UI interaction is also costly on 10. The idea is: if you're downloading and you want to provide feedback to the user about the amount of data downloaded, do not update the display each time you get a packet. That's usually much too fast. It may be OK over PPP, where you get data at a relatively low rate, but if you start to run your application on the LAN over Ethernet, you will see that your application just spends its time updating the byte count, or animating a little icon in the Dock, or something like that. It's very costly. Instead, you should use a reasonable delay that's based on the user's perception. If you see numbers flashing before your eyes, it's not going to bring a lot of information.
What's a reasonable delay? I'm not an expert in human interface design, but if I remember well, anything updating at a rate higher than 50 or 60 hertz is too fast to see. And if you think about it, updating a counter more than maybe two or three times a second is too much. The same goes for sliders; they may even look smoother if you slow down the rate of the updates.
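The throttling guideline above can be sketched as a simple time check. `should_update_progress` is a hypothetical helper, and the 250 ms in the usage note is just an illustrative value in the spirit of the "two or three times a second" rule:

```c
#include <stdio.h>
#include <sys/time.h>

/* Decide whether enough time has passed to redraw progress feedback.
 * Call this on every received packet; it returns 1 (and resets the
 * timestamp) only when at least min_interval_ms has elapsed since the
 * last redraw, so the UI update rate stays bounded no matter how fast
 * data arrives. */
int should_update_progress(struct timeval *last, long min_interval_ms)
{
    struct timeval now;
    long elapsed_ms;

    gettimeofday(&now, NULL);
    elapsed_ms = (now.tv_sec - last->tv_sec) * 1000
               + (now.tv_usec - last->tv_usec) / 1000;
    if (elapsed_ms < min_interval_ms)
        return 0;            /* too soon: skip this redraw */
    *last = now;
    return 1;                /* time to redraw */
}
```

A download loop would call `if (should_update_progress(&last, 250)) redraw_counter();` per packet, getting at most four redraws a second even on gigabit Ethernet.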
The list. So if you came in previous years, for classic Mac OS networking we had a long list of all the things we were working on. Here on Mac OS X, we're starting the list from scratch. That means we have an implementation, a product already out, and we already have some ideas; we got some feedback, and there's stuff we would like to do on our own. But really, what we need and what we'd like to hear from you and from our users is which things we should add or improve. So the list is currently blank, and certainly next year it will be filled with a lot of neat stuff. I'm going to briefly talk about the future, what we have in the plan. If you came in the last two years, you've heard us talking about IPv6 and IPsec, and they are still in our plan.
If you're familiar with Darwin, you can see that we have the KAME implementation of IPv6 and IPsec; that's what we're using. Last year, we had a package that you could install on DP2, if I remember well, to get IPv6 and IPsec. But building a Darwin kernel has become so easy that, if you're really interested, that's something you can tackle yourself.
What we're missing there is really the higher level: the libraries and the HI are not there. That's one of the main reasons why we were not able to put that in Mac OS X 1.0. Another thing we're working on is Zeroconf. Zeroconf is an IETF effort that brings to TCP/IP some of the great benefits of AppleTalk, namely automatic configuration. It means that once we have Zeroconf in Mac OS X, you will be able to plug two machines together, they will acquire addresses automatically, and you will be able to exchange data with your peer without any interaction. One of the ongoing goals is also performance tuning; we're just going to continue to work on that. Better extensibility. And also something very important: the replacement for the Network Setup API. If you come to this afternoon's session, 303, we'll talk about that. What's it called? Maybe it's on the next slide. OK, it's called Network Configuration and Mobility.
So let me back up; we'll come back here. We have a number of additional resources available on the web. Of course, the developer page for Mac OS X. There's also a lot of information for classic networking and Carbon on the Open Transport page; if you're Carbonizing, it would be a good starting point, with a lot of links to that information.
Darwin is a good source of information. There's an active community there, and if you subscribe to the development list, you will see there's a lot of interest from developers; a lot of people contribute to Darwin. And finally, because of the roots of the networking stack, you will also find a lot of information on the FreeBSD site: if you need some tools, man pages, or how-tos, there's a lot of information there.
Finally, the related sessions. So 300, that's this session. Networking in the Kernel, that's the session we're going to have just after this one in room J2, on the opposite side of the building. Network Configuration and Mobility, that's a very interesting session. And on Thursday at 10:30, we're going to have the feedback forum. For all the questions we couldn't answer today in the Q&A, please come back and talk to us there.