When a socket is ready for reading, does EM read everything available
and send it in one large chunk to receive_data?

Chris
It sets the socket nonblocking and, on read, repeats the read a few
times or until it returns an error, in case there is more data in the
socket buffer than in the read() buffer.

Why is this approach better than just using a really big read buffer?

On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> When a socket is ready for reading, does EM read everything available
> and send it in one large chunk to receive_data?
>
> Chris
> _______________________________________________
> Eventmachine-talk mailing list
> Eventmachine-talk at rubyforge.org
> http://rubyforge.org/mailman/listinfo/eventmachine-talk
On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> When a socket is ready for reading, does EM read everything available
> and send it in one large chunk to receive_data?

For TCP, EM applies a (simpleminded) heuristic. It reads what is
available on the socket, up to a specific limit, and sends it all to
the user code in one call. If more data is available on a given socket
than EM is willing to read, it leaves the remaining data until the next
pass through the loop. That's intended to prevent starving less-busy
connections. (And of course it backs up in the kernel's read buffers,
where it may apply back-pressure to the remote peer.)

For TCP writes, EM coalesces small writes from the application into
single writes to the network layer. It's not as sophisticated about
this (yet) as some other products are. For example, it doesn't use the
Unix scatter/gather APIs.

For UDP, obviously, none of this applies. Each network packet is
transmitted to the application just as it is received (with the usual
caveat that extra-large packets may be silently truncated or dropped by
your kernel or network drivers).
On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> When a socket is ready for reading, does EM read everything available
> and send it in one large chunk to receive_data?
>
> Chris

Seems to me like that. Also, when your receive_data is doing some
processing and hence not available as a callback, the next time the
event loop calls receive_data, all the data will be available as a
large chunk of text.

--
There was only one Road; that it was like a great river: its springs
were at every doorstep, and every path was its tributary.
btw, for obvious reasons, code that depends on this behavior is unsafe;
evented readers typically check for a completed (application-layer)
read, queuing partials themselves, before operating on data.

On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> When a socket is ready for reading, does EM read everything available
> and send it in one large chunk to receive_data?
>
> Chris
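The partial-queuing pattern described here can be sketched like this,
assuming a newline-delimited protocol. LineBuffer is an invented name;
EM itself doesn't provide this class:

```ruby
# Buffer partial reads and dispatch only complete application-layer
# messages (here, newline-terminated lines). Partials stay queued.
class LineBuffer
  def initialize(&on_line)
    @buffer = +""
    @on_line = on_line
  end

  # Called from receive_data; `data` may hold a fragment of a message,
  # exactly one message, or several messages at once.
  def receive_data(data)
    @buffer << data
    # Dispatch each completed line; anything after the last newline
    # remains in @buffer until more data arrives.
    while (line = @buffer.slice!(/\A[^\n]*\n/))
      @on_line.call(line.chomp)
    end
  end
end
```

An EM Connection's receive_data would simply forward its argument to an
instance of this class.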
On 10/24/06, hemant <gethemant at gmail.com> wrote:
> Seems to me like that. Also, when your receive_data is doing some
> processing and hence not available as a callback...next time..event
> loop calls receive_data, all the data will be available as a large
> chunk of text.

With TCP, it's always important to recognize that the network layer
doesn't respect message boundaries and you can't expect it to. EM takes
advantage of this to improve performance by coalescing reads and writes
whenever it can, given that system IO calls are extremely expensive.

As an aside, Ruby's IO library does a handful of things that can
actually be very noticeable performance killers. For example, it makes
two calls to the system libraries when you call IO#puts: one for your
data, and another for the newline. EM does its best to avoid things
like that.
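The lack of message boundaries is easy to see with a Unix socketpair
standing in for TCP (both are stream sockets):

```ruby
require "socket"

# Two distinct writes on one end of a stream socket...
a, b = UNIXSocket.pair   # SOCK_STREAM by default, like TCP
a.write("first ")
a.write("second")
a.close

# ...arrive as one undifferentiated byte stream on the other end; the
# reader sees no trace of where one write ended and the next began.
result = b.read
p result   # => "first second"
```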
The reason I asked is that I'm thinking over how much work it would be
to support large messages in stompserver while using as little memory
as possible. Would require rewriting the parser and a few other
changes, but it might be worth it as I have a real need for something
like this.

Chris
On 10/24/06, Thomas Ptacek <thomasptacek at gmail.com> wrote:
> It sets the socket nonblocking and, on read, repeats the read a few
> times or until it returns an error, in case there is more data in the
> socket buffer than in the read() buffer.
>
> Why is this approach better than just using a really big read buffer?

I'm pretty sure the read buffer is 16k, which is about the standard
size for the kernel's read buffers. Might get better performance by
stepping both up to 32k, but now you have to watch memory if you have a
lot of connections.

I'd really like to come up with something that bypasses the kernel's
read buffers and writes directly into EM's process memory. Every time
I've tried to do that, it ended up creating much bigger problems
elsewhere. Has anyone else had any success?
On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> The reason I asked is that I'm thinking over how much work it would be
> to support large messages in stompserver while using as little memory
> as possible. Would require rewriting the parser and a few other
> changes, but it might be worth it as I have a real need for something
> like this.

How large is "large"? One thing I did in AMQP is to allocate a read
buffer of known length with "\0" * length, and then use [range] calls
to populate it with each slice of data- puts much less stress on Ruby's
rickety memory management. I wonder if it makes sense to put a "read
buffer size" into EM's per-connection data structure? Then you could
tell EM not to call you back until it has seen x bytes of data. That
might run a hell of a lot faster.
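The preallocation trick looks roughly like this; the length and chunks
here are made up for illustration:

```ruby
# Build the full-size string once, then fill slices in place, instead
# of growing a string by repeated concatenation (which reallocates).
length = 16
buf = "\0" * length              # one allocation up front
offset = 0
["abcd", "efgh", "ijkl", "mnop"].each do |chunk|
  buf[offset, chunk.length] = chunk   # in-place [range] assignment
  offset += chunk.length
end
puts buf   # => abcdefghijklmnop
```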
It's the loop around recv() which confuses me; is there a performance
benefit to incurring 10 recv syscalls over just doing a larger recv? I
think you also have a bug: you're always hitting recv twice, even
though in the common case you'll get a short recv, which is a signal
that you don't need to try again.

On 10/24/06, Francis Cianfrocca <garbagecat10 at gmail.com> wrote:
> I'm pretty sure the read buffer is 16k, which is about the standard
> size for the kernel's read buffers. Might get better performance by
> stepping both up to 32k, but now you have to watch memory if you have
> a lot of connections.
>
> I'd really like to come up with something that bypasses the kernel's
> read buffers and writes directly into EM's process memory. Every time
> I've tried to do that, it ended up creating much bigger problems
> elsewhere. Has anyone else had any success?
> How large is "large"? One thing I did in AMQP is to allocate a read
> buffer of known length with "\0" * length, and then use [range] calls
> to populate it with each slice of data- puts much less stress on
> Ruby's rickety memory management. I wonder if it makes sense to put a
> "read buffer size" into EM's per-connection data structure? Then you
> could tell EM not to call you back until it has seen x bytes of data.
> That might run a hell of a lot faster.

Well, 'large' probably depends on the circumstances. Having a maximum
and a minimum 'read buffer' size would be great. That would let people
really fine-tune things to their specific needs.

Chris
On 10/24/06, Francis Cianfrocca <garbagecat10 at gmail.com> wrote:
> I'd really like to come up with something that bypasses the kernel's
> read buffers and writes directly into EM's process memory. Every time
> I've tried to do that, it ended up creating much bigger problems
> elsewhere. Has anyone else had any success?

Sounds like you want POSIX asynchronous I/O. You can use the aio_read
and aio_write functions to hand async I/O requests to the kernel, which
read directly into process memory and notify you with POSIX realtime
signal queues when the request completes. You read from the signal
queue with sigtimedwait(), which (in the API at least) provides
nanosecond precision for wait times.

Last I checked, this is pretty much going to be a Linux/Solaris-only
kind of thing.

--
Tony Arcieri
ClickCaster, Inc.
tony at clickcaster.com
(970) 232-4208
> Sounds like you want POSIX asynchronous I/O. You can use the aio_read
> and aio_write functions to hand async I/O requests to the kernel which
> read directly into process memory and notify you with POSIX realtime
> signal queues when the request completes. You read from the signal
> queue with sigtimedwait(), which (in the API at least) provides
> nanosecond precision for wait times.
>
> Last I checked, this is pretty much going to be a Linux/Solaris only
> kind of thing.

It's also available as an LKM on FreeBSD; not sure about the other
BSDs.
On 10/24/06, Tony Arcieri <tony at clickcaster.com> wrote:
> Sounds like you want POSIX asynchronous I/O. You can use the aio_read
> and aio_write functions to hand async I/O requests to the kernel which
> read directly into process memory and notify you with POSIX realtime
> signal queues when the request completes. You read from the signal
> queue with sigtimedwait(), which (in the API at least) provides
> nanosecond precision for wait times.

I have to admit, I seriously dislike Posix aio. Every time I've tried
it (and I've tried on Solaris and on Linux), performance and
scalability are just abysmal. Maybe I'm not doing it right. Does your
experience differ?

In theory, you're supposed to be able to set the kernel read buffers to
zero with a sockopt (and this is fairly portable, too), but it's just
hard to get it right.

By the way, Tony, I'm not averse to #ifdeffing the EM code to take
advantage of better-performing but platform-specific things. Were you
going to look into epoll for Linux 2.6, or are you waiting on me for
that?
On 10/24/06, snacktime <snacktime at gmail.com> wrote:
> Well, 'large' probably depends on the circumstances. Having a maximum
> and a minimum 'read buffer' size would be great. That would let
> people really fine-tune things to their specific needs.

Since you're working with stomp, I'm guessing your idea of "large" in
this context is gigabytes or hundreds of meg. To support that, we
really need something better than in-memory buffering. I'll be bumping
up against this in my MQ server soon enough, so that would be a good
time to pick this back up.

In the meantime, if you have to work with smaller messages: are you
trying to make it easier on yourself by not having to store partial
results in your protocol handler when receive_data only gives you some
of what you're expecting? I have to admit, I run into that with every
single protocol handler I write, and it would be really nice if the
library had some better support for it. Any API suggestions?
I was wondering about coming up with some standard way to spin off a
thread for doing the actual multiplexing, and communicating back to
Ruby over a socketpair via IO.select, as a means of working around the
green threads issue (with the hope of blowing all of that away
post-YARV without having to modify the platform abstractions).

If you can fit things into this model, I have all of the platform code
already written (except for Win32 IOCP, which obviously won't fit into
that model).

- Tony

On 10/24/06, Francis Cianfrocca <garbagecat10 at gmail.com> wrote:
> I have to admit, I seriously dislike Posix aio. Every time I've tried
> it (and I've tried on Solaris and on Linux), performance and
> scalability are just abysmal. Maybe I'm not doing it right. Does your
> experience differ?
>
> In theory, you're supposed to be able to set the kernel read buffers
> to zero with a sockopt (and this is fairly portable, too), but it's
> just hard to get it right.
>
> By the way, Tony, I'm not averse to #ifdeffing the EM code to take
> advantage of better-performing but platform-specific things. Were you
> going to look into epoll for Linux 2.6, or are you waiting on me for
> that?
On 10/24/06, Tony Arcieri <tony at clickcaster.com> wrote:
> I was wondering about coming up with some standard way to spin off a
> thread for doing the actual multiplexing, and communicating back to
> Ruby over a socketpair via IO.select, as a means of working around
> the green threads issue (with the hopes of blowing all of that away
> post-YARV without having to modify the platform abstractions)
>
> If you can fit things into this model, I have all of the platform
> code already written (except for Win32 IOCP which obviously won't fit
> into that model)

An early version of EM did exactly that (you can find it in the source
archives if you want- go back to around May 1). It worked reasonably
well, but I thought it was really putrid, and I was glad to get rid of
it. The key to interoperating with green threads is never to block for
longer than Ruby's time quantum (about 10 mills). If you can get your
alternate IO multiplexers to work that way, you'll be fine. I was able
to get my experimental epoll implementation to work with no trouble.

IOCP is going to be a big pain. It's essentially a complete rewrite of
em.cpp. I spent a good bit of time on it before I remembered that I
hate Windows in the first place. But having written IOCP in the past, I
know it'll probably give a wicked performance improvement.
> Since you're working with stomp, I'm guessing your idea of "large" in
> this context is gigabytes or hundreds of meg. To support that, we
> really need something better than in-memory buffering. I'll be
> bumping up against this in my MQ server soon enough, so that would be
> a good time to pick this back up.
>
> In the meantime, if you have to work with smaller messages: are you
> trying to make it easier on yourself by not having to store partial
> results in your protocol handler when receive_data only gives you
> some of what you're expecting? Any API suggestions?

I can think of two settings that might come in handy:

1. The max amount of data EM will buffer before sending to
receive_data. If it reaches that limit, it stops reading from the
socket until the current buffer is sent to receive_data.

2. The minimum amount required in the buffer before sending to
receive_data, along with some type of timeout where, after X amount of
time, it sends whatever it has to receive_data regardless of the set
minimum.

I think there are a lot of different scenarios, even within one
application, where being able to tune those two things would help. For
example, I'd love to use something like stompserver as the base for a
central logging facility. Where I work we have to log everything, even
SQL selects; log files would average 50MB or so. Performance isn't
nearly as much an issue as memory use in this case, so I would prefer
small buffers, no more than a couple of MB or so. Someone with a
different purpose might have the opposite need.
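The second setting, a low-water mark plus a flush timeout, could be
sketched like this. The class and its names are invented here; EM has
no such API:

```ruby
# Don't dispatch to the application until min_bytes have accumulated;
# a periodic timer (an EM timer, in practice) would call #flush so data
# can't sit in the buffer forever.
class WatermarkBuffer
  def initialize(min_bytes:, &on_data)
    @min_bytes = min_bytes
    @buffer = +""
    @on_data = on_data
  end

  def receive_data(data)
    @buffer << data
    flush if @buffer.bytesize >= @min_bytes
  end

  # Deliver whatever we have, regardless of the low-water mark.
  def flush
    return if @buffer.empty?
    @on_data.call(@buffer)
    @buffer = +""
  end
end
```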
> 1. The max amount of data EM will buffer before sending to
> receive_data. If it reaches that limit it stops reading from the
> socket until the current buffer is sent to receive_data.

I think this is almost always a bad idea; Francis' job is to move bytes
out of the socket buffers and into userland as fast as possible. Why do
you want to leave those bytes in the kernel?
On 10/24/06, Thomas Ptacek <thomasptacek at gmail.com> wrote:
> I think this is almost always a bad idea; Francis' job is to move
> bytes out of the socket buffers and into userland as fast as
> possible. Why do you want to leave those bytes in the kernel?

Why is it almost always a bad idea? I don't see what it hurts. If your
application has a need to only process so many bytes at a time, what's
wrong with that? I'm primarily thinking of handling large amounts of
incoming data where I want to control memory use.

Chris
From: "snacktime" <snacktime at gmail.com>
> Why is it almost always a bad idea? I don't see what it hurts. If
> your application has a need to only process so many bytes at a time,
> what's wrong with that? I'm primarily thinking of handling large
> amounts of incoming data where I want to control memory use.

I must confess I've been wondering about this too, recently. If my
server can only process so many bytes at a time (say the bytes form
requests or queries which take time to process), then I wouldn't want a
client to be able to DoS my server by hammering it with requests that
pile up and exhaust all memory.

(I'm just jumping into this thread, though, so my apologies if I've
misunderstood what's being discussed.)

Regards,

Bill
On 10/25/06, snacktime <snacktime at gmail.com> wrote:
> Why is it almost always a bad idea? I don't see what it hurts. If
> your application has a need to only process so many bytes at a time,
> what's wrong with that? I'm primarily thinking of handling large
> amounts of incoming data where I want to control memory use.

In the current implementation, EM will not take any more than 16K out
of a socket before sending it to the application. It reads a maximum of
160K from any one connection (10 reads of 16K) before moving on to the
next connection. It seems to me that in practice, it rarely comes
anywhere close to these limits with network connections, because it's
usually fast enough to take data from the kernel roughly in chunks that
match the ethernet packets.

You could say that the implementation is "naive" in the sense that it's
using a very simple pair of heuristics (trying to minimize the number
of kernel crossings while preventing heavily-used connections from
starving out the less-used ones). I think it's well worth trying to
tune the heuristics, even with runtime parameters, although years of
experience with IP stacks shows that, while quite configurable, they
rarely need it.

Chris, are you thinking you'd want to cut down the maximum amount of
data read from a socket on each pass through the loop? Or make it
bigger? I like your idea of setting a minimum (low-water mark) before
dispatching data. TCP has something similar, but for a somewhat
different purpose.
I'll get that into the code the next chance I get.

It's funny, the deeper you get into network programming (and I've been
at it for many years), the more you come to admire TCP. I like to think
it's the best 6000 lines of code anyone has ever written. (IP routing,
on the other hand, is black magic. The more I learn about it, the
harder it is for me to believe it actually works.)
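The per-connection read heuristic described earlier (at most 10 reads
of 16K per pass, delivered as one chunk) can be sketched like this; the
constants match the figures quoted in the thread, but the code is an
illustration, not EM's actual C++:

```ruby
require "socket"

READ_SIZE = 16 * 1024   # bytes per read
MAX_READS = 10          # reads per connection per pass through the loop

# Take up to MAX_READS * READ_SIZE bytes off a nonblocking socket and
# return them as one string; anything beyond that waits for the next
# pass, so busy connections can't starve quiet ones.
def drain(socket)
  chunks = []
  MAX_READS.times do
    data = socket.read_nonblock(READ_SIZE, exception: false)
    # :wait_readable means the kernel buffer is empty; nil means EOF.
    break if data == :wait_readable || data.nil?
    chunks << data
  end
  chunks.join   # handed to receive_data in a single call
end
```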
> In the current implementation, EM will not take any more than 16K out
> of a socket before sending it to the application. It reads a maximum
> of 160K from any one connection (10 reads of 16K) before moving on to
> the next connection. It seems to me that in practice, it rarely comes
> anywhere close to these limits with network connections, because it's
> usually fast enough to take data from the kernel roughly in chunks
> that match the ethernet packets.

I don't see 160K ever really being too 'large'. I should have gone and
looked at the source to start with. So knowing this, I'd probably just
add a low-water mark with a timeout to prevent something from hanging
up. What I wanted to avoid is stuffing too much in memory all at once,
but as you said, that's probably very unlikely to happen anyway. Now I
just need to go rewrite half of stompserver so it can support large
files...
On 10/25/06, Thomas Ptacek <thomasptacek at gmail.com> wrote:
> I think this is almost always a bad idea; Francis' job is to move
> bytes out of the socket buffers and into userland as fast as
> possible. Why do you want to leave those bytes in the kernel?

It's fun that we're getting into the high-performance edge here with
Ruby! Hopefully this list will become the place where the discussions
about making Ruby super-fast and super-scalable gravitate to.

As with everything else, the answer to this is "it depends," but you
can make a case that in many applications you don't want to take data
off the network any faster than you can process it. The unprocessed
data doesn't pile up in the kernel. It piles up on the other end of the
network, thanks to the genius of TCP. It's an important priority for EM
to stay linear and predictable with regard to memory usage, just as the
kernel does.

Related but different: it's probably possible to run EM out of memory
on the *write* side. I haven't tried this, but I don't think it would
give an error if you called 1000000.times {send_data "A" * 10000}. We
probably need an outbound-write maximum that would throw an exception.
I can't think of a graceful way to deal with this case from the
application programmer's point of view, unless we did something like
buffer outbound data on the filesystem.
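The outbound-write maximum floated here might look like the following
sketch. EM has no such check today; the class and the 4MB cap are
invented for illustration:

```ruby
# Refuse writes once too much unsent data has accumulated, instead of
# buffering without bound in userland.
class BoundedOutbound
  MAX_OUTBOUND = 4 * 1024 * 1024   # hypothetical 4MB cap

  def initialize
    @outbound = +""   # data accepted but not yet written to the socket
  end

  def send_data(data)
    if @outbound.bytesize + data.bytesize > MAX_OUTBOUND
      raise "outbound buffer full"   # surface back-pressure to the app
    end
    @outbound << data
  end

  attr_reader :outbound
end
```

The raised exception is the ungraceful part the message above worries
about: the application has no obvious way to retry later short of its
own timer.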
What happens now if you fill up the kernel UDP buffers by writing to
them?

On 10/25/06, Francis Cianfrocca <garbagecat10 at gmail.com> wrote:
> As with everything else, the answer to this is "it depends," but you
> can make a case that in many applications you don't want to take data
> off the network any faster than you can process it. The unprocessed
> data doesn't pile up in the kernel. It piles up on the other end of
> the network, thanks to the genius of TCP. It's an important priority
> for EM to stay linear and predictable with regard to memory usage,
> just as the kernel does.
>
> Related but different: it's probably possible to run EM out of memory
> on the *write* side. I haven't tried this but I don't think it would
> give an error if you called 1000000.times {send_data "A" * 10000}.
> We probably need an outbound-write maximum that would throw an
> exception. I can't think of a graceful way to deal with this case
> from the application programmer's point of view. Unless we did
> something like buffer outbound data on the filesystem.

--
Tony Arcieri
ClickCaster, Inc.
tony at clickcaster.com
(970) 232-4208
On 10/25/06, Tony Arcieri <tony at clickcaster.com> wrote:
> What happens now if you fill up the kernel UDP buffers by writing to
> them?

I imagine that an attempted write by EM would return some type of
failure, and it would just back up in EM until the socket was ready.
I should RTFS, but the normal evented write pattern is "fire and
forget" --- filling up the socket buffers is the common case (think:
any file transfer more than 16k), so the event framework should just
queue the data up and drain it to the descriptor as it signals
writeable.

On 10/25/06, snacktime <snacktime at gmail.com> wrote:
> I imagine that an attempted write by EM would return some type of
> failure, and it would just backup in EM until the socket was ready.
On 10/25/06, Tony Arcieri <tony at clickcaster.com> wrote:
> What happens now if you fill up the kernel UDP buffers by writing to
> them?

I imagine that the kernel would drop packets on the floor, but I
haven't tried it.
Well, the write() call should error out with ENOBUFS. I was
specifically wondering how EventMachine handled it.

If you're going to impose some sort of limit on the size of the write
buffer, hopefully EventMachine would respond to that in a similar
manner as it responds to the kernel buffer being full.

On 10/25/06, Francis Cianfrocca <garbagecat10 at gmail.com> wrote:
> I imagine that the kernel would drop packets on the floor, but I
> haven't tried it.

--
Tony Arcieri
ClickCaster, Inc.
tony at clickcaster.com
(970) 232-4208
On 10/25/06, Tony Arcieri <tony at clickcaster.com> wrote:
> Well, the write() call should error out with ENOBUFS. I was
> specifically wondering how EventMachine handled it.
>
> If you're going to impose some sort of limit on the size of the write
> buffer, hopefully EventMachine would respond to that in a similar
> manner as it responds to the kernel buffer being full.

I took a closer look at DatagramDescriptor::Write in ext/ed.cpp. EM
should only rarely see ENOBUFS, because it only writes to a UDP socket
if it has selected writable (meaning buffer space is available).
According to the sendto(2) manpage, ENOBUFS doesn't happen on Linux,
but I imagine you could see it on BSD or Solaris. (On Windows, who the
hell knows what happens?)

Currently EM handles EAGAIN by retaining the outbound data in its own
memory, but that would be affected if we added an outbound-data size
limit. I'm not sure what to do about ENOBUFS. It would be easy enough
to treat it like EAGAIN, but I don't know how you could force it to
happen, for testing purposes. What do you think?
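The treat-ENOBUFS-like-EAGAIN idea amounts to rescuing both the same
way in the write path; a minimal Ruby sketch (not EM's actual code,
which lives in C++):

```ruby
require "socket"

# Attempt a nonblocking write; keep whatever didn't go out in a
# userland buffer for the next pass through the event loop. ENOBUFS is
# rescued exactly like EAGAIN (IO::WaitWritable), per the suggestion.
def try_write(socket, pending)
  written = socket.write_nonblock(pending)
  pending[written..-1]   # retain only the unsent tail
rescue IO::WaitWritable, Errno::ENOBUFS
  pending                # kernel buffers are full; keep everything
end
```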