Hello, on the application that I'm working on, we are using Thin.
The server implements long polling and commands that alter its internal state.

Connecting to the long poll is a very fast operation, while updating the state of the server is a relatively slow operation.
The long poll connection is held open until there is a change to be sent to the connection.

Here is my current understanding of EventMachine along with its implications. Please correct me if I'm wrong about anything.

* The main event loop runs in Ruby's main thread.
This means slow Ruby handlers delay the event loop, which means unbinds are not picked up. It also means the OS queues incoming socket events until the main EventMachine loop rolls around to handle them.

This delay causes data to be sent to dead connections, because there is no way to know whether a connection has been unbound.

An example of this is at:
http://pastie.org/249009
(see the sketch below)

To minimize the occurrence of this race condition, the main thread needs to be as fast as possible.

Aman pointed me to:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7872
http://readlist.com/lists/ruby-lang.org/ruby-talk/3/16264.html

It seems like a solution would be to have a very fast reactor loop in an OS thread that tracks the state of the connections, updates a data structure that can be queried, and sends the events to a subscription queue in Ruby's main thread.
For example, Connection#error? would return true the "instant" the client closes the socket, which would be before the unbind event is sent to the connection.

This structure would be writable from the OS thread and readable from the Ruby thread. This would significantly reduce the window of the race condition.
Is such a technique feasible, or am I missing something?

Thanks,
Brian
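A stripped-down sketch of the race Brian describes (hypothetical; this is not the original paste, and the module and handler names are illustrative):

    require 'rubygems'
    require 'eventmachine'

    module LongPoll
      @@waiting = []

      def post_init
        @@waiting << self        # hold the connection open for the long poll
      end

      def unbind
        @@waiting.delete(self)   # runs only when the reactor gets around to it
      end

      def self.broadcast(payload)
        @@waiting.each do |conn|
          sleep 1                  # stand-in for a slow state-updating command
          conn.send_data(payload)  # may write to a socket the client closed
                                   # seconds ago; its unbind hasn't fired yet
        end
      end
    end

    EM.run do
      EM.start_server '0.0.0.0', 8080, LongPoll
      EM.add_periodic_timer(5) { LongPoll.broadcast("ping\n") }
    end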
On Thu, Aug 7, 2008 at 12:08 AM, Brian Takita <brian.takita at gmail.com> wrote:

> It seems like a solution would be to have a very fast reactor loop in
> an OS thread that tracks the state of the connections, updates a
> data structure that can be queried, and sends the events to a
> subscription queue in Ruby's main thread.
> For example, Connection#error? would return true the "instant" the
> client closes the socket, which would be before the unbind event is
> sent to the connection.
>
> This structure would be writable from the OS thread and readable from
> the Ruby thread. This would significantly reduce the window of the
> race condition.
> Is such a technique feasible, or am I missing something?

This is quite easy to do in a fairly elegant manner in Ruby 1.9. In fact, I may try it out in Rev, my event framework.

Essentially you'd create a new Thread (object) which would then call a method in a C extension which entered an rb_thread_blocking_region(). This would release Ruby's Global VM Lock, allowing other threads to run uninhibited. The event loop would collect events in a buffer, then return from the rb_thread_blocking_region() with the event buffer. The event buffer could then be passed over a Queue back to the main thread, where the Rubyland event loop is running.

In Ruby 1.8, it's a bit more of a bitch. MRI was never intended to be multithreaded, and using pthreads in conjunction with it is really asking for trouble. For one, MRI makes extensive use of signals, but uses sigprocmask() to handle masking. This means any signal masks you set for the event monitor thread can get repeatedly clobbered depending on your platform, as the behavior of sigprocmask() in multithreaded applications is undefined, although fortunately most operating systems make it effectively the same as pthread_sigmask().

In either case you'd need to use a pipe to wake the event loop up when notifiers are added or removed. Rev has this built in, in the form of an AsyncWatcher.

--
Tony Arcieri
medioh.com
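In rough Ruby terms, the 1.9 scheme Tony describes would look something like this (EventMonitor.wait_for_events is hypothetical, standing in for the C extension method that enters rb_thread_blocking_region()):

    require 'thread'

    events = Queue.new

    # Monitor thread: blocks with the GVL released while the fast event
    # loop runs, then pushes the collected events onto the queue.
    monitor = Thread.new do
      loop do
        buffer = EventMonitor.wait_for_events  # hypothetical C extension call
        buffer.each { |event| events << event }
      end
    end

    # Main thread: the Rubyland event loop drains the queue.
    loop do
      event = events.pop  # wakes as soon as the monitor pushes an event
      # ... dispatch into application handlers here ...
    end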
On Thu, Aug 7, 2008 at 11:38 AM, Brian Takita <brian.takita at gmail.com> wrote:

> Hello, on the application that I'm working on, we are using Thin.
> The server implements long polling and commands that alter its internal state.
>
> Connecting to the long poll is a very fast operation, while updating
> the state of the server is a relatively slow operation.
> The long poll connection is held open until there is a change to be
> sent to the connection.
>
> Here is my current understanding of EventMachine along with its
> implications. Please correct me if I'm wrong about anything.
>
> * The main event loop runs in Ruby's main thread.
> This means slow Ruby handlers delay the event loop, which means
> unbinds are not picked up. It also means the OS queues incoming
> socket events until the main EventMachine loop rolls around to handle them.
>
> This delay causes data to be sent to dead connections, because there is
> no way to know whether a connection has been unbound.

Well, at least for your current case (where 'data' is being sent to dead connections), EM could probably implement an 'error_on_write' event, which would ensure that the client class knows immediately whether the write succeeded (I am presuming this is your chief concern). But it's not that simple, because send_data is not an immediate write; still, at least for the first chunk, EM could generate that event.

Also, as Tony mentioned, what you have asked for is almost impossible with the current MRI implementation, and very honestly, I don't see much use for it. If your handler is slow, it will obviously delay the event loop (the same happens in every event-driven network programming framework I know of). The solution in some of them is to process slow handlers in another thread and set callbacks (I have done this in Scala and it works like a champ). But here Ruby threads don't bring much value to the table, because if your handler is slow, it's probably CPU bound (assuming IO-bound tasks are already within the EM loop).

Currently, you can use EM.popen() to process a slow handler in another unix process, so the main loop remains lightweight and your handlers return fast.

PS: I couldn't catch what you were trying to prove with that snippet. Are you saying the second unbind will not see that the first connection is closed? If so, the second unbind will not be invoked until the first unbind finishes execution. Please explain; I think I missed something there.
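A rough sketch of the EM.popen() route ('slow_worker.rb' is a placeholder for whatever does the heavy lifting):

    require 'rubygems'
    require 'eventmachine'

    # The child process does the slow work; the reactor just shuttles bytes.
    module WorkerOutput
      def receive_data(data)
        (@buf ||= '') << data
      end

      def unbind
        # The child exited; hand the accumulated result back to the app.
        puts "worker finished: #{@buf}"
        EM.stop
      end
    end

    EM.run do
      EM.popen('ruby slow_worker.rb', WorkerOutput)
    end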
> Currently, you can use EM.popen() to process a slow handler in another
> unix process, so the main loop remains lightweight and your handlers
> return fast.

You can also fork off a reactor:
http://github.com/raggi/eventmachine-svn/commit/385e6cef0d7e17a350f60b2452b1b43bb144ddfc

I use this in my amqp library to distribute work among multiple copies of a single worker:
http://github.com/tmm1/amqp/tree/master/examples/mq/primes.rb

Aman
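A rough sketch, assuming EM.fork_reactor from that commit forks the process and runs the given block inside a fresh reactor in the child (do_slow_work is a placeholder):

    require 'rubygems'
    require 'eventmachine'

    def do_slow_work
      sleep 3  # stand-in for a slow, CPU-heavy handler
    end

    EM.run do
      # The child gets its own event loop and may block without stalling
      # the parent's reactor.
      pid = EM.fork_reactor do
        do_slow_work
        EM.stop
      end
      puts "spawned worker reactor, pid #{pid}"
    end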
On 7 Aug 2008, at 07:08, Brian Takita wrote:

> Hello, on the application that I'm working on, we are using Thin.
> The server implements long polling and commands that alter its
> internal state.

Interesting; I wrote this a month or two ago, and haven't finished it. Last night I had a hellish discussion about the interfaces I put together, but I think they're holding strong.

> Connecting to the long poll is a very fast operation, while updating
> the state of the server is a relatively slow operation.

Which state? It should be all internal memory adjustments, so I must be missing something :)

> The long poll connection is held open until there is a change to be
> sent to the connection.

Right, so in reality it's just an open connection, which shouldn't really induce any load under epoll / kqueue.

> Here is my current understanding of EventMachine along with its
> implications. Please correct me if I'm wrong about anything.
>
> * The main event loop runs in Ruby's main thread.
> This means slow Ruby handlers delay the event loop, which means
> unbinds are not picked up. It also means the OS queues incoming
> socket events until the main EventMachine loop rolls around to
> handle them.

Yes, this is why you want to split up longer operations into Deferrables or Spawned Processes (not referring to OS processes) in order to do 'a piece at a time'. The idea is, you need to 'free up' the scheduler regularly, which is to return from a callback. If you still have work to do, then schedule it: use next_tick or a timer. Keep the state in something like a Deferrable (sketched at the end of this message).

> This delay causes data to be sent to dead connections, because there is
> no way to know whether a connection has been unbound.

Right, you want to free often, but not as often as thread switching, because otherwise you'll add context change overhead. N.B. it's not a full context switch, but there's still some overhead...

> An example of this is at:
> http://pastie.org/249009

Mmm, the only real way round that is threads, but generally the point is, if you start to handle that stuff, you're going to write a lot of code bloat. This is already what's happening in your simple unbind example. One of my favourite things about EM is that it boils errors and states down to a very small number of simple methods. The point is you want to take an approach more similar to BASE than to ACID.

> To minimize the occurrence of this race condition, the main thread
> needs to be as fast as possible.
>
> Aman pointed me to:
> http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7872
> http://readlist.com/lists/ruby-lang.org/ruby-talk/3/16264.html
>
> It seems like a solution would be to have a very fast reactor loop in
> an OS thread that tracks the state of the connections, updates a
> data structure that can be queried, and sends the events to a
> subscription queue in Ruby's main thread.

I've been thinking about this too, but I think under certain conditions it leads to other edge cases which are really hard to deal with. If you work on a server that manages lots of long-running operations split up into Deferrable states and scheduled work piles, you soon notice that under compute-bound scenarios you can end up with the IO system (even now) scheduling more work than you can ever deal with. Even without the context switch overhead of threads, there are still issues. The event loop will keep cranking, and it will queue more and more jobs to be run. Eventually that queue will just OOME your app. This is contrary to a threaded approach, where if you're spawning threads to deal with it, the overhead gets you first.

My recommendation is, generally, to monitor some sensible metric, and restrict it. Say 1000 timers (the default inside the EM C++ code), or 10,000 clients (if your client state is relatively lightweight), or say, 500mb of ram. Whatever happens to be good for your app. After you pass this boundary, you want to stop accepting new jobs until you've cleared the backlog.

> For example, Connection#error? would return true the "instant" the
> client closes the socket, which would be before the unbind event is
> sent to the connection.

You're not supposed to care, and that's kind of the point. send_data should just work, as should the rest of the methods and callbacks, and they happen in a sane order. This is very much a BASE approach, and your design / architecture will have to take that into account. One of the things I observed whilst working in the other camp was that even under something more ACID, it was damn near impossible to actually deal with all errors in entirely sane ways, so I think abstracting it all away really frees you up to deal with the app logic, rather than say, every errno and its cause in the BSD socket API.

> This structure would be writable from the OS thread and readable from
> the Ruby thread. This would significantly reduce the window of the
> race condition.
> Is such a technique feasible, or am I missing something?

It's kind of feasible, but it detracts from a large part of the point, IMO. It's the same as #callback and #errback on Deferrables being called statefully, rather than in order. You can make a deferrable, call #succeed, then call #callback, and the block from callback is run right away. But it's already succeeded? - That's the point: we can't guarantee when things will happen, and we try to ignore those minor details. Instead we care that it did happen, and do the right thing. I think the same applies to unbind. It's not when it happens that's specifically important, it's the fact that it happened.

This does require a change in architecture though. You have to be 'sensory', as my partner likes to call it. In other words, after an unexpected unbind, you have to go 'feel' what happened, and recover. You can't guess based on when, or some parameter given to you. The important thing with this is, it makes the handler deal with its state internally. It feels around, makes a conclusion, and acts on it, and that's based on real world data, rather than error data. This is important as well, as error data can often lie too.

With regard to your long polling stuff, take a look at this build of thin:

http://github.com/raggi/thin/tree/async_for_rack

In particular, you'll want to look at the async_chat.ru example, which is basically a long polling app (actually it's a stream to the main response, and ajax based send, but you can swap those over trivially for a 'normal' app - I wrote it that way round in the example to pay homage to the old go.com chat servers).

http://github.com/raggi/thin/tree/async_for_rack/example/async_chat.ru

If you have any questions regarding the Thin branch or techniques, I'm on the IRC channel.

Kind regards,

James Tucker / raggi.
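A rough sketch of the 'piece at a time' pattern James recommends (process is a stand-in for a unit of real app work):

    require 'rubygems'
    require 'eventmachine'

    # Do a bounded chunk of work per tick, then yield back to the reactor
    # with EM.next_tick, keeping progress in a Deferrable so the response
    # can be sent when the last chunk finishes.
    class ChunkedJob
      include EM::Deferrable

      def initialize(items)
        @items = items
      end

      def run
        @items.slice!(0, 50).each { |item| process(item) }
        if @items.empty?
          succeed                  # fires any #callback blocks
        else
          EM.next_tick { run }     # free the reactor; resume next tick
        end
        self
      end

      def process(item)
        # stand-in for one unit of app work (e.g. building one json row)
      end
    end

    # Usage: the unbind race window shrinks because no single tick runs long.
    #   job = ChunkedJob.new(work_items).run
    #   job.callback { connection.send_data(response) }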
> if your handler is slow, it's probably CPU bound (assuming IO-bound
> tasks are already within the EM loop).

Currently, we keep the handlers "fast" by enqueuing a lambda to be called on a worker thread.

Currently our tasks are both IO (db queries + memory operations) and CPU (string generation) bound. I think that's beside the point, though, because given enough "fast" handlers, the reactor loop will still be slow. It seems like it comes down to load balancing and scheduling, i.e. my app can handle a velocity of n events per second.

On Thu, Aug 7, 2008 at 3:39 AM, James Tucker <jftucker at gmail.com> wrote:

> On 7 Aug 2008, at 07:08, Brian Takita wrote:
>
>> Connecting to the long poll is a very fast operation, while updating
>> the state of the server is a relatively slow operation.
>
> Which state? It should be all internal memory adjustments, so I must be
> missing something :)

Actually the current bottleneck in our app is generating json. We have some optimizations to do, but effectively processing state updates + response operations (the work of the app) is what is going to be slow compared to the reactor loop.

> My recommendation is, generally, to monitor some sensible metric, and
> restrict it. Say 1000 timers (the default inside the EM C++ code), or
> 10,000 clients (if your client state is relatively lightweight), or
> say, 500mb of ram. Whatever happens to be good for your app. After you
> pass this boundary, you want to stop accepting new jobs until you've
> cleared the backlog.

That makes sense. We will probably use custom scheduling of tasks on one or more (for prioritization) queue(s). Using EM.next_tick will allow us to move away from Threads.

>> For example, Connection#error? would return true the "instant" the
>> client closes the socket, which would be before the unbind event is
>> sent to the connection.
>
> You're not supposed to care, and that's kind of the point. send_data
> should just work, as should the rest of the methods and callbacks, and
> they happen in a sane order. This is very much a BASE approach, and
> your design / architecture will have to take that into account.

My interpretation of this is that the client needs to acknowledge that it received the proper payload. One way to do this is to version the payloads and resend any missed payloads.
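A rough sketch of that versioning scheme (all names hypothetical):

    # Every message gets a sequence number; a long-poll client reconnects
    # saying which version it last saw, and the server replays anything newer.
    class VersionedChannel
      def initialize
        @version = 0
        @history = []   # [version, payload] pairs; trim this in a real app
      end

      def publish(payload)
        @version += 1
        @history << [@version, payload]
        [@version, payload]
      end

      # Called when a client (re)connects reporting the last version it saw.
      def missed_since(client_version)
        @history.select { |v, _| v > client_version }
      end
    end

    channel = VersionedChannel.new
    channel.publish("hello")
    channel.publish("world")
    channel.missed_since(1)  # => [[2, "world"]] - resend what the client missed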
On 9 Aug 2008, at 23:07, Brian Takita wrote:

> Actually the current bottleneck in our app is generating json. We have
> some optimizations to do, but effectively processing state updates +
> response operations (the work of the app) is what is going to be slow
> compared to the reactor loop.

Often writing this kind of thing by hand is faster. The various complexities of typed serialisation can really slow things down, just due to the height of the architecture. Whether or not this is cost-effective to you as a business depends on the requirement for maintenance. If the software is in flux, then hard coding the serialisation layer is unlikely to be cost effective. (There's a sketch of the trade-off at the end of this message.)

> That makes sense. We will probably use custom scheduling of tasks on
> one or more (for prioritization) queue(s).
> Using EM.next_tick will allow us to move away from Threads.

:)

> My interpretation of this is that the client needs to acknowledge that
> it received the proper payload. One way to do this is to version the
> payloads and resend any missed payloads.

Yes, lightweight mechanisms are nice and simple, and generally highly effective.
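To illustrate the serialisation trade-off James describes (field names are hypothetical, and note that a real hand-rolled emitter must also escape strings):

    # A purpose-built emitter for one known shape skips the generic,
    # reflective machinery of a typed serialiser.
    Update = Struct.new(:id, :user, :body)

    # Generic route (flexible, slower): build a Hash, hand it to a library.
    #   require 'json'
    #   { 'id' => u.id, 'user' => u.user, 'body' => u.body }.to_json

    # Hand-rolled route (rigid, fast): emit the one shape we need.
    # NB: no string escaping here - only safe for trusted data.
    def update_to_json(u)
      %Q({"id":#{u.id},"user":"#{u.user}","body":"#{u.body}"})
    end

    puts update_to_json(Update.new(1, "brian", "hi"))
    # => {"id":1,"user":"brian","body":"hi"}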