hangonsoft-GANU6spQydw@public.gmane.org
2005-Mar-08 07:17 UTC
find_by_sql ON STEROIDS possible?
find_by_sql BREAKS THE OOP BEAUTY, and perhaps we can solve that. NAMING CONVENTIONS could make find_by_sql much more clever.

Imagine this set of relations:

  publishers <- books <-> authors_books <-> authors -> universities
  (can't find anything better than universities for that last association :) )

Imagine we want to fetch every book and every associated object, down to the author's university.

With Rails we can do Book.find_all and then, later in the code, book.publisher, book.authors and author.university. This, to me, is the real beauty of Active Record: it lets us write object.associated_object(s), forget SQL while concentrating on building the model, and that lowers development time A LOT!!!!!

When there are 10 books everything is OK!!!! There are only 10 books inside a development DB. But when there are 500 books in a production DB, that leads to 500 queries to fetch publishers, then 500 more for authors, and then 500 more to find every author's university. DB EXPLOSION!!!!

The Active Record solution is to use find_by_sql to optimize the production app, but what an efficient way to break the OOP BEAUTY I was talking about. I mean, with find_by_sql, authors are no longer objects associated with a book, just additional information added to every book object. With find_by_sql there is no more object.associated_object syntax :(

SEEMS LIKE OPTIMIZATION IS BREAKING THE MODEL!!!! No? Optimizing a query means using find_by_sql, and that breaks the associations.

There is a SQL query that can fetch all of that information at once:

  SELECT b.*,
         p.*,
         a.*,
         u.*
    FROM books b,
         publishers p,
         authors a,
         authors_books ab,
         universities u
   WHERE b.publisher_id = p.id
     AND a.id = ab.author_id
     AND b.id = ab.book_id
     AND a.university_id = u.id

That fetches every association in the (publisher, book, author, university) DB model.

Active Record is based on NAMING CONVENTIONS, so why can't we do this? (No matter if it's tedious, because it happens at optimization time in the development process!)

  Book.find_by_sql(
    "SELECT b.*,
            p.name AS book_publisher_name,
            p.id   AS book_publisher_id,
            a.name AS book_author_name,
            a.id   AS book_author_id,
            u.name AS author_university_name,
            u.id   AS author_university_id
       FROM books b,
            publishers p,
            authors a,
            authors_books ab,
            universities u
      WHERE b.publisher_id = p.id
        AND a.id = ab.author_id
        AND b.id = ab.book_id
        AND a.university_id = u.id"
  )

After such a query, book.author.name or book.author.university.name would be possible AGAIN: Active Record would understand that book_publisher_name is the name of the publisher associated with that book. An associated object must have an id to avoid breaking the DB schema, so p.id AS book_publisher_id is a prerequisite.

Maybe the publishers table has many more fields than these two (name, id), but that isn't so important, since most of the time the association is used just for linking (id) or displaying (name). If we want the complete object we can fetch it in full later, because we have its id.

Seems crazy? Tell me what you think.
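A rough sketch, in plain Ruby, of the kind of post-processing such a naming convention could drive after the aliased query above. None of this is existing Active Record behaviour: the loop, the regexp on the alias prefix, and the reliance on @publisher as the association cache are all assumptions made for illustration.

  # Hypothetical post-processing: rebuild a Publisher for each Book straight
  # from the book_publisher_* columns, so book.publisher needs no extra query.
  books = Book.find_by_sql(sql)   # sql is the aliased query shown above
  books.each do |book|
    publisher_attributes = {}
    book.attributes.each do |name, value|
      # "book_publisher_name" becomes publisher attribute "name", and so on
      publisher_attributes[$1] = value if name =~ /\Abook_publisher_(.+)\z/
    end
    # Assumes the belongs_to reader caches its target in @publisher, which is
    # a guess about the implementation, not documented behaviour.
    book.instance_variable_set("@publisher", Publisher.new(publisher_attributes))
  end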
That's .......... CRAZY man!!!!!....... A LOT crazy!!!!!

On Tue, 8 Mar 2005 08:17:59 +0100, hangonsoft-GANU6spQydw@public.gmane.org
<hangonsoft-GANU6spQydw@public.gmane.org> wrote:
> find_by_sql BREAKS THE OOP BEAUTY, and perhaps we can solve that.
> NAMING CONVENTIONS could make find_by_sql much more clever.
> [...]
> Seems crazy? Tell me what you think.
hangonsoft-GANU6spQydw@public.gmane.org
2005-Mar-08 16:47 UTC
Re: find_by_sql ON STEROIDS possible?
thanks

> That's .......... CRAZY man!!!!!....... A LOT crazy!!!!!
>
> On Tue, 8 Mar 2005 08:17:59 +0100, hangonsoft-GANU6spQydw@public.gmane.org
> <hangonsoft-GANU6spQydw@public.gmane.org> wrote:
> > [...]
hangonsoft-GANU6spQydw@public.gmane.org wrote:
> The Active Record solution is to use find_by_sql to optimize the production
> app, but what an efficient way to break the OOP BEAUTY I was talking
> about. I mean, with find_by_sql, authors are no longer objects associated
> with a book, just additional information added to every book object.
> With find_by_sql there is no more object.associated_object syntax :(

This is an excellent optimization, common to object-relational mappers. You use outer joins to "eagerly" pull associated records, nearly eliminating the need for piggyback queries. A similar technique may be used for mapping class-table inheritance.

I plan to implement the first (eager joins) when we get into performance tuning with CD Baby, but first I need a convenient tool to track performance metrics for real-world workloads (hint hint ;)

I would love to see Active Record become as database-efficient as Hibernate without sacrificing the elegance and simplicity that make it so powerful. There are some wonderful achievements just waiting for willing hackers..

jeremy
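For the schema in this thread, the eager fetch described here boils down to something like the query below. This is only a sketch of the technique, not SQL that Active Record generates today; the column aliases follow the convention from the first post.

  # One round trip; LEFT OUTER JOINs keep books that have no publisher,
  # no authors or no university row instead of silently dropping them.
  Book.find_by_sql(<<-SQL)
    SELECT b.*,
           p.id AS book_publisher_id,    p.name AS book_publisher_name,
           a.id AS book_author_id,       a.name AS book_author_name,
           u.id AS author_university_id, u.name AS author_university_name
      FROM books b
      LEFT OUTER JOIN publishers p     ON p.id = b.publisher_id
      LEFT OUTER JOIN authors_books ab ON ab.book_id = b.id
      LEFT OUTER JOIN authors a        ON a.id = ab.author_id
      LEFT OUTER JOIN universities u   ON u.id = a.university_id
  SQL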
> I would love to see Active Record become as database-efficient as
> Hibernate without sacrificing the elegance and simplicity that make it
> so powerful. There are some wonderful achievements just waiting for
> willing hackers..

Definitely. A few other optimizations for easy cherry picking:

* Per-request finder cache that's stored on the model class, so you can
  have a scenario like:

    x Person.find(1)
    x Person.find(2)
    x Person.find(1) # cached
    x Person.find(1) # cached
    x Person.find(3)

  That's an easy way of cutting down on the number of similar queries
  often happening in list pages without getting into problems with
  concurrency (the cache is cleared out after each request).

* Plugging in a caching engine that works across requests, for something
  like:

    Process  Action
    r1       person = Person.find(1)
    r2       person = Person.find(1)  # Cache hit
    r1       person.save              # Expires cache
    r3       person = Person.find(1)  # refetching

  I believe it would be very easy to make a super powerful caching scheme
  that'll work totally behind the scenes for all the default, non-custom
  SQL calls. And then when you need custom SQL, you could have methods
  like Person.expire(1) or similar for manual control.

But before you charge ahead, please do build some credible test cases
that can be used for benchmarking of these improvements.
--
David Heinemeier Hansson,
http://www.basecamphq.com/   -- Web-based Project Management
http://www.rubyonrails.org/  -- Web-application framework for Ruby
http://www.loudthinking.com/ -- Broadcasting Brain
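The first idea (a per-request finder cache hanging off the model class) is small enough to sketch. The module and method names below are invented, and instead of hooking find itself the sketch uses an explicit find_cached, purely to keep the example short; none of this is existing Rails code.

  # Hypothetical per-request cache: one hash per model class, keyed by id,
  # wiped at the end of every request by the framework.
  module FinderCache
    def self.included(base)
      base.extend(ClassMethods)
    end

    module ClassMethods
      def find_cached(id)
        @finder_cache ||= {}
        @finder_cache[id] ||= find(id)
      end

      def clear_finder_cache
        @finder_cache = {}
      end
    end
  end

  class Person < ActiveRecord::Base
    include FinderCache
  end

  Person.find_cached(1)       # hits the database
  Person.find_cached(1)       # served from the class-level hash
  Person.clear_finder_cache   # to be called once per request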
* David Heinemeier Hansson (david-OiTZALl8rpK0mm7Ywyx6yg@public.gmane.org) [050308 12:15]:
> * Plugging in a caching engine that works across requests, for something
>   like:
>
>     Process  Action
>     r1       person = Person.find(1)
>     r2       person = Person.find(1)  # Cache hit
>     r1       person.save              # Expires cache
>     r3       person = Person.find(1)  # refetching
>
> I believe it would be very easy to make a super powerful caching scheme
> that'll work totally behind the scenes for all the default, non-custom
> SQL calls. And then when you need custom SQL, you could have methods
> like Person.expire(1) or similar for manual control.

I'd put a bit of thought into doing just this with memcached, though I was told that someone was already working on this ("chrisd"?) a few weeks ago, but have heard nothing else since.

It seems that if access via AR is the only means of changing the database, programs don't get the raw .connection and do updates/inserts (hence the need for "expire"), and transactions are ignored, then memcached (the obvious choice) would work as a query cache for use with find() with very little effort. In fact, I had been pushing Michael Granger to hurry up and release his Ruby memcached months ago for this express purpose (knowing that I'd be working on a sizeable Rails app right about now) -- he showed his gratitude by having me audit his code :-/ (which was of very high quality, btw).

The part where I start to get iffy is when considering database (and AR) transactions. Thinking out loud, it seems that if all changes are collected until the end of the transaction, then the cache could be updated with the final results if/when the transaction commits. Memcached doesn't yet appear to support an atomic 'set_many' (though the Ruby client implements it in anticipation -- not sure if this will ever be implemented, however), so there are likely to be race conditions if a bunch of objects are stored to the cache in quick succession which should really be viewed as an atomic transaction.

Rick
--
http://www.rickbradley.com    MUPRN: 108
                       |  and felt metal. The
   random email haiku  |  o's represent the four bolts
                       |  that hold it in place.
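The commit-then-write idea can be sketched without committing to any particular memcached client API. Below, CACHE is just a Hash standing in for whatever client ends up being used, and the method name and key scheme are invented for illustration.

  # Sketch: collect cache writes during the transaction and apply them in one
  # batch only after it commits, so the cache never sees uncommitted data.
  CACHE = {}

  def save_people_through_cache(attrs_by_id)
    pending = {}
    Person.transaction do
      attrs_by_id.each do |id, attrs|
        person = Person.find(id)
        person.update_attributes(attrs)
        pending["person:#{id}"] = person
      end
    end
    # A rollback raises out of the method, so this line only runs on commit.
    pending.each { |key, person| CACHE[key] = person }
  end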
Rick Bradley wrote:
> The part where I start to get iffy is when considering database (and AR)
> transactions. Thinking out loud, it seems that if all changes are
> collected until the end of the transaction, then the cache could be
> updated with the final results if/when the transaction commits.
> Memcached doesn't yet appear to support an atomic 'set_many', so there
> are likely to be race conditions if a bunch of objects are stored to the
> cache in quick succession which should really be viewed as an atomic
> transaction.

The UnitOfWork pattern opens the door to transaction-safe caching and all manner of persistence optimization. The database is only hit when work is "flushed" on commit or by running a raw SQL query. An identity map (a hash of id -> record) acts as a "free" first-level cache. An application-wide second-level cache is harder because of the transaction issues you mention, but we can start with the easy cases, like rarely updated read-only records.

To fuel further investigation, check out Fowler's _Patterns of Enterprise Application Architecture_. Also consider existing implementations: the JDO UnitOfWork is a transaction which mimics database transaction semantics (begin, commit, rollback, etc.); the Hibernate UnitOfWork is an explicit session that you open, work within, and flush.

Best,
jeremy
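The identity map mentioned above takes only a few lines of Ruby; the class below is illustrative only and not part of Active Record.

  # Minimal identity map: one hash per class, id => record, so repeated
  # lookups inside a unit of work return the very same object with no
  # second query.
  class IdentityMap
    def initialize
      @records = Hash.new { |hash, klass| hash[klass] = {} }
    end

    def get(klass, id)
      @records[klass][id] ||= klass.find(id)
    end

    def clear
      @records.clear
    end
  end

  map = IdentityMap.new
  a = map.get(Person, 1)   # queries the database
  b = map.get(Person, 1)   # same object back, no query
  a.equal?(b)              # => true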
* Jeremy Kemper (jeremy-w7CzD/W5Ocjk1uMJSBkQmQ@public.gmane.org) [050308 13:27]:
> The UnitOfWork pattern opens the door to transaction-safe caching and
> all manner of persistence optimization. The database is only hit when
> work is "flushed" on commit or by running a raw SQL query. An identity
> map (a hash of id -> record) acts as a "free" first-level cache. An
> application-wide second-level cache is harder because of the transaction
> issues you mention, but we can start with the easy cases, like rarely
> updated read-only records.
>
> To fuel further investigation, check out Fowler's _Patterns of
> Enterprise Application Architecture_.

Jeremy, I think we're in agreement here (I have PoEAA and am familiar with UnitOfWork). I think it's worth my time to go and look again at (the current) AR and see how to most directly integrate caching for the most obvious cases. If anyone else beats me to the punch, though, I won't be upset -- I've got a full plate. ;-)

Rick
--
http://www.rickbradley.com    MUPRN: 44
                       |  for their embedded
   random email haiku  |  JVM in Oracle but again
                       |  nothing announced.
On Tue, 8 Mar 2005 18:11:01 +0100, David Heinemeier Hansson
<david-OiTZALl8rpK0mm7Ywyx6yg@public.gmane.org> wrote:
> * Per-request finder cache that's stored on the model class [...]
> * Plugging in a caching engine that works across requests [...]
> I believe it would be very easy to make a super powerful caching scheme
> that'll work totally behind the scenes for all the default, non-custom
> SQL calls.

As Rick mentioned, memcached is probably a great option here. After all, model-level caching is how LiveJournal uses it, right?

As an aside: is this stuff part of the 'performance' package for 1.0, or is this post-1.0 thinking? I think the caching stuff would be relatively easy to implement before 1.0, but outer joins and the like may not be.

--
Cheers

Koz
> As Rick mentioned, memcached is probably a great option here.
> After all, model-level caching is how LiveJournal uses it, right?

True, but in the case of WEBrick, in-memory caching is fine, and there are other ways, like PStore and DRb servers. What I would like to see is a unified caching "hook" in AP. Something like

  ActionController.cache_manager = whatever

in environment.rb.

There are a lot of different approaches to caching. Fragments and actions are supported currently. An assigned-variable cache is another great way to get some speed while still running the presentation logic for things like "posted 34 secs ago". Another cache I use in Typo is called simplecache and caches all the aggregations like Tada, Flickr and del.icio.us for 60 minutes.

All of those caching things should use a common cache manager with a certain interface (maybe that of a hash?). As soon as we raise the status quo for caching to a common abstraction, much good will come from all fronts.

--
Tobi
http://www.snowdevil.ca  - Snowboards that don't suck
http://www.hieraki.org   - Open source book authoring
http://blog.leetsoft.com - Technical weblog
> As an aside: is this stuff part of the 'performance' package for 1.0,
> or is this post-1.0 thinking? I think the caching stuff would be
> relatively easy to implement before 1.0, but outer joins and the like
> may not be.

I'd definitely like to see this being part of the performance push before 1.0. But the most important thing in that push is to locate the hotspots in Rails: getting a stereotypical application running (however oxymoronic that may sound) and then getting to work with benchmarking and profiling. Rails is probably horrifically slow in a lot of spots where we're not expecting it to be.

I'd _very_ much like to see someone volunteer to drive a performance optimization venture. Not necessarily doing all the fixes, but at least finding all the bottlenecks. That would be a huge service to Rails: having a somewhat standard application that we can use to inform us about performance trade-offs.

--
David Heinemeier Hansson,
http://www.basecamphq.com/   -- Web-based Project Management
http://www.rubyonrails.org/  -- Web-application framework for Ruby
http://www.loudthinking.com/ -- Broadcasting Brain
> All of those caching things should use a common cache manager with a
> certain interface (maybe that of a hash?).
>
> As soon as we raise the status quo for caching to a common abstraction,
> much good will come from all fronts.

I certainly agree. All the code is already there; it's just a matter of extracting Active Caching and delegating from there. Heh. I think we just birthed another library and gem for Rails ;)

The hash is a good starting point, but our cache should certainly also have the possibility of doing timed expires (this piece lives for 5 minutes).

Who's interested in driving Active Caching :)? (Yeah, I'm dishing out delegations like nobody's business :))

--
David Heinemeier Hansson,
http://www.basecamphq.com/   -- Web-based Project Management
http://www.rubyonrails.org/  -- Web-application framework for Ruby
http://www.loudthinking.com/ -- Broadcasting Brain
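Putting the hash-shaped interface and the timed expires together, the store being discussed could look roughly like this. The class name and interface are invented; this is not an existing Rails or gem API.

  class ExpiringCache
    def initialize
      @entries = {}   # key => [value, expires_at (nil means never)]
    end

    # Hash-style write; pass a ttl in seconds to get timed expiry.
    def write(key, value, ttl = nil)
      @entries[key] = [value, ttl ? Time.now + ttl : nil]
      value
    end

    def []=(key, value)
      write(key, value)
    end

    # Hash-style read; expired entries behave as if they were never cached.
    def [](key)
      value, expires_at = @entries[key]
      if expires_at && Time.now > expires_at
        @entries.delete(key)
        return nil
      end
      value
    end

    def expire(key)
      @entries.delete(key)
    end
  end

  cache = ExpiringCache.new
  cache.write("sidebar/delicious", "<ul>...</ul>", 60 * 60)   # lives for an hour
  cache["sidebar/delicious"]   # => the markup, until the hour is up, then nil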
On Tue, 8 Mar 2005 20:57:33 +0100, David Heinemeier Hansson
<david-OiTZALl8rpK0mm7Ywyx6yg@public.gmane.org> wrote:
> The hash is a good starting point, but our cache should certainly also
> have the possibility of doing timed expires (this piece lives for 5
> minutes).
>
> Who's interested in driving Active Caching :)?

What are we after here? A pluggable caching implementation which supports expiry? Basically 'memcached', with an option to use something else when memcached isn't available? Or are we targeting some amazing distributed awesome cool cache?

Basically I'm wondering how far we want this to go. If it's not too far, I may be interested, but if we're over-reaching....

--
Cheers

Koz
On Tue, 8 Mar 2005 20:54:33 +0100, David Heinemeier Hansson
<david-OiTZALl8rpK0mm7Ywyx6yg@public.gmane.org> wrote:
> I'd _very_ much like to see someone volunteer to drive a performance
> optimization venture. Not necessarily doing all the fixes, but at least
> finding all the bottlenecks. That would be a huge service to Rails:
> having a somewhat standard application that we can use to inform us
> about performance trade-offs.

What I think would be ultimately more useful is if this "someone" takes a tools-based approach to this that can get integrated into Rails proper, so that anyone who wants to can send us profiling data easily for their application, on their machine, for the slow use case. Then we would just run a script over the data to get pretty pictures.

Scope creep, I know, I know. ActiveProfiling? :)

Leon
> The hash is a good starting point, but our cache should certainly also
> have the possibility of doing timed expires (this piece lives for 5
> minutes).

Perhaps ruby-cache (http://raa.ruby-lang.org/project/ruby-cache/) can be used there?
All this sounds great, and I would love to take advantage of automatic joins.

I'd like to throw my request into the ring that this not break things for those of us who are working with legacy schemas that are far from the Rails ideal. In other words, make it optional.

Also keep in mind that some people have multiple different kinds of relations between the same two tables, so the automatic stuff would have to know which relations to use.

BTW, I am going to release a production site next week, so I was wondering if you guys are making backwards compatibility a priority with future releases of Rails?

Thanks, and thanks for the great framework.

-Lee

On Tue, 8 Mar 2005 18:11:01 +0100, David Heinemeier Hansson wrote:
> Definitely. A few other optimizations for easy cherry picking:
> * Per-request finder cache that's stored on the model class [...]
> * Plugging in a caching engine that works across requests [...]

--
Naxos Technology
How many Rails people have done anything with Borland's ECO (or Bold for Delphi)? In this UML<->code environment, they have several things that conceptually map well to Rails. Its Object Constraint Language implementation seems to encompass explicitly (the relationships are named) what Rails expresses generically (has_one, has_many, belongs_to, etc.), and a couple of the ECO objects seem to be reflected in ActiveRecord, etc.

Of course, OCL as a language seems slightly non-intuitive to grasp outside of the simple examples I've seen, especially when they get complex. I try to map them in my mind to SQL (which is what ECO does: convert OCL expressions to SQL queries). But how close is Rails to fitting in with some broader conceptual frameworks like UML2/MDA/OCL?

On Wed, 9 Mar 2005 03:23:02 -0400, lnelson <lnelson-wAkyhNATXT+1Z/+hSey0Gg@public.gmane.org> wrote:
> All this sounds great, and I would love to take advantage of automatic
> joins.
> [...]
> Thanks, and thanks for the great framework.
> What are we after here? A pluggable caching implementation which
> supports expiry? Basically 'memcached', with an option to use something
> else when memcached isn't available? Or are we targeting some amazing
> distributed awesome cool cache?
>
> Basically I'm wondering how far we want this to go. If it's not too
> far, I may be interested, but if we're over-reaching....

I think ultimately we could do a lot of interesting things with a separate, generic caching library. But for now, we should just extract the caching approach already used for fragments in Action Pack and add expiration.

Of course, it would be even nicer if we could make a bridge for sessions, so we could have a single session store delegating to this caching library instead of duplicating it in separate session stores. But I haven't yet looked too closely into that.

--
David Heinemeier Hansson,
http://www.basecamphq.com/   -- Web-based Project Management
http://www.rubyonrails.org/  -- Web-application framework for Ruby
http://www.loudthinking.com/ -- Broadcasting Brain
> What I think would be ultimately more useful is if this "someone" takes
> a tools-based approach to this that can get integrated into Rails
> proper, so that anyone who wants to can send us profiling data easily
> for their application, on their machine, for the slow use case.
>
> Then we would just run a script over the data to get pretty pictures.
>
> Scope creep, I know, I know. ActiveProfiling? :)

Actually, Florian Weber is working somewhat on this with his benchmarking stuff [1]. But I'd really like for something to go the profiling route too. Basically, just make it easy to attach the Ruby profiler to the execution of a single action a few times. I've found that the profiler is an excellent way of finding hot spots.

I don't think it would need to be a separate library, though. Let's just make it a feature of Action Pack for now. But thanks for signing up for it, Leon ;)

With all this volunteering, I can't but think Rails 1.0 will get a nice increase in speed ;)

[1] http://weblog.rubyonrails.com/archives/2005/02/22/benchmark-reports-coming-to-rails/

--
David Heinemeier Hansson,
http://www.basecamphq.com/   -- Web-based Project Management
http://www.rubyonrails.org/  -- Web-application framework for Ruby
http://www.loudthinking.com/ -- Broadcasting Brain
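For anyone who wants to try the "attach the Ruby profiler to a single action" idea by hand today, here is a rough recipe using the stock profiler from the standard library. The action, the id and the functional-test style request call are placeholders, not a prescribed API.

  require 'profiler'

  Profiler__::start_profile
  10.times do
    # drive the slow action however your test setup does it; this call is
    # only illustrative
    get :show, :id => 1
  end
  Profiler__::stop_profile
  Profiler__::print_profile($stderr)   # flat listing, hottest methods first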
hangonsoft-GANU6spQydw@public.gmane.org
2005-Mar-09 11:20 UTC
Re: find_by_sql ON STEROIDS possible?
Is there an alternative right now to those piggyback queries?

Quoting David Heinemeier Hansson <david-OiTZALl8rpK0mm7Ywyx6yg@public.gmane.org>:
> Actually, Florian Weber is working somewhat on this with his benchmarking
> stuff [1]. But I'd really like for something to go the profiling route
> too. Basically, just make it easy to attach the Ruby profiler to the
> execution of a single action a few times.
> [...]
> [1] http://weblog.rubyonrails.com/archives/2005/02/22/benchmark-reports-coming-to-rails/