I am working on, and will be publicly showing, ActiveRecord some love around the time of RubyConf*MI [1] in late August. My main focus is getting AR to better handle inserts, updates and merges when working with large sets of data. It can improve performance by 400% to 600% in preliminary benchmarks. I am coding this in a way so it can be patched into AR easily, and with that in mind I'd like to get input from the rails-core team on any requirements, requests or gotchas that I should know about.

Many months ago I posted on this list in regards to my Temporary Table plugin, which has been refactored and will be more intelligently supported in AR along with other optimizations. At the time of Temporary Table the API was nice for me, but wouldn't be for the majority of Rails users. It wasn't people-ready. Since then I have been focusing on achieving the best API for these enhancements to AR's performance.

Until it is able to be reviewed by the core team when it's released, I'll be releasing it as the ActiveRecord::Optimizations plugin. I wrote an upcoming post on it here: http://blogs.mktec.com/zdennis/articles/category/ruby

Zach

[1] http://www.rubyconfmi.org
On 14-jul-2006, at 9:50, zdennis wrote:

> Until it is able to be reviewed by the core team when it's released
> I'll be releasing it as the ActiveRecord::Optimizations plugin. I wrote
> an upcoming post on it here:
> http://blogs.mktec.com/zdennis/articles/category/ruby

So where is the code?

Manfred
Manfred Stienstra wrote:

> On 14-jul-2006, at 9:50, zdennis wrote:
>
>> Until it is able to be reviewed by the core team when it's released I'll
>> be releasing it as the ActiveRecord::Optimizations plugin. I wrote an
>> upcoming post on it here:
>> http://blogs.mktec.com/zdennis/articles/category/ruby
>
> So where is the code?

The active_record_optimizations 0.0.1 release can be found at:
http://blogs.mktec.com/zdennis/articles/2006/07/15/activerecord-optimizations-0-0-1

The download itself can be found at:
http://blogs.mktec.com/zdennis/files/active_record_optimizations-0.0.1.tgz

Any questions, feedback, etc., just kick me an email.

Zach

[1] http://www.rubyconfmi.org
Attached is a patch with the following API change for ActiveRecord::Base.create.

Currently ActiveRecord::Base.create takes either a hash of attributes or an array of hashes of attributes. If you pass in an array it will treat each hash individually and create/save each object separately. This is really slow because it invokes a single INSERT statement for each hash of attributes. The current class method 'create' also returns a model object or an array of model objects for the hash(es) that were passed in.

This patch updates ActiveRecord::Base.create to take a third argument, an options hash. It takes an :optimize key which can point to one of two values: 'fast' or 'fastest'.

'fast' would use the minimum number of INSERT statements to create the values and would still use the current validation/callback methods.

'fastest' would use the minimum number of INSERT statements and would return the number of INSERT statements issued. It would not create any model objects, and it would skip all validations/callbacks.

Currently 'fastest' is implemented in the patch. I expect to have 'fast' done by midweek, but I'd like some peer review and core acceptance before I focus on doing too many more patches for this feature.

This also involves a patch to MysqlAdapter (and it will require a patch for any other adapter to support this optimization) to support intelligent communication with the database server to determine server values like the maximum allowed packet size (which is used to create the minimum number of INSERT statements).

Since dev.rubyonrails.com is down I figured I'd post this here. Below are some benchmarking statistics.
Creation of 100 MyISAM records using AR now took 0.92 seconds
Creation of 100 MyISAM records using :optimize=>'fastest' took 0.10 seconds
Creation of 1000 MyISAM records using AR now took 5.41 seconds
Creation of 1000 MyISAM records using :optimize=>'fastest' took 0.17 seconds
Creation of 10000 MyISAM records using AR now took 57.46 seconds
Creation of 10000 MyISAM records using :optimize=>'fastest' took 2.45 seconds

The benchmarks for Memory and InnoDB are just as good (although InnoDB does get a better speedup).

The change in code for the user looks like:

  MyModel.create array_of_hashes, :optimize=>'fastest'

This is a patch off of trunk tonight, and it includes unit tests for everything. Could someone check this out and give some feedback? I have started to develop this as a separate plugin for ActiveRecord, but the more I think about it the more it seems that this sort of thing should be in core. Thoughts?

Zach
_______________________________________________
Rails-core mailing list
Rails-core@lists.rubyonrails.org
http://lists.rubyonrails.org/mailman/listinfo/rails-core
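[Editor's note: for readers following along, the core of the 'fastest' path is collapsing many single-row INSERTs into one multi-value statement. A minimal sketch of that SQL generation follows; this is not the patch's actual code, and a real implementation would use the adapter's own quoting rather than the naive quote method here.]

```ruby
# Illustrative sketch: build one multi-value INSERT from an array of
# attribute hashes, instead of issuing one INSERT per hash.
def quote(value)
  value.is_a?(String) ? "'#{value.gsub("'", "''")}'" : value.to_s
end

def multi_value_insert(table, hashes)
  columns = hashes.first.keys.sort_by { |k| k.to_s }
  tuples = hashes.map do |h|
    "(" + columns.map { |c| quote(h[c]) }.join(", ") + ")"
  end
  "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{tuples.join(', ')}"
end

sql = multi_value_insert(:projects, [
  { :name => "Alpha", :budget => 100 },
  { :name => "Bob's", :budget => 200 },
])
# One round-trip to the server instead of two single-row INSERTs.
```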
* zdennis (zdennis@mktec.com) [060716 20:33]:

> Could someone check this out and give some feedback? I have started to
> develop this as a separate plugin for ActiveRecord, but the more I think
> about it the more it seems that this sort of thing should be in core.
> Thoughts?

Those are impressive speedups, and it would be interesting if they can be made to apply generally.

Not commenting on the viability of the patch as a whole, but perhaps rather than :fast / :fastest it would be better to drop :fast (which as I understand it simply reduces the number of inserts required, behind the scenes, without otherwise changing the create() behavior(?)) in favor of a speedy default, and to rename :fastest (after all, what's "fastest" now may well not be in the future) to something indicative of function -- i.e., that it doesn't create the AR objects being created in the database.

Rick
--
http://www.rickbradley.com
Rick Bradley wrote:

> * zdennis (zdennis@mktec.com) [060716 20:33]:
>> Could someone check this out and give some feedback? I have started to
>> develop this as a separate plugin for ActiveRecord, but the more I think
>> about it the more it seems that this sort of thing should be in core.
>> Thoughts?
>
> Those are impressive speedups, and it would be interesting if they can
> be made to apply generally.

I tried to extract what could be generalized across ActiveRecord::Base and AbstractAdapter rather than putting things into the concrete adapter implementations like MysqlAdapter and PostgreSQLAdapter, but one of the things I rely on is asking the server how large a packet can be. After I figure this out I can optimize the number of INSERT statements based on the number of bytes a generated SQL statement would be.

I think at minimum we just need a method to ask each concrete adapter for the maximum number of allowed bytes. This assumes that all servers support the standard SQL syntax for multi-value inserts: "VALUES ( 1 ), ( 2 ), ( 3 ), ( 4 ), ...."

I can make that change and submit an updated patch tomorrow or Tuesday.

> Not commenting on the viability of the patch as a whole, but perhaps
> rather than :fast / :fastest it would be better to drop :fast (which as
> I understand it simply reduces the number of inserts required, behind
> the scenes, without otherwise changing the create() behavior(?)) in
> favor of a speedy default, and renaming :fastest (after all, what's
> "fastest" now may well not be in the future) to be indicative of
> function -- i.e., that it doesn't create the AR objects being created in
> the database.

Yeah, it's been tough to determine what to call them. I had "good", "gooder" and "goodest" originally, but I didn't know if other people would get the silly humor of it all.
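[Editor's note: to make the byte-limit idea concrete -- given a server-imposed maximum statement size (on MySQL, the max_allowed_packet variable), pre-rendered value tuples can be greedily packed into as few INSERT statements as fit. This is a hypothetical sketch, not the patch's code; the method name is made up for illustration, and a tuple larger than the limit on its own would need separate error handling.]

```ruby
# Greedily pack value tuples into the fewest INSERT statements whose
# total byte size stays under max_bytes. `prefix` is the shared
# "INSERT INTO ... VALUES " head; each tuple is a rendered "(..)" string.
def batched_inserts(prefix, tuples, max_bytes)
  statements = []
  current = nil
  tuples.each do |tuple|
    if current && current.size + 2 + tuple.size <= max_bytes
      current << ", " << tuple          # still fits; extend this statement
    else
      statements << current if current  # flush the full statement
      current = prefix + tuple          # start a new one
    end
  end
  statements << current if current
  statements
end

prefix = "INSERT INTO projects (name) VALUES "
tuples = ["('a')", "('b')", "('c')", "('d')"]
stmts  = batched_inserts(prefix, tuples, prefix.size + 20)
# With this tiny limit the four tuples split across two statements.
```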
=)

I have two options now (regardless of name) because although 'fast' will be optimized, I don't have tests or benchmarks for it yet, and I didn't want to submit a patch that would override existing behavior until that was done. I did want to provide a complete indication of where this functionality was going though, and that is why I included it in the last email.

Zach
Rick Bradley wrote:

> Not commenting on the viability of the patch as a whole, but perhaps
> rather than :fast / :fastest it would be better to drop :fast (which as
> I understand it simply reduces the number of inserts required, behind
> the scenes, without otherwise changing the create() behavior(?)) in
> favor of a speedy default, and renaming :fastest (after all, what's
> "fastest" now may well not be in the future) to be indicative of
> function -- i.e., that it doesn't create the AR objects being created in
> the database.

What about :optimize=>'norecords' or :return=>'norecords'? The latter would get rid of the :optimize key, and it would probably be a better indication of what is going on?

Zach
Since the behaviour is different from that of create (i.e. no object instances are returned), why not just define a new method which clearly indicates its aptitude for inserting large amounts of data? Model.import(hashes) or something...

- james

On 7/17/06, zdennis <zdennis@mktec.com> wrote:

> What about :optimize=>'norecords' or :return=>'norecords'? The latter
> would get rid of the optimize key but it would probably be a better
> indication of what is going on?
>
> Zach

--
* J *
  ~
On 7/17/06, James Adam <james.adam@gmail.com> wrote:

> Since the behaviour is different from that of create (i.e. no object
> instances are returned), why not just define a new method which
> clearly indicates its aptitude for inserting large amounts of data?
> Model.import(hashes) or something...

The behaviour is *significantly* different from create, as no validations are performed. I'd second the opinion that this isn't really a 'create' scenario; it's much more of an import.

I'm still not convinced that a workaround like this is the right way to solve these 'bulk load' / ETL scenarios. Why not use your database's import tools or some really lightweight SQL wrappers?

--
Cheers

Koz
I agree import is appropriate. However, the reason someone would want this in Rails is to integrate a CSV import through the web interface for an end user. Why would it skip validation though? I would expect:

1. Creation of ActiveRecord objects for all the elements in the hash.
2. Validation run on all the objects.
3. The mass insertion method run against an array holding objects that pass validation.
4. The array of those that fail returned to provide feedback to the user on the offending entries.

I would think that this optimized method is trying to take advantage of mass INSERTs into a SQL database, not to circumvent the ActiveRecord design completely.

Michael Koziarski wrote:

> I'm still not convinced that a workaround like this is the right way
> to solve these 'bulk load' / ETL scenarios. Why not use your
> database's import tools or some really lightweight SQL wrappers?
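[Editor's note: the four steps above can be sketched as a small wrapper. The names here are hypothetical -- in particular `bulk_insert` stands in for whatever multi-value INSERT method the patch ends up providing -- and the stub model exists only so the sketch runs without a database.]

```ruby
# Steps 1-4 from the list above: build objects, validate them all,
# bulk-insert only the valid ones, return the invalid ones for feedback.
def import_with_validation(model, hashes)
  records = hashes.map { |attrs| model.new(attrs) }   # step 1
  valid, invalid = records.partition { |r| r.valid? } # step 2
  model.bulk_insert(valid) unless valid.empty?        # step 3
  invalid                                             # step 4
end

# Stand-in model so the sketch is runnable without a database.
class StubProject
  attr_reader :attrs
  def initialize(attrs); @attrs = attrs; end
  def valid?; !@attrs[:name].to_s.empty?; end
  def self.inserted; @inserted ||= []; end
  def self.bulk_insert(records); inserted.concat(records); end
end

failed = import_with_validation(StubProject,
                                [{ :name => "Alpha" }, { :name => "" }])
# `failed` holds the invalid record; StubProject.inserted holds the valid one.
```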
> I'm still not convinced that a workaround like this is the right way
> to solve these 'bulk load' / ETL scenarios. Why not use your
> database's import tools or some really lightweight SQL wrappers?

While the 'insert' method is more universal, I agree that it's often smarter to use the vendor tools. For example, MySQL's LOAD DATA command processes data imports many orders of magnitude faster than traditional inserts.

Joshua Sierles
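[Editor's note: for reference, the vendor-tool path Joshua describes can also be driven from Ruby by handing MySQL a LOAD DATA statement over an existing connection. The file path, table and columns below are illustrative only.]

```ruby
# Hypothetical sketch of the LOAD DATA route: build the statement and
# execute it over an existing connection (e.g. an AR adapter's
# connection.execute). Names and paths are illustrative.
load_sql = <<-SQL
  LOAD DATA LOCAL INFILE '/tmp/projects.tsv'
  INTO TABLE projects
  FIELDS TERMINATED BY '\\t'
  LINES TERMINATED BY '\\n'
  (project_number, description, est_due_date)
SQL
# With ActiveRecord: ActiveRecord::Base.connection.execute(load_sql)
```

The trade-off versus multi-value INSERTs is that LOAD DATA requires a file on disk (or LOCAL client-side access) and bypasses model-level validation entirely.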
Michael Koziarski wrote:

> On 7/17/06, James Adam <james.adam@gmail.com> wrote:
>
>> Since the behaviour is different from that of create (i.e. no object
>> instances are returned), why not just define a new method which
>> clearly indicates its aptitude for inserting large amounts of data?
>> Model.import(hashes) or something...
>
> The behaviour is *significantly* different from create as no
> validations are performed. I'd second the opinion that this isn't
> really a 'create' scenario, it's much more of an import.

Yeah, I agree with you and James on this. It does seem much more of an import. I will rename the method that uses my functionality. I also agree with returning the records that don't validate (when validation is performed).

> I'm still not convinced that a workaround like this is the right way
> to solve these 'bulk load' / ETL scenarios. Why not use your
> database's import tools or some really lightweight SQL wrappers?

I don't think people should have to shell out to run mysqlimport or run a cron job to process generated SQL files just to load 1000 records or more in an efficient manner. And I am trying to use lightweight SQL wrappers, by adding this functionality to AR. It is a very real requirement that users want to upload data feeds, whether CSV, tab-based or something else entirely.

I prefer writing my whole system using AR. I use Rails for the frontend, and then I use other Ruby programs for mass data processing (>100,000 records at a time). To me it is simple, elegant and just as efficient to run code like:

  stats = MyModel.insert array_of_hashes,
    :on_duplicate_key_update => [:project_number, :description, :est_due_date]

Rather than writing code like:

  sql = generate_sql_for_projects( array_of_hashes )
  File.open( "projects.sql", "w" )
  `mysqlimport -fields a b c -etc.... projects.sql`

Keeping things simple keeps it flexible for maintenance and customer requests.
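[Editor's note: the :on_duplicate_key_update option in Zach's example maps naturally onto MySQL's INSERT ... ON DUPLICATE KEY UPDATE syntax, where VALUES(col) refers to the value that would have been inserted for that column. A hypothetical sketch of generating that clause; the method name is made up for illustration.]

```ruby
# Build the ON DUPLICATE KEY UPDATE tail for a multi-value INSERT,
# updating each named column to the value the row tried to insert.
def on_duplicate_key_update_clause(columns)
  assignments = columns.map { |c| "#{c}=VALUES(#{c})" }
  " ON DUPLICATE KEY UPDATE " + assignments.join(", ")
end

insert = "INSERT INTO projects (project_number, description) VALUES (1, 'a')"
sql = insert + on_duplicate_key_update_clause([:project_number, :description])
# Duplicate-keyed rows are updated in place instead of raising an error.
```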
AR is so well thought out and designed that this seems like a core feature that is simply missing. Thoughts?

Zach
On 7/17/06, zdennis <zdennis@mktec.com> wrote:

> AR is so well thought out and designed, this seems
> like a core feature that is just simply missing.

Since you are looking for speed to the point where you are sacrificing validations, and pretty much anything else vaguely ActiveRecord-ish beyond getting the table name and establishing the DB connection, this functionality is at a distance from ActiveRecord's core purpose (which is surely being an ORM).

That said, I think providing an 'import' method as a plugin would certainly be useful to some people (especially those too lazy to connect to a database by hand). Another thing to do might be reworking it to use prepared statements, for an even bigger speed increase.

- james

--
* J *
  ~
James Adam wrote:

> On 7/17/06, zdennis <zdennis@mktec.com> wrote:
>
>> AR is so well thought out and designed, this seems
>> like a core feature that is just simply missing.
>
> Since you are looking for speed to the point where you are sacrificing
> validations, and pretty much anything else vaguely active-record-ish,
> beyond getting the table name and establishing the DB connection, this
> functionality is at a distance from what ActiveRecord's core (which is
> surely being an ORM).

I don't want to bypass validations altogether. I think an option should exist for this, yes, but I think that users should still be able to get validations with the speed of a multi-value INSERT statement. Granted, AR relies on the insert_id returned by MySQL to set the object's id on the AR::Base instance you are saving, and with a multi-value insert you can't guarantee you'll be able to compute that accurately for every inserted value, because of threading and multiple connections on the server. You can, however, enforce validation and return the model objects that didn't save.

> That said, I think providing an 'import' method as a plugin would
> certainly be useful to some people (especially those too lazy to
> connect to a database by hand).

I like to automate tasks; if that makes me lazy, then I am lazy.

> Another thing to do might be reworking
> it to use prepared statements, for an even bigger speed increase.

I like the idea of keeping logic outside of the DB itself. If the way AR saved multiple records were a smidgen slower than using multi-value INSERT statements or LOAD DATA INFILE, then I probably wouldn't care. But it's 20 to 30 times slower. A user will wait 2.45 seconds for 10,000 records to process instead of nearly a minute. And you can return the success/failure rate of what passed validations and what didn't. And by wrapping things in a transaction, you could completely roll back from any errors and let the user know.
I can also unit test and functionally test the logic and processing that goes into receiving these data feeds. I can do this all within the same unit tests that I use for my models and my controllers. I can test it from the web upload all the way to the database insertion and any processing/validation in between.

I see a far greater benefit when everything adds up than by keeping things separate merely for the sake of ORM purism. Not only for me as a developer, but for the user. If the mass here doesn't agree I will continue to develop this as a plugin, but wouldn't these common database tasks be considered feature-completing AR, rather than affecting its stature of ORM purity?

Zach
On 7/17/06, zdennis <zdennis@mktec.com> wrote:

>> That said, I think providing an 'import' method as a plugin would
>> certainly be useful to some people (especially those too lazy to
>> connect to a database by hand).
>
> I like to automate tasks, if that makes me lazy, then I am lazy.

Me too - I must confess to having an AR model in one of my projects for the explicit purpose of connecting to a DB, and nothing else. Necessity is the mother of innovation; laziness is the mother of pragmatism. I certainly have no issue taking models out to dinner, plying them with rich foods and cheap wines, and then swiping their underlying DB connection using instance_eval before they know what's happened to them. Don't hate the player baby, hate the game! Ahem.

>> Another thing to do might be reworking
>> it to use prepared statements, for an even bigger speed increase.
>
> I like the idea of keeping logic outside of the DB itself.

Perhaps we're talking at cross purposes, but I don't think using a prepared statement is pushing logic into the DB; rather, it's taking advantage of cache features so the DB can be a bit smarter (or in some cases a LOT smarter) about how it handles your request. At the ActiveRecord API level, whether you're using a prepared statement or not should be transparent; your import(array) method needn't change the argument it accepts or the result it returns. It might be worth investigating in this context, since you're essentially on a quest for speed and efficiency.

Finally, implementing this as a plugin first will give people (including the core team, I'd imagine) the opportunity to evaluate this addition in 'production' environments, which is an excellent way to highlight its value. Don't think of doing it as a plugin as a 'second-best' solution...

- james

--
* J *
  ~
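[Editor's note: James's point is that with a prepared statement the server parses the SQL once and subsequent executions only bind values, while the caller's import(array) API stays the same. A toy illustration of that idea -- no real database or driver is involved; FakeDb exists purely to count parses.]

```ruby
# Toy model of prepared-statement reuse: one parse, many executions
# with different bound values. FakeDb stands in for a real driver.
class FakeDb
  attr_reader :parse_count, :rows

  def initialize
    @parse_count = 0
    @rows = []
  end

  def prepare(sql)
    @parse_count += 1                    # the expensive step, done once
    lambda { |*values| @rows << values } # executing only binds values
  end
end

db   = FakeDb.new
stmt = db.prepare("INSERT INTO projects (name) VALUES (?)")
["a", "b", "c"].each { |name| stmt.call(name) }
# One parse, three executions.
```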
James Adam wrote:

> Me too - I must confess to having an AR model in one of my projects
> for the explicit purpose of connecting to a DB, and nothing else.
> Necessity is the mother of innovation; laziness is the mother of
> pragmatism. I certainly have no issue taking models out to dinner,
> plying them with rich foods and cheap wines, and then swiping their
> underlying DB connection using instance_eval before they know what's
> happened to them. Don't hate the player baby, hate the game! Ahem.

I don't have a PhD, don't worry. ;)

> Perhaps we're talking at cross purposes, but I don't think using a
> prepared statement is pushing logic into the DB; rather it's taking
> advantage of cache features so the DB can be a bit smarter (or in some
> cases a LOT smarter) about how it handles your request. At the
> ActiveRecord API level, whether you're using a prepared statement or
> not should be transparent; your import(array) method needn't change
> the argument it accepts or the result it returns. It might be worth
> investigating in this context, since you're essentially on a quest for
> speed and efficiency.

I'll look into this more, thanks for clarifying.

> Finally, implementing this as a plugin first will give people
> (including the core team I'd imagine) the opportunity to evaluate this
> addition in 'production' environments, which is an excellent way to
> highlight its value. Don't think of doing it as a plugin as a
> 'second-best' solution...

Plugin is the way I started, and plugin is the way I will continue to go for the time being. Before I got too far with bringing implementations, I wanted to pose this question and have this very discussion to determine where my effort should be applied (patches to AR trunk, or a plugin). Thanks for the discussion and your viewpoints. It is very appreciated as I move towards a faster AR. ;)

Zach
On 7/17/06, Michael Genereux <mgenereu@simiancodex.com> wrote:

> I agree import is appropriate. However, the reason someone would want
> this in Rails is to integrate a CSV import through the web interface for
> an end user. Why would it skip validation though?

There are cases where skipping validation is desired: I have a set of hourly tasks that create a few hundred thousand hashes and shove them into the database.

> I would expect:
> 1. A creation of ActiveRecord objects for all the elements in the hash.
> 2. Validation run on all the objects.
> 3. The mass insertion method run against an array holding objects
>    that pass validation.

Would it not make more sense to stop unless the records are all valid? Are you going to ask the user to edit the valid lines out of their CSV file to avoid duplicate records? ;-)

> 4. The array of those that fail would be returned to provide feedback
>    to the user on the offending entries.
>
> I would think that this optimized method is trying to take advantage of
> mass INSERTs into a SQL database and not circumvent the ActiveRecord
> design completely.

I think there are two separate cases: one where you want to insert a *lot* of records that you know to be valid, and another to insert a bunch of records that you do not trust to be valid. In m
I agree with you that there should be an option as to whether to insert any if some fail. I was thinking of an instance where the user gets 500,000 rows of incomplete data and wants to put in anything that conforms to validation. This was a scenario that I specifically experienced. Sometimes my brain is in the instance and not the big picture.

You lost me around "In m". Going somewhere with that? ;-)

Nicholas Seckar wrote:

> I think there are two separate cases: one where you want to insert a
> *lot* of records that you know to be valid, and another to insert a
> bunch of records that you do not trust to be valid. In m
Nicholas Seckar wrote:

> I think there are two separate cases: one where you want to insert a
> *lot* of records that you know to be valid, and another to insert a
> bunch of records that you do not trust to be valid.

I 100% agree with this statement. That is precisely the reason I want to be able to import records with and without validation. I will be releasing an import plugin with this functionality for MySQL later tonight. I will work with the other db adapters later this week.

Zach