Is there any support in ActiveRecord to do LOCK TABLES in MySQL? I have a situation where I need to do a select and update, and I need to write-lock the table so that no updates happen between the select and update. I see there is an execute method for the MySQL adapter, but the documentation does not provide any examples of how it is to be used. Any assistance is appreciated.

Chris
On 9/30/05, Chris Hall <christopher.k.hall-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> I see there is an execute method for the mysql adapter, but the
> documentation does not provide any examples on how it is to be used.

ActiveRecord::Base.connection.execute "YOUR SQL HERE" should do what you want. It will raise an exception if something goes wrong while executing your SQL.

--
Regards,
John Wilger

Alice came to a fork in the road. "Which road do I take?" she asked. "Where do you want to go?" responded the Cheshire cat. "I don't know," Alice answered. "Then," said the cat, "it doesn't matter." - Lewis Carroll, Alice in Wonderland
Chris Hall wrote:
> Is there any support in ActiveRecord to do LOCK TABLES in mysql? I
> have a situation where i need to do a select and update and I need to
> write lock the table so that no updates happen between the select and
> update.

Are you doing this as part of a transaction? What table engine are you using? Do you want to lock one record or the whole table? And do you want to prevent selects as well as updates during the lock?

--
We develop, watch us RoR, in numbers too big to ignore.
> Chris Hall wrote:
>> Is there any support in ActiveRecord to do LOCK TABLES in mysql? I
>> have a situation where i need to do a select and update and I need to
>> write lock the table so that no updates happen between the select and
>> update.

Chris, take a look at the locking selects:
http://dev.mysql.com/doc/mysql/en/innodb-locking-reads.html

Regards,
jeremy
Basically what I am doing is selecting a limited number of records (10) and then updating those records with a user_id to assign ownership. Between the select and update, I can't let those records get reselected by someone else, hence a write lock on the table is necessary. The table is type MyISAM, as I don't require transaction support. I just wasn't sure how I would use ActiveRecord to do the 'LOCK TABLES xyz WRITE' query.

Thanks for the help.

On 9/30/05, Jeremy Kemper <jeremy-w7CzD/W5Ocjk1uMJSBkQmQ@public.gmane.org> wrote:
> Chris, take a look at the locking selects:
> http://dev.mysql.com/doc/mysql/en/innodb-locking-reads.html
>
> Regards,
> jeremy
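[Editor's sketch] The flow Chris describes (write-lock the table, select the batch, stamp it with a user_id, unlock) can be issued through `connection.execute`. This is a hypothetical illustration, not code from the thread: the `boxes` table name, the `assign_batch` helper, the placeholder id list, and the `FakeConnection` stand-in are all my own. The stand-in just records each statement so the ordering can be checked without a MySQL server; in a real app you would pass `ActiveRecord::Base.connection` instead.

```ruby
# Stand-in for ActiveRecord::Base.connection: records each statement
# instead of sending it to MySQL, so the sequence can be inspected.
class FakeConnection
  attr_reader :log

  def initialize
    @log = []
  end

  def execute(sql)
    @log << sql
  end
end

# Hypothetical helper: lock the table, select and assign a batch,
# and always release the lock afterwards.
def assign_batch(conn, user_id)
  conn.execute "LOCK TABLES boxes WRITE"
  begin
    conn.execute "SELECT * FROM boxes WHERE user_id IS NULL LIMIT 10"
    # Placeholder id list; in practice this comes from the select above.
    conn.execute "UPDATE boxes SET user_id = #{user_id} WHERE id IN (1,2,3)"
  ensure
    conn.execute "UNLOCK TABLES"   # released even if the update raises
  end
end

conn = FakeConnection.new
assign_batch(conn, 42)
```

Wrapping the release in `ensure` matters with MyISAM table locks: if the update raises and UNLOCK TABLES never runs, the whole table stays locked for the life of the connection.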
On 9/30/05, Chris Hall <christopher.k.hall-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> basically what I am doing is selecting a limited number of records
> (10) and then updating those records with a user_id to assign
> ownership. between the select and update, i can't let those records
> get reselected by someone else, hence a write lock on the table is
> necessary.

I haven't personally used it before, and I'm not 100% sure it will do what you need, but have you thought about using AR's optimistic locking?
http://api.rubyonrails.com/classes/ActiveRecord/Locking.html

--
Regards,
John Wilger
Jeremy Kemper wrote:
> Chris, take a look at the locking selects:
> http://dev.mysql.com/doc/mysql/en/innodb-locking-reads.html

Note that for MySQL's InnoDB engine:

1. LOCK TABLES only works within a transaction (AUTOCOMMIT=0).
2. LOCK TABLES and UNLOCK TABLES do implicit COMMITs.

So the proper way to lock InnoDB tables in Rails is:

  Model.transaction(records) do
    Model.connection.execute( 'SET AUTOCOMMIT=0' )
    Model.connection.execute( 'LOCK TABLES...' )    # before any transaction code
    [TRANSACTION CODE]
    Model.connection.execute( 'SET AUTOCOMMIT=1' )
  end
  Model.connection.execute( 'UNLOCK TABLES...' )    # after any transaction code and commit

Ideally the Rails transaction code should be changed to use

  SET AUTOCOMMIT=0 ... COMMIT; SET AUTOCOMMIT=1

rather than

  BEGIN ... COMMIT

Then you can leave out the AUTOCOMMIT statements from transaction code that uses table locks. I'm currently using such a mod.

Also note that reads of an InnoDB table under normal MySQL locks are multi-versioned snapshots. If you want to get an exclusive read lock for a counter or semaphore, use the SELECT ... FOR UPDATE statement mentioned in the above link.

--
We develop, watch us RoR, in numbers too big to ignore.
On Sep 30, 2005, at 7:41 PM, Mark Reginald James wrote:
> Note that for MySQL's InnoDB engine:
> 1. LOCK TABLES only works within a transaction (AUTOCOMMIT=0).
> 2. LOCK TABLES and UNLOCK TABLES do implicit COMMITs.
>
> So the proper way to lock InnoDB tables in Rails is:
>
>   Model.transaction(records) do
>     Model.connection.execute( 'SET AUTOCOMMIT=0' )
>     Model.connection.execute( 'LOCK TABLES...' )    # before any transaction code
>     [TRANSACTION CODE]
>     Model.connection.execute( 'SET AUTOCOMMIT=1' )
>   end
>   Model.connection.execute( 'UNLOCK TABLES...' )    # after any transaction code and commit

BEGIN does an implicit UNLOCK TABLES, so the above code will not do what you expect.

> Ideally the Rails transaction code should be changed to use
>
>   SET AUTOCOMMIT=0 ... COMMIT; SET AUTOCOMMIT=1
>
> rather than
>
>   BEGIN ... COMMIT
>
> Then you can leave out the AUTOCOMMIT statements from transaction
> code that uses table locks. I'm currently using such a mod.

BEGIN also does an implicit SET AUTOCOMMIT=0, so the extra statements are superfluous.

Why are you using table locks? You are sabotaging InnoDB's deadlock detection. Perhaps setting the transaction isolation level to SERIALIZABLE or using locking reads would be more appropriate.

Regards,
jeremy
Jeremy Kemper wrote:
> On Sep 30, 2005, at 7:41 PM, Mark Reginald James wrote:
>> So the proper way to lock InnoDB tables in Rails is:
>>   Model.transaction(records) do
>>     Model.connection.execute( 'SET AUTOCOMMIT=0' )
>>     Model.connection.execute( 'LOCK TABLES...' )    # before any transaction code
>>     [TRANSACTION CODE]
>>     Model.connection.execute( 'SET AUTOCOMMIT=1' )
>>   end
>>   Model.connection.execute( 'UNLOCK TABLES...' )    # after any transaction code and commit
>
> BEGIN does an implicit UNLOCK TABLES, so the above code will not do
> what you expect.

LOCK TABLES is after the BEGIN. And the implicit UNLOCK TABLES only affects the current connection, so transactions begun by other connections won't release the locks.

> BEGIN also does an implicit SET AUTOCOMMIT=0, so the extra statements
> are superfluous.

Using BEGIN, a LOCK TABLES statement will COMMIT and set AUTOCOMMIT back to 1, preventing the transaction statements that follow from being processed together. Using an explicit SET AUTOCOMMIT=0 keeps the transaction open until the end.

> Why are you using table locks? You are sabotaging InnoDB's deadlock
> detection. Perhaps setting transaction isolation level to serializable
> or using locking reads would be more appropriate.

At present I'm only using SELECT...FOR UPDATE locks to do atomic processing based on a counter value. I'll read up on transaction isolation levels. Thanks.

--
We develop, watch us RoR, in numbers too big to ignore.
OK, I've changed my table from MyISAM to InnoDB. I've experimented with SELECT ... FOR UPDATE to replace my LOCK TABLES ... WRITE implementation with the MyISAM table, but it doesn't appear to do what I want. Perhaps I am doing something wrong here, so let me explain again why I am using LOCK TABLES ... WRITE, and ask how I might use SELECT ... FOR UPDATE to accomplish what I need to do.

I have to select a limited number of rows (10) from the table, meeting certain conditions. This batch of rows is then assigned to a user for ownership of the data:

  picks = Pick.find_by_sql("select * from table where ... limit 10")
  for pick in picks
    pick.update_attribute(:user_id, @session[:user].id)
  end

With no locking, there exists the possibility that another user can select those same records before the first user can update them. I can't let this happen.

Under my original implementation, using a MyISAM table, I just issued a "LOCK TABLES ... WRITE", which locked the entire table. I then ran my select/update, then issued an "UNLOCK TABLES" query. Works well, except for the fact that it locks the entire table. To test this, I opened 2 client connections and everything worked: if I lock the table in client A, then client B waits to do the select until client A unlocks the table.

With the new approach using InnoDB and "SELECT ... FOR UPDATE", I tried the same thing. I opened 2 client connections. In client A I issued "select * from table where ... limit 10 for update". Then in client B I issued the same query. I would expect client B to select a different set of rows, but that was not the case; it selected the exact same set of rows. So perhaps I am doing something wrong. Any help would be appreciated.
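[Editor's sketch] The race Chris is worried about can be replayed deterministically in plain Ruby, with an in-memory stand-in instead of a database (the `Box` struct and the row counts are my own, purely illustrative): if both clients run their SELECT before either runs its UPDATE, they end up holding the same ten rows.

```ruby
Box = Struct.new(:id, :user_id)
boxes = (1..20).map { |i| Box.new(i, nil) }

# Stand-in for "select * from table where user_id is null limit 10"
select_batch = -> { boxes.select { |b| b.user_id.nil? }.first(10) }

picks_a = select_batch.call          # client A selects boxes 1..10
picks_b = select_batch.call          # client B selects before A updates: same boxes 1..10
picks_a.each { |b| b.user_id = 1 }   # client A assigns ownership
picks_b.each { |b| b.user_id = 2 }   # client B clobbers the very same rows
```

Both pick lists cover ids 1 through 10, and client B's assignment silently overwrites client A's, which is exactly why some lock (or an atomic claim) is needed between the select and the update.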
Chris
On Oct 3, 2005, at 3:21 AM, Mark Reginald James wrote:
> LOCK TABLES is after the BEGIN. And the implicit UNLOCK TABLES
> only affects the current connection, so transactions begun by
> other connections won't release the locks.

My bad! My eyes deceived me; I missed the Model.transaction entirely, thinking the BEGIN ... END was in [TRANSACTION CODE].

> Using BEGIN, a LOCK TABLES statement will COMMIT and set
> AUTOCOMMIT back to 1, preventing the transaction statements
> that follow from being processed together.
>
> Using an explicit SET AUTOCOMMIT=0 keeps the transaction open
> until the end.

That seems contrary to the docs, but I'll test it out. Thanks.

> At present I'm only using SELECT...FOR UPDATE locks to do atomic
> processing based on a counter value. I'll read up on transaction
> isolation levels. Thanks.

A big MySQL gotcha is that InnoDB transaction isolation defaults to REPEATABLE READ. Most folks expect READ COMMITTED.

Best,
jeremy
Chris Hall wrote:
> with the new approach using Innodb and "SELECT ... FOR UPDATE", i
> tried the same thing. I opened 2 client connections. in client A i
> issued "select * from table where ... limit 10 for update". then in
> client B issued the same query. i would expect client B to select a
> different set of rows, but that was not the case, it selected the
> exact same set of rows. so perhaps i am doing something wrong.

InnoDB will only set locks if AUTOCOMMIT=0. For each connection, execute 'BEGIN' or 'SET AUTOCOMMIT=0;' before running the SELECT ... FOR UPDATE query.

--
We develop, watch us RoR, in numbers too big to ignore.
On Oct 3, 2005, at 7:33 AM, Chris Hall wrote:
> OK, i've changed my table from MyISAM to Innodb. I've experimented
> with SELECT ... FOR UPDATE to replace my LOCK TABLES ... WRITE
> implementation with the MyISAM table but it doesn't appear to do what
> i want to do.

SELECT ... FOR UPDATE takes exclusive row locks on all the rows that are selected by your query, so you should be able to achieve the desired result.

> I have to select a limited number of rows (10) from the table, meeting
> certain conditions. this batch of rows are then assigned to a user
> for ownership of the data.
>
>   picks = Pick.find_by_sql("select * from table where ... limit 10")
>   for pick in picks
>     pick.update_attribute(:user_id, @session[:user].id)
>   end

For what it's worth, in PostgreSQL I'd do

  Pick.connection.update <<-end_sql
    UPDATE table
    SET user_id = #{session[:user].id}
    WHERE id IN (
      SELECT id FROM table
      WHERE ...
      LIMIT 10
    )
  end_sql

You can do the same with MySQL, but, as far as I know, MySQL won't use indexes for the subselect. Do an EXPLAIN ... to verify.

> with the new approach using Innodb and "SELECT ... FOR UPDATE", i
> tried the same thing. I opened 2 client connections. ... i would
> expect client B to select a different set of rows, but that was not
> the case, it selected the exact same set of rows.

MySQL's default transaction isolation level is REPEATABLE READ. In this case you want READ COMMITTED. In client A, do:

  begin;
  set transaction isolation level read committed;
  select ... for update;
  update ...;

Then in client B:

  begin;
  set transaction isolation level read committed;
  select ... for update;

This select will hang, waiting for client A to commit. Then in client A:

  commit;

and client B's select will return with the updated values. You can set the READ COMMITTED isolation level globally so you don't have to do it for every transaction.

Best,
jeremy
On Oct 3, 2005, at 8:26 AM, Jeremy Kemper wrote:
> On Oct 3, 2005, at 3:21 AM, Mark Reginald James wrote:
>> Using BEGIN, a LOCK TABLES statement will COMMIT and set
>> AUTOCOMMIT back to 1, preventing the transaction statements
>> that follow from being processed together.
>>
>> Using an explicit SET AUTOCOMMIT=0 keeps the transaction open
>> until the end.
>
> That seems contrary to the docs, but I'll test it out. Thanks.

Verified: http://dev.mysql.com/doc/mysql/en/innodb-deadlocks.html

Regards,
jeremy
Jeremy Kemper wrote:
> On Oct 3, 2005, at 3:21 AM, Mark Reginald James wrote:
>> Using BEGIN, a LOCK TABLES statement will COMMIT and set
>> AUTOCOMMIT back to 1, preventing the transaction statements
>> that follow from being processed together.
>>
>> Using an explicit SET AUTOCOMMIT=0 keeps the transaction open
>> until the end.
>
> That seems contrary to the docs, but I'll test it out. Thanks.

My statement was just an untested interpretation of the consequences of LOCK TABLES doing an implicit commit. However, I have found that SELECT...FOR UPDATE works with BEGIN, so this statement must not do an implicit COMMIT, despite setting locks. The following quote from http://dev.mysql.com/doc/mysql/en/innodb-restrictions.html suggests the LOCK TABLES statement is another story:

"Sometimes it would be useful to lock further tables in the course of a transaction. Unfortunately, LOCK TABLES in MySQL performs an implicit COMMIT and UNLOCK TABLES. An InnoDB variant of LOCK TABLES has been planned that can be executed in the middle of a transaction."

> A big MySQL gotcha is that InnoDB transaction isolation defaults to
> REPEATABLE READ. Most folks expect READ COMMITTED.

Yep, reading up on that now at http://dev.mysql.com/doc/mysql/en/innodb-transaction-isolation.html. There's a comment there from a stung Hibernate user.

--
We develop, watch us RoR, in numbers too big to ignore.
Doing this locks the whole table. I was under the impression that using SELECT...FOR UPDATE basically tells MySQL "Hey, I'm selecting these rows in order to update them, don't let anyone else select them until I'm done."

  client A: BEGIN;
  client A: select * from xyz where user_id is NULL limit 10 for update;  # gets 10 records
  client B: BEGIN;
  client B: select * from xyz where user_id is NULL limit 10 for update;  # waits for client A to commit/rollback
  client A: update xyz set user_id = 1 where id = x;  # update each row from select query
  client A: commit;
  client B: finally gets its results
  ...

This is exactly what happens when I was doing it with the MyISAM and LOCK TABLE ... WRITE method. Setting the global transaction isolation level to READ COMMITTED or REPEATABLE READ makes no difference.

So how would I get client B to retrieve a different set of rows immediately, rather than wait for client A to commit/rollback? Again, I apologize if this is getting long-winded; I just want to make sure I understand and am choosing the correct way to implement this.

On 10/3/05, Mark Reginald James <mrj-bzGI/hKkdgQnC9Muvcwxkw@public.gmane.org> wrote:
> InnoDB will only set locks if AUTOCOMMIT=0. For each connection execute
> 'BEGIN' or 'SET AUTOCOMMIT=0;' before running the SELECT ... FOR UPDATE
> query.
Chris Hall wrote:
> So how would I get client B to retrieve a different set of rows
> immediately rather than wait for client A to commit/rollback?

It looks like MySQL/InnoDB is not smart enough to try to get its quota of 10 from other records when the first lot it comes across are read-locked.

You'll have to follow the SELECT...FOR UPDATE with an update of the user_ids to get exclusive ownership of the group of ten. Then you can take your sweet time processing them.

For extra speed you may be able to do something with random or sequenced select offsets to reduce the chance of collision.

--
We develop, watch us RoR, in numbers too big to ignore.
On Oct 3, 2005, at 9:35 AM, Chris Hall wrote:
> Doing this locks the whole table. I was under the impression that
> using SELECT...FOR UPDATE basically tells mysql "Hey, i'm selecting
> these rows in order to update them, don't let anyone else select them
> until i'm done."

It says "don't let anyone else *read* them until I commit." MySQL isn't going to give you a different set of 10 records on the second select.

This does not lock the whole table: it locks those rows touched by the index scan for the query. If there is no index on the field in your where clause, MySQL does a full table scan, and thus every row is locked since every row is touched. It is crucial to correctly index your tables to reduce lock contention.

Note that I was wrong about READ COMMITTED. All you need is the SELECT ... FOR UPDATE in a transaction to take an exclusive lock on the affected rows.

Best,
jeremy
Still the same result.

  client A: BEGIN;
  client A: select * from xyz where ... order by ... limit 10 for update;
  client A: update xyz set user_id = [x] where id in ([list of id column values]);
  client B: BEGIN;
  client B: select * from xyz where ... order by ... limit 10 for update;
  client B: (waits)
  client A: commit;
  client B: (gets different rowset)
  ...

After doing some research on using LIMIT with FOR UPDATE, it appears that LIMIT acts after FOR UPDATE. That is, say I have 50 matches to my query: all 50 matches get locked by FOR UPDATE even though I limit the results to 10. Client B matches the same 50, so it has to wait until they are unlocked. At least this is the way I am understanding it.

I can test this by using client B to update a row that IS NOT part of the limit-10 rows returned to client A: client B will still wait to update until client A closes its transaction.

On 10/3/05, Mark Reginald James <mrj-bzGI/hKkdgQnC9Muvcwxkw@public.gmane.org> wrote:
> It looks like MySQL/InnoDB is not smart enough to try to
> get its quota of 10 from other records when the first
> lot it comes across are read-locked.
>
> You'll have to follow the SELECT...FOR UPDATE with an update of
> the user_ids to get exclusive ownership of the group of
> ten. Then you can take your sweet time processing them.
Chris Hall wrote:
> still the same result.
>
> client A: BEGIN;
> client A: select * from xyz where ... order by ... limit 10 for update;
> client A: update xyz set user_id = [x] where id in ([list of id column values]);
> client B: BEGIN;
> client B: select * from xyz where ... order by ... limit 10 for update;
> client B: (waits)
> client A: commit;
> client B: (gets different rowset)
>
> after doing some research on using LIMIT with FOR UPDATE, it appears
> that LIMIT acts after FOR UPDATE... ie, say i have 50 matches to my
> query. all 50 matches get locked by FOR UPDATE even though I limit the
> results to 10. client B matches the same 50, so it has to wait until
> they are unlocked.

Yes, as Jeremy said, the lock is taken on the index, and so will lock all the user=null records, preventing any of them from being selected.

Is it not sufficiently fast to just commit immediately after the update, then begin a new transaction on the user-locked records?

--
We develop, watch us RoR, in numbers too big to ignore.
Mark,

Basically what happens is this: the records in the table represent boxes in a warehouse that are to be retrieved off the shelf for shipping. A number of users (fewer than 10) sign in to the app, and they are each assigned a 'batch' of 10 records at a time. They then go out and retrieve those boxes off the shelf. When they are done, they get another batch, and so on. Obviously I can't have users getting the same batch of records, as they would all be going out to get the same boxes; hence why I need a locking mechanism. There may be upwards of 1000 boxes to be picked on any given day.

So what is happening is that all 1000 rows are being locked, 10 are being assigned ownership, and then the remaining unassigned records are available for the others to select/update after a transaction is completed. What I would really like is a way to lock only those 10 records, but I'm seeing that this cannot be done, because the 'limit 10' only limits what is returned from the query: the query matches 1000 rows, so that's what gets locked. And yes, I've indexed all the columns I'm searching against.

Chris

> Yes, as Jeremy said, the lock is taken on the index, and so will
> lock all the user=null records, preventing any of them from being
> selected.
>
> Is it not sufficiently fast to just commit immediately after the
> update, then begin a new transaction on the user-locked records?
On Oct 4, 2005, at 4:26 AM, Chris Hall wrote:
> basically what happens is this. the records in the table represent
> boxes in a warehouse that are to be retrieved off the shelf for
> shipping. a number of users (less than 10) sign in to the app and
> they are each assigned a 'batch' of 10 records at a time. they then
> go out and retrieve those boxes off the shelf. when they are done,
> they get another batch, and so on.
>
> obviously i can't have users getting the same batch of records as they
> would all be going out to get the same boxes, hence why i need a
> locking mechanism.

  # create table boxes ( ..., picker_id integer, ... );
  class Box < ActiveRecord::Base
    belongs_to :picker, :class_name => 'User', :foreign_key => 'picker_id'

    def self.find_pickable(options = {})
      find_all_by_picker_id(nil, options)
    end

    def picked!
      self.picker = nil
      save!
    end
  end

  class User < ActiveRecord::Base
    has_many :picks, :class_name => 'Box', :foreign_key => 'picker_id'

    def pick!(limit = 10)
      transaction do
        # Go single file so multiple users don't pick the same boxes.
        User.connection.execute 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE'
        picks.clear
        picks << Box.find_pickable(:limit => limit)
      end
    end
  end

  class PickerController < ApplicationController
    def index
      if current_user.picks.empty?
        finished_picking
      else
        @picks = current_user.picks
      end
    end

    def finished_picking
      current_user.pick!
      redirect_to :action => 'index'
    end
  end

Now, realistically, do your users have 10 separate terminals? Or do they have one terminal and print out their pick list? In other words, this locking/isolation is very useful knowledge, but is perhaps of little relevance to your application.

Best,
jeremy
I would reverse the order of the sql statements:

- update todos set user_id = ? where user_id is null limit 10
- select * from todos where user_id = ?

That way you have no locking issues: the records are assigned to the user before you select them. I don't know if you can do a LIMIT on an update, but if you can't, you should be able to do a simple subselect like:

- update todos set user_id = ? where id in (select id from todos where user_id is null limit 10)
- select * from todos where user_id = ?

Am I in left field on this?

On 9/30/05, Chris Hall <christopher.k.hall-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> basically what I am doing is selecting a limited number of records
> (10) and then updating those records with a user_id to assign
> ownership. between the select and update, i can't let those records
> get reselected by someone else, hence a write lock on the table is
> necessary. the table is type myisam, as i don't require transaction
> support.
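[Editor's sketch] The claim-first idea above can be checked with an in-memory simulation (my own stand-in, no database involved): as long as the "take up to 10 unowned rows and stamp them with a user_id" step is atomic, which the single UPDATE gives you server-side and a Mutex gives us here, concurrent pickers can never receive overlapping batches.

```ruby
Box = Struct.new(:id, :user_id)

class Warehouse
  def initialize(count)
    @boxes = (1..count).map { |i| Box.new(i, nil) }
    @lock  = Mutex.new   # plays the role of the atomic UPDATE
  end

  # Stand-in for:
  #   update todos set user_id = ? where user_id is null limit 10
  #   select * from todos where user_id = ?
  def claim_batch(user_id, limit = 10)
    @lock.synchronize do
      batch = @boxes.select { |b| b.user_id.nil? }.first(limit)
      batch.each { |b| b.user_id = user_id }   # assign ownership atomically
      batch
    end
  end
end

wh = Warehouse.new(50)

# Three pickers claim concurrently; Thread#value joins and returns each batch.
batches = [1, 2, 3].map { |uid| Thread.new { wh.claim_batch(uid) } }.map(&:value)
all_ids = batches.flatten.map(&:id)
```

Each picker gets a full batch of 10 and no box id appears in two batches, regardless of thread scheduling, because claiming is a single atomic step.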
--
Brock Weaver
/* you are not expected to understand this */

_______________________________________________
Rails mailing list
Rails-1W37MKcQCpIf0INCOvqR/iCwEArCW2h5@public.gmane.org
http://lists.rubyonrails.org/mailman/listinfo/rails
Brock Weaver wrote:
> I would reverse the order of the sql statements:
>
> - update todos set user_id = ? where user_id is null limit 10

Yes, I think such updates are atomic, so this should do the trick. You only have to use SELECT...FOR UPDATE if you need to do more complex processing based on what you read.

The query could be coded in Rails using AR's update_all method:

  num_boxes = Box.update_all( "user_id = #{@user.id}", 'user_id IS NULL LIMIT 10' )

--
We develop, watch us RoR, in numbers too big to ignore.
Jeremy Kemper wrote:
> def pick!(limit = 10)
>   transaction do
>     # Go single file so multiple users don't pick the same boxes.
>     User.connection.execute 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE'
>     picks.clear
>     picks << Box.find_pickable(:limit => limit)
>   end
> end

Is making transactions serial enough here? The records will only be locked in share mode, so other connections can still pick the same records.

--
We develop, watch us RoR, in numbers too big to ignore.
Unfortunately I cannot do an "update ... order by ... limit", because the query involves multiple tables and MySQL does not allow this. From http://dev.mysql.com/doc/mysql/en/update.html: "Note: You cannot use ORDER BY or LIMIT with multiple-table UPDATE."

Also, I cannot use subqueries, as the production database is MySQL 4.0 (subqueries are only available in 4.1+). So I am limited in what I can do.

I appreciate all the help and advice on this. I think I will stick with the transactions and SELECT...FOR UPDATE method; it's definitely a better choice than MyISAM and LOCK TABLES.

Chris

On 10/4/05, Mark Reginald James <mrj-bzGI/hKkdgQnC9Muvcwxkw@public.gmane.org> wrote:
> Yes, I think such updates are atomic, so this should do the
> trick. You only have to use SELECT...FOR UPDATE if you
> need to do more complex processing based on what you read.
>
> The query could be coded in Rails using AR's update_all method:
>
>   num_boxes = Box.update_all( "user_id = #{@user.id}", 'user_id IS NULL LIMIT 10' )