Alexandre Beletti Ferreira Higuita
2006-Jan-13 09:21 UTC
[Xapian-discuss] Trying to use db.reopen()
I'm in trouble... I read this message: http://thread.gmane.org/gmane.comp.search.xapian.general/2089 It explains the problem and how to deal with it, but I didn't understand how to use db.reopen(). How does it work? Where do I have to put it? I'm trying to modify omega.cc to deal with this, but I don't know where to put the db.reopen() call. Maybe after the db.add_database(Xapian::Database(.........))? Please, can someone help me?

Thanks,
Alexandre (Higuita)
On Thu, Jan 12, 2006 at 11:10:42AM -0200, Alexandre Beletti Ferreira Higuita wrote:
> http://thread.gmane.org/gmane.comp.search.xapian.general/2089
> This explain the problem and how to deal with this, but I didn't
> understand how to use db.reopen(). How it works? Where I have to put
> it?

You need to call it *in response to* catching a DatabaseModifiedError exception, and then retry the operation which threw the exception (or the whole series of operations, if inconsistent state is an issue).

But as I said in the thread you point to, this isn't the best way to address the issue in Omega, and I really meant that! If you're getting this error with Omega, you're feeding in updates at such a rate that a search takes longer than twice the time between updates. In this situation, retrying the search is hazardous: it increases the load on the server, which can lead to concurrent searches taking longer, so more and more of them fail to complete before two updates go in, and the load spirals up and up. Muscat 3.6 had a similar issue, and I've seen search systems built on it which retried in this situation get into exactly that unusable state. That's why I didn't implement this approach in Omega.

The better way to address this is to batch updates and apply them every few minutes. Then a search will never trigger the error unless the server is under insane load, at which point you don't want to be retrying searches anyway. Perhaps Omega should catch the error and give the user a more useful message, such as "excessive search load".

This whole situation will go away when I finish implementing the new flint backend. There a reader will be able to signal that it's using a particular B-tree revision, and the writer will then know not to discard that revision until the reader has finished using it.

Cheers,
Olly
> From: Olly Betts <olly@survex.com>
> This whole situation will go away when I finish implementing the new
> flint backend. There a reader will be able to signal that it's using a
> particular B-tree revision, and the writer will then know not to discard
> that revision until the reader has finished using it.

Does this mean it will be safe to have N readers and 1 writer accessing the index concurrently without extra locking?

Fabrice