Displaying 4 results from an estimated 4 matches for "use_compound_file".
2005 Dec 19 · 17 replies · Indexing so slow......
I am indexing over 10,000 rows of data. It slows down sharply around
rows 100, 1,000, and 10,000, and more than an hour has now passed on
row 10,000.
How can I make it faster?
Here is my code:
==================
doc = Document.new
doc << Field.new("id", t.id, Field::Store::YES,
Field::Index::UNTOKENIZED)
doc << Field.new("title", t.title,
2006 Oct 11 · 0 replies · Memory allocation bug with index.search
...help you, I'm trying to reproduce it in a little
code...
Ah, just another detail, this is how our index is created:
INDEX_OPTIONS = {
  :path              => path,
  :auto_flush        => false,      # don't flush after every document
  :max_buffer_memory => 0x4000000,  # buffer up to 64 MiB before flushing
  :max_buffered_docs => 100000,     # or up to 100,000 buffered documents
  :use_compound_file => false }     # one file per field, no compound segments
INDEX = Ferret::Index::Index.new(INDEX_OPTIONS)
but I tried without any of the max_* options and it's the same...
Tell me what I can do to help if I don't manage to reproduce it in
pastable code.
Thanks in advance,
Regards,
Jeremie 'ahFeel' BORDIER
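A minimal sketch of the kind of pastable reproduction being asked for,
purely hypothetical: open the index with the same options and search in a
tight loop while watching the process's memory use. The field name and
query terms are made up.

require 'rubygems'
require 'ferret'

index = Ferret::Index::Index.new(INDEX_OPTIONS)

# Hypothetical search loop: if memory climbs steadily here, the bug
# is reproducible without the rest of the application.
10_000.times do |i|
  index.search_each("body:term#{i % 100}") { |doc_id, score| }
end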
2007 Mar 12 · 2 replies · Too many open files error
Hi Dave,
I just stumbled across a new error I haven't seen before :)
caught error inside loop: IO Error occured at <except.c>:93 in xraise
Error occured in fs_store.c:264 - fs_new_output
couldn't create OutStream /var/www/localhost/rails/current/
script/backgroundrb/../../config/../db/ferret.index.production/
_jei_0.f0: <Too many open files>
my ulimit is
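The excerpt ends mid-sentence, but the error itself is the classic symptom
of a multi-file index exceeding the process's open file descriptor limit.
A hedged sketch of one mitigation, not from the thread: leave
:use_compound_file at its default of true, so each segment is stored as a
single compound file rather than one file per field, and optimize to merge
segments. The path is a placeholder.

require 'rubygems'
require 'ferret'

# Compound segments keep the number of open files per segment small.
index = Ferret::Index::Index.new(
  :path              => '/path/to/ferret.index',  # placeholder
  :use_compound_file => true                      # the Ferret default
)
index.optimize  # merging segments further reduces the file count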
2007 Nov 16 · 18 replies · Multithreading / multiprocessing woes
I've been running some multithreaded tests on Ferret. Using a single
Ferret::Index::Index inside a DRb server, it definitely behaves for me
as if all readers are locked out of the index while writing is going on
in that index, not just during optimization -- at least when segment
merging happens, which is when the writes take the longest and you can
therefore least afford to lock out all reads.
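A minimal sketch of one way to work around the behaviour described,
assuming it holds: give each searching process its own point-in-time
IndexReader instead of sharing the writing Index object, and reopen it
after writes. The path, field, and term are placeholders.

require 'rubygems'
require 'ferret'

INDEX_PATH = '/path/to/index'  # placeholder

# A reader opened here sees a snapshot of the index, so searches keep
# running while another process writes or merges segments.
reader   = Ferret::Index::IndexReader.new(INDEX_PATH)
searcher = Ferret::Search::Searcher.new(reader)

query = Ferret::Search::TermQuery.new(:title, "ferret")  # placeholder
searcher.search_each(query) do |doc_id, score|
  puts "doc #{doc_id} scored #{score}"
end

# Reopen after the writer flushes to pick up new documents.
searcher.close
reader.close
reader   = Ferret::Index::IndexReader.new(INDEX_PATH)
searcher = Ferret::Search::Searcher.new(reader)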