search for: max_merge_docs

Displaying 5 results from an estimated 5 matches for "max_merge_docs".

2006 Jun 02
1
Indexing fails -- _ntc6.tmp exceeds 2 gigabyte maximum
Ferret 0.9.3, Ruby 1.8.2, NOT storing file contents in the index. Only indexing the first 25k of each file. Very large data set (1 million files, 350 GB). Code based on a snippet from David Balmain's forum posts. After 6 hours, Ferret bails out with Ruby "exceeds max file size". Cache: -rw-r--r-- 1 bill bill 2147483647 2006-06-01 22:45 _ntc6.tmp -rw-r--r-- 1 bill bill 1690862924
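
For reference: the usual workaround for this 2 GB per-file limit is to cap how large any merged segment can grow, so no merge ever writes a file that big. A minimal sketch, assuming Ferret's options-hash API; the path and values are hypothetical, not from the post above:

  require 'rubygems'
  require 'ferret'

  # Sketch only: capping :max_merge_docs bounds the size of any merged
  # segment (and of the temporary file written while merging); disabling
  # compound files avoids packing everything into one huge .cfs per segment.
  index = Ferret::Index::Index.new(
    :path              => '/data/big_index',  # hypothetical path
    :max_merge_docs    => 100_000,            # hypothetical cap
    :use_compound_file => false
  )
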
2005 Dec 19
17
Indexing so slow......
I am indexing over 10,000 rows of data. Indexing slows down noticeably at rows 100, 1,000, and 10,000, and over an hour has now passed on row 10,000. How can I make it faster? Here is my code: ================== doc = Document.new doc << Field.new("id", t.id, Field::Store::YES, Field::Index::UNTOKENIZED) doc << Field.new("title", t.title,
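
Slowdowns at exactly rows 100, 1,000, and 10,000 are consistent with segment merges, which Ferret (like Lucene) triggers every merge_factor segments. A minimal sketch of the usual fix, assuming a Ferret version with the options-hash API; `rows` and all values here are hypothetical:

  require 'rubygems'
  require 'ferret'

  # Keep one Index open for the whole run, buffer more documents in RAM,
  # merge less often, and optimize once at the end instead of per row.
  index = Ferret::Index::Index.new(
    :path              => '/tmp/my_index',  # hypothetical path
    :create            => true,
    :merge_factor      => 100,              # merge segments less often
    :max_buffered_docs => 10_000            # flush to disk less often
  )
  rows.each do |t|                          # `rows` stands in for the data source
    index << {:id => t.id.to_s, :title => t.title}
  end
  index.optimize                            # one final merge
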
2007 Jan 18
5
corrupt index immediately after rebuild
Hello, I'm using Ferret and I've just attempted to build an index that contains 15,968,046 documents. I've rebuilt the index from scratch, but when I try to search for some items I get this error: IOError: IO Error occured at <except.c>:79 in xraise Error occured in fs_store.c:289 - fsi_seek_i seeking pos -1284143798: <Invalid argument> This is
2007 Feb 26
7
Problem with large index file
Hello, Ferret created a 4.5 GB index file. $ 4534029210 2007-02-26 12:46 _el.cfs The creation of the index went smoothly. Searching through this index also works fine. However, whenever I try to get the contents of an indexed document I get an error when the document number is above 621108: irb(main):080:0> searcher[621108].load IOError: IO Error occured at <except.c>:79 in xraise
2007 Mar 23
7
Multiple servers for one index
...lock") write_lock.obtain index << {:id => id, :type => ''create_test_type''}\ index.flush write_lock.release [...] but it makes the processes freezes or raise a Ferret::Store::Lock::LockError in my different attempts. I tried to play with IndexWriter options like max_merge_docs, merge_factor... but without success. Maybe there is a way to merge all the Compound files every couple of writes instead of doing it on the fly. Is there a way to achieve my goal? Dave please tell me you have an idea:-P Thanks Seb -- Sebastien Pahl - Rift Technologies spahl at rift.fr -- Po...