Folks,

I have many (~150) Perl spider processes running concurrently, gathering data from targeted URLs. They use MySQL to maintain state, and locking only comes into play if two spiders happen to pick the same URL to process.

I want to use omindex to add data to the index, which is built from the source HTML each spider downloads near the start of its run. However, each omindex process creates a db_lock file before modifying the index, so all the other processes are locked out. It would also be nice to be able to search the index while it is being updated and growing; it seems the db_lock does not prevent searching.

What is a better way to do this than waiting for each omindex to exit? One thought is to modify omindex into a server that accepts IPC connections over a named pipe or socket: the spiders would connect, send their data over, and close the connection (rough sketch in the P.S. below). Has this been done anywhere already, so that I need not write it?

Thanks,
OSC
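
P.S. To make the idea concrete, here is a rough, untested sketch of a single-writer indexing daemon using the Search::Xapian Perl bindings rather than modifying omindex itself. The socket path, database path, the trivial line protocol (first line is the URL, the rest of the stream is the extracted text), and the 'U' term prefix are all my own assumptions for illustration, not anything omindex does.

#!/usr/bin/perl
# Untested sketch: a single daemon owns the Xapian WritableDatabase,
# so only this process ever takes db_lock.  Socket path, db path and
# the one-line protocol are made up for illustration.
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use Search::Xapian;

my $sock_path = '/tmp/spider-indexer.sock';   # assumption
my $db_path   = '/home/spider/xapian-db';     # assumption

unlink $sock_path;
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $sock_path,
    Listen => 16,
) or die "cannot listen on $sock_path: $!";

my $db = Search::Xapian::WritableDatabase->new(
    $db_path, Search::Xapian::DB_CREATE_OR_OPEN);

my $tg = Search::Xapian::TermGenerator->new();
$tg->set_stemmer(Search::Xapian::Stem->new('english'));

my $added = 0;
while (my $client = $server->accept()) {
    my $url = <$client>;                    # protocol: first line is the URL
    next unless defined $url;
    chomp $url;
    my $text = do { local $/; <$client> };  # the rest is the extracted text
    $text = '' unless defined $text;
    close $client;

    my $doc = Search::Xapian::Document->new();
    $doc->set_data($url);
    $doc->add_term('U' . $url);             # unique term per URL (very long URLs would need hashing)
    $tg->set_document($doc);
    $tg->index_text($text);
    $db->replace_document_by_term('U' . $url, $doc);

    # flush now and then so concurrent searchers see the new documents
    $db->flush() if ++$added % 100 == 0;
}

Because only this one process ever opens the WritableDatabase, the db_lock contention goes away, and the periodic flush() should let searches see new documents while the index grows. Each spider would then just open the socket, print the URL and the page text, and close the connection.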