Dear All,

I am benchmarking NFS against GlusterFS, running Iozone in multi-thread
(throughput) mode. From the test I concluded that GlusterFS performs better
than NFS with a single server and a single client at a 100 MB file size.
Any comments or ideas? Is this correct?

Here I am using a 100 MB file size and a 128 KB record size, and at that
size GlusterFS performs better than NFS. But when I test with a 128 KB file
size and a 4 KB record size, NFS performs better than GlusterFS. What is
the reason for that? Does GlusterFS perform better only with large file
sizes?

==============================================
NFS performance in Iozone throughput mode:

[root at localhost nfs]# iozone -R -t 5 -r 128K -s 100M +-n -i 0 -i 1
    Iozone: Performance Test of File I/O
            Version $Revision: 3.239 $
            Compiled for 32 bit mode.
            Build: linux

    Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
                  Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
                  Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                  Randy Dunlap, Mark Montague, Dan Million,
                  Jean-Marc Zucconi, Jeff Blomberg,
                  Erik Habbinga, Kris Strecker, Walter Wong.

    Run began: Mon Jan 12 20:29:37 2009

    Excel chart generation enabled
    Record Size 128 KB
    File size set to 102400 KB
    Command line used: iozone -R -t 5 -r 128K -s 100M -i 0 -i 1 +-n
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 5 processes
    Each process writes a 102400 Kbyte file in 128 Kbyte records

    Children see throughput for 5 initial writers =  9269.31 KB/sec
    Parent sees throughput for 5 initial writers  =  8506.71 KB/sec
    Min throughput per process                    =  1802.63 KB/sec
    Max throughput per process                    =  1914.05 KB/sec
    Avg throughput per process                    =  1853.86 KB/sec
    Min xfer                                      = 96384.00 KB

    Children see throughput for 5 rewriters       =  7869.36 KB/sec
    Parent sees throughput for 5 rewriters        =  7533.62 KB/sec
    Min throughput per process                    =  1499.05 KB/sec
    Max throughput per process                    =  1650.54 KB/sec
    Avg throughput per process                    =  1573.87 KB/sec
    Min xfer                                      = 93312.00 KB

    Children see throughput for 5 readers         =  8779.71 KB/sec
    Parent sees throughput for 5 readers          =  8753.18 KB/sec
    Min throughput per process                    =  1741.74 KB/sec
    Max throughput per process                    =  1781.11 KB/sec
    Avg throughput per process                    =  1755.94 KB/sec
    Min xfer                                      = 100608.00 KB

    Children see throughput for 5 re-readers      =  8785.78 KB/sec
    Parent sees throughput for 5 re-readers       =  8765.13 KB/sec
    Min throughput per process                    =  1732.58 KB/sec
    Max throughput per process                    =  1788.77 KB/sec
    Avg throughput per process                    =  1757.16 KB/sec
    Min xfer                                      = 99584.00 KB

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 128 Kbytes"
"Output is in Kbytes/sec"

"  Initial write "  9269.31
"        Rewrite "  7869.36
"           Read "  8779.71
"        Re-read "  8785.78

iozone test complete.

=====================================================
GlusterFS performance in Iozone throughput mode:

[root at localhost glusterfs]# iozone -R -t 5 -r 128K -s 100M +-n -i 0 -i 1
    Iozone: Performance Test of File I/O
            Version $Revision: 3.239 $
            Compiled for 32 bit mode.
            Build: linux

    Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
                  Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
                  Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                  Randy Dunlap, Mark Montague, Dan Million,
                  Jean-Marc Zucconi, Jeff Blomberg,
                  Erik Habbinga, Kris Strecker, Walter Wong.
    Run began: Mon Jan 12 20:35:56 2009

    Excel chart generation enabled
    Record Size 128 KB
    File size set to 102400 KB
    Command line used: iozone -R -t 5 -r 128K -s 100M -i 0 -i 1 +-n
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 5 processes
    Each process writes a 102400 Kbyte file in 128 Kbyte records

    Children see throughput for 5 initial writers = 11305.49 KB/sec
    Parent sees throughput for 5 initial writers  = 10658.86 KB/sec
    Min throughput per process                    =  2113.47 KB/sec
    Max throughput per process                    =  2320.99 KB/sec
    Avg throughput per process                    =  2261.10 KB/sec
    Min xfer                                      = 93312.00 KB

    Children see throughput for 5 rewriters       = 10245.67 KB/sec
    Parent sees throughput for 5 rewriters        = 10052.28 KB/sec
    Min throughput per process                    =  1984.79 KB/sec
    Max throughput per process                    =  2150.17 KB/sec
    Avg throughput per process                    =  2049.13 KB/sec
    Min xfer                                      = 94592.00 KB

    Children see throughput for 5 readers         = 10172.49 KB/sec
    Parent sees throughput for 5 readers          = 10124.65 KB/sec
    Min throughput per process                    =  1128.39 KB/sec
    Max throughput per process                    =  2271.28 KB/sec
    Avg throughput per process                    =  2034.50 KB/sec
    Min xfer                                      = 51072.00 KB

    Children see throughput for 5 re-readers      = 10253.22 KB/sec
    Parent sees throughput for 5 re-readers       = 10193.55 KB/sec
    Min throughput per process                    =  1137.78 KB/sec
    Max throughput per process                    =  2290.00 KB/sec
    Avg throughput per process                    =  2050.64 KB/sec
    Min xfer                                      = 51072.00 KB

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 128 Kbytes"
"Output is in Kbytes/sec"

"  Initial write " 11305.49
"        Rewrite " 10245.67
"           Read " 10172.49
"        Re-read " 10253.22

iozone test complete.

Thanks for your time.

Thanks
L.Mohan
Check back through some of the previous messages on this list about
performance. Also, which version of Gluster you use makes a difference.

As for smaller block sizes being a performance issue, I think FUSE is
mostly the problem with small block sizes. I believe 4 KB is FUSE's block
size, so that should be OK for testing.

I also believe Gluster 2.0 is much faster with small files than 1.3, so you
should get better results with 2.0. If you benchmarked with 1.3, please try
again with 2.0 and let us know your results.

At 02:09 AM 1/12/2009, mohan L wrote:
>Dear All,
>
>I am benchmarking NFS against GlusterFS, running Iozone in multi-thread
>mode. From the test I concluded that GlusterFS performs better than NFS
>with a single server and a single client at a 100 MB file size. But when
>I test with a 128 KB file size and a 4 KB record size, NFS performs
>better than GlusterFS. What is the reason for that? Does GlusterFS
>perform better only with large file sizes?
>
>[Iozone output snipped]
>
>Thanks for your time.
>
>Thanks
>L.Mohan
On Mon, Jan 12, 2009 at 6:57 PM, Keith Freedman <freedman at freeformit.com> wrote:

> check back to some of the previous messages in this list about performance.

OK, I will check those too.

> also, which version of Gluster you use makes a difference.

I am using GlusterFS version glusterfs 1.4.0rc7 with FUSE version
fuse-2.7.3glfs10. My client and server machines each have 384 MB of RAM
(128 MB + 256 MB).

This is the volume file I am using for benchmarking:

### file: client-volume.vol.sample

### Add client feature and attach to remote subvolume
volume client
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.167
  option remote-subvolume brick
end-volume

### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 128kB
  option page-count 4
  subvolumes client
end-volume

### Add IO-Cache feature
volume iocache
  type performance/io-cache
  subvolumes readahead
  #option page-size 1MB          # 128KB is default
  option cache-size 300MB        # 32MB is default
  option cache-timeout 5         # 1 second is default
  #option priority *.html:2,*:1  # default is *:0
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 2MB
  option flush-behind on
  subvolumes iocache
end-volume

### file: server-volume.vol.sample

volume brick
  type storage/posix
  option directory /bench
end-volume

volume plocks
  type features/posix-locks
  subvolumes brick
  # option mandatory on
end-volume

volume iot
  type performance/io-threads
  subvolumes plocks
  option thread-count 8
  # option cache-size 4MB
end-volume

volume server
  type protocol/server
  subvolumes iot brick
  option transport-type tcp
  option auth.addr.brick.allow 192.168.*
  option auth.addr.iot.allow 192.168.*
end-volume

I am using io-threads on the server side only. Can I use the same on the
client side? Will it increase performance? And how do I tune the translator
parameters? For example, I am using a 128 KB page-size in io-cache; how do
I know which parameter values will perform better?
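For what it's worth, io-threads is a regular translator and can be stacked
on the client side as well. A hedged sketch of what that addition to the
client volfile might look like (the volume name `iot-client`, the thread
count, and the choice to stack it above write-behind are my assumptions;
check the translator docs for your release):

```
### Hypothetical: io-threads on the client, stacked on top of the
### existing performance translators. Names/placement are assumptions.
volume iot-client
  type performance/io-threads
  option thread-count 4     # tune roughly to CPU count
  subvolumes writeback
end-volume
```

Whichever volume is last in the file is what gets mounted, so the mount
would then go through `iot-client` rather than `writeback`.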
Next I am aiming to set up more servers and clients to measure the
performance gain. We are aiming to solve the problem of millions/billions
of small files inside a single directory. The files are not saved as plain
files inside each directory; they are saved inside a DB file. We are using
MySQL. How do I achieve this for testing purposes? I don't yet have the
skill to do this myself, so please give me some guidelines for this kind of
setup. I cannot work out how to export a DB file over GlusterFS. What is
your idea?

> As for smaller block sizes being a performance issue, I think mostly fuse
> is the problem with small block sizes. I think 4K is what fuse's block
> size is? So I'm thinking that should be ok for testing.

> I also believe gluster 2.0 is much faster with smaller files than 1.3, so
> you should get better results with 2.0

I saw a thread about 2.0 last week, but I cannot find where to download
2.0. Please give me the download URL for 2.0.

> if you benchmarked with 1.3, please try again with 2.0 and let us know
> your results.

Sure, I will send the 2.0 benchmarks as well.

Thanks for your time.

Thanks
L.Mohan
>we are aiming at solving the problems of Millions/Billions of small
>files inside a single directory [...] I am not able to get how to export
>db file over GlusterFS. what is your Idea?

Well, you have a couple of options here. One is to use the BDB translator
and have Gluster manage the small files inside a Berkeley DB container
instead of MySQL.

If you want to use MySQL (because you have some other application that is
going to be managing these files), and you want to present those files as a
filesystem, then you can use MySQLfs to "mount" the database as a
filesystem, then use that filesystem as a posix brick within Gluster. This
adds two levels of indirection, and you're going through FUSE twice, so any
FUSE bottlenecks will be multiplied, but it will technically solve your
problem.

How to set up MySQLfs: http://www.linux.com/feature/127055

Ideally, if you don't really need MySQL to store these things, you're much
better off using Gluster's BDB translator for those particular files.

>I saw last week one thread about 2.0. But I am not able to find where I
>have to download 2.0. Please give me the download URL of 2.0.

1.4rc7 became 2.0rc1, so I think you're OK.

Keith
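To make the BDB option concrete: on the server side it would replace the
posix brick in the volfile. A hedged sketch, assuming the `storage/bdb`
translator name and that it takes a `directory` option like `storage/posix`
does (option names vary between releases, so verify against the docs for
your version):

```
### Hypothetical server brick backed by Berkeley DB instead of plain
### files on disk; translator/option names are assumptions to verify.
volume brick
  type storage/bdb
  option directory /bench
end-volume
```

The rest of the server stack (posix-locks, io-threads, protocol/server)
would stay the same, with `subvolumes brick` pointing at this volume.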
On Mon, 2009-01-12 at 15:39 +0530, mohan L wrote:
> I am benchmarking NFS and GlusterFS [...] when I am testing 128 KB file
> size and 4KB record size, in this case NFS performs better than
> GlusterFS. What is the reason for that? Will GlusterFS perform better
> only in the case of large file sizes?

GlusterFS 1.3.x uses a simple but inefficient protocol, which has quite a
large overhead for each operation - so bulk throughput is fine, but lots of
little operations have high latency.

GlusterFS 1.4/2.0 uses a binary protocol that is much more efficient;
you'll find it much more competitive for smaller files.

I think you can expect version 2.0 to be released within the next few
weeks, though if you get and test the release candidate and report your
success or any bugs, it might happen sooner :)

John.
--
Serious Rails Hosting: http://www.brightbox.co.uk
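The per-operation-overhead explanation above can be sketched with a
back-of-envelope model: each record transfer pays a fixed protocol cost on
top of its wire time, so small records lose a much bigger fraction of the
raw bandwidth than large ones. All the numbers below (the ~11 MB/s link
speed and the 0.5 ms vs 5 ms per-op costs) are illustrative assumptions,
not measurements from this thread:

```python
# Back-of-envelope model of why small records hurt a chatty protocol.
# All numbers here are illustrative assumptions, not measurements.

def effective_throughput_kbs(record_kb, bandwidth_kbs, per_op_overhead_s):
    """KB/s the client sees when each record pays a fixed per-op overhead."""
    transfer_s = record_kb / bandwidth_kbs        # time spent moving bytes
    return record_kb / (transfer_s + per_op_overhead_s)

BANDWIDTH = 11000.0  # ~11 MB/s, the ballpark of the large-file results above

# A high per-op cost (an inefficient RPC) barely dents 128 KB records but
# dominates 4 KB records; a leaner binary protocol narrows that gap.
for overhead_s in (0.0005, 0.005):                # 0.5 ms vs 5 ms per op
    for record_kb in (4, 128):
        kbs = effective_throughput_kbs(record_kb, BANDWIDTH, overhead_s)
        print(f"overhead={overhead_s * 1000:.1f} ms, record={record_kb:3d} KB "
              f"-> {kbs:8.1f} KB/s")
```

With the 5 ms overhead the 4 KB case collapses to a small fraction of
bandwidth while 128 KB records stay close to it, which matches the pattern
in the benchmarks: fine at 128 KB records, poor at 4 KB.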