Server: T5120 on Solaris 10 U5
Storage: 8 internal drives on SAS HW RAID (RAID-5)
Oracle: ZFS filesystem, recordsize=8K and atime=off
Tape: LTO-4 (half height) on a SAS interface

Dumping a large file from memory to LTO using tar yields 44 MB/s ... I suspect the CPU cannot push more, since a single thread is doing all the work.

Dumping Oracle DB files from the filesystem yields ~25 MB/s. The interesting bit (apart from it being rather slow) is that the speed fluctuates on the disk side but stays constant to the tape: I see spikes of 50-60 MB/s over 5 seconds, while the tape continues to push its steady 25 MB/s.

There has been NO tuning ... the above is absolutely standard.

Where should I investigate to increase throughput?
-- This message posted from opensolaris.org
Louwtjie Burger wrote:
> Dumping a large file from memory to LTO using tar yields 44 MB/s ... I suspect the CPU cannot push more, since a single thread is doing all the work.
>
> Dumping Oracle DB files from the filesystem yields ~25 MB/s. The interesting bit (apart from it being rather slow) is that the speed fluctuates on the disk side but stays constant to the tape: I see spikes of 50-60 MB/s over 5 seconds, while the tape continues to push its steady 25 MB/s.
>
> Where should I investigate to increase throughput ...

Does your tape drive compress (most do)? If so, you may be seeing compressible vs. incompressible data effects.
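Carson's point is easy to sanity-check without a tape drive. The sketch below uses gzip as a rough stand-in for the drive's hardware compressor, comparing an all-zeros stream (very compressible, like padded tar data) against random data (incompressible, like many Oracle datafile pages). The /tmp paths are illustrative, and `gzip -k` (keep input) needs a reasonably recent gzip:

```shell
# Generate 16 MB of highly compressible and 16 MB of incompressible data.
dd if=/dev/zero    of=/tmp/zeros.bin  bs=1M count=16 2>/dev/null
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=16 2>/dev/null

# gzip stands in for the drive's hardware compressor here.
gzip -kf /tmp/zeros.bin /tmp/random.bin

# The zeros shrink to a few KB; the random data barely shrinks at all.
ls -l /tmp/zeros.bin.gz /tmp/random.bin.gz
```

A drive fed the compressible stream can exceed its native rate, while incompressible data caps it at native speed, which could explain why a tar of a cached file outruns the database dump.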
Louwtjie Burger <burgerw at zaber.org.za> wrote:
> Server: T5120 on 10 U5
> Storage: Internal 8 drives on SAS HW RAID (R5)
> Oracle: ZFS fs, recordsize=8K and atime=off
> Tape: LTO-4 (half height) on SAS interface.
>
> Dumping a large file from memory using tar to LTO yields 44 MB/s ... I suspect the CPU cannot push more since it's a single thread doing all the work.

What is the native speed of the LTO drive? When you say "tar", it is unclear which tar implementation you are referring to. Sun tar is not very fast; GNU tar is not very fast; star is optimized for best speed, so I recommend trying star.

The standard tar blocksize (10 kB) is not optimal for tape drives. If you want speed plus best portability of the tapes, use a blocksize of 63 kB; if you want best speed, use 256 kB. I recommend:

star -c -time bs=256k f=/dev/rmt/.... files...

Star should be able to give you the native LTO speed.

Jörg
--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
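The blocksize point above can be illustrated without a tape drive: for the same amount of data, a 10 kB blocksize issues about 26x more write() calls than 256 kB, and a tape device pays per-operation overhead on each one. A sketch using dd and a scratch file (the path is illustrative; the record counts dd reports equal the number of I/O calls issued):

```shell
# 100 MB scratch file, standing in for the data being dumped.
dd if=/dev/zero of=/tmp/bstest.src bs=1M count=100 2>/dev/null

# tar's default 10 kB blocking: 10240 writes for 100 MB.
dd if=/tmp/bstest.src of=/dev/null bs=10k 2>&1 | grep 'records'

# 256 kB blocking, as suggested for star: only 400 writes.
dd if=/tmp/bstest.src of=/dev/null bs=256k 2>&1 | grep 'records'
```

On a streaming tape device those extra per-write round trips are what turn a small blocksize into a throughput ceiling.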
Carson Gaspar <carson at taltos.org> wrote:
> Does your tape drive compress (most do)? If so, you may be seeing
> compressible vs. uncompressible data effects.

HW compression in the tape drive usually increases the speed of the drive.

Jörg
Joerg Schilling wrote:
> Carson Gaspar <carson at taltos.org> wrote:
> > Does your tape drive compress (most do)? If so, you may be seeing
> > compressible vs. uncompressible data effects.
>
> HW compression in the tape drive usually increases the speed of the drive.

Yes. Which is exactly what I was saying. The tar data might be more compressible than the DB, and thus faster. Shall I draw you a picture, or are you too busy shilling for star at every available opportunity?

-- Carson
Carson Gaspar <carson at taltos.org> wrote:
> Yes. Which is exactly what I was saying. The tar data might be more
> compressible than the DB, thus be faster. Shall I draw you a picture, or
> are you too busy shilling for star at every available opportunity?

If you have never compared Sun tar speed with star speed, drawing pictures would not help.

Jörg
Carson Gaspar <carson at taltos.org> writes:
> Yes. Which is exactly what I was saying. The tar data might be more
> compressible than the DB, thus be faster. Shall I draw you a picture, or
> are you too busy shilling for star at every available opportunity?

Sheesh, calm down, man.

Boyd
I would look at what size I/Os you are doing in each case. I have been playing with a T5240 and got 400 MB/s read and 200 MB/s write with iozone throughput tests on a 6-disk mirror pool, so the box and ZFS can certainly push data around - but that was using 128k blocks.

You mention the disks are doing bursts of 50-60 MB/s, which suggests they have more bandwidth available and are not flat out trying to prefetch data. I suspect you might be IOPS bound - if you are doing a serial read-then-write workload and only writing small blocks to the tape, that could lead to higher service times on the tape device and hence slow down your overall read speed.

If it's LTO-4, try to push your block size as big as you can go - 256k, 512k or higher - and maybe use truss on the process to see what read/write sizes it's doing. I also found the iosnoop DTrace tool from Brendan Gregg's DTrace toolkit very helpful in tracking down these sorts of issues.

HTH.

Cheers,
Adrian
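A sketch of the observation commands suggested above, assuming a Solaris host with the DTraceToolkit available; the PID is a placeholder for the running tar/star process:

```
# Syscall-level view of the I/O sizes the archiver actually issues
# (replace 1234 with the real tar/star PID):
truss -t read,write -p 1234

# Per-I/O disk events, from Brendan Gregg's DTraceToolkit:
./iosnoop

# Device-level throughput and service times every 5 seconds:
iostat -xnz 5
```

Comparing the write sizes truss reports against the per-I/O sizes on the tape device is the quickest way to confirm whether small blocks are the bottleneck.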
Ta for the comments...

I'm going to use Jörg's star to simulate some sequential backup workloads using different blocksizes, and see what the system does. I'll save some output and post it for people who might have the same config, now or in the future.

To be clear though, currently:

# tar cvfE /dev/rmt/0cbn /tmp/foobar
(42 MB/s to tape sustained, 70% util on tape device)

# tar cvfE /dev/rmt/0cbn /oracle/datafiles/*
(24 MB/s to tape sustained, 24-60 MB/s zfs fs bursts)

I'll post the star, iostat and other findings later.
Louwtjie Burger <burgerw at zaber.org.za> wrote:
> I'm going to use Jörg's star to simulate some sequential backup workloads using different blocksizes, and see what the system does.

If you have plenty of RAM and you see that the tape is not streaming with star, give star a large FIFO. If the tape drive supports 60 MB/s, give star 1800 MB for the FIFO (fs=1800m), provided your physical RAM is at least 4 GB. That gives star 30 seconds of reserve data to keep the tape streaming.

Jörg
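The FIFO sizing rule above is just drive rate times seconds of reserve. A small sketch of the arithmetic, where 60 MB/s is the assumed native rate of the half-height LTO-4:

```shell
# FIFO sized to buffer ~30 seconds at the drive's native rate.
rate_mb_per_s=60        # assumed LTO-4 half-height streaming rate
reserve_seconds=30
fifo_mb=$((rate_mb_per_s * reserve_seconds))
echo "fs=${fifo_mb}m"   # prints fs=1800m

# An illustrative star invocation combining this with the 256k blocksize:
#   star -c -time bs=256k fs=1800m f=/dev/rmt/0cbn /oracle/datafiles
```

A bigger FIFO only helps if RAM allows it; it buys time to ride out disk-side dips without the drive dropping out of streaming mode.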