Hello zfs-discuss,
A file system with a lot of small files.
zfs send fsA | ssh a at b zfs recv fsB
On the sending side nothing else is running or touching the disks.
Yet the performance is far from satisfactory.
When serving data the same pool/fs can read over 100MB/s.
This is s10u3.
Maybe it's because zfs send is single-threaded, so traversing lots
of small blocks just takes time and can't saturate the underlying LUNs?
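One common mitigation for the single-reader bottleneck is to put large memory buffers on both ends of the pipe, so the one reader is at least never stalled by ssh/network back-pressure. A sketch under assumptions: mbuffer is installed on both hosts, and the snapshot, host, and dataset names are made up for illustration:

```shell
# Hypothetical mitigation sketch: buffer the stream on both ends so the
# single zfs send reader is not also blocked on network back-pressure.
# Assumes the mbuffer utility is installed; names are illustrative.
# Guard: make the sketch a harmless no-op on hosts without ZFS.
command -v zfs >/dev/null 2>&1 || { echo "zfs not available; skipping"; exit 0; }

zfs snapshot f3-1/fsA@xfer
zfs send f3-1/fsA@xfer \
  | mbuffer -s 128k -m 512M \
  | ssh b "mbuffer -s 128k -m 512M | zfs recv f3-3/fsB"
```

The buffers only smooth bursts; they don't remove the one-outstanding-read limit itself, but they keep the reader from adding network stalls on top of it.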
bash-3.00# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    992    117  3.54M  1.76M
f3-3         270K   272G      0      0     46     86
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    165      0   384K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    166      0   453K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    176      0   423K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    169      0   402K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    159      0   356K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
f3-1         441G   783G    154      0   371K      0
f3-3         270K   272G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
bash-3.00#
bash-3.00# zpool status
  pool: f3-1
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        f3-1                                     ONLINE       0     0     0
          c5t600C0FF000000000098FD516E4403200d0  ONLINE       0     0     0
          c5t600C0FF000000000098FD55DBA4EA000d0  ONLINE       0     0     0
          c5t600C0FF000000000098FD57F9DA83C00d0  ONLINE       0     0     0

errors: No known data errors

  pool: f3-3
 state: ONLINE
 scrub: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        f3-3                                     ONLINE       0     0     0
          c5t600C0FF000000000098FD514A1D9AF00d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Hello Robert,
Wednesday, February 14, 2007, 1:02:08 AM, you wrote:
RM> Hello zfs-discuss,
RM> A file system with a lot of small files.
RM> zfs send fsA | ssh a at b zfs recv fsB
RM> On the sending side nothing else is running or touching the disks.
RM> Yet the performance is far from satisfactory.
RM> When serving data the same pool/fs can read over 100MB/s.
RM> This is s10u3.
RM> Maybe it's because zfs send is single-threaded, so traversing lots
RM> of small blocks just takes time and can't saturate the underlying LUNs?
Just to be clear - making it multi-threaded wouldn't be easy, because I
guess transactions have to be sent in order. So some kind of parallel
read-ahead would probably be a better fit.
??
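The idea Robert describes - reads issued in parallel, records still emitted in order - can be sketched outside ZFS in a few lines. This is a toy model only: `read_block` and its 10 ms latency simulate per-block I/O, not ZFS internals:

```python
# Toy model of parallel read-ahead with in-order emission: submit block
# reads to a thread pool in stream order, then consume the futures in
# that same order, so reads overlap but the output stream stays ordered.
import time
from concurrent.futures import ThreadPoolExecutor


def read_block(i):
    """Simulated block read with fixed per-read latency (assumed 10 ms)."""
    time.sleep(0.01)
    return f"block-{i}".encode()


def send_stream(n_blocks, workers=8):
    out = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(read_block, i) for i in range(n_blocks)]
        for f in futures:  # consume in submission order -> ordered stream
            out.append(f.result())
    return out


if __name__ == "__main__":
    t0 = time.time()
    blocks = send_stream(32)
    print(f"{len(blocks)} blocks in order, {time.time() - t0:.2f}s")
```

With 8 workers the 32 simulated reads overlap, so the wall-clock time is a fraction of the 0.32 s a strictly serial reader would need, while the consumer still sees blocks in order.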
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Robert Milkowski wrote:
> Hello zfs-discuss,
>
> A file system with a lot of small files.
> zfs send fsA | ssh a at b zfs recv fsB
>
> On the sending side nothing else is running or touching the disks.
> Yet the performance is far from satisfactory.
> When serving data the same pool/fs can read over 100MB/s.
> This is s10u3.
>
> Maybe it's because zfs send is single-threaded, so traversing lots
> of small blocks just takes time and can't saturate the underlying LUNs?

Yes, there are two problems:

1. as you noticed, zfs send only issues one read i/o at a time.
2. there are some inefficiencies with the way we process the files, so
   that many small files are considerably slower than a few large files
   (even with the same size and number of blocks)

We're working on these...

--matt
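Matt's first point can be put in back-of-the-envelope numbers: with one outstanding read, sustained throughput is bounded by block size divided by per-read latency. A sketch of that arithmetic, where the 5 ms latency is an assumed figure chosen only to show the shape of the math, not a measured value:

```python
# Toy model of the one-outstanding-read ceiling (assumed numbers, not
# measured ZFS internals): throughput <= block_size / per_read_latency.
def max_throughput(block_size_bytes, latency_s):
    """Upper bound on streaming throughput with a single in-flight read."""
    return block_size_bytes / latency_s


if __name__ == "__main__":
    latency = 0.005  # assumed 5 ms per random read
    large = max_throughput(128 * 1024, latency)  # 128K records
    small = max_throughput(2 * 1024, latency)    # 2K small-file blocks
    print(f"128K blocks: {large / 1e6:.1f} MB/s; 2K blocks: {small / 1e3:.0f} KB/s")
```

Under those assumed numbers the small-block ceiling lands in the same ballpark as the ~350-450K/s read rates in the iostat output above, which is consistent with small blocks being fetched one at a time.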
Hello Matthew,

Wednesday, February 14, 2007, 1:50:28 AM, you wrote:

MA> Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>> A file system with a lot of small files.
>> zfs send fsA | ssh a at b zfs recv fsB
>>
>> On the sending side nothing else is running or touching the disks.
>> Yet the performance is far from satisfactory.
>> When serving data the same pool/fs can read over 100MB/s.
>> This is s10u3.
>>
>> Maybe it's because zfs send is single-threaded, so traversing lots
>> of small blocks just takes time and can't saturate the underlying LUNs?

MA> Yes, there are two problems:
MA> 1. as you noticed, zfs send only issues one read i/o at a time.
MA> 2. there are some inefficiencies with the way we process the files, so
MA> that many small files are considerably slower than a few large files
MA> (even with the same size and number of blocks)

MA> We're working on these...

Any BUG IDs so I can track them?

--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com