senthil ramanujam
2007-Mar-07 00:23 UTC
[dtrace-discuss] puzzling io size distribution data
Hi,
I have been using the DTrace io provider to track I/O size
distributions. The data I am getting is very puzzling, and I am hoping
to get a better understanding of it.
The scenario is as follows: I created an SVM RAID-0 device using 2
disks, ran dd against the raw RAID-0 device, and simultaneously
tracked I/O size distributions with the io provider. The data that
DTrace reports is what puzzles me: DTrace is reporting 2 I/O sizes,
one corresponding to the interlace size and another corresponding to
the requested I/O size.
See the example below, which demonstrates the issue. The example has 2
parts, although the issue can be seen in the first part alone; the
second part just demonstrates that changing the interlace size has no
impact on the reporting.
What I am not clear about is why DTrace reports 2 I/O sizes. Is this
expected behavior? I didn't look into the io provider in detail, but I
thought the io provider tracks I/Os between the OS and the storage
layer. If that isn't true, can someone explain the correct behavior or
point me to the right place?
many thanks.
senthil
$ head -1 /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
$ metainit d0 1 2 c3t1d0s1 c3t3d0s1 -i 64k
d0: Concat/Stripe is setup
$ metastat d0
d0: Concat/Stripe
Size: 6409935 blocks (3.1 GB)
Stripe 0: (interlace: 128 blocks)
Device Start Block Dbase Reloc
c3t1d0s1 0 No Yes
c3t3d0s1 0 No Yes
Device Relocation Information:
Device Reloc Device ID
c3t1d0 Yes id1,sd at SSEAGATE_ST973401LSUN72G_0110BDXG____________3LB0BDXG
c3t3d0 Yes id1,sd at SSEAGATE_ST973401LSUN72G_0110BF2S____________3LB0BF2S
$ dtrace -n 'io:::done{ @["io_distribution"] = quantize(args[0]->b_bcount) }' \
    -c "dd if=/dev/md/rdsk/d0 of=/dev/null bs=1048576 count=10"
dtrace: description 'io:::done' matched 4 probes
10+0 records in
10+0 records out
dtrace: pid 2014 has exited
io_distribution
           value  ------------- Distribution ------------- count
               4 |                                         0
               8 |                                         1
              16 |                                         0
              32 |@                                        4
              64 |                                         0
             128 |                                         0
             256 |                                         0
             512 |@@                                       7
            1024 |                                         0
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |                                         1
           32768 |                                         0
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@     160
          131072 |                                         0
          262144 |                                         0
          524288 |                                         0
         1048576 |@@                                       10
         2097152 |                                         0
$ metaclear d0
d0: Concat/Stripe is cleared
$ metainit d0 1 2 c3t1d0s1 c3t3d0s1 -i 512k
d0: Concat/Stripe is setup
$ metastat d0
d0: Concat/Stripe
Size: 6409935 blocks (3.1 GB)
Stripe 0: (interlace: 1024 blocks)
Device Start Block Dbase Reloc
c3t1d0s1 0 No Yes
c3t3d0s1 0 No Yes
Device Relocation Information:
Device Reloc Device ID
c3t1d0 Yes id1,sd at SSEAGATE_ST973401LSUN72G_0110BDXG____________3LB0BDXG
c3t3d0 Yes id1,sd at SSEAGATE_ST973401LSUN72G_0110BF2S____________3LB0BF2S
$ dtrace -n 'io:::done{ @["io_distribution"] = quantize(args[0]->b_bcount) }' \
    -c "dd if=/dev/md/rdsk/d0 of=/dev/null bs=1048576 count=10"
dtrace: description 'io:::done' matched 4 probes
10+0 records in
10+0 records out
dtrace: pid 2019 has exited
io_distribution
           value  ------------- Distribution ------------- count
               4 |                                         0
               8 |@                                        1
              16 |                                         0
              32 |@@@@                                     4
              64 |                                         0
             128 |                                         0
             256 |                                         0
             512 |@@@@@@@                                  7
            1024 |                                         0
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |@                                        1
           32768 |                                         0
           65536 |                                         0
          131072 |                                         0
          262144 |                                         0
          524288 |@@@@@@@@@@@@@@@@@@@                      20
         1048576 |@@@@@@@@@                                10
         2097152 |                                         0
$
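Editor's note: the bucket counts in both runs line up with simple stripe
arithmetic, since dd starts at block 0 and 1 MB is an exact multiple of
either interlace. A minimal Python sketch of that splitting (the function
name and layout are mine, not anything from SVM, and real drivers split by
offset alignment rather than this simplified chunking):

```python
def split_into_chunks(request_bytes, chunk_bytes):
    """Split one I/O request into chunk-sized pieces, plus any remainder."""
    full, rem = divmod(request_bytes, chunk_bytes)
    return [chunk_bytes] * full + ([rem] if rem else [])

MB = 1048576
requests = [MB] * 10          # dd bs=1048576 count=10

# 64 KB interlace: 10 x (1 MB / 64 KB) = 160 physical I/Os
assert sum(len(split_into_chunks(r, 64 * 1024)) for r in requests) == 160

# 512 KB interlace: 10 x (1 MB / 512 KB) = 20 physical I/Os
assert sum(len(split_into_chunks(r, 512 * 1024)) for r in requests) == 20
```

That matches the 160 count in the 65536 bucket of the first run and the 20
count in the 524288 bucket of the second; the 10 count in the 1048576
bucket each time is the original requests as seen at the metadevice.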
Brendan Gregg - Sun Microsystems
2007-Mar-07 00:50 UTC
[dtrace-discuss] puzzling io size distribution data
G'Day Senthil,

On Tue, Mar 06, 2007 at 07:23:19PM -0500, senthil ramanujam wrote:
> Hi,
>
> I have been using dtrace-io provider to track io size distribution.
> The data that I am getting is very puzzling and hoping to get better
> understanding.
>
> The scenario is as follows: I created a SVM device (Raid-0) using 2
> disks. I ran dd command against the raw raid-0 device. Simultaneously,
> I tracked io size distributions using dtrace-io provider. The data
> that dtrace is reporting is where I got puzzled. Dtrace is reporting 2
> IO sizes, one corresponds to interlace size and another corresponds to
> requested IO size.

DTrace is observing both the meta device and the physical device. Here I
add the device name as a key to the aggregation, and repeat your test:

# dd if=/dev/md/rdsk/d0 of=/dev/null bs=1024k count=10
10+0 records in
10+0 records out
# dtrace -n 'io:::done { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'
^C

  cmdk3
           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@@                                       10
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  180
           65536 |                                         0

  md1
           value  ------------- Distribution ------------- count
          524288 |                                         0
         1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 10
         2097152 |                                         0

The meta device md1 (which is my d0) receives the 1 Mbyte requests, which
are then broken down to the 32 to 64 Kbyte size for the physical device
cmdk3 (which is my c4d0s0, which d0 contains). The physical size would be
56 Kbyte (DTrace can of course confirm) due to the maxphys tunable on my
x86 workstation.

It looks like you were very close to finding this out yourself. :)

no worries,

Brendan

--
Brendan [CA, USA]
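Editor's note: Brendan's cmdk3 numbers can be reproduced with the same kind
of arithmetic. If maxphys caps each physical transfer at 56 KB, each 1 MB
request becomes 18 full 56 KB transfers plus a 16 KB remainder. A hedged
Python sketch (assuming a clean split at the maxphys boundary, which
simplifies the real driver behavior):

```python
MAXPHYS = 56 * 1024           # the 56 Kbyte x86 default Brendan mentions
MB = 1048576                  # one dd request (bs=1024k)

full, rem = divmod(MB, MAXPHYS)

# Each 1 MB request -> 18 transfers of 56 KB plus one 16 KB remainder.
# Ten requests -> 180 transfers of 56 KB and 10 of 16 KB, matching the
# 180 count in the 32768 bucket (32K <= 56K < 64K) and the 10 count in
# the 16384 bucket of the cmdk3 distribution above.
assert (full, rem) == (18, 16384)
assert (full * 10, 10) == (180, 10)
```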