I've always seen this curve in my tests (local disk or iSCSI) and just
think it's ZFS as designed. I haven't seen much parallelism when I have
multiple I/O jobs going; the filesystem seems to go mostly into one
mode or the other. Perhaps per vdev (over iSCSI I'm only exposing one or
two) there is only one performance characteristic at a time, write or
read.
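
For what it's worth, here's roughly the kind of two-job test I mean; the
pool name, paths and sizes below are only placeholders, not my actual
setup. Start a sequential reader and a sequential writer against the same
pool and watch iostat to see whether the reads and writes overlap or take
turns (create the read file well ahead of time, or it may still be sitting
in the ARC):

    mkfile 2g /tank/readme                                   # file to read back later
    dd if=/tank/readme of=/dev/null bs=1024k &                # sequential reader
    dd if=/dev/zero of=/tank/writeme bs=1024k count=2048 &    # sequential writer
    iostat -xn 5                                              # watch r/s vs w/s per interval

On my setups the r/s and w/s columns mostly take turns rather than running
together, which matches the seesaw Nathan describes below.
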
On 6/30/06, Nathan Kroenert <Nathan.Kroenert at sun.com> wrote:
> Hey all -
>
> Was playing a little with ZFS today and noticed that when I was
> untarring a 2.5GB archive both from and onto the same spindle in my
> laptop, the bytes read and written over time were seesawing
> between approximately 23MB/s and 0MB/s.
>
> It seemed like we read and read and read till we were all full up, then
> wrote until we were empty, and so the cycle went.
>
> Now: as it happens, 31MB/s is about as fast as it gets on this disk at
> that part of the platter (using dd and large block size on the rdev).
> (iirc, it actually started out closer to 30MB/s, so the slower speed might
> be a red herring...)
> So, it seems to be below what I would hope to get out of the platter,
> but it's not too bad.
> Whether I:
> read at 23, write at 0, then read at 0, write at 23
> or
> read at 15 and write at 15,
> it works out the same(ish)...
>
> The question is: Is this deliberate? (I'm guessing it's the txg flushing
> that's causing this behaviour.)
>
> iostat output is at the end of this email...
>
> Is this a deliberate attempt to reduce the number of seeks and I/Os to
> the disk (and especially competing reads/writes on PATA)?
>
> I guess in the back of my mind is: Is this the fastest / best way we can
> approach this?
>
> Also - When dding the raw slice that zfs is using, I noticed that my IO
> rate also seesawed up and down between 31MB/s and 28MB/s, over a 5
> second interval... I was not expecting that... Thoughts?
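
(For anyone wanting to repeat that raw-device baseline read: something
along these lines is what I run, with the slice name and count made up
here; point it at whichever slice the pool actually lives on.)

    dd if=/dev/rdsk/c0d0s7 of=/dev/null bs=1024k count=2000 &
    iostat -xn 5
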
>
> Thanks! :)
>
> Nathan.
>
> Here is the iostat example -
>
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 201.5 0.0 23908.7 33.0 2.0 173.5 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 200.0 0.0 24822.5 33.0 2.0 174.9 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 184.0 0.0 22413.1 33.0 2.0 190.2 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 42.0 247.9 5246.9 8753.2 20.1 1.6 74.9 66 95
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 159.0 6.0 20290.8 4.0 13.4 1.9 92.7 90 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 186.0 0.0 23809.8 0.0 31.2 2.0 178.5 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 172.0 30.0 22017.2 3016.2 31.5 2.0 166.0 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 176.0 0.0 21109.0 33.0 2.0 198.8 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 189.0 0.0 23422.8 33.0 2.0 185.1 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 0.0 182.0 0.0 23288.6 33.0 2.0 192.3 100 100
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 33.0 364.0 3904.0 7765.6 19.8 1.6 53.9 70 92
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 146.0 6.0 18563.9 4.0 18.2 1.4 129.1 69 74
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> cmdk0 131.0 0.0 16768.9 0.0 18.0 1.8 150.8 67 90
> nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
>
>
>