Hi forum,
I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over two disk slices.
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s4  ONLINE       0     0     0
            c0t2d0s4  ONLINE       0     0     0
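For reference, the pool was created roughly like this (command from memory, slice names as shown in the status output above):

# zpool create mypool mirror c0t0d0s4 c0t2d0s4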
Then I created a ZFS filesystem with no extra options:
# zfs create mypool/zfs01
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool         106K  27.8G  25.5K  /mypool
mypool/zfs01  24.5K  27.8G  24.5K  /mypool/zfs01
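Everything is left at the defaults; if it matters, the relevant properties can be checked with something like this (standard ZFS property names):

# zfs get recordsize,compression,checksum mypool/zfs01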
When I now run mkfile on the new filesystem, the performance of the whole
system drops to almost zero:
# mkfile 5g test
last pid: 25286;  load avg: 3.54, 2.28, 1.29;  up 0+01:44:26     16:16:24
66 processes: 61 sleeping, 3 running, 1 zombie, 1 on cpu
CPU states:  0.0% idle,  2.1% user, 97.9% kernel,  0.0% iowait,  0.0% swap
Memory: 512M phys mem, 65M free mem, 2050M swap, 2050M free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 25285 root       1   8    4 1184K  752K run      0:09 66.28% mkfile
It seems that some kind of kernel activity during the write to ZFS is
blocking the system.
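If it helps, I can gather more data while the mkfile is running; something along these lines should show where the kernel time goes (assuming lockstat works on this box):

# mpstat 5                       # confirm the time really is spent in the kernel
# iostat -xn 5                   # how busy the two disks are during the write
# lockstat -kIW -D 20 sleep 30   # sample which kernel functions consume the time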
Is this a known problem? Do you need additional information?
regards
Mathias
Jürgen Keil
2006-Sep-15 15:07 UTC
[zfs-discuss] Re: [Blade 150] ZFS: extreme low performance
Are the disks in that Blade 100 IDE disks? The performance problem is probably bug 6421427:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427

A fix for the issue was integrated into the OpenSolaris 20060904 source drop (actually the closed binary drop):

http://dlc.sun.com/osol/on/downloads/20060904/on-changelog-20060904.html

... but it was removed again in the next update:

http://dlc.sun.com/osol/on/downloads/20060911/on-changelog-20060911.html
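A quick way to check, assuming a stock install: iostat and prtconf show which driver the disks are attached to.

# iostat -En                 # vendor/model string for each disk
# prtconf -D | grep -i dad   # IDE disks on these machines attach via the dad driver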