Hi,
Running stable/10 at r273159 on FC disks (through isp(4)), I can't create a
ZFS pool.
What I have is a simple device, accessible on /dev/multipath/sas0,
backed by da0 and da4:
# gmultipath status
           Name    Status  Components
 multipath/sas0   OPTIMAL  da0 (ACTIVE)
                           da4 (READ)
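For reference, the (READ) state on da4 means the device runs in Active/Read
mode; a label along these lines would produce that layout (the -R flag and
the provider order here are my reconstruction, not the exact command used):

# gmultipath label -R -v sas0 da0 da4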
When I issue a
zpool create data /dev/multipath/sas0
command, zpool starts to eat 100% CPU:
# procstat -k 3924
  PID    TID COMM             TDNAME           KSTACK
 3924 100128 zpool            -                <running>
gstat shows one outstanding read I/O on multipath/sas0 which never completes,
while the queue length on da0 grows without bound:
# gstat -b
dT: 1.030s  w: 1.000s
   L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
 124402    146      0      0    0.0      0      0    0.0   100.5 da0
      0      0      0      0    0.0      0      0    0.0     0.0 da1
      0      0      0      0    0.0      0      0    0.0     0.0 da2
      0      0      0      0    0.0      0      0    0.0     0.0 da3
      1      0      0      0    0.0      0      0    0.0     0.0 multipath/sas0
      0      0      0      0    0.0      0      0    0.0     0.0 da4
      0      0      0      0    0.0      0      0    0.0     0.0 da5
      0      0      0      0    0.0      0      0    0.0     0.0 da6
      0      0      0      0    0.0      0      0    0.0     0.0 da7
      0      0      0      0    0.0      0      0    0.0     0.0 da8
      0      0      0      0    0.0      0      0    0.0     0.0 da9
      0      0      0      0    0.0      0      0    0.0     0.0 multipath/sas1
      0      0      0      0    0.0      0      0    0.0     0.0 multipath/sata0
      0      0      0      0    0.0      0      0    0.0     0.0 multipath/sata1
      0      0      0      0    0.0      0      0    0.0     0.0 md0
      0      0      0      0    0.0      0      0    0.0     0.0 md1
I can use these devices just fine with dd.
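For example, a plain sequential read like this completes normally (the block
size and count here are just illustrative, not the exact test):

# dd if=/dev/multipath/sas0 of=/dev/null bs=1m count=1000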
What's going on here?
On 12/18/2014 12:04, Nagy, Attila wrote:
> Running stable/10 at r273159 on FC disks (through isp) I can't create a
> ZFS pool.
> [...]
> I can use these devices just fine with dd.
>
> What's going on here?

I have a hunch. Try sysctl vfs.zfs.vdev.trim_on_init=0 before zpool create?
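That is, roughly this sequence (untested; the last line restores what I
believe is the default value of 1):

# sysctl vfs.zfs.vdev.trim_on_init=0
# zpool create data /dev/multipath/sas0
# sysctl vfs.zfs.vdev.trim_on_init=1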