Displaying 20 results from an estimated 400 matches similar to: "ZFS pool fragmentation"
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
We see many operations from NFS clients to that server running really slowly (like 90 seconds for an unlink()).
It's not a network problem, and there's plenty of CPU available.
Storage isn't saturated either.
First strange thing: normally nfsd on that server has about 1500-2500 threads.
I did
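A minimal sketch (my addition, not from the thread) of how that nfsd thread count could be watched on Solaris, assuming a single nfsd process:

    # print the number of LWPs (kernel threads) in the nfsd process every 5 seconds
    while true; do
        ps -o nlwp= -p `pgrep -x nfsd`
        sleep 5
    done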
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU at 100% in SYS, like this:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
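A quick sketch (my addition, not from the thread) of how the hung command's kernel stacks could be inspected with mdb, to see whether it is stuck in the space map or DMU code:

    # dump the kernel stack of every thread belonging to the zpool process
    echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k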
2010 Nov 11
8
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands in the terminal at it.
Now, after the system had rebooted, zfs will not import my pool any longer
and instead the kernel will panic again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
2008 Jun 10
3
ZFS space map causing slow performance
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of time in an spa_sync() will be spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of small (~3KB) reads each second, presumably reading space map data from
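A rough DTrace sketch (my addition, assuming the fbt provider exposes the zfs module's functions) to confirm where the sync is spending its time:

    dtrace -n '
        /* count calls into the space map code */
        fbt:zfs:space_map_*:entry { @calls[probefunc] = count(); }
        /* and measure how long each spa_sync() takes */
        fbt:zfs:spa_sync:entry  { self->t = timestamp; }
        fbt:zfs:spa_sync:return /self->t/ { @sync = quantize(timestamp - self->t); self->t = 0; }
        tick-60s { printa(@calls); printa(@sync); exit(0); }'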
2007 Sep 14
3
space allocation vs. thin provisioning
Short question:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with the thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get written, so as not to negate any space
benefits that thin provisioning may give?
Background & more detailed questions:
In Jeff Bonwick's blog[1], he
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors.
The disks were so bad that I started to get trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics.
I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14).
But still when trying to do zpool import
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is .59.
8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4Disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
2007 Apr 22
1
Metaslab allocation control?
I was wondering if it's planned to give some control over the metaslab allocation into the hands of the user. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files allocated on the inner side of the platter while pushing
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
how does one free segment(offset=77984887808 size=66560)
on a pool that won't import?
looks like I found
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html
when I luupgraded a ufs partition with a
dvd-b62 that was bfu'd to b68 with a dvd of b74,
it booted fine and I was doing the same thing
that I had done on
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS:
My server no longer boots because the ZFS space map is corrupt again.
I just replaced the whole space map by recreating a new zpool from scratch and copying the data back with "zfs send & zfs receive".
Did it copy the corrupt space map?!
For me it's over now. I have lost too much time and money with this experimental filesystem.
My version is Zpool
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. That seems a bit
excessive to me...
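For what it's worth, the rule described above works out as follows (a shell sketch of the arithmetic, values chosen to match the 2TB example):

    pool_bytes=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TB pool
    floor_bytes=$((32 * 1024 * 1024))               # 32 MB minimum
    frac_bytes=$((pool_bytes / 64))                 # 1/64 of capacity
    reserve=$(( frac_bytes > floor_bytes ? frac_bytes : floor_bytes ))
    echo "$((reserve / 1024 / 1024 / 1024)) GB reserved"   # prints 32 GB reserved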
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):
[root@einstein;0]~# zpool status poolm
pool: poolm
state: FAULTED
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        poolm         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c2t0d0s0  ONLINE       0
2011 May 03
4
multiple disk failures cause zpool hang
Hi,
There seem to be a few threads about zpool hangs; do we have a
workaround to resolve the hang issue without rebooting?
In my case, I have a pool with disks from external LUNs via a fiber
cable. When the cable is unplugged while there is I/O in the pool,
all zpool-related commands hang (zpool status, zpool list, etc.), and putting the
cable back does not solve the problem.
Eventually, I
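One knob that is often relevant to this situation (my note, not something the poster mentions) is the pool's failmode property, which controls what happens when all paths to a device are lost; 'tank' below is a placeholder pool name:

    zpool get failmode tank           # the default, 'wait', blocks I/O until the device returns
    zpool set failmode=continue tank  # return EIO to new writes instead of blocking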
2008 Jul 29
2
Unexpected b_hdr change.
Hi.
We're testing the most recent ZFS version from OpenSolaris ported to
FreeBSD. Kris (CCed) observed a strange situation. In the function arc_read()
he had a panic on an assertion that we are trying to unlock a lock which is not
being held:
rw_enter(&pbuf->b_hdr->b_datalock, RW_READER);
err = arc_read_nolock(pio, spa, bp, done, private, priority,
flags, arc_flags, zb);
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift.
When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool
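After zpool.cache has been removed, the usual sequence is roughly the following (a sketch, my addition, with 'tank' as a placeholder pool name):

    zpool import          # scan all devices and list pools that can be imported
    zpool import -f tank  # import by name, forcing if the pool still looks active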
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives, mount them mirrored, and put them in a zpool so that I don't have to
worry about running out of room. (I know, pretty typical I guess.)
My problem is that
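A minimal sketch (my addition, not from the post) of the layout being asked about, with made-up pool and disk names: a pool built from mirrored pairs that can be grown later by adding more pairs:

    zpool create media mirror c0t0d0 c0t1d0   # first mirrored pair
    zpool add media mirror c0t2d0 c0t3d0      # add a second pair; the pool grows, data stays mirrored
    zfs create media/share                    # a filesystem on top of the pool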
2006 Oct 31
0
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
Author: billm
Repository: /hg/zfs-crypto/gate
Revision: 33640e100342f4a847c599f1a1671dda6faf4e05
Log message:
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
6410700 zdb should support reading raw blocks out of storage pool
6410709 ztest: spa config can change before pool export
Files:
update: usr/src/cmd/mdb/common/modules/zfs/zfs.c
update: usr/src/cmd/zdb/zdb.c
update: