Displaying 20 results from an estimated 300 matches similar to: "zpool import panics"
2007 Jul 10
1
ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation.
I started investigating problem about 2 weeks ago http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
I found workaround for now - changing recordsize - but I want better solution.
The best solution would be a defragmentation tool, but I can see that it is not easy.
When a ZFS pool is fragmented:
1. spa_sync function is
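A minimal sketch of the recordsize workaround mentioned above (pool/dataset names and the 8K value are placeholders; the right value depends on the workload):

    # smaller records change the allocation pattern; note this
    # applies to newly written blocks only
    zfs set recordsize=8K tank/data
    zfs get recordsize tank/data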
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS:
My server no longer boots because the ZFS space map is corrupt again.
I just replaced the whole space map by recreating the zpool from scratch and copying the data back with "zfs send & zfs receive".
Did that copy the corrupt space map?!
For me it's over. I've lost too much time and money with this experimental filesystem.
My version is Zpool
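For what it's worth, a send/receive copy moves dataset contents, not pool metadata; space maps are rebuilt from scratch on the destination pool, so the corruption should not travel with the data. A sketch of the migration described above (pool names hypothetical):

    zfs snapshot -r tank@migrate
    # -R replicates the whole dataset tree; the receiving pool
    # builds its own, fresh space maps
    zfs send -R tank@migrate | zfs receive -d newpool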
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum,
I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on thumper. The space maps take over 3GB
and write operations generate massive read operations.
Before every spa sync phase zfs reads space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems) and it helps.
Now the space maps, intent log, and spa history are compressed.
Now I'm thinking about disabling checksums. All
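If "compression for the pool, not the filesystems" means setting the property on the top-level dataset and overriding it below, a sketch would be (names hypothetical; whether this covers pool-wide metadata depends on the ZFS version):

    zfs set compression=on tank       # top-level dataset; inherited by default
    zfs set compression=off tank/fs1  # override so file data stays uncompressed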
2007 Sep 14
3
space allocation vs. thin provisioning
Short question:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes so as to not negate any space
benefits that thin provisioning may give?
Background & more detailed questions:
In Jeff Bonwick's blog[1], he
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
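For context: at the time of this thread a raidz vdev could not absorb a single extra disk; the available workaround was to add a whole new raidz vdev as a second stripe (device names hypothetical):

    # grows the pool as a second top-level vdev; the existing
    # 4-disk raidz itself is not widened
    zpool add tank raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0
    zpool status tank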
2007 Apr 22
1
Metaslab allocation control?
I was wondering if it's planned to give some control over the metaslab allocation into the hands of the user. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files allocated on the inner side of the platter while pushing
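No such knob exists in ZFS; purely to illustrate the proposal, a hypothetical per-dataset modifier might look like this (the property name is invented here):

    # HYPOTHETICAL property, not part of any ZFS release:
    zfs set allocbias=outer tank/system   # favor the fast outer tracks
    zfs set allocbias=inner tank/archive  # push bulk data inward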
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
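A sketch of the commands behind those observations (pool name hypothetical):

    zpool scrub tank
    zpool status tank        # scrub rate and progress
    zpool iostat -v tank 5   # per-vdev throughput every 5 seconds
    iostat -xn 5             # per-device service times and %busy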
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk the reservation would be 32 gigabytes. Seems a bit
excessive to me...
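As a quick sanity check on the arithmetic (the reservation is the larger of 32MB and 1/64 of pool capacity):

    # 1/64 of a 2TB (2048GB) pool, in GB:
    echo $((2048 / 64))    # prints 32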
2006 Oct 31
0
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
Author: billm
Repository: /hg/zfs-crypto/gate
Revision: 33640e100342f4a847c599f1a1671dda6faf4e05
Log message:
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
6410700 zdb should support reading raw blocks out of storage pool
6410709 ztest: spa config can change before pool export
Files:
update: usr/src/cmd/mdb/common/modules/zfs/zfs.c
update: usr/src/cmd/zdb/zdb.c
update:
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
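A commonly suggested sequence from a live environment for a pool in this state (names hypothetical; read-only import requires a build that supports it):

    # walk the pool's block tree without importing it
    zdb -e -bb rpool
    # import read-only under an alternate root so nothing gets written
    zpool import -o readonly=on -R /a rpool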
2010 Jul 24
2
Severe ZFS corruption, help needed.
I'm running FreeBSD 8.1 with ZFS v15. Recently, some time after moving my mirrored pool from one device to another, the system started crashing. From that time on the zpool cannot be used/imported - any attempt fails with:
solaris assert: sm->space + size <= sm->size, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c, line: 93
Debugging reveals that:
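A hedged sketch of the usual first-aid on FreeBSD for this assertion (vfs.zfs.recover should map to the kernel's zfs_recover flag; verify it exists on your build before relying on it):

    # /boot/loader.conf
    vfs.zfs.recover="1"
    # after reboot, from single-user or a fixit shell:
    zpool import -f tank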
2012 Jan 15
0
ZFS Metadata on-disk grouping
Does ZFS currently attempt to group metadata in large sector-ranges
on the disk? Can this be expected to happen "automagically" - i.e.
during each TXG close we have to COW-update whole branches of the
blockpointer tree, so these new blocks might "just happen" to always
coalesce into larger sector groups?
Rationale: larger regions dedicated to smallish block pointers might
be
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do, is get a bunch of
2TB drives, and mount them mirrored, and in a zpool so that I don't have to
worry about running out of room. (I know, pretty typical I guess).
My problem is that
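A minimal sketch of the usual answer: build the pool from two-way mirrors and grow it a mirrored pair at a time (device names hypothetical):

    zpool create tank mirror c0t1d0 c0t2d0
    # when space runs low, grow the pool with another mirrored pair:
    zpool add tank mirror c0t3d0 c0t4d0
    zfs create tank/media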
2012 Dec 20
3
Pool performance when nearly full
Hi
I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).
Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat"?
zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat:
used
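Both numbers are "correct": zpool counts raw space including parity and pool metadata, while zfs list shows space usable by datasets. Comparing them side by side (pool name hypothetical):

    zpool list tank          # raw capacity: parity and metadata included
    zfs list -o space tank   # usable space after redundancy and reservations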
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
We see many operations from NFS clients to that server running really slowly (like 90 seconds for unlink()).
It's not a network problem; there's also plenty of CPU available.
Storage isn't saturated either.
First strange thing - normally nfsd on that server has about 1500-2500 threads.
I did
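For reference, on that vintage of Solaris the zil_disable=1 setting from the header was typically applied with mdb (this writes live kernel memory; handle with care):

    # disable the ZIL on the running kernel (old, unsupported tunable)
    echo "zil_disable/W 1" | mdb -kw
    # read the current value back
    echo "zil_disable/D" | mdb -k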
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have
some completely static data sets that need to be as fast as possible.
One of the scenarios I'm testing is the addition of vdevs to a pool.
Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more
vdevs and would like to balance this data across the pool for
performance. The data may be
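There is no rebalance command; the usual workaround is to rewrite the data so the allocator spreads new blocks across all seven vdevs (dataset names hypothetical):

    zfs snapshot tank/data@rebalance
    # rewriting the blocks lets the allocator use all current vdevs
    zfs send tank/data@rebalance | zfs receive tank/data.new
    # verify the copy, then swap the datasets:
    zfs rename tank/data tank/data.old
    zfs rename tank/data.new tank/data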
2009 Apr 15
3
MySQL On ZFS Performance (fsync) Problem?
Hi, all
I did some tests of MySQL's insert performance on ZFS and hit a big
performance problem; *I'm not sure what the cause is*.
Environment:
2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel).
A Java client runs 8 threads inserting concurrently into one InnoDB table:
*~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1
~600 qps when sync_binlog=10
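A common tuning sketch for InnoDB on ZFS of that era (dataset names hypothetical; gains depend on the workload):

    # match recordsize to InnoDB's 16K page to avoid read-modify-write
    zfs create -o recordsize=16K tank/mysql-data
    # logs are written sequentially; the 128K default is fine there
    zfs create tank/mysql-log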
2006 Jun 15
4
devid support for EFI partitions improves zfs usability
Hi, guys,
I have added devid support for EFI (not putback yet) and tested it with a
zfs mirror; now the mirror can recover even if a USB hard disk is unplugged
and replugged into a different USB port.
But there are still some things that need improving. I'm far from a zfs expert,
correct me if I'm wrong.
First, zfs should sense the hotplug event.
I use zpool status to check the status of the
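A sketch of the manual recovery that devid support would automate (pool and device names hypothetical):

    zpool status tank            # the unplugged disk shows as UNAVAIL/FAULTED
    zpool online tank c5t0d0     # after replugging, bring it back online
    zpool status tank            # a resilver should start automatically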