Displaying 20 results from an estimated 23 matches for "metaslab".
2010 Nov 11
8
zpool import panics
...:
type: 'disk'
id: 0
guid: 5041131819915543280
phys_path:
'/pci@0,0/pci8086,3410@9/pci1077,138@0/fp@0,0/disk@w2100001378ac0253,0:a'
whole_disk: 1
metaslab_array: 23
metaslab_shift: 38
ashift: 9
asize: 28001025916928
is_log: 0
DTL: 261
create_txg: 4
path: '/dev/dsk/c3t2100001378AC0253d0s0'
devid: ...
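As a rough decoding of the label fields above (a sketch, assuming the conventional ZFS meanings of ashift, metaslab_shift and asize; none of this comes from the original post):

# ashift is log2 of the sector size, metaslab_shift is log2 of the
# metaslab size, and asize is the allocatable size of the vdev in bytes.
ashift = 9
metaslab_shift = 38
asize = 28001025916928

sector_size = 1 << ashift              # 512-byte sectors
metaslab_size = 1 << metaslab_shift    # 274877906944 bytes = 256 GiB per metaslab
metaslab_count = asize // metaslab_size

print(sector_size, metaslab_size, metaslab_count)   # 512 274877906944 101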
2007 Sep 14
3
space allocation vs. thin provisioning
...ded by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get written to, so as not to negate any space
benefits that thin provisioning may give?
Background & more detailed questions:
In Jeff Bonwick's blog[1], he talks about free space management
and metaslabs. Of particular interest is the statement: "ZFS
divides the space on each virtual device into a few hundred
regions called metaslabs."
1. http://blogs.sun.com/bonwick/entry/space_maps
In Hu Yoshida's (CTO, Hitachi Data Systems) blog[2] there is a
discussion of thin provisionin...
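The "few hundred regions" statement quoted above can be illustrated with a toy calculation (a sketch only; it mirrors the idea of splitting a vdev into roughly 200 power-of-two-sized metaslabs, not the actual vdev code):

TARGET_METASLABS = 200   # "a few hundred regions", per the blog post

def pick_metaslab_shift(asize):
    # largest power of two no bigger than asize / 200
    shift = (asize // TARGET_METASLABS).bit_length() - 1
    return max(shift, 24)   # illustrative floor of 16 MiB

asize = 2 * 1024**4                    # a 2 TiB vdev, for example
shift = pick_metaslab_shift(asize)
print(shift, asize >> shift)           # 33 256  -> 256 metaslabs of 8 GiB each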
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and iostat -Xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3 GHz CPUs, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2007 Apr 22
1
Metaslab allocation control?
I was wondering whether it's planned to give the user some control over metaslab allocation. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files allocated on the inner side of the plat...
2007 Jul 10
1
ZFS pool fragmentation
...rkaround for now - changing recordsize - but I want a better solution.
The best solution would be a defragmentation tool, but I can see that it is not easy.
When a ZFS pool is fragmented:
1. the spa_sync function takes very long to execute (> 5 seconds)
2. the spa_sync thread often uses 100% CPU
3. the metaslab space maps are very big
There are some changes hiding the problem, like this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6512391
and I hope they will be available in Solaris 10 update 4.
But I suggest that:
1. in the sync phase, when for the first time we do not find the block we need
( f...
2006 Oct 31
0
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
...pool_main.c
update: usr/src/cmd/ztest/ztest.c
update: usr/src/uts/common/fs/zfs/arc.c
update: usr/src/uts/common/fs/zfs/dbuf.c
update: usr/src/uts/common/fs/zfs/dmu.c
update: usr/src/uts/common/fs/zfs/dmu_objset.c
update: usr/src/uts/common/fs/zfs/dsl_pool.c
update: usr/src/uts/common/fs/zfs/metaslab.c
update: usr/src/uts/common/fs/zfs/spa.c
update: usr/src/uts/common/fs/zfs/spa_misc.c
update: usr/src/uts/common/fs/zfs/sys/arc.h
update: usr/src/uts/common/fs/zfs/sys/dmu.h
update: usr/src/uts/common/fs/zfs/sys/metaslab.h
update: usr/src/uts/common/fs/zfs/sys/spa.h
update: usr/src/uts/comm...
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32 MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2 TB hard disk, the reservation would be 32 gigabytes. Seems a bit
excessive to me...
--
Jesus Cea Avion _/_/ _/_/_/ _/_/_/
jcea at jcea.es -
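The arithmetic in the message, worked out explicitly (the 32 MB-or-1/64 rule is taken from the post itself):

def reservation(capacity_bytes):
    # reserved space is the larger of 32 MiB and 1/64 of pool capacity
    return max(32 * 1024**2, capacity_bytes // 64)

print(reservation(2 * 1024**4) / 1024**3)   # 2 TiB pool -> 32.0 GiB reserved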
2008 Jun 10
3
ZFS space map causing slow performance
...blocks. The result is that it can take several minutes for an spa_sync() to complete, even if I'm only writing a single 128KB block.
Using DTrace, I can see that space_map_alloc() frequently returns -1 for 128KB blocks. From my understanding of the ZFS code, that means that one or more metaslabs have no 128KB blocks available. Because of that, it seems to spend a lot of time going through different space maps, which cannot all be cached in RAM at the same time, causing bad performance as it has to read from the disks. The on-disk space map size seems to be abou...
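A toy model of the behaviour described (purely illustrative; this is not the real metaslab allocation code, just the reasoning that a fragmented pool forces many space maps to be visited per allocation):

def try_alloc(largest_free_run_per_metaslab, want=128 * 1024):
    # Walk metaslabs until one has a contiguous run big enough;
    # every metaslab visited means another space map load (a disk
    # read if it is not cached).
    loaded = 0
    for largest_run in largest_free_run_per_metaslab:
        loaded += 1
        if largest_run >= want:
            break
    return loaded

fragmented = [64 * 1024] * 100        # no metaslab has a free 128 KiB run
print(try_alloc(fragmented))          # 100 space maps touched, allocation still fails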
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives, and mount them mirrored, and in a zpool so that I don't have to
worry about running out of room. (I know, pretty typical I guess).
My problem is that
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that a plan might be in development to make
it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have
some completely static data sets that need to be as fast as possible.
One of the scenarios I'm testing is the addition of vdevs to a pool.
Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more
vdevs and would like to balance this data across the pool for
performance. The data may be
2012 Jan 15
0
ZFS Metadata on-disk grouping
...t
be better cached (i.e. with VDEV prefetch) and speed up housekeeping
including scrubs and zfs sends. They could also reduce fragmentation
of "bulk" userdata stored in larger blocks.
These could also prove useful to implement my "Feature #1044: ZFS:
Allow specifying minimum dataset/metaslab block size AND alignment"
posted in https://www.illumos.org/issues/1044
Industry examples of grouped metadata could include copies of the
FAT table in FAT/pcfs, and the $MFT pseudofile on NTFS. The difference
is that ZFS metadata size is dynamic, so metadata regions should
not rely on any predefi...
2012 Dec 20
3
Pool performance when nearly full
Hi
I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).
Firstly, which is correct: the free space shown by "zfs list" or by "zpool iostat"?
zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat:
used
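A quick check of the "zfs list" numbers quoted above (zpool iostat generally counts raw pool space, including raidz parity, which is one reason the two reports differ):

used_tb, free_tb = 50.3, 13.7                  # from the post
total_tb = used_tb + free_tb
print(round(total_tb, 1))                      # 64.0 TB
print(round(100 * free_tb / total_tb, 1))      # 21.4 % free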
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
We see many operations from NFS clients to that server being really slow (like 90 seconds for unlink()).
It's not a problem with the network; there's also plenty of CPU available.
Storage isn't saturated either.
First strange thing: normally on that server nfsd has about 1500-2500 threads.
I did
2009 Feb 13
3
Strange performance loss
I'm moving some data off an old machine to something reasonably new.
Normally, the new machine performs better, but I have one case just now
where the new system is terribly slow.
Old machine - V880 (Solaris 8) with SVM raid-5:
# ptime du -kds foo
15043722 foo
real 6.955
user 0.964
sys 5.492
And now the new machine - T5140 (latest Solaris 10) with ZFS
2006 Jun 15
4
devid support for EFI partition improved zfs usability
Hi, guys,
I have added devid support for EFI (not putback yet) and tested it with a
zfs mirror; now the mirror can recover even when a USB hard disk is unplugged
and replugged into a different USB port.
But there are still some things that need to improve. I'm far from a zfs expert,
so correct me if I'm wrong.
First, zfs should sense the hotplug event.
I use zfs status to check the status of the
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
about 1 TB of mailboxes on ZFS filesystems. Recently, when under
load, we've had incidents where IMAP operations became very slow. The
general symptoms are that the number of imapd, pop3d, and lmtpd
processes increases, the CPU load average increases, but the ZFS I/O
bandwidth decreases. At the same time, ZFS
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false?
B) If I buy larger drives and resilver, does defrag happen?
C) Does zfs send zfs receive mean it will defrag?
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All,
It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then ZFS-mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Thanks,
Shawn