Displaying 20 results from an estimated 500 matches similar to: "Extremely slow zpool scrub performance"
2009 Mar 01
8
invalid vdev configuration after power failure
What does it mean for a vdev to have an invalid configuration and how
can it be fixed or reset? As you can see, the following pool can no
longer be imported: (Note that the "last accessed by another system"
warning is because I moved these drives to my test workstation.)
~$ zpool import -f pool0
cannot import 'pool0': invalid vdev configuration
~$ zpool import
pool:
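The preview above is cut off. Not from the original thread, but as a sketch of a typical first step for an "invalid vdev configuration" failure (the device path below is a placeholder, not one of the poster's disks), one can compare the on-disk vdev labels across the member disks:

# Dump the four vdev labels of one member disk; repeat for every disk
# that belonged to pool0 and compare pool_guid, txg and the vdev tree
zdb -l /dev/dsk/c0t0d0s0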
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
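As a minimal sketch of the ZFS-side option (device names are placeholders, not from the post): a pool built from two plain disk vdevs is automatically striped across them, i.e. RAID0 without redundancy.

# ZFS stripes across top-level vdevs; two bare disks give a RAID0-like pool
zpool create tank c1t0d0 c1t1d0
zpool status tank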
2010 Nov 11
8
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands in the terminal at it.
Now, after the system has rebooted, zfs will no longer import my pool;
instead the kernel panics again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
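The snippet breaks off at the zdb invocation. As a hedged sketch, zdb is usually pointed at a pool that cannot be imported via its -e (exported/un-imported) option; the pool name below is a placeholder.

# Examine the pool without importing it
zdb -e tank
# Optionally walk and verify block pointers (read-only, can run for hours)
zdb -e -bcsv tank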
2011 Oct 26
1
Problem running zilstat script on a core install server.
We cannot run the zilstat utility (from
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat) on
a Solaris 10 u9 / x86 system installed according to SUNWCreq (which is a core
install),
while it works fine on a similar config installed with all
packages (SUNWCXall).
So we know the script itself works, but it seems we are missing an
essential package on the core install. That causes the
dtrace: failed to
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
2007 Sep 14
3
space allocation vs. thin provisioning
Short question:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes so as to not negate any space
benefits that thin provisioning may give?
Background & more detailed questions:
In Jeff Bonwick's blog[1], he
2006 Sep 22
1
Linux Dom0 <-> Solaris prepared Volume
Hi all
have been trying (in vain) to get a Solaris b44 DomU (downloaded from
Sun) running on a Linux Xen host.
I followed the howto exactly, and it looked OK when it started booting...
but it never finishes booting.
I adapted the config file to boot with -v (so that I can at least see
something) and this is what I get
===SNIP===
root@Xen-VT02:/export/xc/xvm/solaris-b44# xm create solaris-b44-64.py -c
Using config
2010 Jan 18
18
Is ZFS internal reservation excessive?
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. That seems a bit
excessive to me...
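A quick back-of-the-envelope check of that figure, using the max(32 MB, 1/64 of capacity) rule described above:

# 1/64 of a 2 TB pool clearly dominates the 32 MB floor
echo $((2 * 1024 / 64))   # -> 32, i.e. about 32 GB reserved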
2014 Dec 18
1
Virtual machine removal through command line.
Hi,
Until today, I hadn't found a way to cleanly remove a KVM virtual machine
from the command line on CentOS 6 or 7! I had to run 'systemctl restart
libvirtd' or 'service libvirtd restart'.
After several months (!!!), I found this thread:
https://github.com/pradels/vagrant-libvirt/issues/107
Now, I know how to cleanly remove a KVM virtual machine (with default file
location):
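The command list is truncated in this preview. As a hedged sketch (guest name and image path are placeholders, assuming the default /var/lib/libvirt/images location mentioned above), the usual sequence is:

# Stop the guest, drop its libvirt definition, then delete its disk image
virsh destroy myguest
virsh undefine myguest
rm -f /var/lib/libvirt/images/myguest.qcow2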
2007 Apr 22
1
Metaslab allocation control?
I was wondering if it's planned to put some control over metaslab allocation into the hands of the user. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files allocated on the inner side of the platter while pushing
2007 Jul 10
1
ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation.
I started investigating the problem about 2 weeks ago: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
I found a workaround for now - changing the recordsize - but I want a better solution.
The best solution would be a defragmentation tool, but I can see that it is not easy.
When ZFS pool is fragmented then:
1. spa_sync function is
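As a minimal sketch of the recordsize workaround mentioned above (dataset name and value are placeholders; the change only affects newly written blocks):

# Match recordsize to the application's I/O size to limit fragmentation impact
zfs set recordsize=16K tank/data
zfs get recordsize tank/data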
2008 Apr 29
4
Finding Pool ID
Folks,
How can I find out the zpool ID without using zpool import? zpool list
and zpool status do not have an option for this as of Solaris 10U5. Any back door
to grab this property would be helpful.
Thank you
Ajay
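Not part of the original thread, but one back door worth sketching: zdb prints the cached pool configuration, including the pool GUID, for pools imported on the host (the pool name below is a placeholder).

# Look for the pool_guid field in the cached configuration
zdb -C mypool
# Or dump the cached configuration of every pool
zdb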
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives and mount them mirrored, in a zpool, so that I don't have to
worry about running out of room. (I know, pretty typical I guess.)
My problem is that
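The question is cut off above. A minimal sketch of the usual layout for this use case (device names are placeholders): start with one mirrored pair and grow the pool by striping in more pairs as drives are added.

# One mirrored pair to start...
zpool create tank mirror c1t0d0 c1t1d0
# ...then grow the pool later with another mirror
zpool add tank mirror c2t0d0 c2t1d0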
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
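A hedged way to check the aliasing directly is to look at what each pool was actually built as (pool names are placeholders); if raidz really is an alias for raidz1, the vdev type shown should be identical for both pools.

# The vdev type printed in the config (raidz1 vs raidz2) shows what was created
zpool status pool_a
zpool status pool_b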
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk, since raidz1 is like raid5. So does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be.)
A RaidZ-2 pool with one missing disk offers the same failure resilience as a healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
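The sparse-file detail is cut off above. A hedged sketch of that trick (sizes, paths and device names are placeholders): build the raidz2 with a sparse file standing in for the missing disk, then take it offline so the pool runs degraded, with single-parity protection, until a real fifth disk arrives.

# Sparse file at least as large as the real disks
mkfile -n 1500g /var/tmp/fakedisk
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
# Run degraded until the fifth disk shows up, then zpool replace it
zpool offline tank /var/tmp/fakedisk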
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
raid5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. I should also upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
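As a sketch of the command lines being asked for (pool and device names are placeholders; zpool add will warn if the new vdev's redundancy does not match the existing one):

# Stripe a second 7-disk raidz1 alongside the existing one
zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
# Optionally bring the on-disk format up to the current version afterwards
zpool upgrade tank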
2012 Dec 20
3
Pool performance when nearly full
Hi
I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).
Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat" ?
zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat:
used
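The iostat figures are truncated above. As a sketch of how the two views are usually compared side by side (the pool name is a placeholder): zpool list reports raw pool capacity, which for raidz pools includes parity, while zfs list reports space usable by datasets, so the two are not expected to agree exactly.

zpool list tank
zfs list -o space tank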
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete/back up somewhere else)
> root@FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root@FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
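The quoted commands show a lofi device standing in for the fourth disk. A hedged sketch of the full sequence behind that idea (the backing-file size and the replacement disk name are placeholders, not from the post):

# Sparse backing file + lofi device impersonating the missing disk
mkfile -n 1024g /var/tmp/fake
lofiadm -a /var/tmp/fake            # prints /dev/lofi/1
zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
# Run degraded; once the data disk is emptied, swap it in for the fake one
zpool offline ambry /dev/lofi/1
zpool replace ambry /dev/lofi/1 c5t2d0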
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20 minutes, and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
When the drive cage failed it hung the server and the box rebooted.
After it rebooted, the entire pool is gone and in the state
below. I had only written a few