similar to: Block Pointer Rewrite status -also, zfs version upgrades

Displaying 20 results from an estimated 20000 matches similar to: "Block Pointer Rewrite status -also, zfs version upgrades"

2019 Jul 01
1
Was, Re: raid 5 install, is ZFS
Speaking of ZFS, got a weird one: we were testing ZFS (ok, it was on Ubuntu, but that shouldn't make a difference, I would think), and I've got a raidz2 zpool, z2. I pulled one drive to simulate a drive failure, and it rebuilt with the hot spare. Then I pushed the drive I'd pulled back in... and it does not look like I've got a hot spare. zpool status shows config: NAME STATE
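One way to put things back (a minimal sketch; the pool name z2 comes from the post, but the device names sdb for the reinserted disk and sdh for the spare are hypothetical) is to resilver the original disk back into its slot and return the spare:

zpool replace z2 sdb     # resilver the reinserted disk back into its original slot
zpool detach z2 sdh      # if the spare does not return to AVAIL on its own, detach it
zpool status z2          # the spare should show under 'spares' as AVAIL again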
2007 Jun 22
1
Implicit storage tiering w/ ZFS
I'm curious if there has been any discussion of or work done toward implementing storage classing within zpools (this would be similar to the Storage Foundation QoSS feature). I've searched the forum and inspected the documentation looking for a means to do this, and haven't found anything, so pardon the post if this is redundant/superfluous. I would imagine this would
2007 Dec 23
11
RAIDZ(2) expansion?
I skimmed the archives and found a thread from July earlier this year about RAIDZ expansion. Not adding more RAIDZ stripes to a pool, but adding more drives to the stripe itself. I'm wondering if an RFE has been submitted for this and if any progress has been made, or is expected? I find myself out of space on my current RAID5 setup and would love to flip over to a ZFS raidz2 solution
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All, over the last couple of weeks, I had to boot from my rpool from various physical machines because some component on my laptop mainboard blew up (you know that burned electronics smell?). I can't retrospectively document all I did, but I am sure I recreated the boot-archive, ran devfsadm -C and deleted /etc/zfs/zpool.cache several times. Now zpool status is referring to a
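A hedged recap of that sequence on OpenSolaris-era systems (assuming the root pool is named rpool):

devfsadm -C                   # remove stale /dev links left over from the old hardware
rm /etc/zfs/zpool.cache       # let ZFS rediscover the devices instead of trusting the cache
bootadm update-archive        # rebuild the boot archive so the next boot picks up the changes
zpool status rpool            # check whether the vdev names now match the real devices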
2009 May 13
2
With RAID-Z2 under load, machine stops responding to local or remote login
Hi world, I have a 10-disk RAID-Z2 system with 4 GB of DDR2 RAM and a 3 GHz Core 2 Duo. It's exporting ~280 filesystems over NFS to about half a dozen machines. Under some loads (in particular, any attempts to rsync between another machine and this one over SSH), the machine's load average sometimes goes insane (27+), and it appears to all be in kernel-land (as nothing in
2007 Apr 18
2
zfs block allocation strategy
Hi, quoting from zfs docs "The SPA allocates blocks in a round-robin fashion from the top-level vdevs. A storage pool with multiple top-level vdevs allows the SPA to use dynamic striping to increase disk bandwidth. Since a new block may be allocated from any of the top-level vdevs, the SPA implements dynamic striping by spreading out writes across all available top-level vdevs" Now,
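To see that behaviour, a minimal sketch (the pool name tank and the c1tXd0 disks are hypothetical) that builds a pool from two top-level mirror vdevs and watches writes being spread across both of them:

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
zpool iostat -v tank 5     # the per-vdev rows show new writes landing on both mirrors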
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc) and still be able to use the full space of drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2012 Jan 06
4
ZFS Upgrade
Dear list, I'm about to upgrade a zpool from version 10 to version 29. I suppose this upgrade will fix several performance issues that are present on version 10; however, inside that pool we have several zfs filesystems, all of them version 1. My first question: is there a performance problem, or any other problem, if you operate a version-29 zpool with version-1 zfs filesystems? Is it better
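For reference, the upgrade itself is a couple of one-way commands (a sketch; the pool name tank is hypothetical):

zpool upgrade -v           # list the pool versions this release supports
zpool upgrade -V 29 tank   # upgrade the pool to version 29 (cannot be undone)
zfs upgrade -r tank        # optionally bring the filesystems up from version 1 as well

As far as I know, version-1 filesystems keep working on a newer pool; the filesystem upgrade is independent and also one-way.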
2008 May 09
1
News on Single Drive RaidZ Expansion?
I cannot recall if it was this (-discuss) or (-code) but a post a few months ago caught my attention. In it someone detailed having worked out the math and algorithms for a flexible expansion scheme for ZFS. Clearly this is very exciting to me, and to most people who use ZFS on purpose. I am wondering if there is currently any work in progress to implement that - or any other method of
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few, apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (to version 27) and the zfs file systems have been upgraded (to version 5). chris at bob:~# zpool
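A quick way to confirm where things stand after such an upgrade (a sketch; the pool name rpool is assumed, not given in the post):

zpool upgrade              # with no arguments, lists pools not yet at the newest version
zpool get version rpool    # should report 27 after the pool upgrade described above
zfs get -r version rpool   # per-filesystem versions, 5 after the zfs upgrade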
2017 Apr 14
2
ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Hi, I'm new here so apologies if this has been answered before. I have a box that uses ZFS for everything (Ubuntu 17.04) and I want to create a libvirt pool on that. My ZFS pool is named "big". So I do: > zfs create big/zpool > virsh pool-define-as --name zpool --source-name big/zpool --type zfs > virsh pool-start zpool > virsh pool-autostart zpool > virsh pool-list >
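Once the pool is defined and started as above, volumes are created through libvirt and show up as zvols under the backing dataset; a minimal sketch with a hypothetical volume name vol1:

virsh vol-create-as zpool vol1 10G    # creates the zvol big/zpool/vol1
virsh vol-list zpool                  # should list vol1
zfs list -t volume -r big/zpool       # the same volume as seen from the ZFS side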
2017 Apr 24
1
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thank you for your reply. I have managed to create a virtual machine on my ZFS filesystem using virt-install :-) It seems to me that my version of libvirt (Ubuntu 17.04) has problems enumerating the devices when "virsh vol-list" is used. The volumes are available for virt-install but not through virsh or virt-manager. As to when the volumes disappear in virsh vol-list - I have no idea. I'm not
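A hedged sketch of the kind of virt-install invocation that can consume such a libvirt pool volume directly (the VM name, sizes, and installer ISO path are all hypothetical):

virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk vol=zpool/vol1,bus=virtio \
  --cdrom /var/lib/libvirt/images/installer.iso \
  --os-variant generic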
2008 Feb 19
1
ZFS and small block random I/O
Hi, We're doing some benchmarking at a customer (using IOzone) and for some specific small block random tests, performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test of an 8GB file size and 8KB record size, with the server physmem'd to 2GB. I noticed a couple of peculiar
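A hedged reconstruction of that kind of run (the thread count and flag choices are assumptions, not taken from the post):

# throughput mode with 8 threads, 8 KB records, 8 GB per file; test 0 (write) plus test 2 (random read/write)
iozone -t 8 -r 8k -s 8g -i 0 -i 2
# on RAIDZ, aligning the dataset recordsize with the 8 KB I/O is a common tuning step (dataset name hypothetical)
zfs set recordsize=8k tank/bench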
2010 Feb 15
3
zfs questions wrt unused blocks
Gents, We want to understand the mechanism of zfs a bit better. Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks? Q: what criteria are there for zfs to start reclaiming blocks? Issue at hand is an LDOM or zone running in a virtual (thin-provisioned) disk on a NFS server and a zpool inside that vdisk. This vdisk tends to grow in size even if the user writes and deletes
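Freed blocks are reused inside the pool, but (at least in that era) nothing tells the thin-provisioned backing store that they are free, which is why the vdisk only grows; a small sketch (pool name tank hypothetical) for watching the space accounting from inside the guest:

zpool list -o name,size,allocated,free tank   # pool-level allocated vs. free space
zfs list -o space -r tank                     # per-dataset breakdown, including snapshot usage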
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi. I'm all set for doing a performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is 1x Quad-core DELL PowerEdge 1950, 2GB RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unfortunately the links to the disks are the bottleneck, so I'm going to use not more than 4 disks, probably.
2007 Jul 04
3
zfs dynamic lun expansion
Hi, I had 2 luns in a zfs mirrored config. I increased the size of both the luns by x gig and offlined/onlined the individual luns in the zpool, also tried an export/import of the zpool, but I am unable to see the increased size... what would I need to do to see the increased size? Or is it not possible yet?
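Later ZFS releases added an autoexpand pool property and zpool online -e for exactly this situation; a sketch with hypothetical pool and device names:

zpool set autoexpand=on tank     # allow the pool to grow when its LUNs grow
zpool online -e tank c2t0d0      # expand onto the new LUN size immediately
zpool list tank                  # the pool size should now reflect the larger LUNs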
2013 Oct 24
4
ZFS on Linux in production?
We are a CentOS shop, and have the lucky, fortunate problem of having ever-increasing amounts of data to manage. EXT3/4 becomes tough to manage when you start climbing, especially when you have to upgrade, so we're contemplating switching to ZFS. As of last spring, it appears that ZFS On Linux http://zfsonlinux.org/ calls itself production ready despite a version number of 0.6.2, and
2009 Feb 12
2
Solaris and zfs versions
We've been experimenting with zfs on OpenSolaris 2008.11. We created a pool in OpenSolaris and filled it with data. Then we wanted to move it to a production Solaris 10 machine (generic_137138_09) so I "zpool exported" in OpenSolaris, moved the storage, and "zpool imported" in Solaris 10. We got: Cannot import 'deadpool': pool is formatted
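The import fails because the pool's on-disk version is newer than what that Solaris 10 release understands, and pools cannot be downgraded in place; a hedged workaround is to recreate a pool on the OpenSolaris side pinned at a version the older system supports and copy the data over (the new pool name, disk layout, and snapshot are hypothetical):

zpool upgrade -v                                       # on the Solaris 10 box: the highest version it can import
zpool create -o version=10 deadpool2 c1t0d0 c1t1d0     # pin the on-disk version at creation time
zfs send -R deadpool/data@snap | zfs recv -d deadpool2 # repopulate the new pool, then export and move it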
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it) I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
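On SPARC the root pool has to live on an SMI-labeled slice rather than an EFI-labeled whole disk, which is a common reason this exact sequence fails to boot; a hedged sketch of the usual procedure, using slice s0 of the devices from the post (the slice choice is an assumption):

zpool attach -f rpool c0t0d0s0 c0t2d0s0    # attach a slice, not the whole disk
zpool status rpool                         # wait for the resilver to complete before rebooting
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0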