similar to: Upgrade a degraded pool

Displaying 20 results from an estimated 7000 matches similar to: "Upgrade a degraded pool"

2010 Oct 02
3
out of HDD space - zfs degraded
Overnight I was running a zfs send | zfs receive (both within the same system / zpool). The system ran out of space, a drive went offline, and the system is degraded. This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 23:43:48 EDT 2010. The following logs are also available at http://www.langille.org/tmp/zfs-space.txt <- no line wrapping This is what was running: #
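For reference, the send/receive pattern this thread describes looks roughly like the sketch below; the pool, dataset, and snapshot names are placeholders, and checking free space first is prudent when the source and destination share one pool.
# zpool list storage                                  # confirm there is room before copying within the pool
# zfs snapshot storage/data@migrate                   # snapshot to be sent (hypothetical names)
# zfs send storage/data@migrate | zfs receive storage/data-copy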
2012 Nov 27
6
How to clean up /
Hello. I recently upgraded to 9.1-RC3, everything went fine, however the / partition is about to get full. I'm really new to FreeBSD so I don't know what files can be deleted safely. # find -x / -size +10000 -exec du -h {} \; 16M /boot/kernel/kernel 60M /boot/kernel/kernel.symbols 6.7M /boot/kernel/if_ath.ko.symbols 6.4M /boot/kernel/vxge.ko.symbols 9.4M
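A commonly suggested starting point (not taken from this thread) is to reclaim the kernel debug symbols, which are only needed for kernel debugging; the paths assume a stock FreeBSD 9.x layout.
# du -ch /boot/kernel/*.symbols | tail -1             # show how much space the debug symbols use
# rm /boot/kernel/*.symbols                           # safe to remove unless you debug kernel crashes
# rm -rf /boot/kernel.old                             # old kernel left behind by the upgrade, if present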
2013 Jan 24
2
RFC: Suggesting ZFS "best practices" in FreeBSD
>> #1. Map the physical drive slots to how they show up in FBSD so if a >> disk is removed and the machine is rebooted all the disks after that >> removed one do not have an 'off by one error'. i.e. if you have >> ada0-ada14 and remove ada8 then reboot - normally FBSD skips that >> missing ada8 drive and the next drive (that used to be ada9) is now
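One way to sidestep the renumbering problem described above is to build the pool on GPT labels rather than raw adaX names; a rough sketch, assuming each disk already carries a GPT scheme, with label names chosen to match the physical slots.
# gpart add -t freebsd-zfs -l slot08 ada8             # label the partition after its physical slot
# zpool create tank raidz /dev/gpt/slot06 /dev/gpt/slot07 /dev/gpt/slot08
# zpool status tank                                   # members keep their names even if adaX numbering shifts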
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20min and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The new drive cage started to fail, it hung the server and the box rebooted. After it rebooted, the entire pool is gone and in the state below. I had only written a few
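As a first diagnostic step (a general sketch, not advice from the thread), zpool import with no arguments lists the pools it can still assemble from the attached devices; -f forces the import, and on newer ZFS versions -F attempts recovery by discarding the last few transactions. The pool name is a placeholder.
# zpool import                                        # list pools visible on the attached devices
# zpool import -f tank                                # force-import a pool last used by another host
# zpool import -F tank                                # recovery mode, where the ZFS version supports it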
2012 Aug 16
2
Geom label lost after expanding partition
I have a GPT-formatted disk where I recently expanded the size of a partition. I used "gpart resize -i 6 ada1" first to expand the partition to use the remaining free space and then growfs to modify the FFS file system to use the full partition. This was all done in single-user mode, of course, but when I entered "exit" to bring the system up, it failed to mount /usr. This was
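The sequence from the post, plus re-creating the label afterwards, looks roughly like this; whether the lost label was a GPT partition label or a UFS volume label determines the last step, and the index and label name are illustrative.
# gpart resize -i 6 ada1                              # grow partition 6 into the free space
# growfs /dev/ada1p6                                  # grow the FFS file system to fill it
# gpart modify -i 6 -l usr ada1                       # re-set a GPT partition label, or
# tunefs -L usr /dev/ada1p6                           # re-set a UFS volume label (shows up under /dev/ufs/)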
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5 TB drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
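The ZFS equivalent of RAID 10 (a sketch, not the thread's conclusion) is a pool built from mirror vdevs; ZFS stripes writes across the mirrors automatically, so there is no separate stripe step. Device names are placeholders.
# zpool create tank mirror da0 da4 mirror da1 da5 mirror da2 da6 mirror da3 da7
# zpool status tank                                   # four two-way mirrors, striped together by the pool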
2009 Dec 23
14
Moving a pool from FreeBSD 8.0 to opensolaris
I was wondering what the best method of moving a pool from FreeBSD 8.0 to OpenSolaris is. When I originally built my system, it was using hardware which wouldn't work in OpenSolaris, but I'm about to do an upgrade so I should be able to use OpenSolaris when I'm done. My current system uses a Highpoint RocketRaid 2340. It has 12 1TB hard drives an Intel Core 2 Quad
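The usual migration path (assuming the OpenSolaris side supports the pool's on-disk version, which zpool upgrade -v will show) is a clean export followed by an import; the pool name is a placeholder.
# zpool export tank                                   # on the FreeBSD 8.0 system
# zpool import                                        # on OpenSolaris: list importable pools
# zpool import tank                                   # add -f only if the pool was not cleanly exported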
2013 Jun 13
1
zpool labelclear destroys GPT data
When I use zpool labelclear, it wipes the whole disk including the GPT data. So the whole disk is empty and I need to create the GPT partitions again. Is this supposed to work like this? The man page suggests that it only wipes the ZFS metadata. zpool labelclear [-f] device Removes ZFS label information from the specified device. The device must not be part of an active pool
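One hedged workaround, if the vdev is a GPT partition rather than the whole disk, is to point labelclear at that partition so only its ZFS labels are cleared; the device names are illustrative.
# zpool labelclear -f /dev/ada1p2                     # clear ZFS labels on the partition only
# gpart show ada1                                     # confirm the partition table survived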
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing so I ran, zpool offline home/c0t6d0 zpool replace home c0t6d0 c8t1d0 and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point is the vdev in question now has
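A common follow-up check (not necessarily the answer given in the thread) is whether the old device is still hanging off a 'replacing' vdev, in which case it can be detached explicitly; device names are taken from the post.
# zpool status -v home                                # look for a lingering 'replacing' or OFFLINE entry
# zpool detach home c0t6d0                            # drop the old drive if it was not detached automatically
# zpool clear home                                    # reset error counters once the vdev reports healthy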
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC attached SE-3511 disk arrays (some behind a 6920 DSP engine). There are many LUNs, all about 500 GB and mirrored via ZFS. The LUNs
2013 Mar 22
1
Virtio and GEOM labels
I'm running FreeBSD 9-STABLE as a guest under RHEL 6.4 KVM virtualisation. I have networking and storage in the FreeBSD guest using the Virtio drivers (with the virtual disk set to "Virtio" in the definition on the host). Everything is working nicely: I have a vtnet network adapter and see vtbd devices for my virtual disks in FreeBSD. Performance is much better compared with an
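For context, generic GEOM labels can be written to a virtio disk in the usual way, giving the pool a stable name that does not depend on vtbdX numbering; the label and device names are hypothetical.
# glabel label data0 /dev/vtbd1                       # write a generic GEOM label to the virtio disk
# zpool create tank /dev/label/data0                  # reference the label rather than the raw vtbd device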
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a zfs pool with different raid levels but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool and I
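For reference, zpool deliberately warns when redundancy levels are mixed and requires -f to proceed; a minimal sketch of the layout described, with placeholder device names.
# zpool create tank raidz da0 da1 da2                 # the 3x 750 GB raidz vdev
# zpool add tank mirror da3 da4                       # refused: mismatched replication level
# zpool add -f tank mirror da3 da4                    # forced: the 320 GB mirror joins the same pool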
2008 Sep 21
3
UNEXPECTED SOFT UPDATE INCONSISTENCY; RUN fsck MANUALLY
Sep 21 08:57:54 belle fsck: /dev/ad4s1d: 1 DUP I=190 Sep 21 08:57:54 belle fsck: /dev/ad4s1d: UNEXPECTED SOFT UPDATE INCONSISTENCY; RUN fsck MANUALLY. Ok, so I ran fsck manually (even with -y), yet it refuses to clear/fix whatever to the questions posed as fsck runs. What does this all mean? Thanks, -Clint
2017 Apr 14
2
ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Hi, I'm new here so apologies if this has been answered before. I have a box that uses ZFS for everything (Ubuntu 17.04) and I want to create a libvirt pool on that. My ZFS pool is named "big". So I do: > zfs create big/zpool > virsh pool-define-as --name zpool --source-name big/zpool --type zfs > virsh pool-start zpool > virsh pool-autostart zpool > virsh pool-list >
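Once the pool is defined as above, volumes are normally created as zvols through virsh; the volume name and size below are illustrative.
# virsh pool-refresh zpool
# virsh vol-create-as zpool vm1-disk0 20G             # should appear as the zvol big/zpool/vm1-disk0
# virsh vol-list zpool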
2017 Apr 24
1
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thank you for your reply. I have managed to create a virtual machine on my ZFS filesystem using virt-install :-) It seems to me that my version of libvirt (Ubuntu 17.04) has problems enumerating the devices when "virsh vol-list" is used. The volumes are available for virt-install but not through virsh or virt-manager. As to when the volumes disappear in virsh vol-list - I have no idea. I'm not
2009 Jan 25
2
Unable to destroy a pool
# zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT jira-app-zpool 272G 330K 272G 0% ONLINE - The following command hangs forever. If I reboot the box, zpool list shows it as ONLINE, as in the output above. # zpool destroy -f jira-app-zpool How can I get rid of this pool and any reference to it? bash-3.00# zpool status pool: jira-app-zpool state: UNAVAIL
2008 Apr 29
4
Finding Pool ID
Folks, How can I find out the zpool id without using zpool import? zpool list and zpool status do not have such an option as of Solaris 10U5. Any back door to grab this property will be helpful. Thank you Ajay
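One back door that avoids zpool import (a sketch, with a hypothetical device path) is to read the on-disk label with zdb, which prints the pool GUID among other fields.
# zdb -l /dev/dsk/c0t0d0s0 | grep pool_guid           # read the GUID straight from the vdev label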
2007 Aug 14
2
restore lost pool after vtoc re-label
Hi all, I've been using a SAN LUN as the sole member of a zpool with one additional ZFS filesystem. This is a flat SAN fabric, so this LUN was available to other systems on the fabric, and one of them came up with "wrong magic number" for several drives and, as best I can tell, the VTOC for my zpool LUN was overwritten on that host via format labeling to correct the error.
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it. (Sun Ultra 20 M2) zpool name is rpool. Then I have a 2nd hard drive in the box that I am trying to recover the ZFS data from (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root at hyperion:~# zpool status
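The usual way around the name clash (a general sketch; the numeric id is a placeholder) is to import the second pool by id and rename it in the same step, optionally under an altroot so its mountpoints do not collide with the running rpool.
# zpool import                                        # lists each importable pool with its numeric id
# zpool import -f -R /mnt 1234567890123456789 rpool2  # import by id, renamed to rpool2, mounted under /mnt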
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of same-sized disks). I thought one of the disks might have been to blame, so I tried swapping it out