similar to: Failure to zfs destroy - after interrupting zfs receive

Displaying 20 results from an estimated 2000 matches similar to: "Failure to zfs destroy - after interrupting zfs receive"

2010 Aug 03
1
snapshot space - miscalculation?
zfs get all claims that I have 523G used by snapshots. I want to get rid of it, but when I look at the space used by each snapshot I can't find the one that could occupy so much space. daten/backups used 959G - daten/backups
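A point worth checking here: the "used" value of an individual snapshot only counts blocks unique to that snapshot, while space shared by several snapshots is charged to none of them, so the per-snapshot numbers can add up to far less than the dataset-level figure. A minimal sketch for verifying this, using the dataset name from the post:

    # total space held by all snapshots of the dataset
    zfs get usedbysnapshots daten/backups
    # per-snapshot breakdown (only space unique to each snapshot), sorted by size
    zfs list -r -t snapshot -o name,used,refer -s used daten/backups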
2015 Jan 07
0
rsync splits filenames, creates special characters where none are, weird permissions
On Wed 07 Jan 2015, Lenz Weber wrote: > Where the local destination /data/snapshots is an NFS volume mounted with the flags > (rw,noatime,addr=192.168.1.XX) > and the source is a symlink to a zfs snapshot - that looks like this: > /var/backups/mail -> /tank/mail/.zfs/snapshot/zfs-auto-snap_hourly-2015-01-07-1417 Why not skip the NFS part and run rsync to the destination
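As a sketch of that suggestion (paths and host names are the ones quoted in the thread; the hourly.0 destination path is an assumption), the rsync could be run on the machine that owns /data/snapshots, pulling from the source over ssh instead of writing through the NFS mount:

    /usr/bin/rsync -a --delete --numeric-ids \
        --rsh="/usr/bin/ssh -i /etc/rsnapshot_ssh_certs/mykey" \
        --link-dest=/data/snapshots/hourly.1/folder/mail/ \
        rsyncbackup@server:/var/backups/mail/. \
        /data/snapshots/hourly.0/folder/mail/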
2018 Apr 04
0
Shadow_copy2 and exposing multiple levels of snapshots
Hello Group, Looking for a little assistance on this, as I have been unsuccessful in getting the shadow:snapprefix, shadow:delimiter and shadow:format options to work as I expect. I have no issue with getting Windows to show me either hourly, daily or weekly, etc. snapshots as previous versions. But I would like to be able to expose all of them. The shadow:snapprefix, etc. seems to be the
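For reference, a minimal smb.conf sketch of those options, assuming zfs-auto-snapshot names such as zfs-auto-snap_hourly-2015-01-07-1417. The share name, path and regex below are assumptions, not taken from the post, and the exact regex dialect and prefix/delimiter/format split accepted by shadow_copy2 may need adjusting to the local snapshot naming:

    [data]
        path = /tank/data
        vfs objects = shadow_copy2
        shadow: snapdir = .zfs/snapshot
        shadow: sort = desc
        ; prefix part of the snapshot name, as a regular expression (illustrative)
        shadow: snapprefix = ^zfs-auto-snap_\(hourly\|daily\|weekly\|monthly\)$
        ; delimiter sits between the prefix and the timestamp
        shadow: delimiter = -20
        ; format covers the rest of the name, including the delimiter
        shadow: format = -%Y-%m-%d-%H%M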
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only things I've changed recently are that I've created and destroyed a snapshot, and I used
2015 Jan 07
3
rsync splits filenames, creates special characters where none are, weird permissions
Hello, I have quite an unusual encoding problem (?). I call rsync with the following parameters: /usr/bin/rsync -a --delete --numeric-ids --delete-excluded \ --rsh="/usr/bin/ssh -o StrictHostKeyChecking=no -i \ /etc/rsnapshot_ssh_certs/mykey" \ --link-dest=/data/snapshots/hourly.1/folder/mail/ \ rsyncbackup at server:/var/backups/mail/. \
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process: hydra# zpool import pool: tank id:
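Since the pool was never cleanly exported on the failed x86 box, the import on the new host will normally complain that the pool was last accessed by another system and needs to be forced; a brief sketch, using the pool name shown in the output:

    zpool import          # list pools found on the attached disks
    zpool import -f tank  # force-import a pool last used by another host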
2012 Jan 08
0
Pool faulted in a bad way
Hello, I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other one is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to add my pool to it. After adding the zfs disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount, and I got the following errors. I hope some expert can help me recover from this error.
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey, On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote: > Donald Murray, P.Eng. wrote: >> >> Hi, >> >> I've got an OpenSolaris 2009.06 box that will reliably panic whenever >> I try to import one of my pools. What's the best practice for >> recovering (before I resort to nuking the pool and
2009 Oct 31
1
Kernel panic on zfs import
Hi, I've got an OpenSolaris 2009.06 box that will reliably panic whenever I try to import one of my pools. What's the best practice for recovering (before I resort to nuking the pool and restoring from backup)? There are two pools on the system: rpool and tank. The rpool seems to be fine, since I can boot from a 2009.06 CD and 'zpool import -f rpool'; I can
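A commonly suggested first step in this situation is to try the import read-only, or with the recovery option, from a live environment new enough to have those flags (the 2009.06 media itself may not; treat the exact options as build-dependent):

    zpool import -o readonly=on -f tank   # read-only import, where supported
    zpool import -F -n tank               # dry run: would discarding the last few transactions make the pool importable?
    zpool import -F -f tank               # recovery-mode import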
2015 Jan 07
1
rsync splits filenames, creates special characters where none are, weird permissions
Hi, On 07.01.2015 at 18:25, Paul Slootman wrote: > On Wed 07 Jan 2015, Lenz Weber wrote: >> Where the local destination /data/snapshots is an NFS volume mounted with the flags >> (rw,noatime,addr=192.168.1.XX) >> and the source is a symlink to a zfs snapshot - that looks like this: >> /var/backups/mail ->
2009 Jan 15
2
zfs drive keeps failing between export and import
I have a zpool that consists of a two-drive mirror. The two times I took the zpool offline, I had to resilver one of the drives (the same drive both times) when I imported it back. All drives in the pool show no read, write, or checksum errors and are new, so I'm looking at a software problem before hardware. Both drives are encrypted geli devices. I tried to reproduce the error with 1GB
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file server on it for learning purposes, and I moved almost all of my data to it. Yesterday, and naturally after no longer having backups of the data on the server, I had a controller failure (SiS 180 (oh, the quality)) and the HDD was considered unplugged. When I noticed a few checksum failures on `zfs status` (including two on
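(The command meant here is presumably zpool status rather than zfs status.) A short sketch of the usual follow-up once the disk is attached again; the pool name is hypothetical, since the post does not give one:

    zpool status -v mypool   # -v lists the files affected by checksum errors
    zpool scrub mypool       # re-verify every block now that the disk is back
    zpool clear mypool       # reset the error counters once the damage is assessed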
2010 Apr 21
2
HELP! zpool corrupted data
Hello, Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd: FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64 mfsbsd# zpool import pool: tank id: 1998957762692994918 state: FAULTED
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now. However, last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy was stalled and the entire system was non-responsive. I let it sit for several hours with no
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo. The problem first started a couple of weeks ago with some corruption on two filesystems in a 11 disk 10tb raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie. I thought that my problems had something to do with de-duplication in 134, so I went about the process of
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi, I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went right, but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help. Now it looks like this: # zpool status pool: tank state: UNAVAIL status:
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it had suffered a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
2010 Sep 30
3
Cannot destroy snapshots: dataset does not exist
Hello, I have a ZFS filesystem (zpool version 26 on Nexenta CP 3.01) which I'd like to roll back, but it's having an existential crisis. Here's what I see: root at bambi:/# zfs rollback bambi/faline/userdocs at AutoD-2010-09-28 cannot rollback to 'bambi/faline/userdocs at AutoD-2010-09-28': more recent snapshots exist use '-r' to
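For the rollback error quoted above, the message itself names the fix; a short sketch using the dataset and snapshot names from the post (note that -r destroys the more recent snapshots it complains about):

    # see which snapshots are newer than the rollback target
    zfs list -r -t snapshot bambi/faline/userdocs
    # roll back, destroying any snapshots newer than the target
    zfs rollback -r bambi/faline/userdocs@AutoD-2010-09-28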
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
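Since a zvol's snapshots cannot be browsed through .zfs the way filesystem snapshots can, one common way to reach the deleted files is to clone the snapshot into a new volume and share that to the Windows box; the volume and snapshot names below are hypothetical:

    # expose the earlier state as a separate, writable volume
    zfs clone tank/iscsi/vol1@earlier tank/iscsi/vol1-recovered
    # or, if losing everything written since the snapshot is acceptable:
    zfs rollback tank/iscsi/vol1@earlier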