Displaying 20 results from an estimated 20000 matches similar to: "ZFS auto-snapshot in zone"
2010 Aug 03
1
snapshot space - miscalculation?
zfs get all claims that I have 523G used by snapshots.
I want to get rid of it,
but when I look at the space used by each snapshot I can't find the one that could occupy so much space
daten/backups used 959G -
daten/backups
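The USED value of an individual snapshot only counts blocks unique to that snapshot, while usedbysnapshots also includes blocks shared by several snapshots, so the per-snapshot figures can add up to far less than 523G. A hedged sketch of how to break the space down (dataset name taken from the post):
# where is the 959G charged: data, snapshots, children, refreservation?
zfs list -o space daten/backups
# unique space per snapshot, smallest first
zfs list -r -t snapshot -o name,used,referenced -s used daten/backups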
2010 Jan 23
0
zfs destroy snapshot: dataset already exists?
Hi,
I recently upgraded from 2009.06 to b131 (mainly to get dedup
support). The upgrade to b131 went fairly smoothly, but then I ran
into an issue trying to get the old datasets snapshotted and
send/recv'd to dedup the existing data. Here are the steps I ran:
zfs snapshot -r data/media@prereplicate
zfs create -o mountpoint=none data/media.new
zfs send -R data/media@
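The excerpt cuts off mid-command; a hedged sketch of the overall sequence being described, where the receive flags are an assumption about one common way to run it rather than the poster's exact invocation:
zfs snapshot -r data/media@prereplicate
zfs create -o mountpoint=none data/media.new
# -d recreates the source hierarchy under the target, -u keeps it unmounted, -v shows progress
zfs send -R data/media@prereplicate | zfs recv -duv data/media.new
The "dataset already exists" error on receive usually points at a target dataset or snapshot whose name collides with something in the stream, so listing data/media.new recursively before retrying is a reasonable first check.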
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
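The gap is usually space referenced by two or more snapshots, which is charged to USEDSNAP as a whole but not to any single snapshot's USED value. On newer ZFS releases a dry-run destroy over a snapshot range reports how much would really be reclaimed; a hedged sketch with placeholder snapshot names (the range syntax and the -n/-v flags are not available on all builds of that era):
# -n = dry run, -v = report the space that would be reclaimed
zfs destroy -nv rpool/export/home/matt@snapA%snapB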
2020 May 18
1
Shadow Copy2 & zfs Snapshots
Hi there
I'm having some troubles with Shadow Copy2 & zfs Snapshots. I have
hourly and daily snapshots. If I use the following settings it works
(but omits daily snapshots):
vfs objects = shadow_copy2
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = easysnap-hourly_%Y-%m-%d-%H-%M-UTC
However, when I try to use the BRE with the prefix,
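One hedged direction for matching both the hourly and the daily prefix is the shadow:snapprefix / shadow:delimiter pair provided by newer shadow_copy2 versions; the exact BRE below is illustrative and untested against these snapshot names:
vfs objects = shadow_copy2
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
# BRE matching either easysnap-hourly or easysnap-daily as the prefix
shadow:snapprefix = ^easysnap-\(hourly\)\{0,1\}\(daily\)\{0,1\}$
shadow:delimiter = _
# the time format begins with the delimiter, as in the vfs_shadow_copy2 examples
shadow:format = _%Y-%m-%d-%H-%M-UTC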
2008 Dec 28
0
Snapshot manager service dependency error
I got this in the file system-filesystem-zfs-auto-snapshot:daily.log:
...
[ Dez 28 23:13:44 Enabled. ]
[ Dez 28 23:13:53 Executing start method ("/lib/svc/method/zfs-auto-snapshot start"). ]
Checking for non-recursive missed // snapshots rpool
Checking for recursive missed // snapshots home rpool/firefox rpool/ROOT
Last snapshot for svc:/system/filesystem/zfs/auto-snapshot:daily taken
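Since the subject points at a dependency problem, the generic SMF commands for inspecting it would be along these lines (FMRI taken from the log; this is standard SMF usage, not a confirmed fix for this report):
# explain why the service is not running and name any unsatisfied dependency
svcs -xv svc:/system/filesystem/zfs/auto-snapshot:daily
# list its dependencies and their current states
svcs -d svc:/system/filesystem/zfs/auto-snapshot:daily
# once the cause is fixed, clear a maintenance state
svcadm clear svc:/system/filesystem/zfs/auto-snapshot:daily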
2008 Jun 26
2
Oops: zfs-auto-snapshot with at scheduling
Hi all,
I'll attach a new version of zfs-auto-snapshot, including some more
improvements, and probably some new bugs. Seriously, I have
tested it, but certainly not all functionality, so please let me know
about any (new) problems you come across.
Excerpt from the change log:
- Added support to schedule using at(1), see
README.zfs-auto-snapshot.txt
- take_snapshot will only run if
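Not from the attached script, but a minimal illustration of what one-off at(1) scheduling of a snapshot looks like in general (pool and snapshot names are made up, and the snapshot name is kept static to sidestep quoting issues):
# queue a single recursive snapshot to be taken an hour from now
echo 'zfs snapshot -r tank/home@manual-before-upgrade' | at now + 1 hour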
2006 Jun 22
1
zfs snapshot restarts scrubbing?
Hi,
yesterday I implemented a simple hourly snapshot on my filesystems. I also
regularly initiate a manual "zpool scrub" on all my pools. Usually the
scrubbing will run for about 3 hours.
But after enabling hourly snapshots I noticed that zfs scrub is always
restarted if a new snapshot is created - so basically it will never have the
chance to finish:
# zpool scrub scratch
# zpool
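For what it's worth, the usual way to watch whether the scrub keeps being restarted is the pool status output; later OpenSolaris builds changed scrubbing so that taking a snapshot no longer restarts it (pool name from the post):
# the scrub line shows progress and the time the scrub started
zpool status scratch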
2009 Aug 19
0
zfs+nfs: scary nfs log entries?
I have a zfs dataset that I use for network home directories. The box is running 2008.11 with the auto-snapshot service enabled. To help debug some mysterious file deletion issues, I've enabled nfs logging (all my clients are NFSv3 Linux boxes).
I keep seeing lines like this in the nfslog:
Wed Aug 19 10:20:48 2009 0 host.name.domain.com 1168
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
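Two cheap checks before suspecting anything worse, as a hedged sketch (the snapshot name is a placeholder, not taken from the post):
# a user hold on the snapshot blocks destruction
zfs holds tank/fs@stubborn-snap
# so does a clone whose origin is this snapshot
zfs list -r -o name,origin tank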
2010 Mar 03
0
[osol-help] ZFS two way replication
Sorry for double-post. This thread was posted separately to
opensolaris-help and zfs-discuss. So I'm replying to both lists.
> I'm wondering what the possibilities of two-way replication are for a
> ZFS storage pool.
Based on all the description you gave, I wouldn't call this two-way
replication. Because two-way replication implies changes are happening at
2017 Jul 13
0
Snapshot auto-delete unmount problem
In case anyone is interested, this issue was caused by turning on brick
multiplexing.
Switching it off made the problem go away...
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063
________________________________________________
On 31 May 2017
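For anyone hitting the same snapshot-unmount failures, brick multiplexing is a cluster-wide option, and the poster's fix amounts to the sketch below (check the effect on brick process count before flipping it on a busy cluster):
# disable brick multiplexing for the whole trusted pool
gluster volume set all cluster.brick-multiplex off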
2009 Jan 16
2
Problem setting quotas on a zfs pool
Solaris 10 5/08
The customer migrated to a new EMC array with a snapshot and did a send and
receive.
He is now trying to set quotas on the zfs file system and getting the
following error.
[root@osprey /] # zfs set quota=800g target/u05
cannot set property for 'target/u05': size is less than current used or
reserved space
[root@osprey /] # zfs list -o
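The error means the requested quota is below what the dataset, its snapshots and its children already consume. A hedged sketch of the usual follow-up (dataset name from the post; refquota and the usedby* properties need a reasonably recent release, which is an assumption here):
# see what is actually charged against target/u05
zfs get used,referenced,usedbysnapshots,usedbychildren target/u05
# either raise the quota above the current usage ...
zfs set quota=1T target/u05
# ... or, where available, cap only the data the filesystem itself references
zfs set refquota=800g target/u05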
2017 May 31
1
Snapshot auto-delete unmount problem
Hi, I am having a problem deleting snapshots; gluster is failing to unmount
them. I am running CentOS 7.3 with gluster-3.10.2-1.
Here is some log output:
[2017-05-31 09:21:39.961371] W [MSGID: 106057]
[glusterd-snapshot-utils.c:410:glusterd_snap_volinfo_find] 0-management:
Snap volume
331ec972f90d494d8a86dd4f69d718b7.glust01-li.run-gluster-snaps-331ec972f90d494d8a86dd4f69d718b7-brick1-b
not found
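A hedged set of follow-up commands for this kind of failure (<snapname> stands for the user-visible snapshot name, which is not the internal volume id shown in the log; deactivating before deleting is a common workaround when the unmount step is what fails, not a confirmed fix):
# show bricks and mount state for each snapshot
gluster snapshot status
# deactivate the offending snapshot, then retry the delete
gluster snapshot deactivate <snapname>
gluster snapshot delete <snapname>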
2007 Oct 08
2
safe zfs-level snapshots with a UFS-on-ZVOL filesystem?
I had some trouble installing a zone on ZFS with S10u4
(bug in the postgres packages) that went away when I used a
ZVOL-backed UFS filesystem
for the zonepath.
I thought I'd push on with the experiment (in the hope Live Upgrade
would be able to upgrade such a zone).
It's a bit unwieldy, but everything worked reasonably well -
performance isn't much worse than straight
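A minimal sketch of the ZVOL-backed UFS arrangement being described (names and sizes are made up; configuring the zone itself with zonecfg is outside the sketch):
# create a 10 GB volume and put UFS on it
zfs create -V 10g rpool/zvol/testzone
newfs /dev/zvol/rdsk/rpool/zvol/testzone
# mount it where the zonepath will live
mkdir -p /zones/testzone
mount -F ufs /dev/zvol/dsk/rpool/zvol/testzone /zones/testzone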
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, and trying to
see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
zfs send -R <old fs>@snap | zfs recv -d <new fs>
However, according to the man page,
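A hedged sketch of that reprocessing pipe, with dedup (and optionally compression) enabled on the destination first so the received blocks are actually written deduplicated (dataset names are placeholders):
# properties only affect newly written data, so set them before receiving
zfs set dedup=on tank/new
zfs set compression=on tank/new
# replicate everything under the old filesystem into the new parent
zfs snapshot -r tank/old@reprocess
zfs send -R tank/old@reprocess | zfs recv -d tank/new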
2017 Apr 28
1
Re: Live migration with non-shared ZFS volume
Hi Martin,
In the meantime, I've found a solution which I consider at least acceptable:
1. create zfs snapshot of domain disk (/dev/zstore/test-volume)
2. save original XML domain definition
3. create snapshot in libvirt like this:
virsh snapshot-create --xmlfile snap.xml --disk-only --no-metadata
test-domain
snap.xml:
<domainsnapshot>
<disks>
<disk
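The snap.xml above is cut off; a hedged reconstruction of what a minimal disk-only external snapshot definition typically looks like (the disk name 'vda' and the overlay path are assumptions, not taken from the post):
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/var/lib/libvirt/images/test-domain-snap.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>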
2010 May 05
0
zfs destroy -f and dataset is busy?
We have a pair of OpenSolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
#zfs destroy -f z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried:
Unable to destroy numerous datasets even with a -f option.
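"dataset is busy" on a snapshot usually means something still references it; a hedged checklist for this case (snapshot name from the post; the release step only applies if a hold is actually listed):
# user holds are one common cause of this message
zfs holds z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
# release any hold tag reported above, then retry the destroy
zfs release <tag> z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
# a snapshot mounted under .zfs/snapshot (e.g. traversed by a client) can also pin it
df -h | grep 'zfs-auto-snap:daily-2010-04-09'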
2009 Jan 02
3
ZFS iSCSI (For VirtualBox target) and SMB
Hey all,
I'm setting up a ZFS-based fileserver to use both as a shared network drive and separately to have an iSCSI target to be used as the "Hard disk" of a Windows-based VM running on another machine.
I've built the machine, installed the OS, created the RAIDZ pool and now have a couple of questions (I'm pretty much new to Solaris by the way but have been
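A hedged sketch of the two exports being described, using the legacy shareiscsi property that shipped with OpenSolaris of that era (names and sizes are made up; a COMSTAR-based setup would use sbdadm/itadm instead):
# block volume for the Windows VM, exported as an iSCSI target
zfs create -V 60g tank/vboxdisk
zfs set shareiscsi=on tank/vboxdisk
# ordinary filesystem exported over SMB for the shared network drive
zfs create tank/share
zfs set sharesmb=on tank/share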
2017 Jun 22
4
recovering from deleted snapshot
I have an automatic process set up. It's still pretty new and obviously
in need of better error handling because now I find myself in a bad state.
I run snapshot-create-as across all my vms, then do zfs replication to
the target backup system, then blockcommit everything.
virsh snapshot-create-as --domain $vm snap --diskspec
$DISK,file=$VMPREFIX/"$vm"-snap.qcow2 --disk-only --atomic
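For context, a hedged sketch of the commit step that normally follows once the backup has been shipped (the variables mirror the ones in the post; --pivot, which switches the domain back onto the base image, is an assumption about the intended workflow):
# merge the overlay back into the base image and pivot the domain onto it
virsh blockcommit "$vm" "$DISK" --active --pivot --verbose
# the leftover overlay can then be removed
rm -f "$VMPREFIX/${vm}-snap.qcow2"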
2008 Oct 09
2
ZFS Replication Question
All;
I have a question about ZFS and how it protects data integrity in the
context of a replication scenario.
First, ZFS is designed such that all data on disk is in a consistent
state. Likewise, all data in a ZFS snapshot on disk is in a consistent
state. Further, ZFS, by virtue of its 256-bit checksums, is capable of
finding and repairing data corruption should it occur.
In the case of a
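The excerpt stops before the scenario itself, but the usual building block for this kind of replication is snapshot-based incremental send/receive; a minimal sketch with made-up host and dataset names:
# initial full copy to the replica
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh backuphost zfs recv -F pool2/data
# later rounds ship only the changes since the last replicated snapshot
zfs snapshot tank/data@rep2
zfs send -i tank/data@rep1 tank/data@rep2 | ssh backuphost zfs recv pool2/data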