Displaying 20 results from an estimated 40000 matches similar to: "Backing up a ZFS pool"
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false?
B) If I buy larger drives and resilver, does defrag happen?
C) Does zfs send | zfs receive defragment the data?
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file
zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
INCREMENTAL backup to a file
zfs snapshot -r rpool@090822
zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822
As I understand it, the latter gives a file with the changes between 0908 and
090822. Is this correct?
How do I restore those files? I know
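A minimal restore sketch, assuming the stream files were saved as above (a root pool would first need booting from media and recreating the pool): receive the full stream, then the incremental on top of it.
zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908
zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.090822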
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 on UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident in it. ZFS doesn't have an exact replacement, so I need to find a best practice to replace it.
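A rough sketch of the usual replacement, assuming the external drive is reachable at /backup (path hypothetical):
zfs snapshot -r rpool@full
zfs send -Rv rpool@full > /backup/rpool.full    # full recursive stream of the root pool
# to restore: boot from install media, recreate rpool, then
zfs receive -Fd rpool < /backup/rpool.full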
2011 Aug 10
9
zfs destroy of a snapshot takes hours
Hi,
I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G.
Could you please help me resolve this issue and explain why zfs destroy takes this much time?
Taking a snapshot completes within a few seconds.
I have also tried removing an older snapshot, but the problem is the same.
===========================
I am using:
Release: OpenSolaris
2010 Apr 27
7
Mapping inode numbers to file names
Let's suppose you rename a file or directory.
/tank/widgets/a/rel2049_773.13-4/somefile.txt
Becomes
/tank/widgets/b/foogoo_release_1.9/README
Let's suppose you are now working on widget B, and you want to look at a
past zfs snapshot of README, but you don't remember where it came from.
That is, you don't know the previous name or location where that
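One hedged approach: a rename within the same filesystem preserves the inode number, so the old name can be recovered by searching a snapshot for that inode (snapshot name and inode number illustrative):
ls -i /tank/widgets/b/foogoo_release_1.9/README    # prints the inode, e.g. 12345
find /tank/widgets/.zfs/snapshot/oldsnap -inum 12345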
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
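A worked example of that arithmetic: a 7-disk raidz3 has M = 4 data disks, so each 128K block is split into 128K / 4 = 32K per disk, and a resilver ends up issuing many small scattered I/Os rather than a few large sequential ones.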
2012 Oct 05
24
Building an On-Site and Off-Site ZFS server, replication question
Good morning.
I am in the process of planning a system which will have 2 ZFS servers, one
on site, one off site. The on site server will be used by workstations and
servers in house, and most of that will stay in house. There will, however,
be data I want backed up somewhere else, which is where the offsite server
comes in... This server will be sitting in a Data Center and will have some
storage
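For the replication itself, a commonly used sketch is periodic incremental send/receive over ssh (hostname, pool, and snapshot names illustrative):
zfs snapshot -r tank@rep1
zfs send -R tank@rep1 | ssh offsite zfs receive -Fd backup               # initial full copy
zfs snapshot -r tank@rep2
zfs send -R -I tank@rep1 tank@rep2 | ssh offsite zfs receive -Fd backup  # changes only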
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to its vulnerability
to even a single bit error, its lack of granularity, and other reasons.
However... there is an attraction to "zfs send" as an augmentation to the
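One hedged option on later OpenSolaris builds: zstreamdump reads a saved stream from stdin and verifies its per-record checksums without receiving anything (availability depends on the build):
zstreamdump < /net/remote/rpool/snaps/rpool.0908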
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5-disk (500GB each) Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to occur randomly on all 5 disks, so it doesn't look like a disk-failure situation.
Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors.
Since I hadn't physically touched the machine, it seems a
2010 Dec 18
10
a single nfs file system shared out twice with different permissions
I am trying to configure a system where I have two different NFS shares
which point to the same directory. The idea is if you come in via one path,
you will have read-only access and can't delete any files; if you come in
the 2nd path, then you will have read/write access.
For example, create the read/write nfs share:
zfs create tank/snapshots
zfs set sharenfs=on tank/snapshots
root
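One way to get the second, read-only path is a loopback mount of the same directory, shared separately; a sketch (paths illustrative):
mkdir /export/snapshots-ro
mount -F lofs -o ro /tank/snapshots /export/snapshots-ro    # read-only view of the same data
share -F nfs -o ro /export/snapshots-ro                     # exported read-only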
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation of "static" data can currently be combated
only by zfs send-ing existing
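For concreteness, that send/receive rewrite might look like this sketch, which copies the data out sequentially and then swaps the datasets (names illustrative):
zfs snapshot tank/data@defrag
zfs send tank/data@defrag | zfs receive tank/data.new    # rewritten with fresh, largely sequential allocations
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data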
2010 Oct 13
40
Running on Dell hardware?
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with what
degree of success, and in what configuration?
The failure seems to be related to the PERC 6/i. For some period around the
time
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server, and the space used is much more than what I'm uploading:
Documents = 147MB
Videos = 11G
Software= 1.4G
By my calculations, that equals about 12.5G, yet zpool list is showing 21G as being allocated:
NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
dpool  27.2T  21.2G  27.2T   0%  1.00x  ONLINE   -
It doesn't look like
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi,
I want to move all the ZFS filesystems from one pool to another, but I don't want
to "gain" an extra level in the folder structure on the target pool.
On the source zpool I used zfs snapshot -r tank@moveTank on the root fs,
and I got a new snapshot in all sub-filesystems, as expected.
Now I want to use zfs send -R tank@moveTank | zfs recv targetTank/...,
which would place all zfs fs
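The -d option to zfs receive may be what is wanted here: it strips the sent pool name and substitutes the target pool, so tank/fs lands at targetTank/fs rather than targetTank/tank/fs. A sketch:
zfs send -R tank@moveTank | zfs receive -Fd targetTank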
2012 Nov 07
45
Dedicated server running ESXi with no RAID card, ZFS for storage?
Morning all...
I have a dedicated server in a data center in Germany; it has two 3TB
drives, but only software RAID. I have got them to install VMware ESXi, and
so far everything is going ok... I have the 2 drives as standard data
stores...
But I am paranoid... So I installed Nexenta as a VM, gave it a small disk
to boot off and two 1TB disks on separate physical drives... I have created a
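Presumably those two disks end up in a ZFS mirror inside the Nexenta VM, along the lines of this sketch (device names illustrative):
zpool create tank mirror c1t1d0 c2t1d0    # one vdev mirrored across the two physical drives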
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I
performed some tests:
Using Solaris 10, fully upgraded (zpool version 15 is the latest, which does not
have the log device removal introduced in zpool version 19): if you lose an
unmirrored log device in any way possible, the OS will crash, and the whole
zpool is permanently gone, even after reboots.
Using OpenSolaris,
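For what it's worth, on pool version 19 or later an unmirrored slog can at least be removed gracefully before it fails; a sketch (device name illustrative):
zpool remove tank c1t2d0    # detaches the log device from the pool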
2009 Nov 12
8
"zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09
I built a fileserver on Solaris 10u6 (10/08), intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'.
However, the new server is too new for 10u6 (10/08) and requires a later
version of Solaris; presently available is 10u8 (10/09).
Is it crazy for me to try the send/receive with these two different versions
of OSes?
Is it possible the
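The usual expectation is that a newer zfs receive accepts a stream from an older zfs send, not the other way around. A quick way to compare what each side supports (these just list versions, changing nothing):
zpool upgrade -v    # pool versions this OS supports
zfs upgrade -v      # filesystem versions this OS supports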
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b), and I am presenting an
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
Thanks,
Bruin
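Short of mirroring two LUNs at the client, ditto blocks can be requested per dataset so client-side ZFS has a second copy to repair from; a sketch (pool and dataset names illustrative):
zfs set copies=2 clientpool/data    # keep two copies of each newly written block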
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I
can't get send/recv to work over ssh. I just built a new media server and
I'd like to move a few filesystems from my old server to my new one, but
for some reason I keep getting strange errors...
At first I'd see something like this:
pfexec: can't get real path of
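For comparison, a form that often works once privileges are sorted out on both ends, assuming the remote user has rights to zfs receive (hostnames and dataset names illustrative):
pfexec zfs send mypool/media@move | ssh newserver pfexec zfs receive -F mediapool/media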