similar to: About snapshots or versioned backups

Displaying 20 results from an estimated 10000 matches similar to: "About snapshots or versioned backups"

2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you have several zfs filesystems under one top-level directory like this: rpool, rpool/ROOT/osol-112, rpool/export, rpool/export/home, rpool/export/home/reader, you could do a snapshot encompassing everything below the pool instead of having to do it at each level. (Maybe it was in a dream...)
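For the record, ZFS does support exactly this via the -r flag to zfs snapshot; a minimal sketch (the snapshot name is hypothetical):

    # one command snapshots rpool and every dataset beneath it
    zfs snapshot -r rpool@today
    # verify: each descendant filesystem now has an @today snapshot
    zfs list -r -t snapshot rpool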
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only thing I've changed recently is that I've created and destroyed a snapshot, and I used
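One way to check whether the automatic snapshots are piling up instead of being rotated, sketched under the assumption that the snapshots follow the usual Time Slider zfs-auto-snap naming:

    # count the automatic snapshots time-sliderd should be expiring
    zfs list -H -t snapshot -o name | grep zfs-auto-snap | wc -l
    # and see how much space they hold
    zfs list -t snapshot -o name,used | grep zfs-auto-snap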
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. Now ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
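The usual ZFS counterpart is a recursive snapshot replicated with zfs send/receive into a pool on the external disk; a minimal sketch, assuming the external drive already holds a pool named backup (that name is hypothetical):

    # point-in-time copy of the whole root pool
    zfs snapshot -r rpool@full-backup
    # -R sends all descendant datasets, snapshots and properties; -F/-d shape the receive side
    zfs send -R rpool@full-backup | zfs receive -Fd backup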
2009 Mar 28
2
Have an idea about ZFS improvements - a step toward the user. Is it still relevant?
Hi, I have an idea about improvements to ZFS on one side and improvements to the GUI on the other side. Let me describe my idea. From one side we have ZFS on-the-fly snapshots, and starting from OpenSolaris 2008.11 we have the Time Slider feature based on ZFS snapshots. So we made a big step toward the user. But from the other side we still have in the GUI
2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got a full Bash trace of it, so I know exactly what was done. There are a moderate number of snapshots on the zp1 pool, and I'm intending to replicate the whole thing into the backup pool. After housekeeping, I take a current snapshot on the data pool (zp1). Since this is a new full backup, I then
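For context, a whole-pool replication of this kind is typically a recursive snapshot piped into the backup pool, with later runs sent incrementally via -I; a hedged sketch (the snapshot names and the backup pool name are hypothetical, zp1 is from the post):

    zfs snapshot -r zp1@backup-2010-03-05
    # full replication stream into the backup pool
    zfs send -R zp1@backup-2010-03-05 | zfs receive -Fd backup
    # later runs only send the snapshots taken since the previous backup
    zfs send -R -I zp1@backup-2010-03-05 zp1@backup-2010-03-12 | zfs receive -Fd backup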
2008 Jul 02
14
is it possible to add a mirror device later?
Hi, the root filesystem of my Thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
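A single-disk pool can be converted to a mirror afterwards with zpool attach; a minimal sketch (the second device name is hypothetical):

    # attach a second device to the existing vdev; ZFS resilvers it into a two-way mirror
    zpool attach rpool c5t0d0s0 c5t1d0s0
    # watch the resilver progress
    zpool status rpool

For a root pool the new disk also needs boot blocks installed once the resilver completes.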
2010 Sep 12
3
Failed zfs send "invalid backup stream".............
I'm trying to replicate a 300 GB pool with this command: zfs send alpha@3 | zfs receive -F omega. About two hours into the process it fails with this error: "cannot receive new filesystem stream: invalid backup stream". I have tried setting the target read-only (zfs set readonly=on omega) and also disabling Time Slider, thinking it might have something to do with it. What could be
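When the goal is to replicate a whole pool, the send is usually done from a recursive snapshot with -R so the receiver gets a self-consistent replication stream; a hedged sketch using the pool names from the post (alpha@3 as given):

    zfs snapshot -r alpha@3
    # -R builds a replication stream; -v on the receive side reports each dataset as it arrives
    zfs send -R alpha@3 | zfs receive -Fdv omega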
2009 Mar 30
3
Data corruption during resilver operation
I'm in well over my head with this report from zpool status saying:
root # zpool status z3
  pool: z3
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
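For what it's worth, the -v flag makes zpool status list the specific files with permanent errors, which is what the "restore the file in question" action refers to:

    # prints a "Permanent errors have been detected in the following files:" section
    zpool status -v z3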
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
2010 Mar 05
17
why is the L2ARC device used to store files?
Greetings all, I have created a pool that consists of a hard disk and an SSD as a cache: zpool create hdd c11t0d0p3; zpool add hdd cache c8t0d0p0 (the cache device). I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device??? Can anyone explain why this is happening? Isn't the L2ARC used to absorb the evicted data
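One way to see how the pool is actually using the cache vdev is per-device I/O statistics; a minimal sketch using the pool name from the post:

    # the cache device appears in its own "cache" section, separate from the data vdev
    zpool iostat -v hdd 5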
2011 Jul 03
2
about leaving zpools exported for future use
I've posted this some time ago but lost track of the subject line and answers. My zfs machine has croaked to the point that it just quits after some 10-15 minutes of uptime. No interesting logs or messages whatsoever, at least not that I've found. It just quietly quits. I'm not interested in dinking around with this setup... it's well ready for an upgrade. What
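For reference, the export/import pair is how a pool is parked and then picked up again on another (or a rebuilt) system; a minimal sketch with a hypothetical pool name:

    zpool export tank
    # later, on the new installation: scan attached disks for importable pools, then import by name
    zpool import
    zpool import tank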
2010 Dec 16
6
AHCI or IDE?
Hello all, I want to build a home file and media server now. After experimenting with an Asus board and running into unsolved problems, I have bought this Supermicro board X8SIA-F with an Intel i3-560 and 8 GB RAM http://www.supermicro.com/products/motherboard/Xeon3000/3400/X8SIA.cfm?IPMI=Y along with the LSI HBA SAS 9211-8i
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
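A detail that often explains this: the USED value of an individual snapshot only counts data unique to that snapshot, so blocks shared by several snapshots show up in USEDSNAP but in none of the per-snapshot totals. Sorting the snapshots by their unique space is a quick first check; a minimal sketch using the dataset from the post:

    zfs list -r -t snapshot -o name,used,referenced -s used rpool/export/home/matt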
2010 Mar 29
19
sharing an ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello all, I have constrained disk space (only 8 GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I update the filesystem in the OS? I cannot use autoexpand because it isn't implemented on my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
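On a build where the feature exists (it does not on snv_111b, as the post says), the expansion is two commands; a hedged sketch with a hypothetical device name:

    # let the pool grow automatically when its devices grow
    zpool set autoexpand=on rpool
    # or expand a specific device to its full size explicitly
    zpool online -e rpool c3d1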
2009 Dec 11
7
Doing a ZFS rollback while preserving later-created clones/snapshots?
Hi. Is it possible on Solaris 10 5/09 to roll back to a ZFS snapshot WITHOUT destroying later-created clones or snapshots? Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
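zfs rollback can only step past intermediate snapshots with -r (and past their clones with -R), and both flags destroy what they skip over, so a rollback that preserves them is not possible; a common workaround is to leave the dataset alone and clone the old snapshot instead. A minimal sketch with the names from the example above:

    # a writable copy of the old state, leaving @02 and ROOT-02 intact
    zfs clone rpool/ROOT@01 rpool/ROOT-01
    # optionally make the clone independent of its origin snapshot
    zfs promote rpool/ROOT-01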
2009 Jan 13
6
mirror rpool
Hi. Host: VirtualBox 2.1.0 (WinXP SP3). Guest: OSol 5.11 snv_101b. IDE Primary Master: 10 GB, rpool. IDE Primary Slave: 10 GB, empty. format output:
AVAILABLE DISK SELECTIONS:
  0. c3d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
     /pci0,0/pci-ide@1,1/ide@0/cmdk@0,0
  1. c3d1 <drive unknown>
     /pci0,0/pci-ide@1,1/ide@0/cmdk@1,0
# ls
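The usual procedure for mirroring an existing x86 root pool onto a second disk of the same size, sketched with the device names from the post (details hedged; the new disk first needs an fdisk partition and SMI label):

    # copy the slice layout from the first disk to the second
    prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2
    # attach the matching slice so rpool becomes a two-way mirror, then make the new disk bootable
    zpool attach -f rpool c3d0s0 c3d1s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0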
2009 Mar 03
8
zfs list extensions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2008 Dec 26
19
separate home "partition"?
(I use the term loosely because I know that ZFS likes whole volumes better.) When installing Ubuntu, I got in the habit of using a separate partition for my home directory so that my data and GNOME settings would all remain intact when I reinstalled or upgraded. I'm running osol 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
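The ZFS equivalent of that habit is usually a separate pool (or at least a separate dataset) for home, so a reinstall of the root pool leaves it untouched; a hedged sketch with hypothetical pool and device names:

    # a pool on the second drive, mounted where home lives
    zpool create tank c1t1d0
    zfs create -o mountpoint=/export/home tank/home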
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi, I'm trying to let zfs users create and destroy snapshots in their zfs filesystems. So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
---- Permissions on rpool/vm -----------------------------------------
Permission sets:
        @virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
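For reference, delegations of this shape are granted with zfs allow; a minimal sketch (the user name is hypothetical, the permissions mirror the @virtual set shown above):

    # let a user take and destroy snapshots of the filesystem; mount is needed for destroy on many builds
    zfs allow someuser snapshot,destroy,mount rpool/vm
    # review what is delegated
    zfs allow rpool/vm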