similar to: [on-discuss] Reliability at power failure?

Displaying 20 results from an estimated 10000 matches similar to: "[on-discuss] Reliability at power failure?"

2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file: zfs snapshot -r rpool@0908 ; zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file: zfs snapshot -r rpool@090822 ; zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand it, the latter gives a file with the changes between 0908 and 090822. Is this correct? How do I restore those files? I know
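A minimal restore sketch, not from the thread (the target pool name is an assumption): receive the full stream first, then apply the incremental on top of it.

    # receive the full stream into a pool (hypothetical target: newpool)
    zfs receive -Fd newpool < /net/remote/rpool/snaps/rpool.0908
    # then apply the incremental stream on top of the restored snapshot
    zfs receive -Fd newpool < /net/remote/rpool/snaps/rpool.090822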
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao, the root filesystem of my Thumper is a ZFS pool with a single disk: bash-3.2# zpool status rpool pool: rpool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c5t0d0s0 ONLINE 0 0 0 spares c0t7d0 AVAIL c1t6d0 AVAIL c1t7d0
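A hedged sketch of the usual fix, assuming a free disk c4t0d0s0 (not named in the thread): attach a second device to turn the single-disk pool into a mirror, and reinstall boot blocks since this is a root pool on an x86 box.

    # attach a second disk; rpool becomes a two-way mirror and resilvers
    zpool attach rpool c5t0d0s0 c4t0d0s0
    # x86 root pools also need GRUB installed on the new disk
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0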
2008 Jun 17
6
mirroring zfs slice
Hi All, I have a slice with a ZFS file system that I want to mirror. I followed the procedure mentioned in the admin guide, but I am getting this error. Can you tell me what I did wrong? root # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT export 254G 230K 254G 0% ONLINE - root # echo |format Searching for disks...done
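The usual pre-step when mirroring slices, sketched with placeholder device names (the thread's error output is truncated): copy the source disk's label so the new slice matches in size, then attach it.

    # duplicate the VTOC so both slices have identical geometry
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    # attach the matching slice to the existing pool
    zpool attach export c1t0d0s0 c1t1d0s0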
2010 Jan 24
4
zfs streams
Can I send a zfs send stream (ZFS pool version 22; ZFS filesystem version 4) to a zfs receive on Solaris 10 (ZFS pool version 15; ZFS filesystem version 4)? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131 + All that's really worth doing is what we do for others (Lewis Carroll)
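A hedged sketch of the transfer itself (host and dataset names are made up): whether the older box can receive the stream depends mainly on the filesystem version embedded in it, and both sides here report version 4.

    # stream a snapshot from the newer host to the Solaris 10 receiver
    zfs send -R tank@backup | ssh sol10host zfs receive -Fd tank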
2009 Jan 28
11
destroy means destroy, right?
Hi, I just said zfs destroy pool/fs, but meant to say zfs destroy pool/junk. Is 'fs' really gone? thx jake
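One thing worth checking, sketched with a placeholder pool name: the pool's command history shows exactly what was destroyed, though in this release there is no built-in undo for a destroyed filesystem.

    # list every administrative command run against the pool
    zpool history pool | grep destroy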
2009 Feb 17
5
scrub on snv-b107
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009 This is about twice as slow as the same scrub on a Solaris 10 box with a mirrored ZFS root pool. Has scrub become that much slower? And if so, why? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce snv107 ++ + All that's really worth doing is what we do for others (Lewis Carroll)
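Two commands that help compare the runs, sketched (the slowdown itself is not diagnosed in this excerpt):

    # watch per-device throughput while a scrub runs
    zpool iostat -v rpool 5
    # check scrub progress and any errors found so far
    zpool status rpool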
2009 Feb 27
3
luactivate question
After a liveupgrade and luactivate I can log in to the -new- BE. My question is: do I have to luactivate the -old- BE again if I want to choose that one from the GRUB menu, or can I just run it if I want to? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce snv107 ++ + All that's really worth doing is what we do for others (Lewis Carroll)
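A sketch of the usual sequence (the BE name below is a placeholder):

    # show all boot environments and which one activates on reboot
    lustatus
    # make the old BE the default again, if that is what you want
    luactivate old-BE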
2006 Jan 30
4
Adding a mirror to an existing single disk zpool
Hello All, I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of duplicate space, so I created a zpool, rsync'ed the data from UFS to the ZFS mount, and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS one. The idea here is that once the data is moved over I wipe out UFS and then attach that partition to the
2008 Nov 16
8
Mirror and RaidZ on only 3 disks
Hi, I have a small Linux server PC at home (Intel Core2 Q9300, 4 GB RAM), and I'm seriously considering switching to OpenSolaris (Indiana, 2008.11) in the near future, mainly because of ZFS. The idea is to run the existing CentOS 4.7 system inside a VM and let it NFS mount home directories and other filesystems from OpenSolaris. I might migrate more services from Linux over time, but for
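Two hedged layouts for three equal disks (device names are invented, not from the thread):

    # option 1: single raidz vdev, capacity of two disks, survives one failure
    zpool create tank raidz c1d0 c1d1 c1d2
    # option 2: two-way mirror, keeping the third disk separate
    zpool create tank mirror c1d0 c1d1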
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it. (Sun Ultra 20 M2) The zpool name is rpool. Then I have a 2nd hard drive in the box from which I am trying to recover the ZFS data (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root@hyperion:~# zpool status
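The usual way out, sketched: import the second pool by its numeric ID under a new name (the ID placeholder below is hypothetical), with an alternate root so mountpoints don't collide.

    # list importable pools together with their numeric IDs
    zpool import
    # import by ID, renaming it to rpool2, mounted under /mnt
    zpool import -R /mnt <pool-id> rpool2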
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only things I've changed recently are that I've created and destroyed a snapshot, and I used
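A quick check, sketched, for whether leftover snapshots are eating the disk (the dataset and snapshot names below are hypothetical):

    # list snapshots sorted by the space they hold
    zfs list -t snapshot -o name,used -s used
    # stale auto-snapshots can then be destroyed by hand
    zfs destroy <dataset>@<auto-snapshot-name>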
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading: Documents = 147MB, Videos = 11G, Software = 1.4G. By my calculations, that equals about 12.5G, yet zpool list is showing 21G as being allocated; NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT dpool 27.2T 21.2G 27.2T 0% 1.00x ONLINE - It doesn't look like
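One command that usually explains the gap (pool name taken from the post); snapshots, reservations, and child datasets each show up separately:

    # break down where the allocated space actually lives
    zfs list -o space -r dpool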
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread, so I'm reposting it) I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
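On a SPARC box like a Netra the boot block goes on with installboot, and a root pool has to live on a slice with an SMI label (attaching the whole disk is a common cause of an unbootable mirror); a sketch using the disk named in the post:

    # install the ZFS boot block on the new disk's root slice
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0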
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to have some SSDs as local drives (not on the SAN), so that when the pool switches over to the other node, ZFS would pick up that node's local disk drives as L2ARC. To clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
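For reference, adding a local SSD as L2ARC is a one-liner (device name invented); what happens to such a device on failover is exactly the thread's open question:

    # add a local SSD as a cache (L2ARC) device to the pool
    zpool add tank cache c2t0d0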
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
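A minimal sketch of turning it on at the top so children inherit it; whether rpool/swap and rpool/dump should stay uncompressed is exactly what the thread asks:

    # enable compression on the pool's root dataset; descendants inherit
    zfs set compression=on rpool
    # inspect what the swap and dump volumes currently use
    zfs get compression rpool/swap rpool/dump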
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi, After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer revision, all arrays I have using ZFS mirroring are displaying errors. This started happening immediately after the ZFS upgrades. Here is an example: ormandj@neutron.corenode.com:~$ zpool status pool: rpool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was
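A hedged first step, assuming the counters may be stale rather than live faults:

    # re-read everything and see whether fresh errors appear
    zpool scrub rpool
    zpool status -v rpool
    # if the scrub comes back clean, reset the error counters
    zpool clear rpool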
2009 Oct 13
14
How to resize a ZFS partition or add a new one?
Hi, I have the following partitions on my laptop, an Inspiron 6000, from fdisk: 1 Other OS 0 11 12 0 2 EXT LBA 12 2561 2550 26 3 Active Solaris2 2562 9728 7167 74 The first one is for Dell utilities. The second one is NTFS and the third is ZFS. I am currently using OpenSolaris 2009.06
2008 Dec 17
11
zpool detach on non-mirrored drive
I'm using ZFS not to have a fail-safe, backed-up system, but to easily manage my file system. I would like, as I buy new hard drives, to simply replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there consuming power once they've been replaced by larger ones. However, ZFS
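What usually fits this workflow is replace rather than detach, since detach only works on mirrors; the device names below are placeholders:

    # migrate data from the old drive onto its successor, then pull the old one
    zpool replace tank c0t1d0 c0t2d0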
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132). Started with c0t1d0s0 running b132 (root pool is called rpool). Attached c0t0d0s0 and waited for it to resilver. Rebooted from c0t0d0s0. zpool split rpool spool. Rebooted from c0t0d0s0; both rpool and spool were mounted
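The intended flow, sketched from the post's pool names; after a split the new pool should stay exported until imported explicitly:

    # detach one half of the mirror into a new, importable pool
    zpool split rpool spool
    # bring the split half in by hand when wanted
    zpool import spool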
2009 Dec 06
20
Accidentally added disk instead of attaching
Hi, I wanted to add a disk to the tank pool to create a mirror. I accidentally used 'zpool add' instead of 'zpool attach' and now the disk is added. Is there a way to remove the disk without losing data? Or maybe change it to a mirror? Thanks, Martijn -- This message posted from opensolaris.org
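In this release a plain top-level vdev cannot be removed again, but it can itself be mirrored after the fact; a sketch with invented device names:

    # attach a second disk to the accidentally added one, forming a mirror
    zpool attach tank c1t2d0 c1t3d0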