
Displaying 20 results from an estimated 2000 matches similar to: "When to Scrub..... ZFS That Is"

2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot:

root@pluto# zfs snapshot datapool/mars@backup1
cannot create snapshot 'datapool/mars@backup1': out of space

root@pluto# zpool list
NAME       SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
datapool   556G   110G   446G   19%  ONLINE  -
rpool      278G   12.5G  265G    4%  ONLINE  -

Any ideas?
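The pool clearly has free space, so the limit is usually at the dataset level rather than the pool. A hedged sketch of where one might look first (dataset name taken from the post; a quota or reservation on the dataset or a parent can make snapshot creation fail even on a mostly empty pool):

    # Dataset-level accounting: usedbysnapshots, usedbyrefreservation, etc.
    zfs list -o space datapool/mars
    # Any of these properties, if set, can cap the space available to new snapshots
    zfs get quota,refquota,reservation,refreservation datapool/mars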
2010 Feb 16
2
ZFS Mount Errors
Why would I get the following error?

Reading ZFS config: done.
Mounting ZFS filesystems: (1/6) cannot mount '/data/apache': directory is not empty (6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1

And yes, there is data in the /data/apache file system. This was created during the jumpstart process. Thanks
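A hedged sketch of the two usual ways out, assuming the blocking files under /data/apache can be moved aside (the dataset name below is hypothetical):

    # See what is sitting in the mount point underneath the dataset
    ls -la /data/apache
    # Option 1: move the stray directory aside and retry the mounts
    mv /data/apache /data/apache.old && zfs mount -a
    # Option 2: overlay-mount on top of the non-empty directory
    zfs mount -O datapool/apache

Jumpstart finish scripts sometimes write into the mount point before the dataset is mounted, which would explain how the directory picked up data in the first place.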
2010 Jun 01
1
Solaris 10U8 and ZFS Encryption
Is it currently possible (Solaris 10 u8) to encrypt a ZFS pool? Thanks
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all, I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one containing my rpool (disks 1 & 2) and one containing other data (disks 3 & 4). I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.
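For what it's worth, a mirror cannot be converted to RAIDZ in place; the usual route is a second pool plus send/receive. A minimal sketch, with hypothetical pool and device names:

    # Build the raidz pool from the three new 2TB drives
    zpool create bigdata raidz c1t2d0 c1t3d0 c1t4d0
    # Replicate the old data pool recursively, then retire it
    zfs snapshot -r data@migrate
    zfs send -R data@migrate | zfs receive -F -d bigdata
    zpool destroy data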
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool? Thanks
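The command itself is a one-liner; whether a root pool will accept it depends on the release, since root pools reject several vdev types. A sketch with a hypothetical device name:

    # Add a hot spare to the root pool; the spare should be an
    # SMI-labeled slice at least as large as the existing root disks
    zpool add rpool spare c0t2d0s0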
2011 Aug 03
3
Saving data across install
I installed a Solaris 10 development box on a 500G root mirror and later received some smaller drives. I learned from this list that it's better to have the root mirror on the smaller drives and then create another mirror on the original 500G drives, so I copied everything that was on the small drives onto the 500G mirror to free up the smaller drives for a new install. After my install
2010 Jun 30
1
zfs rpool corrupt?????
Hello, has anyone encountered the following error message, running Solaris 10 u8 in an LDom?

bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor

bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is an SMI label...

cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices

# zpool get bootfs rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  bootfs    -      default

# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
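One way to cross-check the label the error is complaining about, with a hypothetical device name (bootfs can also fail this way if the pool sits on a whole disk rather than a slice, since whole-disk pools get an EFI label by default):

    # Print the label's partition table for the disk backing rpool
    prtvtoc /dev/rdsk/c0t0d0s2
    # If it turns out to be EFI, format -e offers an SMI/EFI choice when relabeling
    format -e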
2010 Jan 22
1
Remove ZFS Mount Points
Can I move the below mounts under /?

rpool/export         /export
rpool/export/home    /export/home

It was a result of the default install. Thanks
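If the intent is simply to relocate or flatten those mounts, the mount point is just a dataset property. A minimal sketch (target path hypothetical):

    # Repoint the dataset; children can inherit or be set explicitly
    zfs set mountpoint=/export2 rpool/export
    # Or, after copying the data into the root filesystem, drop the datasets
    zfs destroy -r rpool/export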
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error:

gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
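Two things could be going on, hedging on the release: striped-replicated volumes only arrived in glusterfs 3.3, so older CLIs parse "replica" here as a brick name (hence "wrong brick type"), and even where the combination is supported the brick count must equal stripe x replica (4 x 2 would need 8 bricks, not 4). A sketch of a shape that lines up with the four existing bricks:

    # stripe 2 x replica 2 = 4 bricks, matching the four nodes from the post
    gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool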
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter. I tried first with a mirror zfsroot; when I try to boot to zfsboot the screen is flooded with "init(1M) exited on fatal signal 9". Then I tried with a simple zfs pool (not mirrored) and it just reboots right away. If I try to setup grub
2023 Jan 19
1
really large number of skipped files after a scrub
Hi, just to follow up my first observation from this email from December: automatic scheduled scrubs that do not happen. We have now upgraded glusterfs from 7.4 to 10.1, and now see that the automated scrubs ARE running. Not sure why they didn't in 7.4, but issue solved. :-) MJ

On Mon, 12 Dec 2022 at 13:38, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb at gmail.com> wrote:
> Hi,
> I
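For anyone verifying the same thing, the scrubber's own view can be queried directly; a sketch with a hypothetical volume name:

    # Reports last scrub time plus scanned/skipped/corrupted file counts per node
    gluster volume bitrot myvol scrub status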
2008 Jan 23
4
Synchronous scrub?
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to scrub two pools sequentially because they share one device. The first pool, BTW, is a mirror comprising a smaller disk and a subset of a larger disk. The other pool is the remainder of the larger disk. I see no documentation mentioning how to scrub, then wait until completed. I'm happy to be pointed
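zpool scrub kicks off the scrub and returns immediately, so sequencing has to be done by polling zpool status. A minimal sketch, with hypothetical pool names:

    #!/bin/sh
    # Scrub pools one at a time: start each scrub, then wait until
    # "scrub in progress" disappears from the status output
    for pool in tank1 tank2; do
        zpool scrub "$pool"
        while zpool status "$pool" | grep -q "scrub in progress"; do
            sleep 60
        done
    done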
2010 Jan 10
5
Repeating scrub does random fixes
I've been using a 5-disk raidZ for years on an SXCE machine which I converted to OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which was fixed. So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on random disks. If I repeat the scrub, it will fix errors on other disks. Occasionally it runs cleanly. That it doesn't
2009 Feb 17
5
scrub on snv-b107
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009

This is about twice as slow as the same scrub on a Solaris 10 box with a mirrored zfs root pool. Has scrub become that much slower? And if so, why?

-- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce snv107 ++ + All that's really worth doing is what we do for others (Lewis Carrol)
2014 Oct 20
1
2.2.14 Panic in sync_expunge_range()
I am getting some panics after upgrading from 2.2.13 to 2.2.14. This panic happens for one user only; he is subscribed to 86 folders, and on two of them the panic happens quite often - several times a day. The mbox folders seem OK, less than 30M with 30 and 200 messages.

Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)

hmk GNU gdb 6.8
2011 Dec 13
1
question regarding samba permissions
I want to make a subfolder read only for certain users. For example: /data/pool is public rwx for all users, and now I would like to make /data/pool/subfolder rwx only for user1 and grant read-only permissions to user2 and user3. How do I do this? Any links or direct tips on that? My suggestion would be something like this, but as you can imagine it didn't work: # The general datapool
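A hedged smb.conf sketch: export the subfolder as its own share and split access with write list / read list (paths and user names from the post; the underlying Unix permissions still have to allow user1 to write):

    [subfolder]
        path = /data/pool/subfolder
        valid users = user1 user2 user3
        # user1 may write; user2 and user3 are forced read-only
        write list = user1
        read list = user2 user3

This also bears on the follow-up below: the option is spelled "read list", not "read only users".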
2011 Dec 14
1
Fwd: Re: question regarding samba permissions
Wouldn't work because all the users are in one group anyway, and I am not allowed to give read rights to "any" (i.e. 755). But is there really no option in smb.conf like "read only users =" or something like that?

On 13.12.2011 17:56, Raffael Sahli wrote:
> On Tue, 13 Dec 2011 16:38:41 +0100, "skull" <skull17 at gmx.ch> wrote:
>> I want to
2010 Feb 20
1
scrub in 132
uname -a
SunOS 5.11 snv_132 i86pc i386 i86pc Solaris

A scrub made my system unresponsive; I could not log in with ssh and had to do a hard reboot.
2008 Jun 12
2
Getting Batch mode to continue running a script after running into errors
I'm invoking R in batch mode from a bash script as follows:

R --no-restore --no-save --vanilla < $TARGET/$directory/o2sat-$VERSION.R > $TARGET/$directory/o2sat-$VERSION.Routput

When R comes across some error in the script, however, it seems to halt instead of running subsequent lines:

Error in file(file, "r") : cannot open the connection
Calls: read.table ->
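Batch R halts at the first uncaught error, so the usual fix is to wrap the fragile calls in try()/tryCatch() inside the script rather than change the invocation. A minimal sketch (input file name hypothetical), shown as a shell here-document for self-containment:

    R --no-restore --no-save --vanilla <<'EOF'
    # try() catches the error so the rest of the script keeps running
    dat <- try(read.table("input.dat", header = TRUE), silent = TRUE)
    if (!inherits(dat, "try-error")) {
        summary(dat)    # downstream steps run only when the read succeeded
    }
    EOF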