Displaying 20 results from an estimated 300 matches similar to: "snapshot out of space"
2010 Mar 13
3
When to Scrub..... ZFS That Is
When would it be necessary to scrub a ZFS filesystem?
We have many "rpool" and "datapool" pools, plus a NAS 7130. Would you
schedule monthly scrubs at off-peak hours, or is scrubbing really necessary?
Thanks
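For what it's worth, scrubs are started per pool with zpool scrub, so a monthly off-peak schedule is just a cron job; a minimal sketch, assuming the pool names from the post and an arbitrary 2am-on-the-1st slot:

    # crontab entries: scrub each pool at 02:00 on the 1st of the month
    0 2 1 * * /usr/sbin/zpool scrub rpool
    30 2 1 * * /usr/sbin/zpool scrub datapool

zpool status shows progress and results once a scrub is running.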
2010 Feb 16
2
ZFS Mount Errors
Why would I get the following error:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
empty
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
failed: exit status 1
And yes, there is data in the /data/apache file system.......
This was created during the jumpstart process.
Thanks
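A common cause of "directory is not empty" is files written into the mountpoint path before the dataset was mounted, which is easy to do from a jumpstart finish script. A hedged way to investigate, assuming the dataset is named data/apache:

    # with the dataset unmounted, look for stray files hiding in the mountpoint
    zfs umount data/apache 2>/dev/null
    ls -la /data/apache
    # move them aside, or overlay-mount on Solaris with -O
    zfs mount -O data/apache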
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter.
I tried first with a mirrored zfsroot; when I try to boot to zfsboot
the screen is flooded with "init(1M) exited on fatal signal 9".
Then I tried with a simple zfs pool (not mirrored) and it just
reboots right away.
If I try to set up grub
2011 Dec 08
1
Can't create striped replicated volume
Hi,
I'm trying to create a striped replicated volume but I'm getting this error:
gluster volume create cloud stripe 4 replica 2 transport tcp
nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
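Two things to check here: striped-replicated volumes need a GlusterFS release that supports combining stripe and replica (3.3 and later), and the brick count must be a multiple of stripe count times replica count. A hedged sketch with the four bricks from the post, stripe lowered to 2 so that 2 x 2 = 4 bricks fit:

    gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool \
        nebula3:/dataPool nebula4:/dataPool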
2010 Jun 01
1
Solaris 10U8 and ZFS Encryption
Is it currently possible (Solaris 10 u8) to encrypt a ZFS pool?
Thanks
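Native ZFS encryption only appeared with pool version 30 in Solaris 11 Express, so it is not available in Solaris 10 u8. One way to confirm what a given release knows about:

    zpool upgrade -v        # lists every pool version this release supports
    zpool get version rpool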
2014 Oct 20
1
2.2.14 Panic in sync_expunge_range()
I am getting some panics after upgrading from 2.2.13 to 2.2.14
This panic happens for one user only; he is subscribed to 86 folders,
and on two of them the panic happens quite often, several times a day.
The mbox folders seem OK: less than 30M, with 30 and 200 messages.
Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)
hmk
GNU gdb 6.8
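For assertion panics like this, the usual next step is a full backtrace from the core dump so the developers can see where the expunge record came from; a minimal sketch, assuming the default binary and core locations:

    gdb /usr/libexec/dovecot/imap /var/core/core.imap
    (gdb) bt full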
2011 Dec 13
1
question regarding samba permissions
I want to make a subfolder read only for certain users.
For example: /data/pool is public rwx for all users,
and now I would like to make /data/pool/subfolder rwx only for user1 and grant read-only permissions to user2 and user3.
How do I do this? Any links or direct tips?
My suggestion would be something like this, but as you can imagine it didn't work:
# The general datapool
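Share-level smb.conf tricks aside, the cleanest way to get per-user rights on one subfolder is usually POSIX ACLs on the filesystem itself; a hedged sketch using the user names from the post (Linux setfacl syntax):

    chmod 770 /data/pool/subfolder              # shut out everyone else first
    setfacl -m u:user1:rwx /data/pool/subfolder
    setfacl -m u:user2:r-x /data/pool/subfolder
    setfacl -m u:user3:r-x /data/pool/subfolder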
2011 Dec 14
1
Fwd: Re: question regarding samba permissions
That wouldn't work because all the users are in one group anyway,
and I am not allowed to give read rights to "any" (i.e. 755).
But is there really no option in smb.conf like "read only users ="
or something like that?
Am 13.12.2011 17:56, schrieb Raffael Sahli:
> On Tue, 13 Dec 2011 16:38:41 +0100, "skull"<skull17 at gmx.ch> wrote:
>> I want to
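smb.conf does have a close equivalent: the read list and write list share options. A hedged fragment, with the share name being an assumption:

    [subfolder]
        path = /data/pool/subfolder
        read list = user2, user3
        write list = user1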
2017 Jul 25
1
memory snapshot save error
libvirt
version: 3.4.0
architecture: x86_64 ubuntu16.04-server
hypervisor: kvm,qemu
When I want to make a memory snapshot of a VM I call virsh save, but it gives me this error:
error: Failed to save domain 177 to /datapool/mm.img
error: operation failed: domain save job: unexpectedly failed
xml configure:
<domain type='kvm' id='177'>
<name>virt23</name>
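When virsh save fails with "operation failed", the real qemu error is usually in the per-domain log; a hedged first step, assuming default log paths and the domain from the post:

    tail -50 /var/log/libvirt/qemu/virt23.log
    virsh save --verbose 177 /datapool/mm.img    # retry with progress output

Checking free space under /datapool is also worth a look, since the save image can approach the guest's memory size.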
2007 May 31
3
zfs boot error recovery
hi all,
I would like to ask some questions regarding best practices for zfs
recovery if disk errors occur.
Currently I have zfs boot (nv62) and the following setup:
2 si3224 controllers (each 4 sata disks)
8 sata disks, same size, same type
I have two pools:
a) rootpool
b) datapool
The rootpool is a mirrored pool, where every disk has a slice (the s0,
which is 5% of the whole disk) and this
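For reference, the usual recovery path on a mirrored root pool is to replace the failed slice and reinstall the boot blocks; a minimal sketch, with placeholder device names:

    zpool replace rootpool c1t0d0s0 c1t4d0s0
    # x86: put grub back on the new disk so it stays bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t4d0s0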
2008 Jun 12
2
Getting Batch mode to continue running a script after running into errors
I'm invoking R in batch mode from a bash script as follows:
R --no-restore --no-save --vanilla \
    < $TARGET/$directory/o2sat-$VERSION.R \
    > $TARGET/$directory/o2sat-$VERSION.Routput
When R comes across an error in the script, however, it seems to halt
instead of running the subsequent lines:
Error in file(file, "r") : cannot open the connection
Calls: read.table ->
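The usual fix is to wrap the fragile calls so an error is caught instead of killing the batch session; a minimal sketch of the same kind of invocation, with the filename as a placeholder:

    R --no-restore --no-save --vanilla -e \
      'res <- try(read.table("input.dat")); if (inherits(res, "try-error")) message("skipped input.dat")'

Inside a larger script the same idea applies: wrap each independent step in try() or tryCatch() so one bad file does not halt the rest.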
2018 Nov 15
1
libvirt call qemu to create vm need more than 10 seconds
Hi all:
It takes more than 10 seconds to create a vm on a Dell R830 machine, but it takes less than 2 seconds on other machines. This is not normal, so I turned on the debug log for libvirtd. I analyzed the log and found that the time was spent on libvirtd calling qemu. Thread 95225 calls the qemuProcessLaunch interface at 14:22:30.129 and then builds the emulator command line, but the
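For reference, the debug log the poster mentions is typically enabled in /etc/libvirt/libvirtd.conf; a hedged fragment with the usual knobs:

    log_level = 1
    log_filters = "1:qemu 1:libvirt"
    log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

With that in place, the gap between the qemuProcessLaunch entry and the first qemu log line narrows down whether the delay is in libvirtd itself or in the emulator starting up.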
2007 Nov 19
1
Recommended settings for dom0_mem when using zfs
I have a xVm b75 server and use zfs for storage (zfs root mirror and a
raid-z2 datapool.)
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would be a good value for dom0_mem on a box with 4 GB of RAM?
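A common rule of thumb in these threads was to pin dom0 at around 1 GB of the 4 GB and cap the ZFS ARC to fit inside it; a hedged sketch, with both values being assumptions to tune:

    # grub menu.lst: pin dom0 memory on the xen kernel line
    kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M

    # /etc/system: cap the ARC (here 512 MB) so zfs leaves room for the rest of dom0
    set zfs:zfs_arc_max = 0x20000000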
2014 Oct 20
0
2.2.14 Panic in imap_fetch_more()
This panic happens with different users, and it also occurred in 2.2.13.
Panic: file imap-fetch.c: line 556 (imap_fetch_more): assertion failed:
(ctx->client->output_cmd_lock == NULL || ctx->client->output_cmd_lock == cmd)
hmk
GNU gdb 6.8
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all,
I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing other data (disks 2 & 3).
I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.
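There is no in-place conversion from a mirror to RAIDZ, so the usual route is to build the new vdev and send the data across; a minimal sketch with placeholder device and pool names:

    zpool create newpool raidz c2t0d0 c2t1d0 c2t2d0
    zfs snapshot -r datapool@migrate
    zfs send -R datapool@migrate | zfs receive -Fd newpool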
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks, although it doesn't look like it's an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
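One detail that often explains confusing zpool iostat numbers: the first block of output is an average since the pool was imported, not a current rate. Sampling with an interval gives live per-vdev figures that are easier to compare:

    zpool iostat -v datapool 5    # refresh the per-vdev view every 5 seconds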
2011 Aug 03
3
Saving data across install
I installed a Solaris 10 development box on a 500G root mirror, and later I
received some smaller drives. I learned from this list it's better to have
the root mirror on the smaller drives and then create another mirror
on the original 500G drives, so I copied everything that was on the small
drives onto the 500G mirror to free up the smaller drives for a new install.
After my install
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool?
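Syntactically this is just zpool add, though root pools have historically been restrictive about what they accept, so whether v22 allows it is release-dependent; a hedged sketch with a placeholder device:

    zpool add rpool spare c1t3d0s0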
Thanks
2014 Oct 29
2
2.2.15 Panic in mbox_sync_read_next_mail()
It might not be a fault in dovecot, as the user is accessing the folder locally
with alpine while also running IMAP sessions. Still, it would have been nice
to get a more graceful reaction than a panic.
The panic is preceded by:
Error: Next message unexpectedly corrupted in mbox file PATH
Panic: file mbox-sync.c: line 152 (mbox_sync_read_next_mail): assertion failed: