Displaying 20 results from an estimated 100 matches similar to: "2.2.14 Panic in sync_expunge_range()"
2014 Oct 20
0
2.2.14 Panic in imap_fetch_more()
This panic happens with different users, and it also occurred in 2.2.13
Panic: file imap-fetch.c: line 556 (imap_fetch_more): assertion failed:
(ctx->client->output_cmd_lock == NULL || ctx->client->output_cmd_lock == cmd)
hmk
GNU gdb 6.8
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is
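(An aside, not from the original report: when a core file is available, a full backtrace from it is usually the most useful thing to attach; the binary and core paths below are only illustrative.)
gdb /usr/libexec/dovecot/imap /var/core/core.imap
(gdb) bt full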
2014 Dec 10
1
Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)
We're seeing this:
% doveadm force-resync -u USERNAME INBOX
doveadm(USERNAME): Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)
doveadm(USERNAME): Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x817ad) [0x33ab08317ad] -> /usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x3a)
[0x33ab08318ba] ->
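(An aside, not from the thread: the (+0xNNNNN) offsets in a Dovecot raw backtrace are relative to the named library, so with debug symbols installed addr2line can usually resolve them to source lines. The offset below is the first one from the backtrace above.)
addr2line -e /usr/lib64/dovecot/libdovecot.so.0 0x817ad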
2014 Jul 16
0
Dovecot 2.2.13+ Assertion failed in sync_expunge_range
Hi!
I got this error when building Dovecot from source (rev 17627), configuring it as an imapc proxy (http://wiki2.dovecot.org/HowTo/ImapcProxy) and running imaptest with clients=2 or more; on the latest stable version (2.2.13) and with clients=1 this error does not occur.
Jul 16 17:22:26 imap(user771): Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)
Jul
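(A minimal sketch of the kind of imaptest run described above; only clients=2 comes from the report, while the host, port, credentials and duration are assumptions.)
imaptest host=127.0.0.1 port=143 user=user%03d pass=pass clients=2 secs=30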
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://
www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the
letter.
I tried first with a mirror zfsroot, when I try to boot to zfsboot
the screen is flooded with "init(1M) exited on fatal signal 9"
Then I tried with a simple zfs pool (not mirrored) and it just
reboots right away.
If I try to setup grub
2011 Dec 08
1
Can't create striped replicated volume
Hi,
I'm trying to create a striped replicated volume but am getting this error:
gluster volume create cloud stripe 4 replica 2 transport tcp
nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
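(Illustrative only, not from the thread: whether stripe and replica may be combined depends on the GlusterFS version. The same four bricks laid out as a plain 2-way replicated, distributed volume would be:)
gluster volume create cloud replica 2 transport tcp \
    nebula1:/dataPool nebula2:/dataPool \
    nebula3:/dataPool nebula4:/dataPool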
2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot:
root@pluto# zfs snapshot datapool/mars@backup1
cannot create snapshot 'datapool/mars@backup1': out of space
root@pluto# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
datapool 556G 110G 446G 19% ONLINE -
rpool 278G 12.5G 265G 4% ONLINE -
Any ideas???
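(A hedged suggestion, not from the thread: pool-level free space is not the same as dataset-level free space, so a quota or reservation on the dataset can still make a snapshot fail even when zpool list shows room. Standard checks:)
zfs list -o space datapool/mars
zfs get quota,refquota,reservation,refreservation datapool/mars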
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks - although it doesn't look like it's an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
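(Worth noting as an aside: without an interval argument, zpool iostat reports averages since boot, which can make the per-vdev numbers look inconsistent with the pool row; sampling over an interval is usually more telling. For example:)
zpool iostat -v datapool 5 6    # six 5-second samples of the pool and each vdev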
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all,
I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing other data (disks 2 & 3).
I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.
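(A minimal sketch of one migration path, with hypothetical device names: build the raidz pool from the three new drives, then copy the data over with a recursive snapshot and send/receive.)
zpool create newpool raidz c2t0d0 c2t1d0 c2t2d0
zfs snapshot -r datapool@migrate
zfs send -R datapool@migrate | zfs recv -F -d newpool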
2011 Dec 13
1
question regarding samba permissions
I want to make a subfolder read-only for certain users.
For example: /data/pool is public rwx for all users,
and now I would like to make /data/pool/subfolder rwx only for user1 and grant read-only permissions to user2 and user3.
How do I do this? Any links or direct tips on that?
My suggestion would be something like this, but as you can imagine it didn't work:
# The general datapool
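(The quoted smb.conf is cut off above. As one hedged approach, exposing the subfolder as its own share and using the stock read list / write list parameters would look roughly like this; the share and user names are taken from the question, and the underlying filesystem permissions still have to allow the access.)
[subfolder]
    path = /data/pool/subfolder
    valid users = user1 user2 user3
    write list = user1
    read list = user2 user3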
2007 Oct 25
1
How to have ZFS root with /usr on a separate datapool
Ref: http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
Ref: http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh
This is my errata for Tim Foster's zfs root install script:
1/ Correct mode for /tmp should be 1777.
2/ The zfs boot install should allow you to have /usr on a separate zpool:
a/ We need to create /zfsroot/usr/lib in the root partition and
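(A rough sketch of the two visible errata items, assuming the alternate root is mounted at /zfsroot as in the message; the rest of item 2 is cut off above.)
chmod 1777 /zfsroot/tmp      # item 1: sticky /tmp
mkdir -p /zfsroot/usr/lib    # item 2a: directory needed before /usr moves to its own pool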
2011 Dec 14
1
Fwd: Re: question regarding samba permissions
wouldn't work because all the users are in one group anyway,
and I am not allowed to give read rights to "any" (i.e. 755).
But is there really no option in smb.conf like "read only users = " or
something like that?
On 13.12.2011 17:56, Raffael Sahli wrote:
> On Tue, 13 Dec 2011 16:38:41 +0100, "skull"<skull17 at gmx.ch> wrote:
>> I want to
2017 Jul 25
1
memory snapshot save error
libvirt
version: 3.4.0
architecture: x86_64 ubuntu16.04-server
hypervisor: kvm,qemu
When I want to make a memory snapshot of a VM I call virsh save, but it gives me this error:
error: Failed to save domain 177 to /datapool/mm.img
error: operation failed: domain save job: unexpectedly failed
xml configure:
<domain type='kvm' id='177'>
<name>virt23</name>
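(The XML above is cut off. Not from the original mail: a few hedged checks that often narrow down a failed virsh save; the domain ID, target path and guest name come from the report, everything else is an assumption.)
df -h /datapool                              # enough room for the guest's memory image?
virsh save 177 /datapool/mm.img --verbose    # retry with progress reporting
tail -n 50 /var/log/libvirt/qemu/virt23.log  # any qemu-side error message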
2010 Mar 13
3
When to Scrub..... ZFS That Is
When would it be necessary to scrub a ZFS filesystem?
We have many "rpool" and "datapool" pools, plus a NAS 7130; would you consider
scheduling monthly scrubs at off-peak hours, or is it really necessary?
Thanks
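(As a sketch of common practice rather than a rule: scrubs are started and checked with zpool, and a monthly off-peak run can simply be put in cron; the schedule below is only an example.)
zpool scrub datapool
zpool status -v datapool    # shows scrub progress and any errors found
# example crontab entry: 02:00 on the 1st of every month
0 2 1 * * /usr/sbin/zpool scrub datapool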
2011 Aug 03
3
Saving data across install
I installed a Solaris 10 development box on a 500G root mirror and later I
received some smaller drives. I learned from this list it's better to have
the root mirror on the smaller small drives and then create another mirror
on the original 500G drives so I copied everything that was on the small
drives onto the 500G mirror to free up the smaller drives for a new install.
After my install
2007 May 31
3
zfs boot error recovery
hi all,
I would like to ask some questions regarding best practices for zfs
recovery if disk errors occur.
Currently I have zfs boot (nv62) and the following setup:
2 si3224 controllers (each 4 sata disks)
8 sata disks, same size, same type
i have two pools:
a) rootpool
b) datapool
The rootpool is a mirrored pool, where every disk has a slice (the s0,
which is 5 % of the whole disk) and this
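(A minimal sketch of the usual recovery flow when a disk starts throwing errors, with hypothetical device names:)
zpool status -x                        # list only pools that have problems
zpool status -v datapool               # which device is degraded, which files are affected
zpool replace datapool c3t2d0 c3t3d0   # swap the failing disk for a spare
zpool scrub datapool                   # verify once the resilver completes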
2014 Oct 29
2
2.2.15 Panic in mbox_sync_read_next_mail()
It might not be a fault in Dovecot, as the user is accessing the folder locally
with alpine while also running IMAP sessions. However, it would have been nice
to see a more graceful action than a panic.
The panic is preceded by
Error: Next message unexpectedly corrupted in mbox file PATH
Panic: file mbox-sync.c: line 152 (mbox_sync_read_next_mail): assertion failed:
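(An aside, not a confirmed fix: when a local MUA such as alpine edits the same mbox, the usual mitigation is to make both sides agree on locking. Dovecot's mbox lock settings live in the mail configuration; the values below are illustrative, and alpine would need matching lock types.)
# conf.d/10-mail.conf
mbox_read_locks = fcntl
mbox_write_locks = dotlock fcntl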
2008 Jun 12
2
Getting Batch mode to continue running a script after running into errors
I'm invoking R in batch mode from a bash script as follows:
R --no-restore --no-save --vanilla < $TARGET/$directory/o2sat-$VERSION.R > $TARGET/$directory/o2sat-$VERSION.Routput
When R comes across an error in the script, however, it seems to halt
instead of running the subsequent lines in the script:
Error in file(file, "r") : cannot open the connection
Calls: read.table ->
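(A hedged sketch of the usual workaround: wrap the fragile call in try() inside the script so the rest of it keeps running; the file name here is made up.)
R --no-restore --no-save --vanilla <<'EOF'
dat <- try(read.table("maybe-missing.dat", header = TRUE))
if (inherits(dat, "try-error")) {
  message("read.table failed; continuing with the rest of the script")
} else {
  print(summary(dat))
}
message("later steps still run")
EOF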
2010 May 31
3
zfs permanent errors in a clone
$ zfs list -t filesystem
NAME USED AVAIL REFER MOUNTPOINT
datapool 840M 25.5G 21K /datapool
datapool/virtualbox 839M 25.5G 839M /virtualbox
mypool 8.83G 6.92G 82K /mypool
mypool/ROOT 5.48G 6.92G 21K legacy
mypool/ROOT/May25-2010-Image-Update
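(As an aside: the standard way to see exactly which files carry the permanent errors is the verbose pool status; once the affected files have been restored or removed, a scrub re-checks the pool.)
zpool status -v datapool    # lists the files affected by permanent errors
zpool scrub datapool        # re-verify after restoring/removing those files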
2018 Nov 15
1
libvirt call to qemu to create a VM takes more than 10 seconds
Hi all:
It takes more than 10 seconds to create a vm on a Dell R830 machine, but it takes less than 2 seconds on other machines. This is not normal, so I turned on the debug log for libvirtd. I analyzed the log and found that the time was spent on libvirtd calling qemu. Thread 95225 calls the qemuProcessLaunch interface at 14:22:30.129 and then builds the emulator command line, but the
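(For reference, a hedged example of the libvirtd debug-log settings the poster mentions turning on; they go in /etc/libvirt/libvirtd.conf and take effect after a daemon restart. The filter set is only an example.)
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"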
2007 Nov 19
1
Recommended settings for dom0_mem when using zfs
I have an xVM b75 server and use zfs for storage (zfs root mirror and a
raid-z2 datapool.)
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would be a good value for dom0_mem on a box with 4 GB of RAM?
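(A hedged illustration rather than a recommendation: on a 4 GB box one common pattern is to pin dom0 to a fixed amount on the xen kernel line and cap the ZFS ARC inside dom0 so the two memory budgets don't fight; the numbers below are only an example.)
# /boot/grub/menu.lst (Solaris xVM): cap dom0 at 2 GB
kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
# /etc/system: cap the ARC at 1 GB (0x40000000 bytes)
set zfs:zfs_arc_max = 0x40000000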