Displaying 20 results from an estimated 3000 matches similar to: "NFS and unsafe migration"
2017 Feb 17
2
Libvirt behavior when mixing io=native and cache=writeback
Hi all,
I am writing about libvirt's inconsistent behavior when mixing io=native and
cache=writeback. This post can be regarded as an extension, or
clarification request, of BZ 1086704
(https://bugzilla.redhat.com/show_bug.cgi?id=1086704)
On a fully upgraded CentOS6 x86-64 machine, starting a guest with
io=native and cache=writeback is permitted: no errors are raised and the
VM (qemu, really)
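For reference, the combination under discussion maps to a libvirt disk stanza like the following (a hypothetical sketch; the image path and target device are placeholders, not taken from the thread):

```xml
<!-- io=native together with cache=writeback, as described above -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```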
2017 Feb 20
0
Re: Libvirt behavior when mixing io=native and cache=writeback
On Fri, Feb 17, 2017 at 02:52:06PM +0100, Gionatan Danti wrote:
>Hi all,
>I am writing about libvirt's inconsistent behavior when mixing io=native and
>cache=writeback. This post can be regarded as an extension, or
>clarification request, of BZ 1086704
>(https://bugzilla.redhat.com/show_bug.cgi?id=1086704)
>
>On a fully upgraded CentOS6 x86-64 machine, starting a guest with
2017 Feb 17
2
Unsafe migration with copy-storage-all (non shared storage) and writeback cache
Hi list,
I would like to understand if, and why, the --unsafe flag is needed when
using --copy-storage-all when migrating guests that use the writeback
cache mode.
Background: I want to live migrate guests with writeback cache from host
A to host B and these hosts only have local storage (ie: no shared
storage at all).
From my understanding, --unsafe should only be required when migrating
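A sketch of the migration being described (hypothetical guest and host names; assumes libvirt with qemu+ssh access between the hosts):

```shell
# Live-migrate guest "vm1" to hostB, streaming its local disk images
# over the connection since there is no shared storage:
virsh migrate --live --copy-storage-all vm1 qemu+ssh://hostB/system

# If libvirt refuses the writeback cache mode as unsafe, the check can
# be overridden (at the operator's risk):
virsh migrate --live --copy-storage-all --unsafe vm1 qemu+ssh://hostB/system
```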
2017 Feb 21
1
Re: Unsafe migration with copy-storage-all (non shared storage) and writeback cache
On 17/02/2017 19:50, Jiri Denemark wrote:
>
> Yeah storage migration should be safe regardless of the cache mode used.
>
Very good news, thanks. Let me be paranoid: is it *definitely* safe?
>
> This looks like a bug in libvirt. The code doesn't check whether an
> affected disk is going to be migrated (--copy-storage-all) or accessed
> on the shared storage.
>
> Jirka
2015 Nov 30
1
Questions about hardlinks, alternate storage and compression]
On 30 Nov 2015, at 17:48, Gionatan Danti <g.danti at assyoma.it> wrote:
>
> Hi Timo,
> glad to know it is in your TODO list ;)
It's been for many years.
> Any rough ETA on that?
Right now it doesn't seem likely to be developed anytime soon.
> Thanks.
>
> On 30/11/2015 14:23, Timo Sirainen wrote:
>> On 30 Nov 2015, at 10:21, Gionatan Danti <g.danti
2017 Jun 05
2
Cache auth credentials on Samba domain member
Il 01-06-2017 19:42 Jeremy Allison ha scritto:
> On Thu, Jun 01, 2017 at 03:11:53PM +0200, Gionatan Danti wrote:
>> However, *no* user authentication is possible on samba shares when
>> the VPN tunnel is down?
>>
>> Do you have any suggestions?
>
> I think Uri and Volker did the work on this. Uri, can you
> give an update on where we stand with offline auth
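For context, the offline-authentication support referenced here is configured through winbind. A minimal smb.conf sketch for a domain member (parameter values are illustrative; whether this covers SMB share logins when the DC is unreachable is exactly what the thread asks):

```
[global]
    security = ads
    workgroup = EXAMPLE
    # Cache successful logins so winbind can answer when the DC
    # (behind the VPN) is unreachable:
    winbind offline logon = yes
```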
2017 Aug 23
0
GlusterFS as virtual machine storage
Hi, after many VM crashes during upgrades of Gluster, losing network
connectivity on one node etc. I would advise running replica 2 with
arbiter.
I once even managed to break this setup (with arbiter) due to network
partitioning - one data node never healed and I had to restore from
backups (it was easier and kind of non-production). Be extremely
careful and plan for failure.
-ps
On Mon, Aug
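For reference, "replica 2 with arbiter" is created as replica 3 arbiter 1: two data bricks plus a metadata-only arbiter brick that breaks ties during network partitions. A sketch with hypothetical hostnames and brick paths (requires a running glusterd on each node):

```shell
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/arbiter
gluster volume start vmstore
```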
2015 Jul 14
3
Questions about hardlinks, alternate storage and compression
On 14/07/15 08:17, Steffen Kaiser wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On Mon, 13 Jul 2015, Gionatan Danti wrote:
>
>> On the other hand, private (per-user) sieve file works without
>> interfering with hardlinks. In a similar manner, disabling sieve also
>> permits dovecot to create multiple hardlinks for a single message.
>>
>>
2020 Jan 02
3
Dovecot, pigeonhole and hardlinks
Il 23-12-2019 12:04 Gionatan Danti ha scritto:
> On 19/12/19 11:08, Gionatan Danti wrote:
>> Hi list,
>> many moons ago I asked about preserving hardlinks between identical
>> messages when pigeonhole (for sieve filtering) was used.
>>
>> The reply was that, while hardlinking worked well for non-filtered
>> messages, using pigeonhole broke the hardlink (ie:
2019 Dec 19
2
Dovecot, pigeonhole and hardlinks
Hi list,
many moons ago I asked about preserving hardlinks between identical
messages when pigeonhole (for sieve filtering) was used.
The reply was that, while hardlinking worked well for non-filtered
messages, using pigeonhole broke the hardlink (ie: some message-specific
data was appended to the actual mail file). Here you can find the
original thread:
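The hardlink mechanics at issue can be demonstrated with plain coreutils (hypothetical maildir paths, not dovecot's actual layout):

```shell
# Simulate single-instance storage with a hardlink, then verify both
# names share one inode (link count 2):
mkdir -p maildir/user1 maildir/user2
echo "Subject: test" > maildir/user1/msg
ln maildir/user1/msg maildir/user2/msg   # hardlink: same inode, two names
stat -c %h maildir/user1/msg             # prints 2 (two hard links)

# Appending per-recipient data (what pigeonhole reportedly did) writes
# through the shared inode, so it would alter *both* copies unless the
# link is broken first:
echo "X-Sieve: filtered" >> maildir/user2/msg
cmp maildir/user1/msg maildir/user2/msg  # silent: both names changed
```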
2016 Mar 16
2
Deleted symlinks when receiving files with the same name
Hi list,
I would like to know if the following rsync behavior can be
changed/modified.
I noticed that when rsync receives a file for which the local filesystem
already has a symlink with the same path/name, it _first_ deletes the
symlink, _then_ starts the transfer.
I think there are two problems with this approach:
- it completely ignores the content of the symlinked files, which can
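A minimal reproduction of the behavior described (hypothetical paths; requires rsync to be installed):

```shell
mkdir -p src dst
echo "new content" > src/file.txt
echo "old content" > dst/target.txt
ln -s target.txt dst/file.txt        # destination name is a symlink
rsync -a src/file.txt dst/file.txt   # rsync unlinks the symlink, then
                                     # creates a new regular file
ls -l dst/file.txt                   # now a regular file, not a symlink
cat dst/target.txt                   # the symlink target is untouched
```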
2018 Sep 07
1
Re: Immutable backing files
On Thu, Sep 06, 2018 at 18:23:46 +0200, Gionatan Danti wrote:
> Il 06-09-2018 12:54 Peter Krempa ha scritto:
> > You forgot to specify the format of the backing file into the overlay
> > file (qemu-img option -F). Libvirt treats any unspecified format as raw
> > since it's not secure to do probing of the format.
>
> Hi, the immutable base file *was* a raw image.
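The fix the reply implies would look like this (paths are the placeholders from the thread; assumes the base image really is raw):

```shell
# Record the backing file's format explicitly with -F; libvirt treats
# an unspecified backing format as raw rather than probing it:
qemu-img create -f qcow2 \
    -b /path/to/immutable/file.img -F raw current.qcow2
```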
2015 Feb 02
2
Per-protocol ssl_protocols settings
Hi all,
I have a question regarding the "ssl_protocols" parameter.
I understand that by editing the 10-ssl.conf file I can set the
ssl_protocols variable as required.
At the same time, I can edit a single protocol file (eg: 20-pop3.conf)
to set the ssl_protocols for a specific protocol/listener.
I wonder if (and how) I can create a different listener for another POP3
instance, for
2020 May 11
2
Deleting messages from filesystem with sdbox mail format
Il 2020-05-11 15:54 Aki Tuomi ha scritto:
> If you manually change the mailbox contents like that, you need to run
> doveadm force-resync to fix the situation.
>
> Aki
Ok, so it means that dovecot will *not* automatically fix the index file
and I need to reconstruct the index file, right?
Just for completeness: will not fixing the index file (after a manual
deletion) cause
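The command Aki refers to, sketched with a placeholder user and mailbox (requires doveadm access to the running dovecot configuration):

```shell
# Rebuild dovecot's indexes for one mailbox after removing mail files
# (u.* in sdbox) by hand:
doveadm force-resync -u alice INBOX
```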
2020 May 11
2
[EXT] Re: Deleting messages from filesystem with sdbox mail format
Il 2020-05-11 17:41 Aki Tuomi ha scritto:
> The index will be rebuilt once you try to access the mail.
Hi Aki, I probably misunderstood your previous reply. So, the index will
be fixed when either:
- running doveadm force-resync
- accessing the mailbox (ie: via IMAP).
This means that removing an email from the filesystem (ie: a u.* file)
and the index cache file (dovecot.index.cache) should be
2017 Aug 26
0
GlusterFS as virtual machine storage
Il 26-08-2017 07:38 Gionatan Danti ha scritto:
> I'll surely give a look at the documentation. I have the "bad" habit
> of not putting into production anything I know how to repair/cope
> with.
>
> Thanks.
Mmmm, this should read as:
"I have the "bad" habit of not putting into production anything I do NOT
know how to repair/cope with"
Really :D
2018 Sep 03
2
Immutable backing files
Hi list,
suppose I have an immutable (ie: due to a read-only snapshot) backing file.
After creating an overlay file with "qemu-img create -f qcow2 -o
backing_file=/path/to/immutable/file.img current.qcow2", libvirt refuses
to start the virtual machine and exits with an error stating "Could not
open backing file /path/to/immutable/file.img: Permission denied".
From my
2015 Nov 26
2
[g.danti@assyoma.it: Re: Questions about hardlinks, alternate storage and compression]
Il 26-11-2015 15:15 John R. Dennison ha scritto:
> You are strongly encouraged to update that CentOS system. Current is
> 6.7 (released some 3 months ago) and dovecot-2.0.9-19.
Ouch! I copied outdated information from my old post.
My current system _is_ CentOS 6.7 with dovecot
dovecot-2.0.9-19.el6_7.2.x86_64
Sorry for the confusion. Still, the problems remain
> If you find you need a
2017 Sep 09
0
GlusterFS as virtual machine storage
Il 09-09-2017 09:09 Pavel Szalbot ha scritto:
> Sorry, I did not start the glusterfsd on the node I was shutting
> yesterday and now killed another one during FUSE test, so it had to
> crash immediately (only one of three nodes were actually up). This
> definitely happened for the first time (only one node had been killed
> yesterday).
>
> Using FUSE seems to be OK with
2015 Feb 09
0
Per-protocol ssl_protocols settings
I performed a quick test and it seems that the "ssl_protocols" setting is per-IP only and shared among all listeners defined for that address. As you want this setting to be active for one specific "inet_listener" only (with port 10995 in your case), dovecot would have to permit the "ssl_protocols" directive in that scope, which it doesn't.
As a workaround I suggest