Displaying 20 results from an estimated 7000 matches similar to: "Unsafe migration with copy-storage-all (non shared storage) and writeback cache"
2017 Feb 17
2
Libvirt behavior when mixing io=native and cache=writeback
Hi all,
I am writing about libvirt's inconsistent behavior when mixing io=native and
cache=writeback. This post can be regarded as an extension, or
clarification request, of BZ 1086704
(https://bugzilla.redhat.com/show_bug.cgi?id=1086704)
On a fully upgraded CentOS6 x86-64 machine, starting a guest with
io=native and cache=writeback is permitted: no errors are raised and the
VM (qemu, really)
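For reference, the combination discussed in this thread corresponds to a disk definition like the following (a hypothetical fragment; the image path and device names are placeholders, not taken from the thread). As far as I understand, io='native' maps to Linux AIO, which newer QEMU expects to be paired with a cache mode that uses O_DIRECT (e.g. cache='none'), and that mismatch is what the BZ is about:

```xml
<disk type='file' device='disk'>
  <!-- cache='writeback' combined with io='native': the pairing under discussion -->
  <driver name='qemu' type='qcow2' cache='writeback' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```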
2017 Feb 21
1
Re: Unsafe migration with copy-storage-all (non shared storage) and writeback cache
On 17/02/2017 19:50, Jiri Denemark wrote:
>
> Yeah storage migration should be safe regardless of the cache mode used.
>
Very good news, thanks. Let me be paranoid: is it *definitely* safe?
>
> This looks like a bug in libvirt. The code doesn't check whether an
> affected disk is going to be migrated (--copy-storage-all) or accessed
> on the shared storage.
>
> Jirka
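For context, the kind of migration this thread is about looks roughly like the sketch below (the guest name and destination URI are hypothetical placeholders, not from the thread). Before the bug described above is fixed, libvirt could demand --unsafe for a writeback-cached disk even though the disk is being copied rather than shared:

```shell
# Sketch only: live-migrate a guest while copying its non-shared storage.
# "guest1" and the destination URI are placeholders.
SRC_VM="guest1"
DST_URI="qemu+ssh://dest.example.com/system"
# Printed rather than executed here, since it needs two real libvirt hosts:
echo virsh migrate --live --copy-storage-all --verbose "$SRC_VM" "$DST_URI"
```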
2017 Aug 26
2
GlusterFS as virtual machine storage
On 26-08-2017 01:13 WK wrote:
> Big +1 on what Kevin just said. Just avoiding the problem is the
> best strategy.
Ok, never run Gluster with anything less than a replica 2 + arbiter ;)
> However, for the record,? and if you really, really want to get deep
> into the weeds on the subject, then the? Gluster people have docs on
> Split-Brain recovery.
>
>
2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14 Pavel Szalbot wrote:
> Hi, after many VM crashes during upgrades of Gluster, losing network
> connectivity on one node, etc., I would advise running replica 2 with
> arbiter.
Hi Pavel, this is bad news :(
So, in your case at least, Gluster was not stable? Something as simple
as an update would make it crash?
> I once even managed to break this setup (with
2017 Aug 30
4
GlusterFS as virtual machine storage
Hi Gionatan,
I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide
storage for oVirt 4.x and I have had no major issues so far.
I have been through online upgrades a couple of times, as well as power
losses and maintenance, with no issues. Overall, it is very resilient.
An important thing to keep in mind is your network: I run the Gluster nodes on
a redundant network using bonding mode 1 and I have
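A replica-3-with-arbiter volume like the one described is created along these lines (a sketch; hostnames and brick paths are hypothetical placeholders, not from the thread). The third brick stores only file metadata and acts as the arbiter, avoiding split-brain without a third full data copy:

```shell
# Printed rather than executed, since it needs three real Gluster nodes:
echo gluster volume create vmstore replica 3 arbiter 1 \
  node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
```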
2017 Aug 30
3
GlusterFS as virtual machine storage
Solved as of 3.7.12. The only bug left is when adding new bricks to
create a new replica set; not sure where we are now on that bug, but
that's not a common operation (well, at least for me).
On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote:
> There has been a bug associated with sharding that led to VM corruption that
> has been around for a long time (difficult to reproduce I
2015 Jul 14
3
Questions about hardlinks, alternate storage and compression
On 14/07/15 08:17, Steffen Kaiser wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On Mon, 13 Jul 2015, Gionatan Danti wrote:
>
>> On the other hand, private (per-user) sieve file works without
>> interfering with hardlinks. In a similar manner, disabling sieve also
>> permits dovecot to create multiple hardlinks for a single message.
>>
>>
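A quick way to verify whether two delivered copies of a message actually share storage is to compare their inodes (a sketch with throwaway temp files; real paths would be inside the maildir):

```shell
tmp=$(mktemp -d)
echo "message body" > "$tmp/msg.1"
ln "$tmp/msg.1" "$tmp/msg.2"   # hardlink: both names point at the same inode
stat -c '%h %i' "$tmp/msg.1"   # link count (2) and inode number, same as msg.2
[ "$(stat -c %i "$tmp/msg.1")" = "$(stat -c %i "$tmp/msg.2")" ] && echo "hardlinked"
```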
2015 Nov 30
1
Questions about hardlinks, alternate storage and compression
On 30 Nov 2015, at 17:48, Gionatan Danti <g.danti at assyoma.it> wrote:
>
> Hi Timo,
> glad to know it is in your TODO list ;)
It's been for many years.
> Any rough ETA on that?
Right now it doesn't seem likely to be developed anytime soon.
> Thanks.
>
> On 30/11/2015 14:23, Timo Sirainen wrote:
>> On 30 Nov 2015, at 10:21, Gionatan Danti <g.danti
2015 Feb 09
2
Per-protocol ssl_protocols settings
Sorry for the bump...
Anyone know if it is possible to have multiple protocol instances with
different ssl_protocols settings?
Regards.
On 07/02/15 00:03, Gionatan Danti wrote:
> Hi all,
> anyone with some ideas?
>
> Thanks.
>
On 2015-02-02 23:08 Gionatan Danti wrote:
>> Hi all,
>> I have a question regarding the "ssl_protocols" parameter.
2015 Nov 30
4
Questions about hardlinks, alternate storage and compression
On 30 Nov 2015, at 10:21, Gionatan Danti <g.danti at assyoma.it> wrote:
>
> So, let me ask a straight question: is anyone using dovecot/LMTP with hardlinking? To me, this seems a _very_ important feature, and I wonder if I am doing something wrong or if the feature (hardlink+sieve) simply does not exist.
Hardlink+Sieve has never worked. The fix is a bit complicated. Here's my
2017 Jun 05
2
Cache auth credentials on Samba domain member
On 01-06-2017 19:42 Jeremy Allison wrote:
> On Thu, Jun 01, 2017 at 03:11:53PM +0200, Gionatan Danti wrote:
>> However, *no* user authentication is possible on samba shares when
>> the VPN tunnel is down?
>>
>> Do you have any suggestions?
>
> I think Uri and Volker did the work on this. Uri, can you
> give an update on where we stand with offline auth
2020 Jan 02
3
Dovecot, pigeonhole and hardlinks
On 23-12-2019 12:04 Gionatan Danti wrote:
> On 19/12/19 11:08, Gionatan Danti wrote:
>> Hi list,
>> many moons ago I asked about preserving hardlink between identical
>> messages when pigeonhole (for sieve filtering) was used.
>>
>> The reply was that, while hardlink worked well for non-filtered
>> messages, using pigeonhole broke the hardlink (ie:
2017 Aug 26
0
GlusterFS as virtual machine storage
On 26-08-2017 07:38 Gionatan Danti wrote:
> I'll surely give a look at the documentation. I have the "bad" habit
> of not putting into production anything I know how to repair/cope
> with.
>
> Thanks.
Mmmm, this should read as:
"I have the "bad" habit of not putting into production anything I do NOT
know how to repair/cope with"
Really :D
2017 Aug 30
0
GlusterFS as virtual machine storage
There has been a bug associated with sharding, leading to VM corruption, that
has been around for a long time (difficult to reproduce, as I understood). I
have not seen reports on that for some time after the last fix, so
hopefully now VM hosting is stable.
2017-08-30 3:57 GMT+02:00 Everton Brogliatto <brogliatto at gmail.com>:
> Hi Gionatan,
>
> I run Gluster 3.10.x (Replica 3 arbiter
2019 Dec 19
2
Dovecot, pigeonhole and hardlinks
Hi list,
many moons ago I asked about preserving hardlinks between identical
messages when pigeonhole (for sieve filtering) was used.
The reply was that, while hardlinking worked well for non-filtered
messages, using pigeonhole broke the hardlink (i.e. some message-specific
data was appended to the actual mail file). Here you can find the
original thread:
2018 Jun 25
5
ZFS on Linux repository
Hi list,
we all know why ZFS is not included in RHEL/CentOS distributions: its
CDDL license is (or at least seems to be) incompatible with the GPL.
I'm not a lawyer, and I do not have a strong opinion on the matter.
However, as a sysadmin, I found ZFS to be extremely useful, especially
considering BTRFS's sad state. I would *really* love to have ZFS on Linux
better integrated with current CentOS.
From
2015 Jul 13
2
Questions about hardlinks, alternate storage and compression
Hi Javier,
thanks for your reply.
I already checked SIS and, while interesting, it is not what I want, because:
1) it can be difficult to restore a single message/attachment from a backup
2) only the attachments, and not the entire messages, are deduped.
Message-based hardlinks really exist for a reason. The good news is
that I found _why_ they are not working: it depends on how dovecot and
2015 Feb 02
2
Per-protocol ssl_protocols settings
Hi all,
I have a question regarding the "ssl_protocols" parameter.
I understand that by editing the 10-ssl.conf file I can set the
ssl_protocols variable as required.
At the same time, I can edit a single protocol file (eg: 20-pop3.conf)
to set the ssl_protocols for a specific protocol/listener.
I wonder if (and how) I can create a different listener for another POP3
instance, for
2016 Mar 16
2
Deleted symlinks when receiveing files with the same name
Hi list,
I would like to know if the following rsync behavior can be
changed/modified.
I noticed that when rsync receives a file for which the local filesystem
already has a symlink with the same path/name, it _first_ deletes the
symlink, _then_ starts the transfer.
I think there are two problems with this approach:
- it completely ignores the content of the symlinked files, which can
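The behavior described above can be reproduced in a couple of lines (a sketch using throwaway temp directories; file names are made up for illustration):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo "new content" > "$tmp/src/file.txt"
echo "old content" > "$tmp/dst/real.txt"
ln -s real.txt "$tmp/dst/file.txt"   # dst/file.txt starts out as a symlink
rsync -a "$tmp/src/" "$tmp/dst/"
# After the transfer, the symlink is gone: dst/file.txt is now a regular
# file with the new content, while dst/real.txt keeps the old content.
[ -L "$tmp/dst/file.txt" ] || echo "symlink was replaced by a regular file"
```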
2017 Feb 20
0
Re: Libvirt behavior when mixing io=native and cache=writeback
On Fri, Feb 17, 2017 at 02:52:06PM +0100, Gionatan Danti wrote:
>Hi all,
>I am writing about libvirt's inconsistent behavior when mixing io=native and
>cache=writeback. This post can be regarded as an extension, or
>clarification request, of BZ 1086704
>(https://bugzilla.redhat.com/show_bug.cgi?id=1086704)
>
>On a fully upgraded CentOS6 x86-64 machine, starting a guest with