similar to: Upgrade from 1.2 to 2.2

Displaying 20 results from an estimated 7000 matches similar to: "Upgrade from 1.2 to 2.2"

2017 Feb 15
1
Upgrade from 1.2 to 2.2
2017-02-15 13:27 GMT+01:00 Aki Tuomi <aki.tuomi at dovecot.fi>: > For good pointers, see http://wiki.dovecot.org/Upgrading > > it's not complete, but it should give you some idea. I've already read that, and as I wrote previously, everything broke down. dovecot -n wasn't able to convert the configuration file and dovecot didn't start properly. The only way to fix
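For reference, the usual 1.x-to-2.x path is to let doveconf translate the old file rather than dovecot itself; a minimal sketch, assuming the old config sits at /etc/dovecot/dovecot.conf (paths illustrative):

    # doveconf can read a v1.x config file and emit it in v2.x syntax:
    doveconf -n -c /etc/dovecot/dovecot.conf > /etc/dovecot/dovecot-2.conf

    # Review the result by hand, then validate it before starting dovecot:
    doveconf -n -c /etc/dovecot/dovecot-2.conf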
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi, I want to give an update on this. I also tested READ speed. It seems the sharded volume has a lower read speed than the striped volume. This machine has 24 cores with 64GB of RAM. I really don't think it's caused by an underpowered system. Stripe is a kind of shard, but with a fixed size based on the stripe value / file size. Hence, I would expect at least the same speed, or maybe a little slower. What I get is
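For anyone trying to reproduce the comparison, a rough sketch of a sequential read test (mount point invented; drop the page cache first or the numbers mean nothing):

    # Create a large test file, bypassing the client cache:
    dd if=/dev/zero of=/mnt/glustervol/testfile bs=1M count=4096 oflag=direct

    # Drop the kernel page cache so reads actually hit the bricks:
    echo 3 > /proc/sys/vm/drop_caches

    # Measure sequential read speed:
    dd if=/mnt/glustervol/testfile of=/dev/null bs=1M iflag=direct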
2017 Oct 04
0
data corruption - any update?
Just so I know: is it correct to assume that this corruption issue is ONLY involved if you are doing rebalancing with sharding enabled? So if I am not doing rebalancing I should be fine? -bill On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > > > On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran > <nbalacha at redhat.com> wrote:
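As a hedged aside, whether the caveat applies to a given volume can be checked from the CLI (volume name invented):

    # "on" means sharding is enabled and the rebalance caveat applies:
    gluster volume get myvol features.shard

    # Confirm no rebalance is running on the volume:
    gluster volume rebalance myvol status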
2017 Oct 13
1
small files performance
Where did you read 2k IOPS? Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even close to 2000 is impossible. On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote: > Depends what you need. > 2K IOPS for small file writes is not a bad result. > In my case I had a system that was just poorly written and it was >
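For context, the arithmetic behind the objection: at roughly 75 IOPS per SATA spindle, hitting 2,000 IOPS would need about 2000 / 75 ≈ 27 disks of raw random-I/O capability, before counting any replication overhead, so a small SATA-backed volume cannot plausibly get there.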
2017 Sep 08
0
GlusterFS as virtual machine storage
On Fri, Sep 8, 2017 at 12:48 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: > I think this should be considered a bug > If you have a server crash, the glusterfsd process obviously doesn't exit > properly and thus this could lead to an I/O stop? I agree with you completely on this.
2017 Sep 23
1
EC 1+2
Already read that. It seems I have to use a multiple of 512, so 512*(3-2) is 512. Seems fine. On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote: > Hi > > Take a look at this link (under "Optimal volumes"), for Erasure Coded > volume optimal configuration > > http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/ >
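For reference, the "Optimal volumes" rule being applied here sizes the stripe as 512 * (#bricks - redundancy) bytes; with 3 bricks and redundancy 2 that is 512 * (3 - 2) = 512 bytes, which is why any multiple of 512 aligns cleanly.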
2017 Nov 15
0
Re: [Qemu-devel] [qemu-img] support for XVA
https://stacklet.com/downloads/XenServer-XVA-Template-Debian-7.8-Lightweight-x86 Some XVAs. On 15 Nov 2017 10:42 PM, "Gandalf Corvotempesta" < gandalf.corvotempesta@gmail.com> wrote: > I'm thinking about how to provide you with a sample XVA > I have to create (and populate) a VM because an empty image will result in > an empty XVA > And a VM is 300-400MB at minimum
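As background for the thread: an XVA is essentially a tar archive (an ova.xml manifest plus the disk split into numbered chunk directories), so a sample can be inspected without XenServer; a sketch, filename invented:

    # List contents: expect ova.xml plus Ref:NNN/ chunk directories:
    tar -tf sample.xva | head

    # Extract just the manifest to read the VM definition:
    tar -xf sample.xva ova.xml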
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Any update about the multiple bugs regarding data corruption with >> sharding enabled? >> >> Is 3.12.1 ready to be used in production? >> > >
2016 Oct 27
4
Server migration
On 27 Oct 2016, at 15:29, Tanstaafl <tanstaafl at libertytrek.org> wrote: > > On 10/26/2016 2:38 AM, Gandalf Corvotempesta > <gandalf.corvotempesta at gmail.com> wrote: >> This is much easier than dovecot replication as I can start immediately with >> no need to upgrade the old server >> >> my only question is: how to manage the email received on the
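One common shape for this kind of cutover, sketched under the assumption of Maildir storage (host and paths invented): bulk-copy while the old server is live, then pause delivery and make one fast final pass so nothing received during the migration is lost:

    # Initial bulk copy while the old server still accepts mail:
    rsync -avz olduser@oldserver:/var/vmail/ /var/vmail/

    # At cutover, with delivery stopped on the old server, a quick
    # incremental pass picks up everything received since the first copy:
    rsync -avz olduser@oldserver:/var/vmail/ /var/vmail/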
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown doesn't? I think this is a bug: if I'm shutting down a server, it's obvious that all clients should stop connecting to it.... On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the
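For what it's worth, glusterfs packages ship a helper for exactly this scenario (the path may vary by distro and version); a sketch of a shutdown that kills the brick processes so clients fail over immediately instead of waiting out the ping timeout:

    # Stop the management daemon first:
    systemctl stop glusterd

    # Then kill brick and self-heal processes; packaged script, path may differ:
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh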
2017 Sep 08
0
GlusterFS as virtual machine storage
I currently only have a Windows 2012 R2 server VM in testing on top of the gluster storage, so I will have to take some time to provision a couple of Linux VMs with both ext4 and XFS to see what happens on those. The Windows server VM is OK with killall glusterfsd, but when the 42-second timeout goes into effect, it gets paused and I have to go into RHEVM to un-pause it. Diego On Fri, Sep 8, 2017
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > The init.d/systemd script doesn't kill gluster automatically on > reboot/shutdown? > > Sounds less like an issue with how it's shut down and more like an issue with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shutdown just fine as long as
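A sketch of the kind of fuse mount that tolerates one node going away, hostnames invented for illustration:

    # The client can fetch the volfile from a backup server if node1 is down;
    # once mounted, it talks to all bricks directly anyway:
    mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/myvol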
2017 Oct 04
0
data corruption - any update?
On 3 October 2017 at 13:27, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Any update about the multiple bugs regarding data corruption with > sharding enabled? > > Is 3.12.1 ready to be used in production? > Most issues have been fixed, but there appears to be one more race for which the patch is being worked on. @Krutika, is that correct? Thanks,
2017 Sep 23
0
EC 1+2
Hi. Take a look at this link (under "Optimal volumes") for Erasure Coded volume optimal configuration: http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/ On Sat, Sep 23, 2017 at 10:01 Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Is it possible to create a dispersed volume 1+2? (Almost the same as replica > 3, the same as RAID-6) > > If
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in the next
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in the next release? > > On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2017 Sep 08
0
GlusterFS as virtual machine storage
So even the killall scenario eventually kills the VM (I/O errors). Gandalf, isn't a server hard-crash too extreme a test? I mean, if a reboot reliably kills the VM, there is no doubt a network crash or poweroff will as well. I am tempted to test this setup on DigitalOcean to eliminate the possibility of my hardware/network being the cause. But if Diego is able to reproduce the "reboot crash", my doubts of
2017 Sep 08
2
GlusterFS as virtual machine storage
I would prefer the behavior were different from what it is, i.e. I/O stopping. The argument I heard for the long 42-second timeout was that MTBF on a server was high, and that the client reconnection operation was *costly*. Those were arguments *not* to lower the ping timeout value from 42 seconds. I think it was mentioned that low ping-timeout settings could lead to high CPU loads with many
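The timeout being debated is an ordinary per-volume option; a hedged example of inspecting and lowering it (volume name invented, and note the caveat above about low values):

    # Show the current value; the default is 42 seconds:
    gluster volume get myvol network.ping-timeout

    # Lower it, trading reconnection cost for faster failover:
    gluster volume set myvol network.ping-timeout 10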
2017 Sep 08
0
GlusterFS as virtual machine storage
On Sep 8, 2017 13:36, "Gandalf Corvotempesta" < gandalf.corvotempesta at gmail.com> wrote: 2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, isn't possible server hard-crash too much? I mean if reboot > reliably kills the VM, there is no doubt network crash or poweroff > will as well. IIUP, the only way to keep I/O running is to