Displaying 20 results from an estimated 50000 matches similar to: "Shared storage for dovecot cluster"
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't a possible server hard-crash too much? I mean, if a reboot
> reliably kills the VM, there is no doubt a network crash or poweroff
> will as well.
IIUC, the only way to keep I/O running is to gracefully exit glusterfsd.
killall should send signal 15 (SIGTERM) to the process, maybe a bug in signal
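For reference, a minimal sketch of the two kill variants being compared in
this thread (glusterfsd is the brick process; run on a Gluster server node,
not in production):

  # killall sends SIGTERM (signal 15) by default, so the brick process
  # can shut down gracefully and notify its clients.
  killall glusterfsd

  # SIGKILL (signal 9) gives the process no chance to clean up,
  # which simulates a brick crash.
  killall -9 glusterfsd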
2017 Sep 08
0
GlusterFS as virtual machine storage
On Sep 8, 2017 13:36, "Gandalf Corvotempesta" <
gandalf.corvotempesta at gmail.com> wrote:
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't a possible server hard-crash too much? I mean, if a reboot
> reliably kills the VM, there is no doubt a network crash or poweroff
> will as well.
IIUC, the only way to keep I/O running is to
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and
fixed some months ago and, after that, this bug arose.
It is still unclear to me why two very similar bugs were discovered at two
different times for the same operation.
How is this possible?
If you fixed the first bug, why wasn't the second one triggered in your
test environment?
On 6 Jun 2017 10:35 AM, "Mahdi
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:07 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> OK, so killall seems to be OK after several attempts, i.e. IOPS do not stop
> on the VM. Reboot caused I/O errors maybe 20 seconds after issuing the
> command. I will check the servers' console during reboot to see if the VM
> errors appear just after the power cycle and will try to crash the VM after
>
2017 Oct 12
1
gluster status
How can I show the current state of a gluster cluster: status,
replicas down, what is going on, and so on?
Something like /proc/mdstat for RAID, where I can see which disks are
down and whether the RAID is rebuilding, checking, ....
Anything similar in gluster?
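A rough equivalent of /proc/mdstat, assuming a hypothetical volume named
"myvol", would be a combination of the following commands:

  gluster peer status               # which nodes are connected
  gluster volume status myvol       # which bricks and daemons are online
  gluster volume heal myvol info    # files still pending self-heal

Running "gluster volume status" without a volume name lists all volumes.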
2017 Sep 08
0
GlusterFS as virtual machine storage
So even the killall scenario eventually kills the VM (I/O errors).
Gandalf, isn't a possible server hard-crash too much? I mean, if a reboot
reliably kills the VM, there is no doubt a network crash or poweroff
will as well.
I am tempted to test this setup on DigitalOcean to eliminate the
possibility of my hardware/network being at fault. But if Diego is able to
reproduce the "reboot crash", my doubts of
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
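A sketch of the kind of test described above, assuming a replicated volume
backing the VM disk and fio available inside the guest (both are assumptions,
not details from this thread):

  # Inside the VM: generate steady random writes so stalls are visible.
  fio --name=iops-probe --filename=/var/tmp/probe --size=256M \
      --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
      --time_based --runtime=600 --iodepth=4

  # In parallel, on one of the gluster servers:
  killall -9 glusterfsd    # SIGKILL, i.e. a simulated brick crash
  # ...or, for the graceful case:
  killall glusterfsd       # SIGTERM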
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown
doesn't?
I think this is a bug: if I'm shutting down a server, it's obvious that all
clients should stop connecting to it....
On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the
2017 Sep 08
2
GlusterFS as virtual machine storage
I would prefer the behavior were different from what it is, i.e. I/O stopping.
The argument I heard for the long 42-second timeout was that the MTBF of a
server was high, and that the client reconnection operation was *costly*.
Those were arguments to *not* change the ping timeout value down from 42
seconds. I think it was mentioned that low ping timeout settings could lead
to high CPU loads with many
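The timeout being discussed is the per-volume option network.ping-timeout
(default 42 seconds). It can be inspected and, cautiously, lowered; "myvol"
below is a hypothetical volume name:

  gluster volume get myvol network.ping-timeout
  gluster volume set myvol network.ping-timeout 10

As the post above notes, a lower value trades faster failover for more
frequent and costly client reconnections on a flaky network.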
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> The init.d/systemd script doesn't kill gluster automatically on
> reboot/shutdown?
>
> Sounds less like an issue with how it's shut down and more like an issue
with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one
node being shut down just fine as long as
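One way a fuse client keeps working when a single node goes away is to mount
with backup volfile servers; a sketch, assuming hypothetical hosts gl1/gl2/gl3
and a volume named "myvol":

  mount -t glusterfs -o backup-volfile-servers=gl2:gl3 gl1:/myvol /mnt/myvol

  # or persistently in /etc/fstab:
  gl1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=gl2:gl3  0 0

Note this only covers fetching the volume description at mount time; for a
replicated volume, ongoing I/O failover is handled by the client itself.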
2017 Sep 23
1
EC 1+2
I already read that.
It seems I have to use a multiple of 512, so 512*(3-2) is 512.
Seems fine.
On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote:
> Hi
>
> Take a look at this link (under "Optimal volumes"), for Erasure Coded
> volume optimal configuration
>
> http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
>
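The 512-byte figure follows the formula in the "Optimal volumes" section
linked above: file/block sizes should be a multiple of 512 bytes times the
number of data bricks (total bricks minus redundancy). For the 1+2 layout
discussed here:

  BRICKS=3; REDUNDANCY=2
  echo $((512 * (BRICKS - REDUNDANCY)))    # 512 * (3 - 2) = 512 bytes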
2017 Sep 08
0
GlusterFS as virtual machine storage
I currently only have a Windows 2012 R2 server VM in testing on top of
the gluster storage, so I will have to take some time to provision a
couple Linux VMs with both ext4 and XFS to see what happens on those.
The Windows server VM is OK with killall glusterfsd, but when the
42-second timeout goes into effect, it gets paused and I have to go into
RHEVM to un-pause it.
Diego
On Fri, Sep 8, 2017
2017 Oct 13
1
small files performance
Where did you read 2k IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even
close to 2000 is impossible.
On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends what you need.
> 2K iops for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
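As a rough back-of-the-envelope check (assumptions: N identical SATA spindles
at ~75 IOPS each, replica count R, every write hitting all replicas, caches
and network ignored):

  # theoretical random-write ceiling: N * 75 / R
  # e.g. a hypothetical 6 disks with replica 3:
  echo $((6 * 75 / 3))    # ~150 IOPS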
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi,
I want to give an update on this. I also tested READ speed. It seems the sharded volume has a lower read speed than the striped volume.
This machine has 24 cores and 64GB of RAM. I really don't think it's caused by a weak system. A stripe is kind of a shard, but with a fixed size based on the stripe value / file size. Hence, I would expect at least the same speed, or maybe a little slower. What I get is
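For context, sharding is a per-volume option; a sketch of how it is typically
enabled (the volume name is hypothetical and the block size is only an
example):

  gluster volume set myvol features.shard on
  gluster volume set myvol features.shard-block-size 64MB

Note that sharding only applies to files created after it is enabled, and it
should not be switched off again on a volume that already holds sharded files.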
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop), only glusterd is killed. Then I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill the other gluster processes.
When you shut down a node, everything obviously gets killed, but the
client does not get notified
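In other words, stopping the management daemon leaves the brick and self-heal
processes running. A sketch of a fuller manual shutdown on one node (the
service name varies by distro, e.g. glusterd vs. glusterfs-server; note that
"killall glusterfs" also terminates any local fuse mounts):

  systemctl stop glusterd    # management daemon only
  killall glusterfsd         # brick processes
  killall glusterfs          # self-heal daemon and other client processes

Some packages also ship a stop-all-gluster-processes.sh helper script that
does roughly the same.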
2017 Jun 29
0
How to shutdown a node properly ?
On my nodes, when I use the systemd script to kill gluster (service glusterfs-server stop), only glusterd is killed. Then I guess the shutdown doesn't kill everything!
From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com]
Sent: 29 June 2017 13:41
To: Ravishankar N <ravishankar at redhat.com>
Cc: gluster-users at gluster.org; Renaud Fortier <Renaud.Fortier at
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2017 Sep 08
0
GlusterFS as virtual machine storage
OK, so killall seems to be OK after several attempts, i.e. IOPS do not stop
on the VM. Reboot caused I/O errors maybe 20 seconds after issuing the
command. I will check the servers' console during reboot to see if the VM
errors appear just after the power cycle and will try to crash the VM after
killall again...
-ps
On Fri, Sep 8, 2017 at 12:57 PM, Diego Remolina <dijuremo at gmail.com>
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if a graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see the client keep working in case of a crash (SIGKILL)
> and not in case of a graceful exit of glusterfsd.
Exactly. If this happens, probably there