similar to: gluster status

Displaying 20 results from an estimated 10000 matches similar to: "gluster status"

2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on reboot/shutdown? On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shutdown a node, I lose access (from clients) to the volumes > for 42 seconds (network.ping-timeout). Is there a special way to
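The 42-second freeze described above matches the default value of network.ping-timeout. A minimal sketch of inspecting and lowering it, assuming a volume named `myvol` (placeholder):

```shell
# Show the current ping timeout for a volume (default is 42 seconds).
gluster volume get myvol network.ping-timeout

# Lower it so clients fail over faster after a node disappears.
# Very low values risk spurious disconnects under load, so test first.
gluster volume set myvol network.ping-timeout 10
```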
2018 May 15
4
end-to-end encryption
Hi to all. I was looking at protonmail.com. Is it possible to implement end-to-end encryption with dovecot, where server-side there is no private key to decrypt messages? If I understood properly, on protonmail the private key is encrypted with the user's password, so that only the user is able to decrypt the mailbox. Anything similar?
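Dovecot's mail_crypt plugin (available since 2.2.27) can approximate this model: each user gets a keypair whose private key is encrypted with the user's password, so mail at rest is only decryptable after the user authenticates. A hedged configuration sketch, not a complete setup:

```
mail_plugins = $mail_plugins mail_crypt
plugin {
  mail_crypt_save_version = 2
  mail_crypt_curve = secp521r1
  # Refuse to use a per-user private key that is not itself encrypted.
  mail_crypt_require_encrypted_user_key = yes
}
```

Note this is weaker than protonmail's model: the server briefly handles the plaintext password and decrypted key at login time.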
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the systemd script to kill gluster (service > glusterfs-server stop) only glusterd is killed. Then I guess the > shutdown doesn't kill everything! > Killing glusterd does not kill other gluster processes. When you shutdown a node, everything obviously gets killed but the client does not get notified
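As the reply notes, stopping the service unit only stops the glusterd management daemon; brick (glusterfsd) and self-heal processes keep running. GlusterFS packages ship a helper script for exactly this case; a sketch, assuming the path used by common distro packages (it may differ by version):

```shell
# Stop the management daemon first, then every remaining gluster
# process (bricks, self-heal, etc.) on this node.
systemctl stop glusterd
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
```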
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > The init.d/systemd script doesn't kill gluster automatically on > reboot/shutdown? > > Sounds less like an issue with how it's shut down and more like an issue with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shut down just fine as long as
2017 Feb 15
4
Upgrade from 1.2 to 2.2
Hi, I have a production server running Debian Squeeze with Dovecot 1.2. I would like to upgrade everything to Jessie, running 2.2. Last time I did something similar, but from Lenny to Squeeze, the whole dovecot installation broke. Any suggestion on how to upgrade everything? Can I test our current configuration with a newer dovecot version to be sure that everything would be converted properly
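Dovecot 2.x's doveconf can read a 1.x configuration and print the 2.x equivalent, which lets you check the conversion before touching production. A sketch, assuming the old config sits at /etc/dovecot/dovecot-1.conf (placeholder path):

```shell
# Print the old 1.2 config converted to 2.x syntax; warnings flag
# settings that were renamed or no longer exist.
doveconf -n -c /etc/dovecot/dovecot-1.conf > dovecot-2.conf
```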
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown doesn't? I think this is a bug: if I'm shutting down a server, it's obvious that all clients should stop connecting to it.... On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when i use the
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, isn't a server hard-crash too much? I mean if reboot > reliably kills the VM, there is no doubt network crash or poweroff > will as well. IIUP, the only way to keep I/O running is to gracefully exit glusterfsd. killall should send signal 15 (SIGTERM) to the process; maybe a bug in signal
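The distinction being tested in this thread is simply which signal the brick process receives; a minimal sketch of the two cases:

```shell
# Graceful stop: SIGTERM (signal 15, killall's default) lets glusterfsd
# clean up and notify clients before exiting.
killall glusterfsd

# Hard kill: SIGKILL (signal 9) gives no chance to clean up, which
# approximates a node crash from the client's point of view.
killall -9 glusterfsd
```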
2017 Mar 20
1
Server migration
Hi to all. It's time to migrate an old server to a newer platform. Some questions: 1) What happens by changing the POP3/IMAP server on the client? Is the client (Outlook, Thunderbird, ...) smart enough to not download every message again? I'm asking this because the easiest way to migrate would be to move all mailboxes to the new server and then change the hostname on the client. 2) What if I
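Whether clients re-download depends on IMAP UIDs and UIDVALIDITY surviving the move. Dovecot's dsync replicates a mailbox while preserving them; a hedged sketch, where `alice` and `newserver` are placeholders:

```shell
# One-way copy of a single user's mailbox to the new server over SSH,
# preserving UIDs so IMAP clients do not re-download everything.
doveadm backup -u alice ssh newserver doveadm dsync-server -u alice
```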
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if we move this bug to CLOSED state and revert the rebalance-cli warning patch? -Krutika On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > Hello, > > > Yes, I forgot to upgrade the client as well. > > I did the upgrade and created a new volume,
2017 Jun 29
0
How to shutdown a node properly ?
On my nodes, when I use the systemd script to kill gluster (service glusterfs-server stop) only glusterd is killed. Then I guess the shutdown doesn't kill everything! From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com] Sent: 29 June 2017 13:41 To: Ravishankar N <ravishankar at redhat.com> Cc: gluster-users at gluster.org; Renaud Fortier <Renaud.Fortier at
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:07 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > OK, so killall seems to be ok after several attempts i.e. iops do not stop > on VM. Reboot caused I/O errors after maybe 20 seconds since issuing the > command. I will check the servers console during reboot to see if the VM > errors appear just after the power cycle and will try to crash the VM after >
2017 Sep 08
0
GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few > minutes. SIGTERM on the other hand causes a crash, but this time it is > not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. > -ps So, it seems to be resilient to server crashes but not to server shutdowns :)
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news. Is this planned to be published in the next release? On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > Thanks for that update. Very happy to hear it ran fine without any issues. > :) > > Yeah so you can ignore those 'No such file or directory' errors. They > represent a transient state where DHT in the client process
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in next
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB but re-tried again now and the results are the same. Both without tuning; stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s. I attached two logs (shard and stripe logs). Note: I also noticed that you said 'order'. Do you mean when we create via volume set we have to make an order for bricks? I thought gluster handles (and does the math) itself. Gencer
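For comparisons like the one above, it helps to pin down the exact dd invocation, since page-cache effects can inflate the numbers; a sketch, assuming a gluster fuse mount at /mnt/glusterfs (placeholder):

```shell
# Write 512 MB with direct I/O to bypass the page cache, so the result
# reflects the gluster volume rather than local RAM.
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=512 oflag=direct
```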
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in the next release? > > On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2012 Apr 27
1
geo-replication and rsync
Hi, can someone tell me the difference between geo-replication and plain rsync? At which frequency are files replicated with geo-replication?
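The short answer is that geo-replication is continuous and changelog-driven rather than scheduled: it replays only the changes recorded on the master's bricks to the slave volume, instead of re-scanning the whole tree the way a cron-driven rsync would. A hedged sketch of setting up a session, where the volume and host names are placeholders:

```shell
# Create and start a geo-replication session from volume "mastervol"
# to volume "slavevol" on host "backup.example.com".
gluster volume geo-replication mastervol backup.example.com::slavevol create push-pem
gluster volume geo-replication mastervol backup.example.com::slavevol start

# Check how far replication has progressed.
gluster volume geo-replication mastervol backup.example.com::slavevol status
```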
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL > will be as well. This assumption might be wrong. So I will test it. It would > be interesting to see the client work in case of a crash (SIGKILL) and not in > case of a graceful exit of glusterfsd. Exactly. If this happens, probably there
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled. > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >