Displaying 20 results from an estimated 21 matches for "fortier".
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop), only glusterd is killed. Then I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill other gluster processes.
When you shut down a node, everything o...
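The distinction above (glusterd versus the brick and helper daemons) is the crux of the problem, so a minimal pre-shutdown sketch may help. The process names are the usual gluster ones; some glusterfs packages also ship a stop-all-gluster-processes.sh helper that does roughly this:

```shell
# Minimal sketch: stop the management daemon, then the daemons that
# "service glusterfs-server stop" leaves running. Harmless if nothing
# matches (|| true), so it can sit in a shutdown hook.
service glusterfs-server stop 2>/dev/null || true
for p in glusterfsd glusterfs; do   # brick daemons, then self-heal/NFS helpers
    pkill "$p" 2>/dev/null || true
done
```

Killing the brick processes before the network goes down lets clients see the TCP connection close immediately instead of waiting out the ping timeout.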
2017 Jun 30
0
How to shutdown a node properly ?
...ients and a graceful shutdown
doesn't?
I think this is a bug: if I'm shutting down a server, it's obvious that all
clients should stop connecting to it....
On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop), only glusterd is killed. Then I guess the shutdown
> doesn't kill everything!
>
>
> Killing glusterd does not kill other gluster processes.
>
> When you shut down a no...
2017 Jun 29
0
How to shutdown a node properly ?
...rvice glusterfs-server stop) only glusterd is killed. Then I guess the shutdown doesn't kill everything!
From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com]
Sent: June 29, 2017, 13:41
To: Ravishankar N <ravishankar at redhat.com>
Cc: gluster-users at gluster.org; Renaud Fortier <Renaud.Fortier at fsaa.ulaval.ca>
Subject: Re: [Gluster-users] How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com<mailto:ravishankar at redhat....
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to shut down a
> node and keep access to the volumes without interruption? Currently, I
> use the 'shutdown' or 'reboot...
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
...."
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"
On Thu, Dec 21, 2017 at 3:47 PM, Renaud Fortier <
Renaud.Fortier at fsaa.ulaval.ca> wrote:
> Hi,
> In your ganesha-ha.conf, do you have your virtual IP addresses set to
> something like this:
>
> VIP_tlxdmz-nfs1="192.168.22.33"
> VIP_tlxdmz-nfs2="192.168.22.34"
>
> Renaud
>
> De : gluster-user...
2016 Mar 16
0
[Bug 1654] ~/.ssh/known_hosts.d/*
https://bugzilla.mindrot.org/show_bug.cgi?id=1654
Vincent Fortier <vincent.fortier at canada.ca> changed:
What | Removed | Added
----------------------------------------------------------------------------
CC   |         | vincent.fortier at canada.ca
--
You are receiving this mail because:...
2017 Sep 02
0
ganesha error ?
On 09/02/2017 02:09 AM, Renaud Fortier wrote:
> Hi,
>
> I got these errors 3 times since I'm testing gluster with nfs-ganesha.
> The clients are PHP apps, and when this happens, clients get strange PHP
> session errors. Below, the first error only happened once, but the other
> errors happen every time a client tries to creat...
2017 Sep 01
2
ganesha error ?
Hi,
I got these errors 3 times since I'm testing gluster with nfs-ganesha. The clients are PHP apps, and when this happens, clients get strange PHP session errors. Below, the first error only happened once, but the other errors happen every time a client tries to create a new session file. To make the PHP apps work again, I had to restart the client. Do you have an idea of what's happening here?
2016 Mar 16
0
[Bug 1654] ~/.ssh/known_hosts.d/*
https://bugzilla.mindrot.org/show_bug.cgi?id=1654
--- Comment #4 from Vincent Fortier <vincent.fortier at canada.ca> ---
If I can add, I just came across a clear case where this feature is
lacking for me, which forces me to redirect to /dev/null: I need to
access multiple hosts from various management networks across multiple
locations. Management IPs are often the same at e...
2017 Jun 22
1
Volume options appear twice
Hi,
This is a list of volume options that appear twice when I run: gluster volume get my_volume all
features.grace-timeout
features.lock-heal
geo-replication.ignore-pid-check
geo-replication.indexing
network.ping-timeout
network.tcp-window-size
performance.cache-size
Is that normal?
Thanks
Gluster version : 3.8.11 on Debian 8
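A quick way to spot such duplicates mechanically is to look for option names that occur more than once in column 1. The option lines below are an inlined sample so the sketch runs without a gluster install; the real input would be the output of gluster volume get my_volume all:

```shell
# Print option names (first column) that appear more than once.
# The printf lines stand in for real "gluster volume get <vol> all" output.
dups=$(printf '%s\n' \
    "network.ping-timeout 42" \
    "performance.cache-size 32MB" \
    "network.ping-timeout 42" \
  | awk '{print $1}' | sort | uniq -d)
echo "$dups"
```

On a live node, pipe the real command through the same awk/sort/uniq chain.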
2017 Jun 29
0
How to shutdown a node properly ?
On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the
> volumes for 42 seconds (network.ping-timeout). Is there a special way
> to shut down a node and keep access to the volumes without
> interruption? Currently, I use the 'shutdown' or 'reb...
2017 Jun 29
2
How to shutdown a node properly ?
Hi,
Every time I shut down a node, I lose access (from clients) to the volumes for 42 seconds (network.ping-timeout). Is there a special way to shut down a node and keep access to the volumes without interruption? Currently, I use the 'shutdown' or 'reboot' command.
My setup is :
-4 gluster 3.10.3 nodes on debian 8 (jessie)
-3 volumes Distributed-Replicate 2 X 2 = 4
Thank you
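One knob for the 42-second hang is network.ping-timeout itself. The sketch below only assembles and prints the command (the volume name and the 10-second value are examples, not from the thread), since lowering the timeout too far can cause spurious disconnects under load:

```shell
# Build the tuning command; run the printed line on any node of the
# trusted pool. 10 seconds is an illustrative value, not a recommendation.
vol="my_volume"
cmd="gluster volume set $vol network.ping-timeout 10"
echo "$cmd"
```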
2017 Jun 29
0
How to shutdown a node properly ?
...but an issue with how
it's mounted perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as quorum is maintained.
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
>
>> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>>
>> Hi,
>>
>> Every time I shut down a node, I lose access (from clients) to the volumes
>> for 42 seconds (network.ping-timeout). Is there a special way to shut down a
>> node and keep access to the volumes without interruption? Currently, I
>> u...
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi,
In your ganesha-ha.conf, do you have your virtual IP addresses set to something like this:
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"
Renaud
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo
Sent: December 20, 2017, 04:35
To: gluster-users at gluster.org
Subject: [Gluster-users]
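For context, a hypothetical ganesha-ha.conf fragment in the same shape (the cluster name, node names, and addresses are placeholders; each VIP_<node> key has to match an entry in HA_CLUSTER_NODES):

```shell
# Hypothetical ganesha-ha.conf sketch. The file uses shell-style
# KEY="value" assignments, so this fragment can be sanity-checked
# by sourcing it in a shell.
HA_NAME="ganesha-ha-demo"
HA_CLUSTER_NODES="server1,server2"
VIP_server1="192.168.22.33"
VIP_server2="192.168.22.34"
```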
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi,
I've just created the gluster with NFS-Ganesha again. GlusterFS version 3.8.
When I run the command gluster nfs-ganesha enable, it returns success.
However, looking at pcs status, I see this:
[root at tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition
with quorum
Last updated: Wed Dec 20
2020 Jul 10
0
[Bug 1654] ~/.ssh/known_hosts.d/*
https://bugzilla.mindrot.org/show_bug.cgi?id=1654
--- Comment #6 from Darren Tucker <dtucker at dtucker.net> ---
(In reply to Vincent Fortier from comment #4)
> Management IPs are often the same at every
> location, making SSH complain that another host exists.
BTW you can turn that off with CheckHostIP=no and rely solely on the
HostKeyAlias.
--
You are receiving this mail because:
You are watching the assignee of the bug.
You a...
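The CheckHostIP/HostKeyAlias suggestion above can be sketched as a per-host ~/.ssh/config entry (the host name and address here are made up for illustration):

```
# One entry per site: the alias keys known_hosts instead of the
# (duplicated) management IP, and CheckHostIP no disables IP-based checks.
Host site1-mgmt
    HostName 192.0.2.10
    HostKeyAlias site1-mgmt
    CheckHostIP no
```

With this, each site's identical management IP gets its own known_hosts entry under its alias, so the keys no longer collide.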
2017 Aug 17
1
shared-storage bricks
Hi,
I enabled shared storage on my four-node cluster, but when I look at the volume info, I only have 3 bricks. Is that supposed to be normal?
Thank you
2017 Jul 07
1
Ganesha "Failed to create client in recovery dir" in logs
Hi all,
I have this entry in the ganesha.log file on the server when mounting the volume on a client:
< GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir (/var/lib/nfs/ganesha/v4recov/node0/::ffff:192.168.2.152-(24:Linux NFSv4.2 client-host-name)), errno=2 >
But everything seems to work as expected without any other errors (so far).
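errno=2 is ENOENT, so one hedged reading of the log line is that the per-node v4 recovery directory simply does not exist yet. The sketch below creates it under a scratch prefix so it can be tried unprivileged; on a real server the mkdir would run as root on the path taken from the message:

```shell
# Path copied from the log message; the "scratch" prefix is only so
# this dry run works without root. On the server: mkdir -p "$recov"
recov="/var/lib/nfs/ganesha/v4recov/node0"
mkdir -p "scratch${recov}"
ls -d "scratch${recov}"
```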
2002 Jul 30
8
rehuff [source attached]
Hi all,
Yes, it's true. A new version of rehuff, the tool that losslessly compresses
Vorbis files: one that is easy to compile, and that works with
newer-than-two-years-ago streams, too!
On 1.0 streams, you get about 3% size reduction, and the headers get _much_
smaller (which helps for fast-start network streams).
Building it should be easy (you might have to add some -I and -L for
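A hypothetical build line along those lines (the include/lib paths, the source file name, and the libvorbis/libogg link flags are all assumptions, so it is assembled and printed rather than executed here):

```shell
# Adjust -I/-L to wherever the libogg/libvorbis headers and libs live.
build="cc -O2 -I/usr/local/include -L/usr/local/lib rehuff.c -o rehuff -lvorbis -logg"
echo "$build"
```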