Displaying 20 results from an estimated 7000 matches similar to: "glusterd daemon - restart"
2017 Aug 02
0
glusterd daemon - restart
On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor <markconnor64 at gmail.com> wrote:
> Can the glusterd daemon be restarted on all storage nodes without causing
> any disruption to data being served or the cluster in general? I am running
> gluster 3.2 using distributed replica 2 volumes with fuse clients.
Yes, in general. Any clients already connected will continue to work.
What
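A minimal rolling-restart sketch, assuming systemd units and a placeholder volume name "myvol": restart glusterd on one node at a time and confirm all bricks are still online before moving on.

  # Restart the management daemon on one node (bricks are separate
  # glusterfsd processes and are not touched by this):
  systemctl restart glusterd

  # Verify every brick is still online before restarting the next node:
  gluster volume status myvol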
2017 Aug 02
1
glusterd daemon - restart
Sorry, I meant Red Hat's Gluster Storage Server 3.2, which is the latest
and greatest.
On Wed, Aug 2, 2017 at 9:28 AM, Kaushal M <kshlmster at gmail.com> wrote:
> On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor <markconnor64 at gmail.com>
> wrote:
> > Can the glusterd daemon be restarted on all storage nodes without causing
> > any disruption to data being served or the
2013 Jan 15
2
1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?
Hi,
Does anyone know if the 1024-character limit for auth.allow still exists in
the latest production version? It seems to still be there in 3.2.5. Also,
does anyone know if newer versions notice that auth.allow has been updated
without having to restart glusterd? Is there any way to restart glusterd
without killing the process and starting it again? Is kill -1 (HUP)
possible with it (also with the version I'm running)?
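I can't confirm whether the 1024-character limit on the option value is gone in later releases, but as a hedged sketch of the restart-free path: set auth.allow through the CLI instead of editing volfiles by hand, and glusterd regenerates and pushes the volfiles itself ("myvol" and the addresses below are placeholders).

  # Update auth.allow via the CLI; glusterd rewrites the volfiles and
  # notifies the bricks, so the daemon does not need a restart:
  gluster volume set myvol auth.allow "10.0.0.1,10.0.0.2,10.0.1.*"

  # Confirm the value took effect:
  gluster volume info myvol | grep auth.allow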
2017 Nov 03
2
[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+
Just so I am clear, the upgrade process will be as follows:
upgrade all clients to 4.0
rolling upgrade all servers to 4.0 (with GD1)
kill all GD1 daemons on all servers and run upgrade script (new clients
unable to connect at this point)
start GD2 (necessary, or does the upgrade script do this?)
I assume that once the cluster has been migrated to GD2 the glusterd
startup script will be smart
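A rough sketch of what steps 3 and 4 above might look like on each server. The migration script name here is only a placeholder (no real tool had been named in this thread), and the glusterd2 service unit is an assumption about how the GD2 packages will be laid out.

  # Step 3: stop the GD1 daemon everywhere, then migrate its state.
  systemctl stop glusterd
  # (placeholder name; run whatever migration tool 4.0 actually ships)
  /usr/local/bin/gd1-to-gd2-migrate.sh

  # Step 4: start GD2 (assuming the package installs a glusterd2 unit).
  systemctl start glusterd2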
2017 Nov 06
0
[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+
On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <ajneil.tech at gmail.com> wrote:
> Just so I am clear, the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this point)
>
> start GD2 (necessary, or
2017 Nov 02
2
Request for Comments: Upgrades from 3.x to 4.0+
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
> volume without any problems?
>
> I am asking this because 4.0 comes with DHT2.
Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.
RIO (as DHT2 is now known) developers
2017 Nov 02
5
Request for Comments: Upgrades from 3.x to 4.0+
We're fast approaching the time for Gluster-4.0. And we would like to
set out the expected upgrade strategy and try to polish it to be as
user-friendly as possible.
We're getting this out here now, because there was quite a bit of
concern and confusion regarding the upgrades between 3.x and 4.0+.
---
## Background
Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
2017 Nov 02
0
Request for Comments: Upgrades from 3.x to 4.0+
if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
volume without any problems?
I am asking this because 4.0 comes with DHT2.
On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M <kshlmster at gmail.com> wrote:
> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user
2017 Nov 03
1
Request for Comments: Upgrades from 3.x to 4.0+
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <budic at onholyground.com> wrote:
> Will the various client packages (CentOS in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?
We should be able to do this with CentOS (and other RPM-based distros)
which have well-split glusterfs
2017 Dec 28
1
Adding larger bricks to an existing volume
I have a 10x2 distributed replica volume running Gluster 3.8.
Each of my bricks is about 60TB in size (6TB drives, RAID 6, 10+2).
I am running out of storage, so I intend to add servers with larger 8TB
drives.
My new bricks will be 80TB in size. I will make sure each larger brick's
replica partner matches it in size.
Will Gluster place more files on the larger bricks? Or will I have wasted
space?
In
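A hedged sketch of the expansion, with placeholder host and brick paths. DHT's weighted rebalance sizes the layout by brick capacity, so larger bricks should receive proportionally more new files; I believe the option is on by default in 3.8, but it is worth checking.

  # Add the new brick pair (a multiple of the replica count, 2 here):
  gluster volume add-brick myvol new1:/bricks/b1 new2:/bricks/b1

  # Weighted rebalance assigns larger bricks a larger share of the DHT
  # layout; check it is enabled, then rebalance existing data:
  gluster volume get myvol cluster.weighted-rebalance
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status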
2017 Nov 14
1
glusterfs-fuse package update
Folks, I need to update all my glusterfs-fuse clients to the latest version.
Can I do this without a reboot?
If I stop the module then update the fuse client, would this suffice? Or do
I really need a reboot?
Thank You
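A hedged sketch of the no-reboot path: a running fuse mount keeps using the old client code until it is remounted, so unmount, update, and mount again. The mount point, server, and volume name below are placeholders.

  # Drain or stop applications using the mount first, then:
  umount /mnt/glustervol
  yum update glusterfs-fuse glusterfs-libs
  mount -t glusterfs server1:/myvol /mnt/glustervol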
2017 Nov 02
0
Request for Comments: Upgrades from 3.x to 4.0+
Does RIO improve folder listing and rebalance, compared to 3.x?
If yes, do you have any performance data comparing RIO and DHT?
On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M <kshlmster at gmail.com> wrote:
> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> > if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
> > volume
2017 Nov 02
0
Request for Comments: Upgrades from 3.x to 4.0+
Will the various client packages (CentOS in my case) be able to automatically handle the upgrade vs new install decision, or will we be required to do something manually to determine that?
It's a little unclear that things will continue without interruption because of the way you describe the change from GD1 to GD2, since it sounds like it stops GD1. Early days, obviously, but if you could
2013 Oct 07
1
glusterd service fails to start on one peer
I'm hoping that someone here can point me in the right direction to help me
solve a problem I am having.
I've got 3 gluster peers and for some reason glusterd will not start on one
of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4
(2.6.32-358.el6.x86_64).
In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error
repeated 36 times (alternating between brick-0
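One way to dig further, as a hedged sketch: run glusterd in the foreground with debug logging, and compare its on-disk state with a peer that starts cleanly.

  # Foreground start with debug logging; the failure reason usually
  # appears just before it exits:
  glusterd --debug

  # Compare peer and volume state against a healthy node:
  ls -l /var/lib/glusterd/peers/ /var/lib/glusterd/vols/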
2017 May 09
1
Empty info file preventing glusterd from starting
Hi Atin/Team,
We are using gluster-3.7.6 with a two-brick setup, and during a system
restart I have seen the glusterd daemon fail to start. While analyzing
the logs in the etc-glusterfs.......log file I found the entries below:
[2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version
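A commonly suggested workaround for this symptom, as a hedged sketch: if the unclean restart left the volume's info file truncated to zero bytes, restore it from a healthy peer before starting glusterd. The volume name and peer hostname below are placeholders.

  # A zero-byte info file points to this problem:
  ls -l /var/lib/glusterd/vols/myvol/info

  # Copy the file from a peer whose glusterd starts cleanly:
  scp good-peer:/var/lib/glusterd/vols/myvol/info \
      /var/lib/glusterd/vols/myvol/info
  systemctl start glusterd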
2017 Jul 05
2
[New Release] GlusterD2 v4.0dev-7
After nearly 3 months, we have another preview release for GlusterD-2.0.
The highlights for this release are,
- GD2 now uses an auto-scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
- An end to end functional testing framework
2017 Aug 22
2
Glusterd proccess hangs on reboot
Hi there,
I have a strange problem.
The Gluster version is 3.10.5, and I am testing new servers. The Gluster
configuration is 16+4 EC; I have three volumes, each with 1600 bricks.
I can successfully create the cluster and volumes without any
problems. I write data to the cluster from 100 clients for 12 hours, again
with no problem. But when I try to reboot a node, the glusterd process
hangs at 100% CPU usage and seems to
2012 Sep 18
1
glusterd vs. glusterfsd
I'm running version 3.3.0 on Fedora16-x86_64. The official(?) RPMs
ship two init scripts, glusterd and glusterfsd. I've googled a bit,
and I can't figure out what the purpose is for each of them. I know
that I need one of them, but I can't tell which for sure. There's no
man page for either, and running them with --help returns the same
exact output. Do they have separate
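As far as I can tell, glusterd is the management daemon and the one to enable at boot; glusterfsd is the brick server process that glusterd spawns (one per brick), and its init script mostly exists to clean up leftover brick processes at shutdown. A quick check of what is actually running:

  # One glusterd plus one glusterfsd per local brick is the normal picture:
  ps -eo pid,comm,args | grep -E 'gluster(d|fsd)' | grep -v grep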
2017 Oct 17
2
Gluster processes remaining after stopping glusterd
Hi,
I noticed that when I stop my gluster server via the systemctl stop glusterd command, one glusterfs process is still up.
What is the correct way to stop all gluster processes on my host?
That's what we see after running the command:
***************************************************************************************************
[root@xxxxxx ~]# ps -ef | grep -i glu
root 1825 1
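That is expected: stopping glusterd leaves brick and auxiliary processes running by design, so data stays available. A hedged sketch of a full stop; the helper script ships with the glusterfs-server package in recent versions, though the path may vary.

  # Stop the management daemon, then everything it spawned:
  systemctl stop glusterd
  /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

  # Or, by hand, if the script is not packaged for your version
  # (glusterfs covers self-heal and other auxiliary daemons):
  pkill glusterfsd
  pkill glusterfs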
2017 Aug 22
0
Glusterd proccess hangs on reboot
In addition, perf top shows 80% of CPU time in libc-2.12.so
__strcmp_sse42 during the glusterd 100% CPU usage.
Hope this helps...
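To pin down where glusterd is spinning, a hedged debugging sketch: trigger a statedump and capture a full thread backtrace (gdb and the glusterfs debuginfo packages are assumed to be installed).

  # SIGUSR1 makes gluster daemons write a statedump under /var/run/gluster:
  kill -USR1 $(pidof glusterd)

  # A backtrace of all threads shows what is calling __strcmp_sse42:
  gdb -p $(pidof glusterd) -batch -ex 'thread apply all bt' > glusterd-bt.txt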
On Tue, Aug 22, 2017 at 2:41 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi there,
>
> I have a strange problem.
> The Gluster version is 3.10.5, and I am testing new servers. The Gluster
> configuration is 16+4 EC; I have three volumes, each with 1600 bricks.
> I