Displaying 20 results from an estimated 10000 matches similar to: "Hot Upgrade Gluster 3.1.4"
2009 Dec 23
1
Questions about Gluster Storage Platform
Hi,
I got some questions about the Gluster Storage Platform :
- Is it just a platform to manage the storage bricks and clients running GlusterFS 3.0?
- Do you install the platform on a single node, or on each server you want to manage?
Thx
2011 Sep 05
1
Quota calculation
Hi Junaid,
Sorry about the confusion; indeed I gave you the wrong output. So let's
start from the beginning: I disabled quota and reactivated it.
My configuration :
Volume Name: venus
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Brick3: yval1000:/soft/venus
Brick4:
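(Not part of the original mail, but for context: the quota cycle described above is driven entirely from the gluster CLI. A minimal sketch, assuming the volume name venus from the output above; the directory and limit are made-up examples:)

```shell
# Re-enable quota on the volume
gluster volume quota venus enable

# Set a usage limit on a directory inside the volume (example values)
gluster volume quota venus limit-usage /projects 10GB

# Check what gluster has calculated so far
gluster volume quota venus list
```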
2012 Sep 13
0
eager locking
Hi everyone,
I have seen in the gluster CLI (GlusterFS 3.3) an eager-locking option (cluster.eager-lock), and there is no reference to this option in the documentation.
Is it related to this post: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-replication-future/ ?
Enabling this option seems to boost write performance, but is it safe?
Thx
Anthony Garnier
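(Editor's aside, not from the thread: the option in question is toggled per volume like any other. A minimal sketch, reusing the volume name venus as a placeholder:)

```shell
# Enable eager locking on a volume (volume name is an example)
gluster volume set venus cluster.eager-lock on

# Reconfigured options are listed in the volume info output
gluster volume info venus
```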
2019 May 27
1
[PATCH v7 11/12] x86/paravirt: Adapt assembly for PIE support
On 21/05/2019 01:19, Thomas Garnier wrote:
> From: Thomas Garnier <thgarnie at google.com>
>
> If PIE is enabled, switch the paravirt assembly constraints to be
> compatible. The %c/i constraints generate smaller code, so they are kept
> by default.
>
> Position Independent Executable (PIE) support will allow extending the
> KASLR randomization range below
2018 May 29
1
[PATCH v3 09/27] x86/acpi: Adapt assembly for PIE support
On Fri 2018-05-25 10:00:04, Thomas Garnier wrote:
> On Fri, May 25, 2018 at 2:14 AM Pavel Machek <pavel at ucw.cz> wrote:
>
> > On Thu 2018-05-24 09:35:42, Thomas Garnier wrote:
> > > On Thu, May 24, 2018 at 4:03 AM Pavel Machek <pavel at ucw.cz> wrote:
> > >
> > > > On Wed 2018-05-23 12:54:03, Thomas Garnier wrote:
> > > > >
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps on a replica 2 volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
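(Editor's aside, not from the post: once all nodes run 3.12, going from replica 2 to replica 3 is an add-brick with a new replica count, one new brick per replicated pair. Hostnames and paths below are placeholders:)

```shell
# Add the third replica brick and raise the replica count
gluster volume add-brick myvol replica 3 node3:/pool/brick1

# Trigger the initial sync to the new brick, then watch it drain
gluster volume heal myvol full
gluster volume heal myvol info
```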
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi Martin,
> Do you mean latest package from Ubuntu repository or latest package from
> Gluster PPA (3.7.20-ubuntu1~xenial1).
> Currently I am using Ubuntu repository package, but want to use PPA for
> upgrade because Ubuntu has old packages of Gluster in repo.
When you switch to PPA, make sure to download and keep a copy of each
set of gluster deb packages, otherwise if you ever
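(A sketch of the advice above; the package names are the usual Debian/Ubuntu gluster packages, kept here as examples:)

```shell
# After installing from the PPA, keep a local copy of the exact .debs
# so the same versions can be reinstalled even if the PPA drops them
mkdir -p ~/gluster-debs
cp /var/cache/apt/archives/glusterfs*.deb ~/gluster-debs/

# Alternatively, fetch the packages explicitly without installing them
apt-get download glusterfs-server glusterfs-client glusterfs-common
```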
2017 Oct 01
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi Diego,
I've tried to upgrade and then extend gluster with a 3rd node in a VirtualBox test environment and all went without problems.
Sharding will not help me at this time, so I will consider upgrading 1G to 10G before running this procedure in production. That should lower the downtime, i.e. the healing time of the VM image files on Gluster.
I hope healing will take as short a time as possible on 10G.
Additional info for
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps on a replica 2 volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is ZFS pool running as raidz2 with SSD
2017 Jul 31
0
Hot Tier
Milind and Hari,
Can you please take a look at this?
Thanks,
Nithya
On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote:
> Hi
>
> I'm looking for advice on the hot tier feature - how can I tell if the hot
> tier is working?
>
> I've attached replicated-distributed hot tier to an EC volume.
> Yet, I don't think it's working, at
2018 May 23
2
[PATCH v3 23/27] x86/modules: Adapt module loading for PIE support
Hi,
(for several patches in this series:)
The commit message is confusing. See below.
On 05/23/2018 12:54 PM, Thomas Garnier wrote:
> Adapt module loading to support PIE relocations. Generate dynamic GOT if
> a symbol requires it but no entry exist in the kernel GOT.
exists
>
> Position Independent Executable (PIE) support will allow to
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter.
/var/lib/glusterd/vols/gds-common is the upgraded arbiter
/home/marcus/gds-common is one of the other nodes still on gluster 10
diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common
5c5
< listen-port=60419
---
> listen-port=0
11c11
<
2017 Jul 30
2
Hot Tier
Hi
I'm looking for advice on the hot tier feature - how can I tell if the hot
tier is working?
I've attached replicated-distributed hot tier to an EC volume.
Yet, I don't think it's working, at least I don't see any files directly on
the bricks (only folder structure). 'Status' command has all 0s and 'In
progress' for all servers.
~]# gluster volume tier home
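(Editor's aside, not from the mail: the 3.x tiering feature exposes its own status subcommand, which shows per-node promotion/demotion counters. Using the volume name home from the truncated command above:)

```shell
# Show tier daemon status and promoted/demoted file counts per node
gluster volume tier home status

# The general status output also lists the tier daemon processes
gluster volume status home
```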
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through.
---------- Forwarded message ----------
From: Martin Toth <snowmailer at gmail.com>
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users at gluster.org
Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
Hello all fellow GlusterFriends,
I would like you to comment /
2018 May 24
2
[PATCH v3 11/27] x86/power/64: Adapt assembly for PIE support
On Wed 2018-05-23 12:54:05, Thomas Garnier wrote:
> Change the assembly code to use only relative references of symbols for the
> kernel to be PIE compatible.
>
> Position Independent Executable (PIE) support will allow extending the
> KASLR randomization range below the -2G memory limit.
>
> Signed-off-by: Thomas Garnier <thgarnie at google.com>
Again, was this
2019 Dec 23
1
[PATCH v10 10/11] x86/paravirt: Adapt assembly for PIE support
On Wed, Dec 04, 2019 at 04:09:47PM -0800, Thomas Garnier wrote:
> If PIE is enabled, switch the paravirt assembly constraints to be
> compatible. The %c/i constraints generate smaller code, so they are kept
> by default.
>
> Position Independent Executable (PIE) support will allow extending the
> KASLR randomization range below 0xffffffff80000000.
>
> Signed-off-by: Thomas
2024 Mar 04
1
Gluster Rebalance question
Hi all,
I'm using glusterfs for a few years now, and generally very happy with it. Saved my data multiple times already! :-)
However, I do have a few questions for which I hope someone is able to answer them.
I have a distributed, replicated glusterfs setup. I am in the process of replacing 4TB bricks with 8TB bricks, which is working nicely. However, what I am seeing now is that the space
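(Not from the original post, but for context: after swapping bricks on a distributed volume, the usual way to spread existing data over the new layout is a rebalance. The volume name is a placeholder:)

```shell
# Redistribute existing files across the enlarged bricks
gluster volume rebalance myvol start

# Follow progress: files scanned/moved per node
gluster volume rebalance myvol status
```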
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on read, a few performance
translators have to be turned off.
By default the quick-read and io-cache performance translators are
turned on; you can turn them off so that the files will be migrated
on read.
On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote:
> Hi,
>
> If it was just reads then the tier daemon won't migrate
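(A sketch of the advice above; home is the volume name used earlier in this thread:)

```shell
# Turn off the two read-side caching translators so that reads
# count toward tier promotion
gluster volume set home performance.quick-read off
gluster volume set home performance.io-cache off
```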