similar to: tiering

Displaying 20 results from an estimated 1000 matches similar to: "tiering"

2018 Mar 05
0
tiering
Hi, There isn't a way to replace the failing tier brick through a single command, as we don't have support for replace-, remove-, or add-brick with tier. Once you bring the brick online (volume start force), the data in the brick will be rebuilt by the self-heal daemon (done because it's a replicated tier). But adding a brick will still not work. Else if you use the force option, it will work as
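A minimal sketch of the recovery sequence described in this reply; the volume name `myvol` is a placeholder, and this assumes a replicated hot tier as in the thread:

```shell
# Bring the failed tier brick back online. replace-brick / remove-brick /
# add-brick are not supported on tiered volumes, so "start force" is the
# way to restart the brick process.
gluster volume start myvol force

# The self-heal daemon then rebuilds the brick's data (replicated tier);
# progress can be watched with:
gluster volume heal myvol info
```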
2017 Oct 18
0
warning spam in the logs after tiering experiment
Forgot to mention: Gluster version 3.10.6. On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote: > a short while ago I experimented with tiering on one of my volumes. I > decided it was not working out so I removed the tier. I now have spam in > the glusterd.log every 7 seconds: > > [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
2017 Oct 18
2
warning spam in the logs after tiering experiment
A short while ago I experimented with tiering on one of my volumes. I decided it was not working out, so I removed the tier. I now have spam in the glusterd.log every 7 seconds: [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory) [2017-10-18
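Since the warnings reference a tierd socket after the tier was removed, one thing worth checking (sketch only; `myvol` is a placeholder volume name) is whether the detach was fully committed, since a detach left in the "start" state can leave tier-daemon references behind:

```shell
# Check whether the tier detach finished migrating data.
gluster volume tier myvol detach status

# If migration is complete, commit the detach so the tier daemon
# is fully torn down and glusterd stops probing its socket.
gluster volume tier myvol detach commit
```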
2017 Nov 03
1
Ignore failed connection messages during copying files with tiering
Hi, All, We created a GlusterFS cluster with tiers. The hot tier is distributed-replicated SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we find these logs: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W
2017 Nov 04
1
Fwd: Ignore failed connection messages during copying files with tiering
Hi, We created a GlusterFS cluster with tiers. The hot tier is distributed-replicated SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we find these logs: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W [socket.c:3292:socket_connect]
2007 Jun 22
1
Implicit storage tiering w/ ZFS
I'm curious if there has been any discussion of or work done toward implementing storage classing within zpools (this would be similar to the Storage Foundation QoSS feature). I've searched the forum and inspected the documentation looking for a means to do this, and haven't found anything, so pardon the post if this is redundant/superfluous. I would imagine this would
2017 Oct 22
1
gluster tiering errors
There are several messages "no space left on device". I would check first that free disk space is available for the volume. On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote: > Herb, > What are the high and low watermarks for the tier set at ? > > # gluster volume get <vol> cluster.watermark-hi > > # gluster volume get
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test to see system behavior *if quota is turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
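The quota test suggested in this reply can be run roughly as follows (`myvol` is a placeholder volume name):

```shell
# Inspect the current quota limits configured on the volume.
gluster volume quota myvol list

# Temporarily disable quota to see whether the tier migration
# failures stop; it can be re-enabled afterwards with "enable".
gluster volume quota myvol disable
```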
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
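The watermark checks above, plus a hedged example of adjusting them (`myvol` and the value 80 are placeholders; the right value depends on hot-tier capacity):

```shell
# Read the current high/low watermarks for the tier (percent of
# hot-tier capacity at which demotion becomes aggressive / stops).
gluster volume get myvol cluster.watermark-hi
gluster volume get myvol cluster.watermark-low

# If the hot tier is running out of space, lowering the high
# watermark makes demotion kick in earlier (example value only).
gluster volume set myvol cluster.watermark-hi 80
```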
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.. >> What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi Option Value ------ ----- cluster.watermark-hi 90 # gluster volume get <vol> cluster.watermark-low Option
2017 Oct 19
3
gluster tiering errors
All, I am new to gluster and have some questions/concerns about some tiering errors that I see in the log files. OS: CentOS 7.3.1611 Gluster version: 3.10.5 Samba version: 4.6.2 I see the following (scrubbed): Node 1 /var/log/glusterfs/tier/<vol>/tierd.log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
2018 Feb 09
1
Tiering Volumes
Hello everyone. I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier" volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the tiers for each volume? I will be adding 2 more HDDs to each server. I would then like to change from a Replicate to Distributed-Replicated. Not sure if that makes a difference in the tiering setup. [root at
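For reference, in GlusterFS tiering the hot tier is attached to an existing (cold) volume rather than created as a second standalone volume. A sketch with placeholder volume and brick names:

```shell
# Attach the NVMe bricks as a replicated hot tier to the existing
# volume; the volume's original bricks become the cold tier.
gluster volume tier myvol attach replica 3 \
    server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick
```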
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or to work around, the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On a sharded tiered volume, only the first shard of a new file goes on the hot tier; the rest
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on tiered volume. This is very attractive > feature that could
2017 Jul 30
2
Hot Tier
Hi I'm looking for advice on the hot tier feature - how can I tell if the hot tier is working? I've attached a replicated-distributed hot tier to an EC volume. Yet, I don't think it's working; at least I don't see any files directly on the bricks (only the folder structure). The 'Status' command has all 0s and 'In progress' for all servers. ~]# gluster volume tier home
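A quick way to check for tier activity, as a sketch (`myvol` is a placeholder for the volume name, `home` in this thread):

```shell
# Per-node promotion/demotion counters for the tier daemon; non-zero
# "Promoted files" / "Demoted files" counts indicate it is migrating data.
gluster volume tier myvol status
```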
2017 Jul 31
2
Hot Tier
Hi, If it was just reads, then the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
2017 Jul 31
0
Hot Tier
Milind and Hari, Can you please take a look at this? Thanks, Nithya On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > I'm looking for advice on the hot tier feature - how can I tell if the hot > tier is working? > > I've attached a replicated-distributed hot tier to an EC volume. > Yet, I don't think it's working, at
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on read, a few performance translators have to be turned off. By default, the quick-read and io-cache performance translators are turned on. You can turn them off so that files will be migrated on read. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > If it was just reads then the tier daemon won't migrate
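The translator changes described above would look roughly like this (`myvol` is a placeholder volume name):

```shell
# quick-read and io-cache serve small reads from client-side caches,
# so read hits never reach the tier daemon as activity; turning them
# off lets reads count toward promotion to the hot tier.
gluster volume set myvol performance.quick-read off
gluster volume set myvol performance.io-cache off
```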
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files that are used for the tiering file-access counters, stored on each hot and cold tier brick in .glusterfs/<volname>.db. When the tier is first created, these DB files do not exist; they are created, and everything works fine. On a stop/start or service restart, the .db files are already present, albeit empty since I don't have
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue: Bug 1540376 - Tiered volume performance degrades badly after a volume stop/start or system restart. https://bugzilla.redhat.com/show_bug.cgi?id=1540376 Does anyone have any ideas on what might be causing this, and what a fix or work-around might be? Thanks! ~ Jeff Byers ~ Tiered volume performance degrades badly after a volume stop/start or system restart. The