
Displaying 20 results from an estimated 3000 matches similar to: "Tiered volume performance degrades badly after a volume stop/start or system restart."

2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately with exactly the same results. I was trying to get better read performance from local mounts with hundreds of thousands of maildir email files by using SSD, hoping that .gluster file stat reads would improve, since those files do migrate to the hot tier. After seeing what you described for 24 hours, and confirming all movement around the tiers was done, I killed it. Here are my
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files that are used for the tiering file access counters, stored on each hot and cold tier brick in .glusterfs/<volname>.db. When the tier is first created, these DB files do not exist yet; they are created, and everything works fine. On a stop/start or service restart, the .db files are already present, albeit empty since I don't have
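Not part of the original message, but a minimal sketch of how those counter databases can be inspected with the sqlite3 shell, assuming they really are plain sqlite3 files on each brick. The brick path and volume name below are placeholders.

  # Hypothetical brick path and volume name -- substitute your own.
  BRICK=/data/brick1
  VOLNAME=tiervol

  # List the tables in the per-brick tiering counter database.
  sqlite3 "$BRICK/.glusterfs/$VOLNAME.db" ".tables"

  # Dump the schema to see which access counters are tracked.
  sqlite3 "$BRICK/.glusterfs/$VOLNAME.db" ".schema"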
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On a sharded tiered volume, only the first shard of a new file goes on the hot tier, the rest
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on a tiered volume. This is a very attractive > feature that could
2017 Dec 08
2
Testing sharding on tiered volume
Hi, I'm looking to use sharding on a tiered volume. This is a very attractive feature that could benefit a tiered volume by letting it handle larger files without hitting the "out of (hot) space" problem. I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered volume has 2TB cold and 1GB hot segments. Shard size is set to 16MB. For testing, 100GB files are used. It seems writes
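A minimal sketch of the sharding side of that test setup; the 16MB shard size is from the post, while the volume name is a placeholder.

  # Hypothetical volume name; 16MB shard size as described above.
  VOLNAME=tiervol

  # Enable sharding and set the shard block size before writing the test files.
  gluster volume set "$VOLNAME" features.shard on
  gluster volume set "$VOLNAME" features.shard-block-size 16MB

  # Confirm both options took effect.
  gluster volume get "$VOLNAME" features.shard
  gluster volume get "$VOLNAME" features.shard-block-size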
2018 Feb 09
1
Tiering Volumes
Hello everyone. I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier" volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the tiers for each volume? I will be adding 2 more HDDs to each server. I would then like to change from Replicate to Distributed-Replicated. Not sure if that makes a difference in the tiering setup. [root at
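Not from the thread itself, but a hedged sketch of how a hot tier is normally attached to an existing volume with the tier CLI; the volume name, replica count, and brick paths below are placeholders.

  # Hypothetical cold (HDD) volume and NVMe bricks -- adjust to your layout.
  VOLNAME=coldvol

  # Attach a replicated hot tier built from the NVMe bricks.
  gluster volume tier "$VOLNAME" attach replica 3 \
      server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick

  # Check that the tier daemon started on all nodes.
  gluster volume tier "$VOLNAME" status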
2006 Jul 21
1
Handling a tiered subscription service
Hi all, I'm working on a project that involves a tiered pricing structure for various features, all of which are accessed via a secure admin area. The features available in each tier, the tier structure and pricing will change over time, so new tier schemes will come into play, requiring feature/pricing schemes to be timestamped. I guess what I'm aiming for is
2017 Jul 30
2
Hot Tier
Hi I'm looking for advice on the hot tier feature - how can I tell if the hot tier is working? I've attached a replicated-distributed hot tier to an EC volume. Yet, I don't think it's working; at least I don't see any files directly on the bricks (only the folder structure). The 'Status' command shows all 0s and 'In progress' for all servers. ~]# gluster volume tier home
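A hedged sketch of the usual way to check whether the tier is actually promoting or demoting anything; the volume name home is taken from the command above, the rest is generic.

  # Volume name as used in the post.
  VOLNAME=home

  # Show per-node tier daemon state and promoted/demoted file counts.
  gluster volume tier "$VOLNAME" status

  # Cross-check that the tier daemon processes are online on every server.
  gluster volume status "$VOLNAME"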
2018 Mar 04
1
tiering
Hi, I have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with a 3-SSD tier where one SSD is bad.
Status of volume: labgreenbin
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
Brick
2006 Feb 17
4
Three-tier
Hi Everyone, I'm working at getting Rails introduced in my company. We're a J2EE shop. Our deployments make use of a three-tiered architecture; just to be clear, that means that there are essentially three machines involved in dealing out an app: a webserver, an application server, and a database server. As I see it (unless I've missed something) Ruby is essentially
2017 Jul 31
2
Hot Tier
Hi, If it was just reads, then the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
2017 Oct 18
2
warning spam in the logs after tiering experiment
A short while ago I experimented with tiering on one of my volumes. I decided it was not working out, so I removed the tier. I now have spam in the glusterd.log every 7 seconds: [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory) [2017-10-18
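Not part of the original message, but a hedged sketch of the clean detach sequence in the 3.10-era CLI; a detach that never reached the commit step is one plausible way to leave glusterd trying to reach a tier daemon that no longer exists. The volume name is a placeholder.

  # Hypothetical volume name.
  VOLNAME=myvol

  # Drain files off the hot tier, wait for completion, then commit the detach.
  gluster volume tier "$VOLNAME" detach start
  gluster volume tier "$VOLNAME" detach status
  gluster volume tier "$VOLNAME" detach commit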
2017 Jul 31
0
Hot Tier
Milind and Hari, Can you please take a look at this? Thanks, Nithya On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > I'm looking for advice on the hot tier feature - how can I tell if the hot > tier is working? > > I've attached a replicated-distributed hot tier to an EC volume. > Yet, I don't think it's working, at
2017 Oct 18
0
warning spam in the logs after tiering experiment
Forgot to mention: Gluster version 3.10.6. On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote: > a short while ago I experimented with tiering on one of my volumes. I > decided it was not working out so I removed the tier. I now have spam in > the glusterd.log every 7 seconds: > > [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
2017 Nov 03
1
Ignore failed connection messages during copying files with tiering
Hi, All, We created a GlusterFS cluster with tiers. The hot tier is distributed-replicated SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we find these logs: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on reads, a few performance translators have to be turned off. By default the quick-read and io-cache performance translators are turned on. You can turn them off so that the files will be migrated for reads. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > If it was just reads then the tier daemon won't migrate
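A hedged sketch of the settings described above; quick-read and io-cache are the translators named in the reply, while the volume name is a placeholder.

  # Hypothetical volume name.
  VOLNAME=myvol

  # Disable the client-side caching translators so reads actually reach the
  # bricks, get counted, and can trigger promotion to the hot tier.
  gluster volume set "$VOLNAME" performance.quick-read off
  gluster volume set "$VOLNAME" performance.io-cache off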
2017 Nov 04
1
Fwd: Ignore failed connection messages during copying files with tiering
Hi, We created a GlusterFS cluster with tiers. The hot tier is distributed-replicated SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we find these logs: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W [socket.c:3292:socket_connect]
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test to see system behavior *if quota is turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
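A hedged sketch of the quota-off test being suggested; the volume name is a placeholder, and since disabling quota typically clears the configured limits, record them first so they can be re-applied afterwards.

  # Hypothetical volume name.
  VOLNAME=myvol

  # Record the current quota limits before changing anything.
  gluster volume quota "$VOLNAME" list

  # Disable quota for the test, then retry the file migrations.
  gluster volume quota "$VOLNAME" disable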
2017 Jul 31
1
Hot Tier
Hi At this point I have already detached the Hot Tier volume to run rebalance. Many volume settings only take effect for new data (or after a rebalance), so I thought maybe this was the case with the Hot Tier as well. Once the rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
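A hedged sketch of tuning the two options named above; the values and the volume name are placeholders, not the poster's actual settings.

  # Hypothetical volume name and example thresholds.
  VOLNAME=myvol

  # Number of writes within the measurement window before a file becomes a
  # candidate for promotion to the hot tier.
  gluster volume set "$VOLNAME" cluster.write-freq-threshold 2

  # Same idea for reads.
  gluster volume set "$VOLNAME" cluster.read-freq-threshold 2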
2017 Aug 01
0
Hot Tier
Hi, You have missed the log files. Can you attach them? On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > At this point I already detached Hot Tier volume to run rebalance. Many > volume settings only take effect for the new data (or rebalance), so I > thought may this was the case with Hot Tier as well. Once rebalance > finishes,