similar to: Hot Tier

Displaying 20 results from an estimated 700 matches similar to: "Hot Tier"

2017 Aug 01
0
Hot Tier
Hi, You have missed the log files. Can you attach them? On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > At this point I have already detached the Hot Tier volume to run rebalance. Many > volume settings only take effect for new data (or after a rebalance), so I > thought maybe this was the case with Hot Tier as well. Once rebalance > finishes,
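The detach-and-rebalance sequence being discussed looks roughly like the following. This is only a sketch: the volume name v1 is a placeholder, and the exact tier sub-commands vary slightly between GlusterFS releases.

  # detach the hot tier; data on the hot bricks is demoted back to the cold tier first
  gluster volume tier v1 detach start
  gluster volume tier v1 detach status
  gluster volume tier v1 detach commit
  # then rebalance what is now a plain distributed-replicated volume
  gluster volume rebalance v1 start
  gluster volume rebalance v1 status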
2017 Jul 31
1
Hot Tier
Hi, At this point I have already detached the Hot Tier volume to run rebalance. Many volume settings only take effect for new data (or after a rebalance), so I thought maybe this was the case with Hot Tier as well. Once rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
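These thresholds are set like any other volume option. A minimal sketch, assuming a volume named v1; if I recall correctly, features.record-counters also has to be enabled for the read/write counters to be tracked at all, but treat that as an assumption to verify for your version:

  gluster volume set v1 features.record-counters on
  gluster volume set v1 cluster.write-freq-threshold 2
  gluster volume set v1 cluster.read-freq-threshold 2
  # confirm the values actually applied
  gluster volume get v1 cluster.write-freq-threshold
  gluster volume get v1 cluster.read-freq-threshold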
2017 Jul 31
2
Hot Tier
Hi, Before you try turning off the perf translators, can you send us the following, so we can make sure that nothing else has gone wrong? Can you send us the log files for tier (it would be better if you attach the other logs too), the version of gluster you are using, the client, and the output of: gluster v info gluster v get v1 performance.io-cache gluster v get v1
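For reference, the requested commands would look roughly like this. The snippet above is cut off after the second "gluster v get v1"; performance.quick-read is only my guess at the missing option, based on the reply that mentions it alongside io-cache, and v1 is the volume name used in the thread:

  gluster v info
  gluster v get v1 performance.io-cache
  # guessed option; the original request is truncated here
  gluster v get v1 performance.quick-read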
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on reads, a few performance translators have to be turned off. By default the performance quick-read and io-cache translators are turned on. You can turn them off so that files will be migrated on reads. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > If it was just reads then the tier daemon won't migrate
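A minimal sketch of turning those two translators off (the volume name v1 is a placeholder):

  gluster volume set v1 performance.quick-read off
  gluster volume set v1 performance.io-cache off
  # verify the new values
  gluster volume get v1 performance.quick-read
  gluster volume get v1 performance.io-cache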
2018 Mar 05
0
Why files goes to hot tier and cold tier at same time
Hi, The actual data stays in the hot tier only until demotion. The file that you see on the cold tier is just a linkto file pointing to the file on the hot tier. These linkto files are necessary for the internal working of the tier. On Mon, Mar 5, 2018 at 1:16 PM, Sherin George <allmyforums at outlook.in> wrote: > Hi Guys > > Got a quick question regarding hot tier and cold tier. > I
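If you want to confirm that the copy on the cold tier is only a pointer, you can inspect it directly on a cold-tier brick (not through the client mount). This is a sketch with placeholder brick paths; linkto files typically show up as zero-byte entries with the sticky bit set (mode ending in T) and carry a *.linkto extended attribute, though the exact xattr name depends on the version:

  # run on the server hosting the cold-tier brick
  ls -l /bricks/cold/brick1/path/to/file
  getfattr -d -m . -e hex /bricks/cold/brick1/path/to/file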
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info, I grabbed that from a file not realizing it was out of date. Here's a current configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2017 Jul 31
2
Hot Tier
Hi, If it was just reads, then the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
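A quick way to observe this behaviour from a FUSE client is to write a new file and then check which brick it landed on. Everything here is a placeholder sketch (mount point, brick paths, volume layout):

  # write through the client mount; creates/writes count towards promotion
  dd if=/dev/zero of=/mnt/v1/testfile bs=1M count=10
  # the data should show up on a hot-tier brick rather than on the cold tier
  ls -l /bricks/hot/brick1/testfile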
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so that the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
2018 Mar 05
0
tiering
Hi, There isn't a way to replace the failing tier brick through a single command, as we don't have support for replace, remove, or add brick with tier. Once you bring the brick online (volume start force), the data in the brick will be rebuilt by the self-heal daemon (done because it's a replicated tier). But adding a brick will still not work. Else if you use the force option, it will work as
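A sketch of the bring-it-back-online path described above, with a placeholder volume name v1; the heal commands apply because the hot tier here is replicated:

  # restart the dead brick process without touching the rest of the volume
  gluster volume start v1 force
  # confirm the brick is back online
  gluster volume status v1
  # watch the self-heal daemon rebuild the brick's data
  gluster volume heal v1 info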
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue was hit earlier than what is available in those logs. I need the logs from an earlier time. And along with the entire tier
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached are the tier logs for gluster1 and gluster2. These are full of "demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node                 Promoted files       Demoted files
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi, Can you send the volume info, the volume status output, and the tier logs? I also need to know the size of the files that are being stored. On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote: > I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB, 2 server / 3 > bricks per server distributed replicated volume. > > I'm seeing IO get blocked
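The requested diagnostics map to a handful of commands. A sketch using the volume name gv0 from later in the thread; the tier daemon's log normally sits under /var/log/glusterfs/ on each server, though the exact file name varies by release:

  gluster volume info gv0
  gluster volume status gv0
  gluster volume tier gv0 status
  # the tier/rebalance logs are collected from here on each node
  ls /var/log/glusterfs/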
2018 Feb 27
0
Failed to get quota limits
Hi, Thanks for the link to the bug. We will hopefully be moving to 3.12 soon, so I guess this bug is also fixed there. Best regards, M. ------- Original Message ------- On February 27, 2018 9:38 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi Mabi, > > The bug is fixed in 3.11. For 3.10 it is yet to be backported and > > made available. >
2018 Feb 24
0
Failed to get quota limits
Dear Hari, Thank you for getting back to me after having analysed the problem. As you said, I tried to run "gluster volume quota <VOLNAME> list <PATH>" for all of my directories which have a quota and found out that there was one directory quota which was missing (stale), as you can see below:
$ gluster volume quota myvolume list /demo.domain.tld
Path
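A sketch of the per-directory check being described; the volume name myvolume and /demo.domain.tld come from the thread, the other path is a placeholder. The idea is to list each quota'd path individually and note which one returns no output:

  # list all configured limits at once
  gluster volume quota myvolume list
  # then query each quota'd directory on its own; the stale one prints nothing
  gluster volume quota myvolume list /demo.domain.tld
  gluster volume quota myvolume list /another-dir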
2017 Jul 02
0
Some bricks are offline after restart, how to bring them online gracefully?
Thank you, I created a bug with all logs: https://bugzilla.redhat.com/show_bug.cgi?id=1467050 During testing I found a second bug: https://bugzilla.redhat.com/show_bug.cgi?id=1467057 There is something wrong with Ganesha when Gluster bricks are named "w0" or "sw0". On Fri, Jun 30, 2017 at 11:36 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > Jan, by
2018 Feb 13
0
Failed to get quota limits
I tried to set the limits as you suggested by running the following command:
$ sudo gluster volume quota myvolume limit-usage /directory 200GB
volume quota : success
but then when I list the quotas there is still nothing, so nothing really happened. I also tried to run stat on all directories which have a quota, but nothing happened either. I will send you tomorrow all the other logfiles as
2018 Feb 27
2
Failed to get quota limits
Hi Mabi, The bug is fixed in 3.11. For 3.10 it is yet to be backported and made available. The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259. On Sat, Feb 24, 2018 at 4:05 PM, mabi <mabi at protonmail.ch> wrote: > Dear Hari, > > Thank you for getting back to me after having analysed the problem. > > As you said, I tried to run "gluster volume quota
2018 Feb 23
2
Failed to get quota limits
Hi, There is a bug in 3.10 which prevents the quota list command from producing output if the last entry in the conf file is a stale entry. The workaround for this is to remove the stale entry at the end. (If the last two entries are stale then both have to be removed, and so on, until the last entry in the conf file is a valid entry.) This can also be avoided by adding a new limit. As the new limit you
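A sketch of that second workaround, adding a new limit so the stale record is no longer the last entry in the quota conf file; the volume name comes from the thread, while the directory and size are placeholders:

  # set a limit on some existing directory; this appends a fresh, valid entry
  gluster volume quota myvolume limit-usage /some-existing-dir 10GB
  # the list command should now produce output again
  gluster volume quota myvolume list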
2018 Feb 13
0
Failed to get quota limits
Yes, I need the log files from that period. The rotated log files from after you hit the issue aren't necessary, but the ones from before you hit the issue are needed (not just from when you hit it, but from even before you hit it). Yes, you have to do a stat from the client through the FUSE mount. On Tue, Feb 13, 2018 at 3:56 PM, mabi <mabi at protonmail.ch> wrote: > Thank you for your answer. This
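A sketch of running the stat through a FUSE mount rather than on the bricks; the server name, volume name, and paths are placeholders:

  # mount the volume on a client via the native FUSE client
  mount -t glusterfs server1:/myvolume /mnt/myvolume
  # stat each directory that has a quota configured
  stat /mnt/myvolume/directory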
2017 Jun 30
1
Some bricks are offline after restart, how to bring them online gracefully?
Hi, Jan, by multiple times I meant whether you were able to do the whole setup multiple times and face the same issue, so that we have a consistent reproducer to work on. As grepping shows that the process doesn't exist, the bug I mentioned doesn't hold good. It seems like another issue unrelated to the bug I mentioned (I have mentioned it now). When you say too often, this means there is a