similar to: Blocking IO when hot tier promotion daemon runs


2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi, Can you send the volume info, the volume status output, and the tier logs? I also need to know the size of the files that are being stored. On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote: > I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3 > bricks per server distributed replicated volume. > > I'm seeing IO get blocked
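
For reference, a rough sketch of the commands that would collect what is being asked for here (the volume name gv0 and the brick path come from this thread; the tier log name is the usual default under /var/log/glusterfs/ and may differ by version):

  gluster volume info gv0
  gluster volume status gv0 detail
  gluster volume tier gv0 status
  # tier daemon log (name assumed, check your /var/log/glusterfs/)
  less /var/log/glusterfs/gv0-tier.log
  # rough size distribution of stored files, sampled on one brick
  find /data/brick1/gv0 -type f -printf '%s\n' | sort -n | tail
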
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached are the tier logs for gluster1 and gluster2. These are full of "demotion failed" messages, which are also shown in the status: [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status Node Promoted files Demoted files
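
A rough sketch of how the demotion failures could be tracked while the daemon runs (the tier log path is the usual default and is an assumption here):

  gluster volume tier gv0 status
  # count demotion failures reported so far in the tier log
  grep -c "demotion failed" /var/log/glusterfs/gv0-tier.log
  # re-check the promote/demote counters periodically under load
  watch -n 60 gluster volume tier gv0 status
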
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info for the current setup where you see this issue? The logs you sent are from a later point in time; the issue was hit earlier than the period covered by the available logs, so I need logs from an earlier time. And along with the entire tier
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so that the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
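
The heat counters for tiering are kept in a per-brick sqlite database maintained by the changetimerecorder translator, so one way to test the locking theory is to poke that database while the promotion daemon is running. A minimal sketch, assuming the .db file lives under the brick's .glusterfs directory (the exact file name varies, so locate it first):

  # locate the per-brick heat database (location assumed, verify locally)
  find /data/brick1/gv0/.glusterfs -maxdepth 2 -name '*.db'
  # while promotion is running, try a simple read; a "database is locked"
  # error here would support the table-locking theory
  sqlite3 /data/brick1/gv0/.glusterfs/gv0.db '.tables'
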
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I grabbed that from a file without realizing it was out of date. Here's a current configuration showing the active hot tier: [root at pod-sjc1-gluster1 ~]# gluster volume info Volume Name: gv0 Type: Tier Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196 Status: Started Snapshot Count: 13 Number of Bricks: 8 Transport-type: tcp Hot
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, df now reports the correct combined size of all bricks instead of the size of just one brick. Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect? On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote: > Sure! > > > 1 -
2017 Dec 21
3
Wrong volume size with df
Sure! > 1 - output of gluster volume heal <volname> info Brick pod-sjc1-gluster1:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster1:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick
2017 Dec 05
1
Slow seek times on stat calls to glusterfs metadata
Hi all, I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks apiece. Each brick is mounted via a RAID 6 array consisting of eleven 6 TB disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded with about 15 TB of data. Clients are connected via FUSE. I'm using glusterfs 3.12.1. I've found that large rsyncs to populate the pool are taking a
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Hi all, I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 boxes, distributed-replicate) My testing shows the same thing -- running a find on a directory dramatically increases lstat performance. To add another clue, the performance degrades again after issuing a call to reset the system's cache of dentries and inodes: # sync; echo 2 > /proc/sys/vm/drop_caches I
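
A sketch of the test sequence being described, with placeholder paths:

  # cold cache: drop dentry/inode caches, then time the rsync
  sync; echo 2 > /proc/sys/vm/drop_caches
  time rsync -a /source/dir/ /mnt/gluster/dir/
  # warm cache: scan the target tree first, then repeat the rsync
  find /mnt/gluster/dir/ > /dev/null
  time rsync -a /source/dir/ /mnt/gluster/dir/
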
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem. Looks like the issue is about cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued, we get a 'readdirp()' request, which contains the stat information along with the entries and also makes sure the cache is up to date (at the md-cache layer). Note that this is just an off-the memory
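
One thing that is sometimes suggested for this access pattern is keeping md-cache entries valid for longer with upcall-based invalidation, so that the stat information stays cached between the readdir and the later stat calls. A sketch only; the option names are from gluster 3.x and <volname> is a placeholder, so check them against your version before applying:

  gluster volume set <volname> performance.stat-prefetch on
  gluster volume set <volname> performance.md-cache-timeout 600
  gluster volume set <volname> features.cache-invalidation on
  gluster volume set <volname> features.cache-invalidation-timeout 600
  gluster volume set <volname> performance.cache-invalidation on
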
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following - 1 - output of gluster volume heal <volname> info 2 - /var/log/glusterfs - provide the log file named mountpoint-volumename.log 3 - output of gluster volume info <volname> 4 - output of gluster volume status <volname> 5 - Also, could you try unmounting the volume, mounting it again, and checking the size? ----- Original Message ----- From:
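
A rough sketch of collecting the items above (names in angle brackets are placeholders):

  gluster volume heal <volname> info
  # client mount log, named after the mount point and the volume
  ls /var/log/glusterfs/<mountpoint>-<volname>.log
  gluster volume info <volname>
  gluster volume status <volname>
  # remount and re-check the reported size
  umount /mnt/<volname>
  mount -t glusterfs <server>:/<volname> /mnt/<volname>
  df -h /mnt/<volname>
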
2017 Dec 19
3
Wrong volume size with df
I have a glusterfs setup with a distributed-disperse volume, 5 x (4 + 2). After a server crash, "gluster peer status" reports all peers as connected. "gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size. When browsing the data, they seem to
2018 Mar 05
2
Why files goes to hot tier and cold tier at same time
Hi Guys, I've got a quick question regarding the hot tier and the cold tier. I have a gluster volume with a 1 x 3 hot tier and a 1 x 3 cold tier. watermark-low is 75 and watermark-hi is 90, and usage of the volume is very low. My files always show up on the hot tier and the cold tier at the same time. As I understand it, data should stay on the hot tier only until it is demoted. Could someone please shed some light on this? Thanks in advance. --
2018 Mar 05
0
Why files goes to hot tier and cold tier at same time
Hi, The actual data will be on the hot tier only until demotion. The file that you see on the cold tier is just a linkto file pointing to the file on the hot tier. These linkto files are necessary for the internal working of the tier. On Mon, Mar 5, 2018 at 1:16 PM, Sherin George <allmyforums at outlook.in> wrote: > Hi Guys > > Got a quick question regarding hot tier and cold tier. > I
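
A quick way to confirm that what shows up on the cold tier is only a linkto file and not real data: linkto files are typically zero-length with the sticky bit set (mode shown as ---------T) and carry a linkto xattr. A sketch, with the brick path as a placeholder:

  # on a cold-tier brick, inspect one of the files in question
  ls -l /path/to/cold-brick/somefile
  # dump its xattrs; a *.linkto key pointing at the hot tier is expected
  getfattr -d -m . -e hex /path/to/cold-brick/somefile
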
2017 Jul 31
2
Hot Tier
Hi, If there are only reads, the tier daemon won't migrate the files to the hot tier. If you create a file or write to a file, that file will be made available on the hot tier. On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Milind and Hari, > > Can you please take a look at this? > > Thanks, > Nithya > > On 31 July 2017 at
2017 Jul 31
0
Hot Tier
For the tier daemon to migrate files on read, a few performance translators have to be turned off. By default the quick-read and io-cache performance translators are turned on. You can turn them off so that files will be migrated on read. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi, > > If it was just reads then the tier daemon won't migrate
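
A sketch of the suggested change, using the volume name v1 that appears later in this thread (the options can be reverted with gluster volume reset):

  gluster volume set v1 performance.quick-read off
  gluster volume set v1 performance.io-cache off
  # verify the new values
  gluster volume get v1 performance.quick-read
  gluster volume get v1 performance.io-cache
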
2017 Jul 31
2
Hot Tier
Hi, Before you try turning off the perf translators, can you send us the following, so we can make sure that other things haven't gone wrong? Please send the log files for the tier (it would be better if you attach other logs too), the version of gluster you are using, the client, and the output of: gluster v info gluster v get v1 performance.io-cache gluster v get v1
2017 Jul 31
1
Hot Tier
Hi, At this point I have already detached the Hot Tier volume to run a rebalance. Many volume settings only take effect for new data (or after a rebalance), so I thought maybe this was the case with the Hot Tier as well. Once the rebalance finishes, I'll re-attach the hot tier. cluster.write-freq-threshold and cluster.read-freq-threshold control the number of times data is read/written before it is moved to the hot tier. In my case
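
For reference, a sketch of how those thresholds are usually set (the values are examples only, <volname> is a placeholder, and counting reads/writes also depends on the record-counters feature, so verify the option names against your release):

  gluster volume set <volname> features.record-counters on
  gluster volume set <volname> cluster.read-freq-threshold 2
  gluster volume set <volname> cluster.write-freq-threshold 2
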
2017 Aug 01
0
Hot Tier
Hi, You seem to have missed attaching the log files. Can you attach them? On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote: > Hi > > At this point I already detached Hot Tier volume to run rebalance. Many > volume settings only take effect for the new data (or rebalance), so I > thought may this was the case with Hot Tier as well. Once rebalance > finishes,
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Any updates on this one? On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite <tomfite at gmail.com> wrote: > Hi all, > > I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 > boxes, distributed-replicate) My testing shows the same thing -- running a > find on a directory dramatically increases lstat performance. To add > another clue, the performance degrades