Displaying 20 results from an estimated 3000 matches similar to: "Wrong volume size with df"
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following (a rough command sketch follows the list) -
1 - output of gluster volume heal <volname> info
2 - the client log file from /var/log/glusterfs, named mountpoint-volumename.log
3 - output of gluster volume info <volname>
4 - output of gluster volume status <volname>
5 - Also, could you try unmounting the volume, mounting it again, and checking the size?
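A rough sketch of what is being asked for, assuming the volume is called gv0 and is FUSE-mounted at /mnt/gv0 (both are placeholders, adjust to your setup):
gluster volume heal gv0 info
gluster volume info gv0
gluster volume status gv0 detail       # detail also shows per-brick sizes
ls /var/log/glusterfs/                 # the client log is named after the mount point and volume
umount /mnt/gv0
mount -t glusterfs pod-sjc1-gluster1:/gv0 /mnt/gv0
df -h /mnt/gv0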
----- Original Message -----
From:
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the volume
size reported by df is now the correct combined size of all bricks instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
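For reference, both workarounds would look roughly like this; the brick paths are made up, and the tier attach syntax only exists in the 3.x/4.x releases that still ship tiering:
gluster volume tier gv0 attach replica 2 \
    pod-sjc1-gluster1:/ssd/gv0-hot pod-sjc1-gluster2:/ssd/gv0-hot
# or grow the (cold) pool with one more replica pair and rebalance
gluster volume add-brick gv0 \
    pod-sjc1-gluster1:/data/brick3/gv0 pod-sjc1-gluster2:/data/brick3/gv0
gluster volume rebalance gv0 start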
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi,
Can you send the volume info, the volume status output, and the tier logs?
I also need to know the size of the files that are being stored.
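If it helps, something along these lines would collect what is being asked for (the mount point and output file names are placeholders; tier log names vary by version):
gluster volume info gv0 > gv0-info.txt
gluster volume status gv0 detail > gv0-status.txt
ls /var/log/glusterfs/*tier* /var/log/glusterfs/*rebalance* 2>/dev/null   # tier daemon logs, if present
# rough file-size histogram for the data on the mount (slow on a large volume)
find /mnt/gv0 -type f -printf '%s\n' | \
    awk '{ if ($1 < 1048576) s++; else if ($1 < 1073741824) m++; else l++ }
         END { print s" files <1MB, "m" files 1MB-1GB, "l" files >1GB" }'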
On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote:
> I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3
> bricks per server distributed replicated volume.
>
> I'm seeing IO get blocked
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom,
The volume info doesn't show the hot bricks. I think you took the
volume info output before attaching the hot tier.
Can you send the volume info for the current setup where you see this issue?
The logs you sent are from a later point in time. The issue is hit
earlier than what is available in those logs, so I need the logs
from an earlier time.
And along with the entire tier
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small
(<1 MB) files and thousands of files larger than 1 GB.
Attached are the tier logs for gluster1 and gluster2. These are full of
"demotion failed" messages, which also shows up in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node Promoted files Demoted files
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I
grabbed that from a file without realizing it was out of date. Here's the
current configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is
held up; IO is not interrupted for existing transfers. I think this points
to the heat metadata in the sqlite DB for the tier. Is it possible that a
table is temporarily locked while the promotion daemon runs, so that the calls
to update the access count on files are blocked?
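One way to poke at that theory from a brick; the heat database location here is only a guess (somewhere under the brick's .glusterfs directory), so adjust to whatever the find turns up:
DB=$(find /data/brick1/gv0/.glusterfs -maxdepth 2 -name '*.db' | head -1)   # guessed location of the tier heat DB
lsof "$DB"                                  # see who has it open while the promotion daemon runs
sqlite3 "$DB" 'PRAGMA journal_mode;'        # a rollback journal blocks readers during writes; WAL mostly does not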
On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
2018 Jan 09
2
Blocking IO when hot tier promotion daemon runs
I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3
bricks per server distributed replicated volume.
I'm seeing IO get blocked across all client FUSE threads for 10 to 15
seconds while the promotion daemon runs. I see the 'glustertierpro' thread
jump to 99% CPU usage on both boxes when these delays occur and they happen
every 25 minutes (my
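If the 25-minute cycle lines up with the promotion runs, the scan cadence is tunable through the stock tiering options; the values below are only illustrative, not recommendations from this thread:
gluster volume get gv0 all | grep -i tier                      # current tier settings
gluster volume set gv0 cluster.tier-promote-frequency 1800     # seconds between promotion scans (example value)
gluster volume set gv0 cluster.tier-demote-frequency 3600      # seconds between demotion scans (example value)
gluster volume set gv0 cluster.tier-max-files 10000            # cap on files migrated per cycle (example value)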
2017 Dec 05
1
Slow seek times on stat calls to glusterfs metadata
Hi all,
I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks
apiece. Each brick is mounted via a RAID 6 array consisting of eleven 6 TB
disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded
with about 15 TB of data. Clients are connected via FUSE. I'm using
glusterfs 3.12.1.
I've found that running large rsyncs to populate the pool is taking a
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem.
Looks like the issue is about the cache warming up. Specifically, I suspect rsync
is doing a 'readdir(), stat(), file operations' loop, whereas when a find or
ls is issued, we get a 'readdirp()' request, which contains the stat
information along with the entries and which also makes sure the cache is up-to-date
(at the md-cache layer).
Note that this is just a off-the memory
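Based on that description, two things worth trying (not confirmed in this thread) are warming the cache with a readdirp-heavy walk before the rsync and lengthening the md-cache lifetime; the option names are the stock md-cache/upcall settings and the values are only examples:
find /mnt/gv0/target-dir > /dev/null                           # readdirp() walk warms md-cache before the rsync
gluster volume set gv0 performance.md-cache-timeout 600
gluster volume set gv0 performance.cache-invalidation on
gluster volume set gv0 features.cache-invalidation on
gluster volume set gv0 features.cache-invalidation-timeout 600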
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
If you create a volume with replica 2 arbiter 1
you create 2 data bricks that are mirrored (makes 2 file copies)
+
you create 1 arbiter that holds metadata of all files on these bricks.
You "can" create all on the same server, but this makes no sense,
because when the server goes down, no files on these disks are
accessible anymore,
hence why bestpractice is to spread out over 3
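As a sketch of that recommended layout (hostnames and brick paths are made up; older releases write the count as replica 2 arbiter 1, newer ones as replica 3 arbiter 1):
gluster volume create VMS replica 3 arbiter 1 \
    server1:/data/brick1/vms server2:/data/brick1/vms server3:/data/arbiter/vms
gluster volume start VMS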
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Yes, but I want to add.
Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be Arbiter brick.
>
> gluster volume
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though you name
them all arbiter!)
The sequence is important:
gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0
arbiter1:/arb1
adds two data bricks and a corresponding arbiter from 3 different
servers and 3 different disks,
thus you can lose any one server OR any one disk and stay up and
consistent.
adding more bricks
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Hi there.
In previous emails, I discussed with you guys a 2-node gluster setup,
where the bricks are of different sizes and live in different folders on the same server,
like
gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms
gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
So I went ahead and installed a Debian 12 and
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Hi all,
I have seen this issue as well, on Gluster 3.12.1 (3 bricks per box, 2
boxes, distributed-replicate). My testing shows the same thing -- running a
find on a directory dramatically increases lstat performance. To add
another clue, the performance degrades again after issuing a call to drop
the system's cache of dentries and inodes:
# sync; echo 2 > /proc/sys/vm/drop_caches
I
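A crude way to reproduce and measure that effect (the directory is a placeholder):
time find /mnt/gv0/testdir > /dev/null      # cold cache: slow lstat()s
time find /mnt/gv0/testdir > /dev/null      # warm cache: should be much faster
sync; echo 2 > /proc/sys/vm/drop_caches     # drop dentry/inode caches on the client
time find /mnt/gv0/testdir > /dev/null      # back to roughly cold-cache timing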
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Ok.
I got confused here!
For each brick I will need one arbiter brick, in a different
partition/folder?
And what if at some point in the future I decide to add a new brick to the
main servers?
Do I need to provide another partition/folder in the arbiter and then
adjust the arbiter brick counter?
---
Gilberto Nunes Ferreira
On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I have already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
2023 Jan 19
1
really large number of skipped files after a scrub
Hi,
Just to follow up on my first observation from this email from December:
automatic scheduled scrubs that do not happen. We have now upgraded glusterfs
from 7.4 to 10.1, and the automated scrubs ARE running now.
Not sure why they didn't run in 7.4, but the issue is solved. :-)
MJ
On Mon, 12 Dec 2022 at 13:38, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb at gmail.com>
wrote:
> Hi,
>
> I
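For anyone checking the same thing, the scrub schedule and its last run can be inspected with the bitrot CLI (the volume name is a placeholder):
gluster volume bitrot gv0 scrub status        # shows scrub frequency, last run and per-node counters
gluster volume bitrot gv0 scrub ondemand      # start a scrub immediately instead of waiting for the schedule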
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
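That failure is gluster refusing a layout that puts multiple bricks of one replicate volume on the same server; when a single dedicated arbiter host is intentional, the usual workaround is to re-run the same command with force appended (a sketch, to be used with care):
gluster volume add-brick VMS replica 3 arbiter 1 \
    arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force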