similar to: files do not show up on gluster volume

Displaying 20 results from an estimated 2000 matches similar to: "files do not show up on gluster volume"

2013 Jul 02
1
problem expanding a volume
Hello, I am having trouble expanding a volume. Every time I try to add bricks to the volume, I get this error:
[root at gluster1 sdb1]# gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
/export/brick2/sdb1 or a prefix of it is already part of a volume
Here is the volume info:
[root at gluster1 sdb1]# gluster volume info vg0
Volume Name: vg0
Type:
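This error usually means the brick directory (or one of its parents) still carries GlusterFS extended attributes from an earlier volume. A minimal cleanup sketch, assuming the brick paths from the post above and that the directories contain no data worth keeping:

    # run on each server whose brick is rejected
    setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
    setfattr -x trusted.gfid /export/brick2/sdb1
    rm -rf /export/brick2/sdb1/.glusterfs
    service glusterd restart    # or: systemctl restart glusterd
    # then retry the add-brick
    gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1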
2013 Nov 21
3
Sync data
Hi guys! I have 2 servers in replicate mode; node 1 has all the data and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all files from node 1 to node 2? Steps that I followed:
gluster peer probe node1
gluster volume create gv0 replica 2 node1:/data node2:/data
gluster volume start gv0
thanks!
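For a replica volume where one brick already holds the data, the usual approach is to let the self-heal daemon copy it over rather than syncing by hand. A minimal sketch, assuming the volume name gv0 from the post and that the volume is started:

    gluster volume heal gv0 full    # crawl the volume and replicate missing files to the empty brick
    gluster volume heal gv0 info    # check progress; the number of entries should drop to zero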
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi, Can you send the volume info, the volume status output, and the tier logs? I also need to know the size of the files that are being stored. On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote: > I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3 > bricks per server distributed replicated volume. > > I'm seeing IO get blocked
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue is hit earlier than what is available in the logs, so I need the logs from an earlier time. And along with the entire tier
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, df now reports the correct size of all bricks combined instead of just one brick. Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect? On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote: > Sure! > > > 1 -
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied, there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached is the tier log for gluster1 and gluster2. These are full of "demotion failed" messages, which is also shown in the status: [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status Node Promoted files Demoted files
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following -
1 - output of gluster volume heal <volname> info
2 - /var/log/glusterfs - provide the log file named mountpoint-volumename.log
3 - output of gluster volume info <volname>
4 - output of gluster volume status <volname>
5 - Also, could you try unmounting the volume and mounting it again, and check the size?
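For reference, a sketch of commands that would collect the requested output, assuming a volume named gv0 that is FUSE-mounted at /mnt/gv0 (both names are assumptions, not taken from the thread):

    gluster volume heal gv0 info
    gluster volume info gv0
    gluster volume status gv0
    # client mount log; the file name mirrors the mount point, e.g. /mnt/gv0 -> mnt-gv0.log
    tail -n 200 /var/log/glusterfs/mnt-gv0.log
    # remount and re-check the size reported to the client
    umount /mnt/gv0
    mount -t glusterfs server1:/gv0 /mnt/gv0    # server1 is a placeholder hostname
    df -h /mnt/gv0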
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I grabbed that from a file, not realizing it was out of date. Here's a current configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
2018 Jan 09
2
Blocking IO when hot tier promotion daemon runs
I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3 bricks per server distributed replicated volume. I'm seeing IO get blocked across all client FUSE threads for 10 to 15 seconds while the promotion daemon runs. I see the 'glustertierpro' thread jump to 99% CPU usage on both boxes when these delays occur and they happen every 25 minutes (my
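A rough way to confirm that the tier daemon is what pegs the CPU during the stalls (the thread name and 25-minute interval come from the post; the pgrep pattern and log path below are assumptions):

    # per-thread CPU for the gluster processes; look for the tier thread spiking during a stall
    top -H -b -n 1 -p "$(pgrep -d, -f gluster)" | grep -i tier
    # correlate with promotion/demotion activity in the tier log (path may differ per install)
    grep -iE 'promot|demot' /var/log/glusterfs/gv0-tier.log | tail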
2017 Dec 19
3
Wrong volume size with df
I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ). After a server crash, "gluster peer status" reports all peers as connected. "gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size. When browsing the data, they seem to
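One way to narrow down where the wrong number comes from is to compare the client's view with the bricks' own view. A minimal sketch, assuming the volume is named myvol and FUSE-mounted at /mnt/myvol (assumed names):

    df -h /mnt/myvol                                   # size as seen by the client
    gluster volume status myvol detail | grep -iE 'brick|disk'    # per-brick sizes as seen by glusterd
    df -h /path/to/brick                               # on each server, check the brick filesystem directly (hypothetical path)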
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem. Looks like the issue is about the cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued, we get a 'readdirp()' request, which contains the stat information along with the entries and also makes sure the cache is up-to-date (at the md-cache layer). Note that this is just an off-the-memory
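If that explanation is right, pre-warming the cache before the rsync should hide the problem. A rough sketch of the workaround implied above, assuming a FUSE mount at /mnt/gv0 (path is an assumption):

    find /mnt/gv0/target/dir > /dev/null            # readdirp-style crawl populates md-cache with stat data
    rsync -a /local/source/ /mnt/gv0/target/dir/    # the stat()-heavy rsync now hits a warm cache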
2013 Nov 23
1
Maildir issue.
We brought up a test cluster to investigate GlusterFS. Using the Quick Start instructions, we brought up a 2 server 1 brick replicating setup and mounted to it from a third box with the fuse mount (all ver 3.4.1)
# gluster volume info
Volume Name: mailtest
Type: Replicate
Volume ID: 9e412774-b8c9-4135-b7fb-bc0dd298d06a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
2013 Jun 26
2
HI Guys
Hi, I recently configured a 2 node replica glusterfs, and I am having a couple of issues.
1. As soon as I reboot node2, the glusterfs on node1 is not available, but when I reboot/shutdown node1 the glusterfs is available on node 0, so please let me know if you guys have encountered the same issue.
2. I am not able to mount the glusterfs mount at the time of reboot; I had to do it manually
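For the second issue, the usual approach is an /etc/fstab entry with _netdev (so the mount waits for the network) plus a backup volfile server (so mounting still works when one node is down). A sketch, assuming the volume is named gv0 and mounted at /mnt/gv0 (assumed names):

    # /etc/fstab (one line); older releases spell the option backupvolfile-server=node2
    node1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=node2  0 0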
2013 Nov 29
1
Self heal problem
Hi, I have a glusterfs volume replicated on three nodes. I am planning to use the volume as storage for VMware ESXi machines using NFS. The reason for using three nodes is to be able to configure quorum and avoid split-brains. However, during my initial testing, when I intentionally and gracefully restarted the node "ned", a split-brain/self-heal error occurred. The log on "todd"
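A hedged sketch of how to inspect and configure this, assuming the volume is named vmstore (an assumed name); the quorum options are standard replica-volume settings rather than anything taken from the post:

    gluster volume heal vmstore info split-brain                    # list entries currently in split-brain
    gluster volume set vmstore cluster.quorum-type auto             # client-side quorum
    gluster volume set vmstore cluster.server-quorum-type server    # server-side quorum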
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug:
[2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size
[2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument)
Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Hi all, I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 boxes, distributed-replicate) My testing shows the same thing -- running a find on a directory dramatically increases lstat performance. To add another clue, the performance degrades again after issuing a call to reset the system's cache of dentries and inodes: # sync; echo 2 > /proc/sys/vm/drop_caches I
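A rough way to turn that clue into a repeatable test, assuming a FUSE mount at /mnt/gv0 and a test directory under it (both assumed): time a stat-heavy walk cold, warm the cache with find, time it again, then drop caches and re-time.

    cd /mnt/gv0/some/dir                       # hypothetical test directory
    time ls -lR . > /dev/null                  # cold run: many individual lookups/stats
    find . > /dev/null                         # warm the cache via readdirp
    time ls -lR . > /dev/null                  # warm run: should be much faster if caching is the cause
    sync; echo 2 > /proc/sys/vm/drop_caches    # drop dentries/inodes, as in the post
    time ls -lR . > /dev/null                  # cold again: the slowdown should return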
2017 Dec 05
1
Slow seek times on stat calls to glusterfs metadata
Hi all, I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks a piece. Each brick is mounted via a RAID 6 array consisting of 11 6 TB disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded with about 15 TB of data. Clients are connected via FUSE. I'm using glusterfs 3.12.1. I've found that running large rsyncs to populate the pool are taking a
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Your add-brick command adds 2 data bricks and 1 arbiter (even though you name them all arbiter!). The sequence is important: gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0 arbiter1:/arb1 adds two data bricks and a corresponding arbiter from 3 different servers and 3 different disks, thus you can lose any one server OR any one disk and stay up and consistent. Adding more bricks
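After the add-brick, a quick sanity check is to confirm that the volume now shows the arbiter bricks in the expected positions. A minimal sketch, using the volume name VMS from the post:

    gluster volume info VMS     # Number of Bricks should read N x (2 + 1), with each replica set's last brick listed as the arbiter
    gluster volume status VMS   # confirm all bricks, including the arbiter, are online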