
Displaying 20 results from an estimated 10000 matches similar to: "Error - Disk Full - No Space Left"

2018 Feb 05
1
Error - Disk Full - No Space Left
Hi to all, it's sad that no one can help. I tested the 3 bricks and created a new volume. But when I want to create a folder, I get the same error message. Disk is full? Everything looks OK and I need help. Thx, Taste
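A quick way to rule out a genuinely full brick is to compare block and inode usage on each brick mount with what the client sees. A minimal sketch, with hypothetical brick and mount paths:

  # free space and free inodes on each brick filesystem (paths are examples)
  df -h /data/brick1 /data/brick2 /data/brick3
  df -i /data/brick1 /data/brick2 /data/brick3
  # what the fuse client reports for the volume
  df -h /mnt/glustervol

If the bricks show plenty of space and inodes, the "no space left" error is coming from Gluster's own checks rather than the underlying filesystem.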
2018 Feb 06
1
Error - Disk Full - No Space Left
Hi Nithya, and thanks for your information. I tested it yesterday and set the value to 0 (zero). First I thought I had to restart all bricks because it didn't work, but after approx. 3 min I could create directories. After the first test I tried it again, but got the same disk-full message. I then restarted the bricks and tested again, but without success. I then tried a different directory in
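The option being set to zero is not named in this excerpt; assuming it is cluster.min-free-disk (the DHT option that reserves a percentage of each brick for new file placement), a rough sketch of checking and changing it on a hypothetical volume "testvol" would be:

  gluster volume get testvol cluster.min-free-disk
  gluster volume set testvol cluster.min-free-disk 0
  # client-side options like this normally reach mounted clients without a
  # brick restart, though it can take a little while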
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
Hi Eva, One more question. What version of gluster were you running before the upgrade? Thanks, Nithya On 31 January 2018 at 09:52, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi Eva, > > Can you send us the following: > > gluster volume info > gluster volume status > > The log files and tcpdump for df on a fresh mount point for that volume. > >
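For reference, a sketch of gathering that information, assuming the volume is called volA on server1 and is freshly mounted at /mnt/test (all names hypothetical):

  gluster volume info volA
  gluster volume status volA
  mount -t glusterfs server1:/volA /mnt/test
  # capture traffic while running df on the fresh mount, then attach the pcap
  tcpdump -i any -s 0 -w /tmp/df.pcap host server1 &
  df -h /mnt/test
  kill %1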
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
Nithya, Responding to an earlier question: Before the upgrade, we were at 3.103 on these servers, but some of the clients were 3.7.6. From below, does this mean that 'shared-brick-count' needs to be set to 1 for all bricks? All of the bricks are on separate xfs partitions composed of hardware RAID 6 volumes. LVM is not used. The current setting for cluster.min-free-inodes was 5%. I changed it to
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not re-start the volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:14 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at
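The shared-brick-count values referred to here live in the brick volfiles that glusterd generates. A minimal way to inspect them (default glusterd path; replace <volname> with the actual volume name):

  # each brick file should contain "option shared-brick-count N";
  # with every brick on its own filesystem the expected value is 1
  grep -n shared-brick-count /var/lib/glusterd/vols/<volname>/*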
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya, I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:26 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:34, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > Responding to an earlier question: Before the upgrade, we were at 3.103 on > these servers, but some of the clients were 3.7.6. From below, does this > mean that 'shared-brick-count' needs to be set to 1 for all bricks. > > > > All of the bricks are on separate xfs
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:50, Freer, Eva B. <freereb at ornl.gov> wrote: > The values for shared-brick-count are still the same. I did not re-start > the volume after setting the cluster.min-free-inodes to 6%. Do I need to > restart it? > > > That is not necessary. Let me get back to you on this tomorrow. Regards, Nithya > Thanks, > > Eva (865) 574-6894
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer, Our analysis is that this issue is caused by https://review.gluster.org/17618. Specifically, in 'gd_set_shared_brick_count()' from https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c . But even if we fix it today, I don't think we have a release planned immediately for shipping this. Are you planning to fix the code and re-compile? Regards,
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, I'm sorry but I need to get in touch with another developer to check about the changes here and he will be available only tomorrow. Is there someone else I could work with while you are away? Regards, Nithya On 31 January 2018 at 22:00, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > I will be out of the office for ~10 days starting tomorrow. Is
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar, Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed with setting the shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release. Thanks, Eva (865) 574-6894 From: Amar Tumballi <atumball at redhat.com> Date: Wednesday, January 31, 2018 at 12:15 PM To: Eva Freer
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi, I think we have a workaround until we have a fix in the code. The following worked on my system. Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You might need to create the filter directory in this path.) Make sure the file has execute permissions. On my system:
[root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root at rhgsserver1 3.12.5]# l
total 4.0K
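The attachment itself is not part of this excerpt. As a rough sketch only (not the actual attached script): glusterd runs every executable in that filter directory with the path of a freshly generated volfile as its first argument, so a script of this kind could force the miscomputed value back to 1:

  #!/bin/bash
  # hypothetical filter sketch: rewrite every shared-brick-count option
  # in the generated volfile to 1
  sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"

Remember the execute bit (chmod +x) mentioned above.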
2018 May 02
3
Healing : No space left on device
Hello list, I have an issue on my Gluster cluster. It is composed of two data nodes and an arbiter for all my volumes. After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this is what I get: - on node 1, volumes won't start, and glusterd.log shows a lot of: [2018-05-02 09:46:06.267817] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Of course, after adding the new peer with its bricks I did the 'rebalance force' operation. This task finished successfully (you can see info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
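For context, the expansion and rebalance described above would look roughly like this, with hypothetical brick paths on the new node:

  gluster peer probe stor3data
  gluster volume add-brick volumedisk1 stor3data:/bricks/b1/volumedisk1 stor3data:/bricks/b2/volumedisk1
  gluster volume rebalance volumedisk1 start force
  gluster volume rebalance volumedisk1 status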
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > My initial setup was composed of 2 similar nodes: stor1data and stor2data. > A month ago I expanded both volumes with a new node: stor3data (2 bricks > per volume). > Of course, then to add the new peer with the bricks I did the 'balance > force' operation.
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------   -----------   ------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, Can you send us the following: gluster volume info gluster volume status The log files and tcpdump for df on a fresh mount point for that volume. Thanks, Nithya On 31 January 2018 at 07:17, Freer, Eva B. <freereb at ornl.gov> wrote: > After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, > the 'df' command shows only part of the available space on the
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, Thanks for helping me with this, I understand now, but I have a few questions. When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to add to it, it failed. > [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch > volume add-brick: failed: /gdata/brick2/scratch is already part of a
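The "already part of a volume" failure usually means the brick directory still carries volume-id metadata from an earlier attempt. One commonly suggested cleanup, shown here only as a hedged sketch and only safe if the directory holds no data you need:

  # on the node owning the rejected brick
  setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
  setfattr -x trusted.gfid /gdata/brick2/scratch
  rm -rf /gdata/brick2/scratch/.glusterfs
  # then retry the add-brick command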
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hi, I see a lot of the following messages in the logs:
[2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
[2018-02-04 07:41:16.189349] W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868
[2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse:
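The "no subvolume for hash" warnings usually mean some directory layouts do not cover the full hash range, for example after adding bricks without rebalancing. A hedged sketch of the usual remedy, assuming the volume really is gv0 as the log prefix suggests:

  # recompute directory layouts so every hash value maps to a subvolume
  gluster volume rebalance gv0 fix-layout start
  gluster volume rebalance gv0 status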