similar to: Error - Disk Full - No Space Left

Displaying 20 results from an estimated 10000 matches similar to: "Error - Disk Full - No Space Left"

2018 Feb 06
1
Error - Disk Full - No Space Left
Hi Nithya, and thanks for the information. I tested it yesterday and set the value to 0 (zero). At first I thought I had to restart all bricks because it didn't work, but after approx. 3 minutes I could create directories. After the first test I tried it again, but got the same disk-full message. I then restarted the bricks and tested again, but without success. I then tried a different directory in
2018 Feb 03
1
Error - Disk Full - No Space Left
Hello community and devs, I have the following problem. I have a 3-brick distributed GlusterFS volume. After upgrading to the latest 3.13.x I can't create any directory on the volume, which I access via NFS. I have over 170 GB free on the smallest brick and also enough free inodes on all 3 bricks. If I try to create a directory, the system says there is not enough disk space left, but there is. I run a
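For readers hitting the same symptom, the first thing worth ruling out is genuine space or inode exhaustion on each brick. A minimal sketch; the brick paths are hypothetical and passed via `BRICKS` (defaulting to `/` purely for illustration):

```shell
#!/bin/sh
# Print free 1K-blocks and free inodes for each brick mount point.
# BRICKS is a space-separated list of paths; replace with your real brick paths.
BRICKS="${BRICKS:-/}"
for b in $BRICKS; do
  echo "== $b =="
  df -P "$b"    | awk 'NR==2 {print "free 1K-blocks:", $4}'
  df -P -i "$b" | awk 'NR==2 {print "free inodes:  ", $4}'
done
```

If both numbers are comfortably nonzero on every brick, the ENOSPC is coming from gluster's own accounting (e.g. min-free thresholds) rather than the filesystems.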
2004 May 25
4
Sip/IAX Clients for Linux
Hi there, I think all VoIP clients for Linux are unusable! I have tested: Linphone + linphonec, both in version 12.2, Kphone, gophone and others... The only program that is usable is GnomeMeeting... Does anybody know some other tools? Best regards, Mark
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
Hi Eva, One more question. What version of gluster were you running before the upgrade? Thanks, Nithya On 31 January 2018 at 09:52, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi Eva, > > Can you send us the following: > > gluster volume info > gluster volume status > > The log files and tcpdump for df on a fresh mount point for that volume. > >
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, I'm sorry but I need to get in touch with another developer to check about the changes here and he will be available only tomorrow. Is there someone else I could work with while you are away? Regards, Nithya On 31 January 2018 at 22:00, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > I will be out of the office for ~10 days starting tomorrow. Is
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:50, Freer, Eva B. <freereb at ornl.gov> wrote: > The values for shared-brick-count are still the same. I did not re-start > the volume after setting the cluster.min-free-inodes to 6%. Do I need to > restart it? > > > That is not necessary. Let me get back to you on this tomorrow. Regards, Nithya > Thanks, > > Eva (865) 574-6894
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:34, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > Responding to an earlier question: Before the upgrade, we were at 3.103 on > these servers, but some of the clients were 3.7.6. From below, does this > mean that 'shared-brick-count' needs to be set to 1 for all bricks. > > > > All of the bricks are on separate xfs
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
Nithya, Responding to an earlier question: Before the upgrade, we were at 3.103 on these servers, but some of the clients were 3.7.6. From below, does this mean that 'shared-brick-count' needs to be set to 1 for all bricks? All of the bricks are on separate xfs partitions composed of hardware RAID 6 volumes. LVM is not used. The current setting for cluster.min-free-inodes was 5%. I changed it to
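For context, the `shared-brick-count` value being discussed lives in the brick volfiles that glusterd generates, so it can be inspected directly. A sketch, assuming a hypothetical volume name `myvol` and the default glusterd state directory:

```shell
# Show the shared-brick-count option in each generated brick volfile.
# The volume name "myvol" is a placeholder for your own.
grep -n "shared-brick-count" /var/lib/glusterd/vols/myvol/*.vol
```

If several bricks on separate filesystems show a count greater than 1, df will under-report the volume's capacity, which matches the symptom in this thread.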
2017 Dec 18
2
interval or event to evaluate free disk space?
Hi all, with the option "cluster.min-free-disk" set, glusterfs avoids placing files on bricks that are "too full". I'd like to understand when the free space on the bricks is calculated. It seems to me that this does not happen for every write call (naturally), but at some interval, or that some other event triggers this. I.e., if I write two files quickly (that together
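For reference, the option in question is set per volume through the gluster CLI; a minimal sketch, assuming a hypothetical volume named `myvol` (the threshold accepts a percentage or an absolute size):

```shell
# Reserve bricks once less than 10% free space remains (volume name is a placeholder).
gluster volume set myvol cluster.min-free-disk 10%

# Read back the effective value.
gluster volume get myvol cluster.min-free-disk
```

These commands only work against a running gluster cluster, so they are shown here as an illustration rather than a runnable script.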
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not re-start the volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:14 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at
2009 Dec 03
3
Xen DomU with high IOWAIT and low disk performance (lvm raid1)
Hello list! My setup: Dom0: Debian 5.0.3 with xen-hypervisor-3.2-1-i386 (2.6.26-2-xen-686) DomU: Ubuntu 8.04 2.6.26-2-xen-686 The system is running on two hard drives mirrored with raid1 and organized by LVM. Dom0 and DomU are running on logical volumes. Partitions for DomUs are connected via 'phy:/dev/lvm/disk1,sda1,w', for example. Here are some scenarios I tested, where you
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer, Our analysis is that this issue is caused by https://review.gluster.org/17618. Specifically, in 'gd_set_shared_brick_count()' from https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c . But even if we fix it today, I don't think we have a release planned immediately for shipping this. Are you planning to fix the code and re-compile? Regards,
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi, I think we have a workaround until we have a fix in the code. The following worked on my system. Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You might need to create the filter directory in this path.) Make sure the file has execute permissions. On my system: [root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/ [root at rhgsserver1 3.12.5]# l total 4.0K
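Spelled out as commands, the workaround above amounts to the following sketch. The version directory must match your installed gluster version, and the script filename is hypothetical, since the actual attachment is not shown in this snippet:

```shell
# Create the filter directory for the installed gluster version if it is missing.
mkdir -p /usr/lib/glusterfs/3.12.4/filter

# Copy the attached workaround script (filename is a placeholder) and make it executable.
cp filter-shared-brick-count.sh /usr/lib/glusterfs/3.12.4/filter/
chmod +x /usr/lib/glusterfs/3.12.4/filter/filter-shared-brick-count.sh
```

Scripts in the filter directory are run by glusterd against generated volfiles, which is how this works around the shared-brick-count bug without recompiling.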
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, Can you send us the following: gluster volume info gluster volume status The log files and tcpdump for df on a fresh mount point for that volume. Thanks, Nithya On 31 January 2018 at 07:17, Freer, Eva B. <freereb at ornl.gov> wrote: > After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, > the 'df' command shows only part of the available space on the
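The requested diagnostics can be collected in one pass; a sketch, assuming a hypothetical volume `myvol`, server `server1`, and capture interface `eth0`:

```shell
# Volume configuration and brick/process status (volume name is a placeholder).
gluster volume info myvol   > volinfo.txt
gluster volume status myvol > volstatus.txt

# Capture traffic while running df on a fresh mount point.
tcpdump -i eth0 -w df-capture.pcap &
mount -t glusterfs server1:/myvol /mnt/fresh
df -h /mnt/fresh
kill %1
```

Like the other admin fragments in this thread, this needs a live cluster and root privileges, so it is illustrative rather than directly runnable.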
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar, Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release. Thanks, Eva (865) 574-6894 From: Amar Tumballi <atumball at redhat.com> Date: Wednesday, January 31, 2018 at 12:15 PM To: Eva Freer
2004 Feb 17
2
ldap, quickie...
Hi, I'm a bit confused over the whole "dn" concept.... Various documentation states that I should create new samba entries with dn: uid=user,ou=<user-org>,dc=<domain> and others state that I should do it with dn: cn=user,ou=<user-org>,dc=<domain>. Right now I have a few entries, created both ways... and since I'm not quite at home in ldap, I haven't
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya, I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:26 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2007 Mar 19
2
Fullscreen Refresh rate problem...
Hello to everybody. I was wondering... is there a way to force wine to always use the highest possible refresh rate when I run a game in fullscreen? I ask because, for example, my good old Incubation works flawlessly on wine (tested with wine 0.9.27 and 0.9.28), but when I switch to fullscreen (640x480 for this game) the refresh rate is only 60 Hz, and my eyes hurt after a while... (my monitor
2019 Oct 26
3
Problems with NUT on Raspberry Pi
Hi, thanks for your reply. >> The unit is new; the battery drains because it went down to about 15 >> percent before NUT shut it down. >> >> Yes, I can hear a "clack". The problem is not the shutdown procedure; >> it works quite well. After the unit reaches 20 percent, the NUT server >> on the Pi shuts down the system and the UPS is turned off as
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hi, I see a lot of the following messages in the logs: [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing [2018-02-04 07:41:16.189349] W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868 [2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse: