search for: freeres

Displaying 20 results from an estimated 50 matches for "freeres".

2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar, Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting the shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release. Thanks, Eva (865) 574-6894 From: Amar Tumballi <atumball at redhat.com> Date: Wednesday, January 31, 2018 at 12:15 PM To: Eva Freer
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi, I think we have a workaround until we have a fix in the code. The following worked on my system. Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You might need to create the filter directory in this path.) Make sure the file has execute permissions. On my system: [root at rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/ [root at rhgsserver1 3.12.5]# l total 4.0K
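The attached file itself is not reproduced in this listing, but from the discussion it appears to be a volfile filter that forces shared-brick-count back to a sane value. A minimal sketch of what such a filter script might look like, assuming glusterd invokes every executable in the filter directory with the generated volfile path as its first argument and that a value of 1 is correct for bricks on separate filesystems:

    #!/bin/bash
    # Hypothetical filter: rewrite whatever shared-brick-count glusterd
    # computed to 1 in the volfile passed as $1, so df reports the full
    # capacity of bricks that live on separate filesystems.
    sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"

Because filters run each time glusterd regenerates the volfiles, a script like this should keep applying the correction until the underlying fix ships.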
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer, Our analysis is that this issue is caused by https://review.gluster.org/17618. Specifically, in 'gd_set_shared_brick_count()' from https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c . But even if we fix it today, I don't think we have a release planned immediately for shipping this. Are you planning to fix the code and re-compile? Regards,
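For anyone hitting the same symptom, the value computed by gd_set_shared_brick_count() ends up as an 'option shared-brick-count N' line in the brick volfiles that glusterd writes out, so it can be inspected directly. A sketch, with VOLNAME as a placeholder for the affected volume:

    # On a server node: list the shared-brick-count recorded for each brick.
    # Bricks on separate filesystems would be expected to show 1; the bug
    # discussed here can leave larger values behind, which skews df.
    grep -h 'option shared-brick-count' /var/lib/glusterd/vols/VOLNAME/*.vol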
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
Nithya, I will be out of the office for ~10 days starting tomorrow. Is there any way we could possibly resolve it today? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:26 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at ornl.gov>, "gluster-users at
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, I'm sorry but I need to get in touch with another developer to check about the changes here and he will be available only tomorrow. Is there someone else I could work with while you are away? Regards, Nithya On 31 January 2018 at 22:00, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > I will be out of the office for ~10 days starting tomorrow. Is
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
The values for shared-brick-count are still the same. I did not re-start the volume after setting the cluster.min-free-inodes to 6%. Do I need to restart it? Thanks, Eva (865) 574-6894 From: Nithya Balachandran <nbalacha at redhat.com> Date: Wednesday, January 31, 2018 at 11:14 AM To: Eva Freer <freereb at ornl.gov> Cc: "Greene, Tami McFarlin" <greenet at
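For reference, cluster.min-free-inodes is a normal volume option and can be set and read back from the CLI without restarting the volume; a sketch of the commands involved, with VOLNAME as a placeholder:

    # Apply the new threshold and confirm it was recorded.
    gluster volume set VOLNAME cluster.min-free-inodes 6%
    gluster volume get VOLNAME cluster.min-free-inodes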
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:50, Freer, Eva B. <freereb at ornl.gov> wrote: > The values for shared-brick-count are still the same. I did not re-start > the volume after setting the cluster.min-free-inodes to 6%. Do I need to > restart it? > > > That is not necessary. Let me get back to you on this tomorrow. Regards, Nithya > Thanks, > > Eva (865) 574-6894
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
Sam, For du -sh on my newer volume, the result is 161T. The sum of the Used space in the df -h output for all the bricks is ~163T. Close enough for me to believe everything is there. The total for used space in the df -h of the mountpoint is 83T, roughly half what is used. Relevant lines from df -h on server-A: Filesystem Size Used Avail Use% Mounted on /dev/sda1 59T 42T
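A rough way to reproduce this comparison on a server is to let df total the brick filesystems and set that against du and df on the volume mount; a sketch, assuming hypothetical brick and mount paths that would need adjusting for the real layout:

    # Per-brick usage on this server, plus a summed 'total' row.
    df -h --total /bricks/brick1 /bricks/brick2 /bricks/brick3 /bricks/brick4
    # The client's view of the same data through the volume mount.
    du -sh /mnt/VOLNAME
    df -h /mnt/VOLNAME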
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 'df' command shows only part of the available space on the mount point for multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and clients. We have 2 different server configurations. Configuration 1: A distributed volume of 8 bricks with 4 on each server. The initial configuration had 4 bricks of
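One way to make the discrepancy concrete is to compare the mount's df output with the per-brick capacity that glusterd itself reports; a sketch, with VOLNAME and the mount point as placeholders:

    # Capacity as seen by clients through the FUSE mount.
    df -h /mnt/VOLNAME
    # Per-brick capacity in the detailed status output, for comparison.
    gluster volume status VOLNAME detail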
2018 Jan 31
2
df does not show full volume capacity after update to 3.12.4
Nithya, Responding to an earlier question: Before the upgrade, we were at 3.103 on these servers, but some of the clients were 3.7.6. From below, does this mean that 'shared-brick-count' needs to be set to 1 for all bricks? All of the bricks are on separate xfs partitions composed of hardware RAID 6 volumes. LVM is not used. The current setting for cluster.min-free-inodes was 5%. I changed it to
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
On 31 January 2018 at 21:34, Freer, Eva B. <freereb at ornl.gov> wrote: > Nithya, > > > > Responding to an earlier question: Before the upgrade, we were at 3.103 on > these servers, but some of the clients were 3.7.6. From below, does this > mean that 'shared-brick-count' needs to be set to 1 for all bricks? > > > > All of the bricks are on separate xfs
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
We noticed something similar. Out of interest, does du -sh . show the same size? -- Sam McLeod (protoporpoise on IRC) https://smcleod.net https://twitter.com/s_mcleod Words are my own opinions and do not necessarily represent those of my employer or partners. > On 31 Jan 2018, at 12:47 pm, Freer, Eva B. <freereb at ornl.gov> wrote: > > After OS update to CentOS 7.4 or RedHat
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Eva, Can you send us the following: gluster volume info gluster volume status The log files and tcpdump for df on a fresh mount point for that volume. Thanks, Nithya On 31 January 2018 at 07:17, Freer, Eva B. <freereb at ornl.gov> wrote: > After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, > the 'df' command shows only part of the available space on the
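A sketch of how that information might be gathered, with SERVER, VOLNAME and the mount point as placeholders; log locations and the capture filter may need adjusting for the actual setup:

    gluster volume info VOLNAME   > volinfo.txt
    gluster volume status VOLNAME > volstatus.txt
    # Capture traffic to the server while df runs on a fresh mount.
    tcpdump -i any -s 0 -w df-capture.pcap host SERVER &
    mount -t glusterfs SERVER:/VOLNAME /mnt/fresh
    df -h /mnt/fresh
    # Client-side logs are typically under /var/log/glusterfs/ on the mount host.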
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
Hi Eva, One more question. What version of gluster were you running before the upgrade? Thanks, Nithya On 31 January 2018 at 09:52, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi Eva, > > Can you send us the following: > > gluster volume info > gluster volume status > > The log files and tcpdump for df on a fresh mount point for that volume. > >
2012 May 02
17
ChillDB License
A few of you sounded interested in using it. I haven't explicitly put a software license on it, so I guess it's not technically FOSS yet. What licenses are good? BSD? Public Domain? - Jenna
2013 Jan 22
5
Centos 6.3 - which repos to use?
I've just installed v6.3 as a desktop (from Centos-6.3-i386-LiveCD.iso) to get the hang of the Centos approach, and then hope to move on to a server. I've been using linux *buntu for 5 years. Hope I don't sound like a nit but I've got a little confused with the repos. Hoping someone would be kind enough just to clarify. This installation is for stability whilst installing the
2012 Mar 28
2
[PATCH v2] New APIs: mount-local and umount-local using FUSE
This version doesn't crash or cause hung processes or stuck mountpoints, so that's an improvement. Rich.
2012 Mar 27
3
[PATCH 0/3] Enable FUSE support in the API via 'mount-local' call.
This patch is just for review. It enables FUSE support in the API via two new calls, 'guestfs_mount_local' and 'guestfs_umount_local'. FUSE turns out to be very easy to deadlock (necessitating that the machine be rebooted). Running the test from the third patch is usually an effective way to demonstrate this. However I have not yet managed to produce a simple reproducer that
2012 Mar 29
3
[PATCH v3] New APIs: mount-local, mount-local-run and umount-local using FUSE
This changes the proposed API slightly. Previously, 'mount-local' generated a 'mounted' event when the filesystem was ready, and from the 'mounted' event you had to effectively do a fork. Now, 'mount-local' just initializes the mountpoint and you have to call 'mount-local-run' to enter the FUSE main loop. Between these calls you can do a fork or whatever
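The two-step flow described here (initialize the mountpoint, then explicitly enter the FUSE loop) is easiest to see from guestfish; the following is only a sketch of the intended usage assuming guestfish exposes matching mount-local / mount-local-run commands, not code taken from the patches, with disk.img and the mountpoint as placeholders:

    guestfish -a disk.img -i <<'EOF'
    ! mkdir -p /tmp/guestmnt
    # Prepare the FUSE mountpoint on the host.
    mount-local /tmp/guestmnt readonly:true
    # Enter the FUSE main loop; this blocks until the mountpoint is
    # unmounted from another shell (e.g. fusermount -u /tmp/guestmnt).
    mount-local-run
    EOF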
2017 Nov 13
4
[nbdkit PATCH 0/3] various nbdkit patches
Fixes for various issues found while implementing my nbd forwarder plugin. I'm okay if you choose to take some but not others; the most important one is patch 3, which fixes a protocol violation that makes it impossible for a client to try to recover from EIO failures over a partially-flaky source block device. Eric Blake (3): maint: Add emacs hint file maint: Add NBDKIT_GDB support to