similar to: wrong size displayed with df after upgrade to 3.12.6

Displaying 20 results from an estimated 200 matches similar to: "wrong size displayed with df after upgrade to 3.12.6"

2018 Mar 09
1
wrong size displayed with df after upgrade to 3.12.6
Hi Stefan, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. Please take a look at [1] and try out the workaround provided in the comments. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260 On 9 March 2018 at 13:37, Stefan Solbrig <stefan.solbrig at ur.de> wrote: > Dear all, > > I have a problem with df after
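For readers hitting the same bug: the workaround discussed in the comments of [1] amounts to forcing shared-brick-count back to 1 in the generated brick volfiles via a filter script. A minimal sketch, assuming a 3.12.x install (the filter directory and script name here are illustrative; follow [1] for the exact steps):

    # Sketch of the shared-brick-count filter workaround from [1].
    # The version directory under /usr/lib*/glusterfs must match the installed build.
    cat > /usr/lib/glusterfs/3.12.6/filter/fix-shared-brick-count.sh <<'EOF'
    #!/bin/bash
    # glusterd passes the volfile path as $1; reset shared-brick-count to 1
    # so df reports the full volume size instead of a single brick's size.
    sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"
    EOF
    chmod +x /usr/lib/glusterfs/3.12.6/filter/fix-shared-brick-count.sh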
2018 Jan 21
1
mkdir -p, cp -R fails
Dear all, I have a problem with glusterfs 3.12.4: mkdir -p fails with "no data available" when umask is 0022, but works when umask is 0002. Also, recursive copy (cp -R or cp -r) fails with "no data available", independently of the umask. See below for an example to reproduce the error. I already tried to change transport from rdma to tcp. (Changing the transport works, but
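(The reproduction example is truncated in this preview; what follows is a hypothetical sketch of the sequence described, with an illustrative fuse mount point:)

    # Hypothetical reproduction on a glusterfs fuse mount at /mnt/glustervol.
    umask 0022
    mkdir -p /mnt/glustervol/a/b/c      # reported to fail: "No data available"
    umask 0002
    mkdir -p /mnt/glustervol/d/e/f      # reported to work
    cp -R /mnt/glustervol/src /mnt/glustervol/dst   # reported to fail regardless of umask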
2018 Apr 12
0
wrong size displayed with df after upgrade to 3.12.6
Dear all, I encountered the same issue. I saw that this is fixed in 3.12.7, but I cannot find this release in the main repo (CentOS Storage SIG), only in the test one. When is this release expected to be available in the main repo? Greetings, Paolo On 09/03/2018 10:41, Stefan Solbrig wrote: > Dear Nithya, > > Thank you so much. This fixed the problem
2018 Feb 27
0
mkdir -p, cp -R fails
Dear all, I identified the source of the problem: if I set "server.root-squash on", then the problem is 100% reproducible; with "server.root-squash off", the problem vanishes. This is true for glusterfs 3.12.3, 3.12.4 and 3.12.6 (I haven't tested other versions). Best wishes, Stefan -- Dr. Stefan Solbrig Universität Regensburg, Fakultät für Physik, 93040 Regensburg,
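A sketch of the toggle described, with an illustrative volume name:

    # server.root-squash is the standard gluster volume option named above.
    gluster volume set myvol server.root-squash on    # mkdir -p / cp -R failures become 100% reproducible
    gluster volume set myvol server.root-squash off   # failures vanish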
2018 May 29
2
RDMA inline threshold?
Dear all, I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files), and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" only shows some files, but not all. The respective log file shows this
2018 May 30
2
RDMA inline threshold?
Forgot to mention, sometimes I have to do a force start on other volumes as well; it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary
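The force start mentioned would look like this (volume name taken from the status output above):

    # "start ... force" respawns any offline brick processes for the volume.
    gluster volume start rhev_vms_primary force
    gluster volume status rhev_vms_primary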
2018 May 30
0
RDMA inline threshold?
Stefan, Sounds like a brick process is not running. I have noticed some strangeness in my lab when using RDMA; I often have to forcibly restart the brick process, often as in every single time I do a major operation: add a new volume, remove a volume, stop a volume, etc. gluster volume status <vol> Do any of the self-heal daemons show N/A? If that's the case, try forcing a restart on
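A sketch of the suggested check, with an illustrative volume name:

    # Look for Self-heal Daemon entries whose Online column shows N/A.
    gluster volume status myvol | grep -i 'self-heal'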
2018 May 30
0
RDMA inline threshold?
Dear Dan, thanks for the quick reply! I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all brick processes are running. My volume is a distribute-only volume (not dispersed, no sharding). I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change
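For reference, the mount option mentioned would be passed roughly like this (server, volume, and mount point are illustrative; mount.glusterfs spells the option with hyphens):

    # Fuse mount with readdirp disabled.
    mount -t glusterfs -o use-readdirp=no server1:/myvol /mnt/myvol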
2017 Dec 12
1
active/active failover
Hi Alex, Thank you for the quick reply! Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me, it more or less evens out. Moreover, I have more SAN that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
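For context, the replace-brick test being described would presumably follow the standard syntax (all names illustrative):

    # Move the brick from the failed server's SAN volume to a standby server.
    gluster volume replace-brick myvol failed-srv:/san/brick standby-srv:/san/brick commit force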
2017 Dec 11
2
active/active failover
Dear all, I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8.) So my question is: can I really use glusterfs to do failover in the way described
2017 Dec 11
0
active/active failover
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without an outage, and re-synchronize when it comes back up. Chances are, if you weren't using the SAN volumes, you could have purchased two servers
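A sketch of the 2-way replicated layout suggested, with illustrative server and brick names:

    # Each file is stored on both servers, so one server can fail without outage.
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start myvol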
2007 Sep 02
5
hash_cache a bogus function that never worked?
Hi, I've been investigating various caching methods provided by Rails. I first looked at the hash_cache module and function. In testing it, I noticed it wasn't actually caching anything. Then, looking at the source code, I noticed that it attempted to hold its cache in a class variable - which won't actually be saved from page request to page request because of the way that Rails
2006 Nov 25
4
Sessions And Active Record
Hi, I'm a newbie even though I've played with Rails for a few months now. I would like to save several ActiveRecord objects (not in the database) across several screens. What is the preferred Rails way to do this? Should I copy all of their data to @session, or use the member variables and put them into hidden fields? I am trying to move from "whatever kludge works" to
2017 Dec 18
2
interval or event to evaluate free disk space?
Hi all, with the option "cluster.min-free-disk" set, glusterfs avoids placing files on bricks that are "too full". I'd like to understand when the free space on the bricks is calculated. It seems to me that this does not happen for every write call (naturally), but at some interval, or that some other event triggers this. I.e., if I write two files quickly (that together
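For reference, the option in question is set like this (volume name and threshold are illustrative):

    # Stop scheduling new files onto a brick once its free space drops below 10%.
    gluster volume set myvol cluster.min-free-disk 10%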
2017 Dec 19
0
interval or event to evaluate free disk space?
On 19 December 2017 at 00:11, Stefan Solbrig <stefan.solbrig at ur.de> wrote: > Hi all, > > with the option "cluster.min-free-disk" set, glusterfs avoids placing > files on bricks that are "too full". > I'd like to understand when the free space on the bricks is calculated. > It seems to me that this does not happen for every write call (naturally)
2023 Oct 10
0
Updated invitation: Gluster Community Meeting @ Tue Oct 10, 2023 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
This event has been updated Changed: description Gluster Community Meeting Tuesday Oct 10, 2023 · 2:30pm - 3:30pm India Standard Time - Kolkata Location: Bridge: meet.google.com/cpu-eiue-hvk Join with Google Meet https://meet.google.com/cpu-eiue-hvk?hs=224 Join by phone (US) +1 574-400-8405 PIN: 291845177
2023 Apr 11
0
Updated invitation: Gluster Community Meeting @ Tue Apr 11, 2023 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
This event has been updated Changed: description Gluster Community Meeting Tuesday Apr 11, 2023 · 2:30pm - 3:30pm India Standard Time - Kolkata Location: Bridge: meet.google.com/cpu-eiue-hvk Join with Google Meet https://meet.google.com/cpu-eiue-hvk?hs=224 Join by phone (US) +1 574-400-8405 PIN: 291845177
2023 May 09
0
Updated invitation: Gluster Community Meeting @ Tue May 9, 2023 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
This event has been updated Changed: description Gluster Community Meeting Tuesday May 9, 2023 · 2:30pm - 3:30pm India Standard Time - Kolkata Location: Bridge: meet.google.com/cpu-eiue-hvk Join with Google Meet https://meet.google.com/cpu-eiue-hvk?hs=224 Join by phone (US) +1 574-400-8405 PIN: 291845177
2023 Jan 11
0
Updated invitation: Gluster Community Meeting @ Wed Jan 11, 2023 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
This event has been updated Changed: description Gluster Community Meeting Wednesday Jan 11, 2023 · 2:30pm - 3:30pm India Standard Time - Kolkata Location: Bridge: meet.google.com/cpu-eiue-hvk Join with Google Meet https://meet.google.com/cpu-eiue-hvk?hs=224 Join by phone (US) +1 574-400-8405 PIN: 291845177
2023 Jul 11
0
Updated invitation: Gluster Community Meeting @ Tue Jul 11, 2023 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
This event has been updated Changed: description Gluster Community Meeting Tuesday Jul 11, 2023 · 2:30pm - 3:30pm India Standard Time - Kolkata Location: Bridge: meet.google.com/cpu-eiue-hvk Join with Google Meet https://meet.google.com/cpu-eiue-hvk?hs=224 Join by phone (US) +1 574-400-8405 PIN: 291845177