
Displaying 19 results from an estimated 19 matches similar to: "stale file handle on gluster NFS client when trying to remove a directory"

2018 Jan 03
0
stale file handle on gluster NFS client when trying to remove a directory
Hi all, I haven't found any root cause or workaround for this yet. Can anyone help me in understanding the issue? Regards, Jeevan. On Dec 21, 2017 8:20 PM, "Jeevan Patnaik" <g1patnaik at gmail.com> wrote: > Hi, > > > After running rm -rf on a directory, the files under it got deleted, but > the directory was not deleted and was showing stale file handle
2018 Jan 03
1
stale file handle on gluster NFS client when trying to remove a directory
An ESTALE error usually means the gfid could not be found. Does repeating the "rm -rf" delete the directory? Regards, Nithya On 3 January 2018 at 12:16, Jeevan Patnaik <g1patnaik at gmail.com> wrote: > Hi all, > > I haven't found any root cause or workaround for this yet. Can anyone > help me in understanding the issue? > > Regards, > Jeevan. > >
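A minimal sketch of how one might act on the ESTALE hint above; the paths are made up (/mnt/gluster for the client mount, /bricks/vol/brick1 for a brick) and nothing here is quoted from the thread:

    # Retry the removal from the client, as suggested above.
    rm -rf /mnt/gluster/problem-dir

    # On each brick, check whether the directory is still present and still has
    # its gfid xattr; ESTALE usually means that gfid can no longer be resolved.
    getfattr -d -e hex -m . /bricks/vol/brick1/problem-dir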
2011 Aug 04
0
Local delivery via deliver fails for 1 user in alias
Hi all, I'm a bit baffled. I have an OS X server 10.6.8 and everything was working fine. Now, however, I seem to be having some issues and I'm unable to find log entries to help point me to the error. I have an alias, sales at cirrusav.com, which forwards mail to myself and two others. This works fine most of the time, but on occasion messages are not delivered to one user. It is possible
2017 Apr 12
2
"table(droplevels(aq)$Month)" in manual page of droplevels
Hello, Inline. On 12-04-2017 16:40, Henric Winell wrote: > (Let's keep the discussion on-list -- I've added back R-devel.) > > On 2017-04-12 16:39, Ulrich Windl wrote: > >>>>> Henric Winell <nilsson.henric at gmail.com> wrote on 12.04.2017 >>>>> at 15:35 in >> message <b66fe849-bb8d-f00d-87e5-553f866d57e0 at gmail.com>:
2017 Apr 12
3
"table(droplevels(aq)$Month)" in manual page of droplevels
The last line of the example in droplevels' manual page seems to be incorrect to me. I think it should read: "table(droplevels(aq$Month))". Amazingly (I don't understand) both variants seem to produce the same result (R 3.3.3): --- > aq <- transform(airquality, Month = factor(Month, labels = month.abb[5:9])) > aq <- subset(aq, Month != "Jul") >
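Both calls do print the same counts in this example because droplevels() on a data frame drops unused levels from every factor column, so extracting Month before or after the call makes no difference here. A sketch to reproduce the comparison from a shell, assuming only base R and its built-in airquality dataset:

    Rscript -e '
    aq <- transform(airquality, Month = factor(Month, labels = month.abb[5:9]))
    aq <- subset(aq, Month != "Jul")
    table(droplevels(aq)$Month)    # droplevels() on the whole data frame, then extract Month
    table(droplevels(aq$Month))    # extract Month first, then droplevels() on the factor
    '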
2017 Apr 12
0
"table(droplevels(aq)$Month)" in manual page of droplevels
(Let's keep the discussion on-list -- I've added back R-devel.) On 2017-04-12 16:39, Ulrich Windl wrote: >>>> Henric Winell <nilsson.henric at gmail.com> wrote on 12.04.2017 >>>> at 15:35 in > message <b66fe849-bb8d-f00d-87e5-553f866d57e0 at gmail.com>: >> On 2017-04-12 14:40, Ulrich Windl wrote: >> >>> The last line of the
2017 Apr 13
0
"table(droplevels(aq)$Month)" in manual page of droplevels
>>>>> Rui Barradas <ruipbarradas at sapo.pt> >>>>> on Wed, 12 Apr 2017 17:07:45 +0100 writes: > Hello, Inline. > On 12-04-2017 16:40, Henric Winell wrote: >> (Let's keep the discussion on-list -- I've added back >> R-devel.) >> >> On 2017-04-12 16:39, Ulrich Windl wrote: >>
2013 Feb 26
0
Replicated Volume Crashed
Hi, I have a gluster volume that consists of 22 bricks and includes a single folder with 3.6 million files. Yesterday the volume crashed and turned out to be completely unresponsive, and I was forced to perform a hard reboot of all the gluster servers because they were so heavily overloaded that they could not execute a reboot command issued from the shell. Each gluster server has 12 CPU cores
2011 Sep 26
4
Hard I/O lockup with EL6
I'm trying to figure out why 2 machines have a "hard I/O lock" on the HDD when running EL6. I have 4 identical machines; all were stable with EL5. 2 work great with EL6, 2 do not. I've checked motherboard BIOS versions and settings, and SAS controller BIOS versions and settings; they are the same between the working and non-working systems. When booting a non-working system,
2013 May 24
0
Problem After adding Bricks
Hello, I have run into some performance issues after adding bricks to a 3.3.1 volume. Basically, I am seeing very high CPU usage and extremely degraded performance. I started a rebalance but stopped it after a couple of days. The logs have a lot of entries for split-brain as well as "Non Blocking entrylks failed for". For some of the directories on the client, doing an ls will show multiple
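A short sketch of the status checks one might run at this point; "myvol" is a made-up volume name and the commands are the standard 3.3-era CLI rather than anything quoted in the thread:

    gluster volume rebalance myvol status          # state of the stopped rebalance
    gluster volume heal myvol info split-brain     # entries the logs flag as split-brain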
2011 Aug 24
1
Input/output error
Hi, everyone. It's nice meeting you. I am poor at English... I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want to change from a gluster mount to an NFS mount. I installed GlusterFS 3.2.1 one week ago, replicated across 2 servers. OS: CentOS 5.5 64bit RPM: glusterfs-core-3.2.1-1 glusterfs-fuse-3.2.1-1 command gluster volume create syncdata replica 2 transport tcp
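For orientation, a rough sketch of what a replica-2 create of this shape and the switch to an NFS mount typically look like; the host names, brick paths, and mount point below are invented, and vers=3 reflects the fact that Gluster's built-in NFS server speaks NFSv3:

    # Hypothetical hosts (server1, server2) and brick paths.
    gluster volume create syncdata replica 2 transport tcp server1:/data/brick1 server2:/data/brick1
    gluster volume start syncdata

    # Mount the same volume over NFSv3 instead of the FUSE client.
    mount -t nfs -o vers=3,nolock server1:/syncdata /mnt/syncdata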
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used GlusterFS for two years. After upgrading to 3.2.5, we discovered that when one of the replicate nodes reboots and starts the glusterd daemon, gluster crashes because the other replicate node's CPU usage reaches 100%. Our gluster info: Type: Distributed-Replicate Status: Started Number of Bricks: 5 x 2 = 10 Transport-type: tcp Options Reconfigured: performance.cache-size: 3GB
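One hedged note: 3.2.x has no heal sub-command in the CLI; the 3.2-era documentation suggested triggering self-heal by walking the client mount, roughly as below, with /mnt/glusterfs as a made-up mount point:

    # Walking the mount stats every file, which triggers self-heal on 3.2.x.
    find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null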
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder... ------------- [2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
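"Put the share into debug mode" presumably refers to raising the client log level; a minimal sketch of the usual way to do that, with "myvol" standing in for the actual volume name:

    gluster volume set myvol diagnostics.client-log-level DEBUG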
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDR and how it is used). Just glance at the logs of the client process where you saw the errors; they could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems running RHEL 7.3. They are mounting a Gluster volume which is hosted on more traditional x86 servers. Everything seems to be working properly except for creating new directories from the PPC64 clients. The mkdir command gives an "Input/output error" and for the first few minutes the new directory is
2012 Jun 24
0
nouveau _BIOS method
Hi all! I have a problem with an NVIDIA GeForce 520MX [NVd0 generation card (0x0d9110a1)] in my notebook (Samsung 3 series). In practice I'm not able to use it with bumblebee and bbswitch. The dmesg message is: [ 13.507435] nouveau 0000:01:00.0: power state changed by ACPI to D0 [ 13.507440] nouveau 0000:01:00.0: power state changed by ACPI to D0 [ 13.507448] nouveau
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
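A sketch of those commands filled in, using the volume name "home" from the follow-up message; the brick path and file path are hypothetical:

    gluster volume info home
    # Run on each brick host, against that brick's copy of the file that will not heal.
    getfattr -d -e hex -m . /bricks/home/brick1/path/to/file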
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails out of the logs. The logs are attached to this mail and here is the other information: # gluster volume info home Volume Name: home Type: Replicate Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a Status: Started Snapshot Count: 1 Number of Bricks: 1 x 3 = 3 Transport-type: tcp