similar to: IO error in gluterfs

Displaying 20 results from an estimated 100 matches similar to: "IO error in gluterfs"

2011 Aug 30
1
setfacl <dir> : operation not supported , using glusterfs 3.2.2
Dear gluster team, I have installed GlusterFS on my servers for storage (machine: x86_64-redhat-linux). I have created volumes with the RDMA protocol for InfiniBand and mounted with the acl option on both server and client. When I run setfacl on the glusterfs mount point it works fine, but when I do it on the NFS mount it says: setfacl <dir>: operation not supported. The logs created on the server are as
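For reference, a minimal sketch of the setup being described, with hypothetical server, volume, user, and directory names; mounting the FUSE client with -o acl is what enables POSIX ACLs there, while the built-in Gluster NFS server of the 3.2 era did not, as far as I can tell, pass ACL operations through, which would match the error:

$ mount -t glusterfs -o acl server1:/myvol /mnt/gluster   # FUSE mount, ACLs enabled
$ setfacl -m u:alice:rwx /mnt/gluster/shared              # works here
$ mount -t nfs server1:/myvol /mnt/nfs
$ setfacl -m u:alice:rwx /mnt/nfs/shared                  # fails: operation not supported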
2011 Jul 11
0
Instability when using RDMA transport
I've run into a problem with Gluster stability with the RDMA transport. Below are a description of the environment, a simple script that can replicate the problem, and log files from my test system. I can work around the problem by using the TCP transport over IPoIB, but would like some input on what may be making the RDMA transport fail in this case. ===== Symptoms ===== - Error from test
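A minimal sketch of the workaround mentioned above, assuming hypothetical host, brick, and volume names; creating the volume with both transports lets clients choose TCP over IPoIB when RDMA misbehaves:

$ gluster volume create testvol transport tcp,rdma server1:/bricks/b1 server2:/bricks/b1
$ gluster volume start testvol
# mount over TCP (carried on IPoIB) instead of RDMA:
$ mount -t glusterfs -o transport=tcp server1:/testvol /mnt/testvol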
2011 Aug 30
2
setfacl <dir>:operation not supported
Dear gluster team, I have installed GlusterFS on my servers for storage (machine: x86_64-redhat-linux). I have created volumes with the RDMA protocol for InfiniBand and mounted with the acl option on both server and client. When I run setfacl on the glusterfs mount point it works fine, but when I do it on the NFS mount it says: setfacl <dir>: operation not supported. The logs created on the server are as
2011 Aug 05
2
Problem running Gluterfs over Infiniband
Dear List, We have lots of issues running GlusterFS over InfiniBand. The client can mount the share, but when trying to access it with ls, df, touch, du, or any other command, the client host freezes the accessing shell and a reboot is hardly possible. OS: CentOS 6.0 64-bit on i7 2600k machines. Version: GlusterFS 3.2.2, also seen in 3.2.1. Kernel version 2.6.32. IB stack and kernel
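Before digging into the Gluster layer, a quick sanity check of the InfiniBand fabric itself is often worth doing; a sketch, assuming the standard OFED userspace tools are installed and using a placeholder address:

$ ibstat                         # port state should be Active
$ ibv_devinfo                    # HCA should be visible to the verbs layer
$ ping <server-ipoib-address>    # confirms the IPoIB fallback path works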
2004 Sep 20
1
(28552) ERROR: err=-14, Linux/ocfsmain.c, 1887 ; error in mapping iobuf; need to fail out
We are running OCFS on 2.4.21-15.0.4.ELsmp with ocfs-2.4.21-EL-smp-1.0.12-1, ocfs-support-1.0.10-1, and ocfs-tools-1.0.10-1. I have been deleting datafiles, more than 3 times successfully, for RMAN duplication from a mount point /data1 (total 191G). Last week when I tried to delete datafiles from the same directory, it did delete the datafiles but did not release (reclaim) all the space. It still showed that 20G of
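A common cause of space not being reclaimed after deletion is a process (for example, an Oracle instance) still holding the deleted files open; a hedged check, assuming lsof is available on the machine:

$ lsof +L1 | grep /data1    # open files with link count 0: deleted but still held
$ df -h /data1              # space returns only once the holder closes the files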
2015 Jul 11
0
EXTLINUX - GCC 5
Hi, Gene Cumm wrote: > > 3) It feels like this is a moving target where gcc keeps changing and > > different results get reported. Do we have indications that different versions of gcc5 cause different behavior on the same build and boot machines? Ady wrote: > Since the issue is only present on specific > hardware / firmware, whatever might seem to "solve" the
2016 Jan 21
1
[Bug 11683] New: hang on select when send many files
https://bugzilla.samba.org/show_bug.cgi?id=11683
Bug ID: 11683
Summary: hang on select when send many files
Product: rsync
Version: 3.1.2
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P5
Component: core
Assignee: wayned at samba.org
Reporter: tom916 at
2011 Dec 16
5
[Bug 8666] New: --debug=all9 fail
https://bugzilla.samba.org/show_bug.cgi?id=8666
Summary: --debug=all9 fail
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: chris at onthe.net.au
QAContact: rsync-qa at
2009 Oct 23
1
bugs in version 3.1
I'm having two problems. The first is that when running with --files-from and -ii, unmodified files are not put in the log. --out-format=%-14b %C %-14l %i %B %M %f All that appears in the log is:
Number of files: 0
Number of created files: 0
Number of regular files transferred: 0
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list
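A hedged reconstruction of the kind of invocation being described, with hypothetical source, destination, and list-file paths; doubling -i asks rsync to itemize unchanged files too, which is what the poster expects to see logged:

$ rsync -a -ii --files-from=files.txt \
      --out-format='%-14b %C %-14l %i %B %M %f' \
      --log-file=transfer.log src/ dst/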
2013 Jul 31
1
pre 1 OSX errors
Hi Wayne, Trying out 3.1 pre1 on OS X 10.8.4. I like the new extended stats and the ir-chunk file numbers in the log, but I am also getting a lot of errors and most copies to local disk do not complete; rsync just stalls. Standard OS X build:
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
patch -p1 <patches/hfs-compression.diff
./configure
make
rsync --fileflags --force-change
2010 Feb 10
1
DO NOT REPLY [Bug 7124] New: Error exit causes I/O error
https://bugzilla.samba.org/show_bug.cgi?id=7124
Summary: Error exit causes I/O error
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P3
Component: core
AssignedTo: wayned at samba.org
ReportedBy: matt at mattmccutchen.net
2013 Nov 16
2
[Bug 10272] New: resource fork handling is broken in 3.1.0
https://bugzilla.samba.org/show_bug.cgi?id=10272
Summary: resource fork handling is broken in 3.1.0
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: Mac OS X
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: bugzilla-samba at
2015 Apr 08
1
syslinux.efi with QEMU/OVMF
On Tue, 7 Apr 2015, Laszlo Ersek wrote: > As far as I can see (... well, guess), lpxelinux.0 uses the TCP > implementation under core/lwip/, which doesn't support TCP timestamps. > > Whereas syslinux.efi apparently uses the embedded gpxe/ tree, and that > one uses TCP timestamps. See tcp_xmit() in gpxe/src/net/tcp.c: > > if ( ( flags & TCP_SYN ) || tcp->timestamps
2017 Aug 01
3
How to delete geo-replication session?
Hi, I would like to delete a geo-replication session on my GlusterFS 3.8.11 replica 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below:
$ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
2015 Apr 07
0
syslinux.efi with QEMU/OVMF
On 04/07/15 19:22, BALATON Zoltan wrote: > Hello, > > I'm trying to find out how to PXE boot with syslinux.efi on QEMU with > OVMF. After getting past the initial hurdle caused by the iPXE-based > option ROM included with QEMU having a problem as described in these > threads: > > http://www.syslinux.org/archives/2014-November/022804.html >
2017 Aug 07
0
How to delete geo-replication session?
Hi, I would really like to get rid of this geo-replication session as I am stuck with it right now. For example, I can't even stop my volume as it complains about that geo-replication... Can someone let me know how I can delete it? Thanks > -------- Original Message -------- > Subject: How to delete geo-replication session? > Local Time: August 1, 2017 12:15 PM > UTC Time: August
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the geo-replication status command is run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
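For concreteness, the two commands referred to above, using the volume name from this thread; stop with force skips some of the staging checks that a stale session can trip:

$ gluster volume geo-replication status
$ gluster volume stop myvolume force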
2008 Nov 04
1
Reexporting glusterfs to nfs fail
Hi, I have a machine that must reexport glusterfs to nfs.

CONFIGURATION
2 glusterfs servers
        |
1 glusterfs client / 1 nfs server
        |
1 nfs client

#***********************************************
# GLUSTERFS SERVER
#***********************************************
# Export with glusterfs
$ glusterfs -f /etc/glusterfs/glusterfs-server.vol
$ cat
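A hedged note on the re-export step: when the kernel NFS server exports a FUSE-backed mount like glusterfs, /etc/exports normally needs an explicit fsid, because FUSE filesystems give nfsd no stable device number to derive one from. A sketch, with a hypothetical mount point:

# /etc/exports on the machine that mounts glusterfs and re-exports it over NFS
/mnt/glusterfs  *(rw,fsid=1,no_subtree_check,sync)

$ exportfs -ra   # reload the export table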
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2017 Aug 08
1
How to delete geo-replication session?
Sorry, I missed your previous mail. Please perform the following steps once a new node is added:
- Run the gsec create command again:
gluster system:: execute gsec_create
- Run the geo-rep create command with force, and then start force:
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
gluster volume geo-replication <mastervol>
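Putting the advice together, the full recovery sequence would presumably look like the following for the volume in this thread (names taken from the opening post; the truncated last command above is assumed to be start force):

$ gluster system:: execute gsec_create
$ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo create push-pem force
$ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo start force
# once the session is registered on all nodes, stop and delete should succeed:
$ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop
$ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete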