similar to: Posix warning : Access to ... is crossing device

Displaying 20 results from an estimated 20000 matches similar to: "Posix warning : Access to ... is crossing device"

2010 Mar 04
1
[3.0.2] booster + unfsd failed
Hi list. I have been testing with glusterfs-3.0.2. The glusterfs mount works well, and unfsd on the glusterfs mount point works well too. When using booster, however, the unfsd realpath check fails, although the ls utility still works. I tried a 3.0.0-git head source build but the result was the same. My system is Ubuntu 9.10, using the unfsd source from the official gluster download site. Any comment appreciated!! - kpkim root at
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users

01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|

1) Would this command work for that?

glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01 clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01 clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01 clustr-06:/mnt/data01

So the
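For comparison only: on GlusterFS 3.1 and later the same layout (three mirrored pairs, distributed) is normally created with the gluster CLI rather than glusterfs-volgen. A rough sketch, reusing the hostnames and brick paths from the question; with replica 2, bricks are paired in the order they are listed, so 01+02, 03+04 and 05+06 mirror each other:

    gluster volume create repstore1 replica 2 transport tcp \
        clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
        clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
        clustr-05:/mnt/data01 clustr-06:/mnt/data01
    gluster volume start repstore1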
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume and mount it via mount -t glusterfs, and I can see everything. I am still seeing the following error in /var/log/glusterfs/nfs.log:

[2011-02-04 13:09:16.404851] E [client-handshake.c:1079:client_query_portmap_cbk] bhl-volume-client-98: failed to get the port number for remote subvolume
[2011-02-04 13:09:16.404909] I
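The "failed to get the port number for remote subvolume" message generally means the brick process behind that subvolume is down or glusterd's portmapper cannot be reached. A few hedged checks that usually narrow it down; the volume name bhl-volume is inferred from the client name in the log, and 24007 is the default glusterd port, which may differ on your setup:

    gluster volume info bhl-volume      # are all bricks listed and the volume Started?
    ps aux | grep glusterfsd            # on each server: is every brick process running?
    telnet <brick-host> 24007           # can this client reach glusterd, which maps brick ports?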
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there, I'm running glusterfs version 3.1.0. The client crashed after some time with the stack below.

[2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1: Subvolume 'distribute-1' came back up; going online.
[2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1: data self-heal triggered. path: /streaming/set3/work/reduce.12.1294902171.dplog.temp,
2011 May 07
1
Gluster "Peer Rejected"
Hello All, I have 8 servers. 7 of the 8 say that gbe02 is in State: Peer Rejected (Connected). gbe08 says it is connected to the other 7, but they are all State: Peer Rejected (Connected). So it would appear that gbe02 is out of sync with the group. I triggered a manual self heal by doing the recommended ./find on a gluster mount. I'm stuck... I cannot find ANY docs on this
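There is a commonly used recovery for a rejected peer. This is only a sketch, under the assumption that gbe02 is the out-of-sync node and the other peers hold the correct volume definitions; gbe01 below stands in for any healthy peer, and /var/lib/glusterd should be backed up first:

    # on the rejected peer (gbe02)
    service glusterd stop
    cd /var/lib/glusterd && rm -rf $(ls | grep -v glusterd.info)   # keep glusterd.info (the node's UUID)
    service glusterd start
    gluster peer probe gbe01      # pull the volume configs back from a healthy peer
    service glusterd restart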
2010 Apr 22
1
Transport endpoint not connected
Hey guys, I've recently implemented gluster to share web content read-write between two servers.

Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse     : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04 LTS

I used the following command to generate my configs:

/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export
2008 Dec 16
4
GlusterFS process take very many memory
Hello!!! I am trying to use GlusterFS + OpenVZ, but the gfs process's memory usage increases by ~2MB every minute. How can I fix this? P.S. sorry about my bad English.

Cluster information: 1) 3 nodes (server-client), conf:

##############
# local data #
##############
volume vz
  type storage/posix
  option directory /home/local
end-volume

volume vz-locks
  type features/posix-locks
  subvolumes vz
end-volume
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off:

[user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort
cp: closing
2008 Nov 04
1
fuse_setlk_cbk error
I'm building a two node cluster to run vserver systems on. I've set up glusterfs with this config:

# node a
volume data-posix
  type storage/posix
  option directory /export/cluster
end-volume

volume data1
  type features/posix-locks
  subvolumes data-posix
end-volume

volume data2
  type protocol/client
  option transport-type tcp/client
  option remote-host
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and do an ls I get:

[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x   3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root  4096 Jun 21 19:29 ..
drwxr-xr-x   3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root  4096
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi, I had a 2 node distributed cluster running on 3.1.1 and I added 2 more nodes. I then ran a rebalance on the cluster. Now I am getting permission denied errors and I see the following in the client logs:

[2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument)
[2011-02-24 09:59:11.851656] I
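For reference, the usual sequence on 3.1.x when growing a distributed volume is add-brick followed by a rebalance, and the rebalance can be checked while it runs. A sketch only; the volume name loader is inferred from loader-dht in the log, and the new brick paths are placeholders:

    gluster volume add-brick loader server3:/export/brick server4:/export/brick
    gluster volume rebalance loader start
    gluster volume rebalance loader status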
2010 Apr 19
1
Permission Problems
Hello List, first of all my configuration: I have 2 GlusterPlatform 3.0.3 servers virtualized on VMware ESXi 4, with one volume exported as "raid 1". I mounted the share with GlusterClient 3.0.2 using the following /etc/fstab line:

/etc/glusterfs/client.vol /mnt/images glusterfs defaults 0 0

The client.vol looks like this:

# auto generated by
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive. The archive stores webpages collected by our spiders. The test setup consists of three data machines, each exporting a volume of about 3.7TB, and one nameserver machine. The file layout is such that each host has its own directory; for example, the GlusterFS website would be located in:
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All, In a two node glusterfs setup, with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Is there any way to allow the secondary node to function and then replicate what changed to the first (primary) when it's back online? Or should I just go for a third node to allow for this? Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi, You need at least 3 nodes to have quorum enabled. In a 2 node setup you need to disable quorum so as to still be able to use the volume when one of the nodes goes down. On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
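If the "set the following to none" in the question refers to the volume quorum options, the two-node relaxation usually looks roughly like the sketch below; it assumes the volume name gv01 from the subject, and note that it trades away split-brain protection:

    gluster volume set gv01 cluster.server-quorum-type none
    gluster volume set gv01 cluster.quorum-type none

A third node, even a small arbiter-only one, is the safer way to keep quorum while still surviving a single node failure.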
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert at host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client  1.3.8-0pre2  GlusterFS fuse client
ii  glusterfs-server  1.3.8-0pre2  GlusterFS fuse server
ii  libglusterfs0     1.3.8-0pre2  GlusterFS libraries and translator modules

I have 2 hosts set up to use AFR with
2017 Dec 05
4
SAMBA VFS module for GlusterFS crashes
Hello, I'm trying to set up a SAMBA server serving a GlusterFS volume. Everything works fine if I locally mount the GlusterFS volume (`mount -t glusterfs ...`) and then serve the mounted FS through SAMBA, but the performance is slower by 2x-3x compared to a SAMBA server with a local ext4 filesystem. I gather that the SAMBA vfs_glusterfs module can give better performance. However, as soon as I
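For what it's worth, a minimal smb.conf share using the vfs_glusterfs module looks roughly like this sketch; the share name and volume name are placeholders, and "kernel share modes = no" is the commonly recommended setting when the volume is not also locally mounted:

    [gluster-share]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gvol0
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-gvol0.log
        kernel share modes = no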
2008 Nov 09
3
Still problem with trivial self heal
Hi! I have a trivial problem with self healing. Maybe somebody will be able to tell me what I am doing wrong, and why the files do not heal as I expect.

Configuration:
Servers: two nodes A, B
---------
volume posix
  type storage/posix
  option directory /ext3/glusterfs13/brick
end-volume

volume brick
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume

volume server
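For context, with 1.3-era volfiles like the ones above, healing is driven from the client side: the two server bricks have to be combined under a cluster/afr translator in the client volfile, and files heal when they are read through that mount. A sketch only; the protocol/client subvolume names brick-a and brick-b are placeholders:

    volume afr0
      type cluster/afr
      subvolumes brick-a brick-b
    end-volume

With that in place, the usual manual trigger is a full read from a client, e.g. find /mnt/glusterfs -type f -exec head -c1 '{}' \; > /dev/null.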
2011 Sep 28
0
Filename completion and relative problems
Hi all, after upgrading from 2.0.10rc1 to 3.1.7, filename completion on the glusterfs filesystem stopped working. We could live without that, but a more serious problem appears, which is IMHO related to the same missing(?) feature. When the WWW browser (firefox) is configured to ask the user where to save downloaded files, it crashes immediately after displaying the dialog window. This is a problem
2009 May 28
2
Glusterfs 2.0 hangs on high load
Hello! After upgrading to version 2.0 (now using 2.0.1), I'm experiencing problems with glusterfs stability. I'm running a 2 node setup with client side AFR, and glusterfsd is also running on the same servers. From time to time glusterfs just hangs; I can reproduce this by running the iozone benchmarking tool. I'm using patched FUSE, but the result is the same with unpatched.