Displaying 20 results from an estimated 20000 matches similar to: "Can't delete or add files when a node fails."

2010 Oct 09
2
gluster and rocks?
Is anyone using Gluster with the Rocks cluster system? If so, do you know if there is a Roll for Gluster? Or, do you have any advice about things to do and things to avoid doing? We're considering using Gluster with Infiniband on our cluster and trying to learn whether other people have done this so we can perhaps learn from their experience. Thanks. .. Lana (lana.deere at gmail.com)
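
For anyone weighing the same setup, a minimal sketch of what an Infiniband-backed volume looks like, assuming the RDMA transport is compiled in; the hostnames, brick paths, and volume name below are placeholders, not from Lana's cluster:

    # Create and start a volume using the rdma transport
    # (all names here are illustrative):
    gluster volume create rocksvol transport rdma \
        node01:/export/brick1 node02:/export/brick1
    gluster volume start rocksvol
    # Clients mount it the usual way; with transport rdma the
    # data path then runs over Infiniband:
    mount -t glusterfs node01:/rocksvol /mnt/rocksvol
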
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week? Mike
2010 Sep 23
1
proposed new doco for "Gluster 3.1: Installing GlusterFS on OpenSolaris"
Hi all Reference: http://support.zresearch.com/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_OpenSolaris I have found this guide to be too brief/terse, and have endeavoured to improve it via more of a recipe/howto approach - and possibly misunderstood the intent of the brief directions in the process. Please advise if there are any errors. Once the procedure is
2010 Jun 08
1
how much has performance improved since 2.x?
Greetings, I was curious if the 3.x glusterfs has made any significant write performance headway (especially when dealing with small files). Last time I was testing 2.x, I had to write glusterfs off due to poor performance (previous config/performance-stats can be accessed in list archives if curious). Did anyone notice *dramatic* improvement when switching to 3.x? Thanks, Michael
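
For context, the knobs usually discussed for small-file workloads in 3.x are the client-side performance translators; a sketch, assuming a volume named testvol (a placeholder) and that these options exist in your 3.x release:

    # Client-side caching translators that most affect small-file
    # workloads (some may already be on by default):
    gluster volume set testvol performance.write-behind on
    gluster volume set testvol performance.quick-read on
    gluster volume set testvol performance.io-cache on
    gluster volume set testvol performance.stat-prefetch on
    gluster volume info testvol    # confirm the options took effect
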
2011 Sep 26
1
Is gluster suitable and production ready for email/web servers?
I've been leaning towards actually deploying gluster in one of my projects for a while and finally a probable candidate project came up. However, researching into the specific use case, it seems that gluster isn't really suitable for load profiles that deal with lots of concurrent small files. e.g. http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi, We have a huge problem on our Sun Grid Engine cluster with glusterfs 3.3.0. Could somebody help me? Based on my understanding, if a folder is removed and recreated on another client node, a program that tries to create a new file under that folder very often fails. We partially fixed this problem by running "ls" on the folder before doing anything in our command; however, Sun Grid Engine
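
A sketch of the workaround described above as a job wrapper; the path and variable names are illustrative, not from the original cluster:

    #!/bin/sh
    # Re-read the (possibly removed-and-recreated) directory so the
    # glusterfs client revalidates its cached handle before the job
    # tries to create files in it -- the "ls" workaround from the post.
    OUTDIR="/gluster/jobs/$JOB_ID"     # illustrative path
    ls "$OUTDIR" > /dev/null 2>&1
    exec "$@"                          # then run the real job command
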
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the gluster volume mount and turn off one of the nodes, the machine/gluster will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see: [2011-06-06
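
Worth noting: the mount server is only contacted to fetch the volfile, after which the client talks to all bricks directly, so an alternative to ucarp for the mount itself is a fallback volfile server; a sketch, assuming a mount.glusterfs recent enough to support this option, with placeholder hostnames:

    # Fall back to a second volfile server at mount time instead of
    # relying on a virtual IP (hostnames and volume are placeholders):
    mount -t glusterfs -o backupvolfile-server=gluster2 \
        gluster1:/myvol /mnt/myvol
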
2010 Dec 08
1
NFS with UCARP vs. GlusterFS mount question
Morning Folks, should I prefer NFS with UCARP or native GlusterFS mounts for serving the system images to XCP? Which one performs better over 1G network links? NFS is probably easier to set up thanks to existing tools like rpcinfo and showmount, both of which are used inside the storage container code; there is already some code for NFS but none for GlusterFS, unless I write it. UCARP has the disadvantage that
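
Since the existing tooling is part of the decision, a sketch of the checks those tools allow against Gluster's built-in NFS server, which speaks NFSv3 over TCP; the hostname, volume, and mount point are placeholders:

    # What did the Gluster NFS server register with portmap?
    rpcinfo -p storage1
    # What does it export?
    showmount -e storage1
    # Built-in NFS is v3 over TCP, so mount accordingly:
    mount -t nfs -o vers=3,proto=tcp storage1:/myvol /mnt/images
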
2012 Mar 09
1
dht log entries in fuse client after successful expansion/rebalance
Hi I'm using Gluster 3.2.5. After expanding a 2x2 Distributed-Replicate volume to 3x2 and performing a full rebalance, fuse clients log the following messages for every directory access: [2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk] 1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench [2012-03-08 10:53:56.953065] I
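
For reference, a sketch of the expand-and-rebalance sequence that precedes messages like these; the new brick names are placeholders, while the volume name bfd is taken from the 1-bfd-dht log prefix:

    # Expand the 2x2 volume by one replica pair, then rebalance:
    gluster volume add-brick bfd server5:/export/brick server6:/export/brick
    gluster volume rebalance bfd start
    gluster volume rebalance bfd status   # wait for completion
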
2010 Oct 21
1
Some client problems with TCP-only NFS in Gluster 3.1
I see that the built-in NFS support registers mountd in portmap only with tcp and not udp. While this makes sense for a TCP-only NFS implementation, it does cause problems for some clients: Ubuntu 10.04 and 7.04 mount just fine. Ubuntu 8.04 gives "requested NFS version or transport protocol is not supported", unless you specify "-o mountproto=tcp" as a mount option, in
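
For clients that hit this, a sketch of the workaround spelled out above; the server and volume names are placeholders, and proto=tcp/vers=3 are added here because Gluster's built-in NFS is v3 over TCP:

    # Force TCP for both the mount protocol and NFS itself on
    # clients whose mountd lookup defaults to UDP:
    mount -t nfs -o mountproto=tcp,proto=tcp,vers=3 \
        server:/volname /mnt/gluster
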
2011 Jun 21
2
GlusterFS 3.1.5 now available
If you haven't seen it already, GlusterFS 3.1.5 is now available at http://www.gluster.org/download/ For those of you currently on the 3.1.x series, we recommend that you upgrade to this latest release. Here are some issues fixed in this release: Bug 2294: Fixed an issue that occurred when creating and sharing volumes with both RDMA and TCP/IP transport types. Bug 2522: Fixed the issue of
2010 May 19
0
ext3 filesystem options and Gluster -
All - I would like to know if anyone is using "dir_index" on an ext3 filesystem used by Gluster and/or is mounting that block device with "noatime" and "data=writeback". We are doing some of our own testing in the lab but it would be nice to know if anyone is using these options in real life. As always, thank you very much. Craig -- Craig Carl
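
For concreteness, a sketch of how those options are applied; /dev/sdb1 and the mount point are placeholders:

    # Turn on dir_index for an existing ext3 filesystem, then
    # rebuild the directory indexes (run while unmounted):
    tune2fs -O dir_index /dev/sdb1
    e2fsck -fD /dev/sdb1
    # fstab entry using the mount options from the post:
    # /dev/sdb1  /export/brick1  ext3  noatime,data=writeback  0 2
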
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount; instead I see around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2011 Jun 28
2
Issue with Gluster Quota
2010 May 15
1
Input/output error when running `ls` and `cd` on directories
I'm getting Input/output errors on gluster-mounted directories. First, I have a few directories I created a few weeks ago, but when I run an ls on them, their status is listed as ???????: [23:52:54] [root at clustr06 /mnt/glusterfs]# ls -al ls: cannot access lost+found: Input/output error ls: cannot access bhl: Input/output error total 1920 drwxr-xr-x 7 root root 294912 2010-05-13 19:11 .
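
One way to see what the replicas themselves think of such a directory is to compare its AFR changelog attributes on each brick; a sketch with a placeholder brick path ("bhl" is a directory from the post) — this diagnoses, but does not fix, a genuine split brain:

    # Run on each storage server against the backend brick path,
    # not the client mount point:
    getfattr -d -m trusted.afr -e hex /export/brick1/bhl
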
2017 Sep 08
0
GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after few > minutes. SIGTERM on the other hand causes crash, but this time it is > not read-only remount, but around 10 IOPS tops and 2 IOPS on average. > -ps So, it seems to be resilient to server crashes but not to server shutdown :)
2013 Nov 21
3
Sync data
Hi guys! I have 2 servers in replicate mode; node 1 has all the data, and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all the files from node 1 to node 2? Steps that I followed: gluster peer probe node1 gluster volume create gv0 replica 2 node1:/data node2:/data gluster volume gvo start thanks!
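
A sketch of how to force the copy once the volume is up, assuming a 3.3+ release with the self-heal daemon; the mount point is a placeholder, and files should never be copied into the bricks directly:

    # Trigger a full self-heal and watch its progress:
    gluster volume heal gv0 full
    gluster volume heal gv0 info
    # On older releases without the heal command, reading every file
    # through a client mount triggers the heal instead:
    mount -t glusterfs node1:/gv0 /mnt/gv0
    find /mnt/gv0 -noleaf -print0 | xargs -0 stat > /dev/null 2>&1
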
2011 Aug 21
2
Fixing split brain
Hi Consider the typical split brain situation: reading from the file gets EIO, and the logs say: [2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open] 0-gfs-replicate-0: failed to open as split brain seen, returning EIO [2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk] 0-glusterfs-fuse: 1371456: OPEN() /manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1 (Input/output
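
The usual manual fix in this era, sketched under the assumption of a replica-2 volume; the brick path and client mount point are placeholders (the file comes from the log above), and on releases with a .glusterfs directory the matching gfid hard link must be removed as well:

    # 1. Decide which brick holds the good copy (compare both bricks).
    # 2. On the server with the BAD copy, delete it from the brick:
    rm /export/brick/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
    # 3. From a client, re-read the file to trigger self-heal:
    stat /mnt/gfs/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
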
2012 May 04
1
'Transport endpoint not connected'
This should be a pretty easy issue to reproduce; at least, it seems to happen to me very often. (gluster-3.2.5) After storage backend(s) have been rebooted, the client mounts are often broken until you unmount and remount. Example from this morning: I had rebooted the storage servers to upgrade them to Ubuntu 12.04. Now at the client side: $ ls /gluster/scratch ls: cannot access /gluster/scratch:
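
A sketch of the usual recovery once a mount is stuck like this; the server name is a placeholder, the mount point comes from the post:

    # Lazy-unmount the dead mount, then remount it:
    umount -l /gluster/scratch
    mount -t glusterfs server:/scratch /gluster/scratch
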
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume with the default network.ping-timeout will cause the underlying VM to remount its filesystem as read-only (a device error will occur) unless you tune the mount options in the VM's fstab. -ps On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote: > What he is saying is that, on a two node volume, upgrading a node will
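
A sketch of the two knobs implied above; the volume name is a placeholder, and errors=continue is only a guess at the fstab tuning meant here, since the ext4 default errors=remount-ro is what flips the guest read-only:

    # Shorten the stall a brick failure causes (default is 42s);
    # too low a value risks spurious disconnects:
    gluster volume set myvol network.ping-timeout 10
    # Inside the VM's /etc/fstab, keep the guest from remounting
    # read-only on the first I/O error (assumed device name):
    # /dev/vda1  /  ext4  defaults,errors=continue  0  1
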