similar to: Gluster NFS Replicate bricks different size

Displaying 20 results from an estimated 4000 matches similar to: "Gluster NFS Replicate bricks different size"

2013 Dec 23
0
How to ensure new data is written to other bricks if one brick of a gluster distributed volume goes offline
Hi all, how can we ensure that new data is written to the other bricks if one brick of a gluster distributed volume goes offline? Can the client write data that would originally land on the offline brick to the other online bricks instead? Does the distributed volume really break when even a single brick is offline? That seems very unreliable. And when the failed brick comes back online, how does it rejoin the original distributed volume? We don't want the newly written data to
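For context, in a pure distributed volume each file lives on exactly one brick (chosen by filename hash), so files on an offline brick become unreachable and new creates that hash to it can fail; replication is the usual remedy. A minimal sketch using the 3.x-era CLI (host names, brick paths, and the volume names `gv0`/`gv0-safe` are placeholders):

```shell
# Pure distribute: one copy per file; an offline brick takes its files with it.
gluster volume create gv0 server1:/bricks/b1 server2:/bricks/b2

# Replica 2: every file exists on both bricks, so one brick can go offline
# without losing access to data or blocking new writes.
gluster volume create gv0-safe replica 2 server1:/bricks/b1 server2:/bricks/b2
gluster volume start gv0-safe
```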
2013 Sep 16
1
Gluster Cluster
Hi all, I have a glusterfs cluster underpinning a KVM virtual server in a two-host setup. Today the cluster froze and stopped working. I have rebooted both nodes and brought up the storage again. I can see all the VM files there, but when I try to start the VM the machine just hangs. How can I see whether gluster is trying to synchronise files between the two servers? thanks Shaun
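To see whether gluster is still synchronising (self-healing) files between the two replicas, the heal status commands are the usual starting point. A sketch, assuming a 3.3+ CLI (the volume name `vmstore` is a placeholder):

```shell
gluster volume heal vmstore info              # entries still pending heal, per brick
gluster volume heal vmstore info split-brain  # entries that need manual resolution
gluster volume status vmstore                 # confirm both bricks and the self-heal daemons are up
```

If `heal info` keeps listing the VM image files, the replicas are still catching up and starting the VM will block on those files.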
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4, etc.? Thanks & Regards, Bobby Jacob, Senior Technical Systems Engineer | eGroup
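XFS is the filesystem commonly recommended for bricks, typically created with a 512-byte inode size so gluster's extended attributes fit inside the inode. A hedged brick-preparation sketch (`/dev/sdb1` and the mount point are placeholders):

```shell
mkfs.xfs -i size=512 /dev/sdb1            # larger inodes keep gluster xattrs in-inode
mkdir -p /bricks/brick1
mount -t xfs /dev/sdb1 /bricks/brick1
echo '/dev/sdb1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab
```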
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes from some clients I can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
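In 3.3 the proactive self-heal daemon is supposed to pick up pending heals on its own, so routinely needing a manual full heal usually points at a connectivity or split-brain issue rather than normal operation. The relevant commands, as a sketch (volume name `myvol` is a placeholder; `info healed` existed in the 3.3/3.4 CLI):

```shell
gluster volume heal myvol full          # force a full crawl and heal
gluster volume heal myvol info healed   # what the last heal fixed
gluster volume heal myvol info          # what is still pending
```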
2013 Oct 06
0
Options to turn off/on for reliable virtual machinewrites & write performance
In a replicated cluster, the client writes to all replicas at the same time. This is likely why you are only getting half the speed for writes: the data goes to two servers and therefore maxes out your gigabit network. That is, unless I am misunderstanding how you are measuring the 60MB/s write speed. I don't have any advice on the other bits... sorry. Todd
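The arithmetic behind that explanation is simple: with replica 2 the client sends every byte twice over the same gigabit link, so the usable write bandwidth is roughly half of wire speed, which lines up with the observed ~60 MB/s. A back-of-envelope check (ignoring protocol overhead):

```shell
# 1 Gbit/s ≈ 125 MB/s raw; replica 2 sends each byte twice from the client.
raw_mb_per_s=$((1000 / 8))          # 125 MB/s wire speed
per_stream=$((raw_mb_per_s / 2))    # 62 MB/s ceiling for a replica-2 write
echo "raw=${raw_mb_per_s} replicated=${per_stream}"
```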
2013 Jul 02
1
files do not show up on gluster volume
I am trying to touch files on a mounted gluster mount point.

gluster1:/gv0  24G  786M  22G  4% /mnt
[root at centos63 ~]# cd /mnt
[root at centos63 mnt]# ll
total 0
[root at centos63 mnt]# touch hi
[root at centos63 mnt]# ll
total 0

The files don't show up after I ls them, but if I try to do a mv operation something very strange happens:

[root at centos63 mnt]# mv /tmp/hi .
mv:
2013 Dec 16
1
Gluster Management Console
I see references to the Gluster Management Console in some of the older (3.0 and 3.1) documentation. Is this feature available in version 3.4 as well? If so, could someone point me to documentation on how to access? Thanks.
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello, is it possible to take advantage of copy-on-write implemented in btrfs if all bricks are stored on it? If not, is there any other mechanism (in glusterfs) which supports CoW? regards -- Maciej Gałkiewicz, Shelly Cloud Sp. z o. o., Sysadmin http://shellycloud.com/, macias at shellycloud.com KRS: 0000440358 REGON: 101504426
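As far as I know, glusterfs of this era has no CoW mechanism of its own, and the FUSE client cannot pass reflink requests through to the brick filesystem; btrfs reflinks only work when used directly on a brick's local path. A quick local check that the brick filesystem supports reflinks (paths are placeholders; note that touching brick contents directly bypasses gluster and should only be done for testing):

```shell
# Run on the brick filesystem itself, not through the gluster mount:
truncate -s 1G /bricks/b1/reflink-test.img
cp --reflink=always /bricks/b1/reflink-test.img /bricks/b1/reflink-clone.img \
    && echo "reflink supported"    # fails on filesystems without CoW cloning
```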
2013 Sep 06
1
Gluster native client very slow
Hello, I'm testing a two-node glusterfs distributed cluster (version 3.3.1-1) on Debian 7. The two nodes write to the same iSCSI volume on a SAN. When I try to write a 1G file with dd, I get the following results: NFS: 107 MBytes/s; Gluster client: 8 MBytes/s. My /etc/fstab on the client: /etc/glusterfs/cms.vol /data/cms glusterfs defaults 0 0 I'd like to use the gluster
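One thing worth noting: mounting from a static local volfile like this is the old-style configuration; the usual form fetches the volfile from a server at mount time, which also keeps client options in sync. A hedged fstab sketch (host and volume names are placeholders; `backupvolfile-server` exists in the 3.3/3.4 mount helper):

```shell
# /etc/fstab — native (FUSE) client mount, volfile fetched from node1
node1:/cms  /data/cms  glusterfs  defaults,_netdev,backupvolfile-server=node2  0 0
```

Part of the NFS-vs-FUSE gap is also expected: the kernel NFS client caches writes aggressively, while the gluster FUSE client does not, so small-block dd runs look much slower over FUSE.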
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My environment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2-formatted image, created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine, but in the
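The commonly cited fix for permission-denied errors with qemu's libgfapi driver is allowing connections from unprivileged ports, since qemu does not connect from a port below 1024, plus setting the brick file ownership to the qemu user. A hedged sketch using the volume name from the post (the uid/gid values are distro-dependent assumptions):

```shell
gluster volume set vmdata server.allow-insecure on
# In /etc/glusterfs/glusterd.vol on each server, add:
#   option rpc-auth-allow-insecure on
# then restart glusterd.

# Let the qemu user own the image files (107/107 is common on Fedora, verify locally):
gluster volume set vmdata storage.owner-uid 107
gluster volume set vmdata storage.owner-gid 107
```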
2013 Jul 18
1
Gluster & PHP - stat problem?
Has anyone ever run into a problem in which PHP's stat() call to a file on a Gluster-backed volume randomly fails, yet /usr/bin/stat never fails? Running strace against both reveals that the underlying system calls succeed. I realize this is probably a PHP problem, since I cannot replicate it with a non-PHP-based script; however, I was hoping someone on this list might have seen this before. RHEL
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all, as the title says - I'm looking to change a volume from dist/repl -> dist. We're currently running 3.2.7. A few questions for you gurus out there: - is this possible to do on 3.2.7? - is this possible to do with 3.4.1? (would involve upgrade) - are there any pitfalls I should be aware of? many thanks in advance, regards, paul
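In the 3.3/3.4 CLI, going from distributed-replicate to plain distribute is done by removing one brick from each replica pair while lowering the replica count in the same command; I would not count on this form working on 3.2.7. A sketch under those assumptions (volume and brick names are placeholders; `force` skips data migration, which is fine here because the removed bricks are replicas):

```shell
# Drop the second copy of each replica pair, reducing replica 2 -> 1:
gluster volume remove-brick myvol replica 1 \
    server2:/bricks/b1 server2:/bricks/b2 force
gluster volume info myvol    # should now report Type: Distribute
```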
2013 Oct 01
1
Gluster on ZFS: cannot open empty files
Hi everyone, I've got glusterfs-server/glusterfs-client version 3.4.0final-ubuntu1~precise1 (from the semiosis PPA) running on Ubuntu 13.04. I'm trying to share ZFS (ZFS on Linux 0.6.2-1~precise from the zfs-stable PPA) using GlusterFS. When creating the ZFS filesystem and the Gluster volume, I accepted all the defaults and then: - I enabled deduplication for the ZFS filesystem (zfs set
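Gluster leans heavily on extended attributes, and the empty-file symptom on ZFS-backed bricks is often traced to xattr handling; a commonly suggested ZFS-on-Linux setting for gluster bricks is storing xattrs in the dnode. A hedged sketch (`tank/gluster-brick` is a placeholder dataset; deduplication is generally discouraged under gluster for memory reasons):

```shell
zfs set xattr=sa tank/gluster-brick   # system-attribute xattrs: faster, more reliable for gluster
zfs get xattr,dedup tank/gluster-brick
```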
2013 Jul 03
1
One Volume Per User - Possible with Gluster?
I've been looking into using Gluster to replace a system that we currently use for storing data for several thousand users. With our current networked file system, each user can create volumes and only that user has access to their volumes with authentication. I see that Gluster also offers a username/password auth system, which is great, but there are several issues about it that bother me:
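Gluster's built-in access control operates per volume, by client address, via the `auth.allow`/`auth.reject` options, so a strict volume-per-user scheme would mean one gluster volume per user. A sketch of per-volume restriction (volume name and address are hypothetical placeholders):

```shell
# Only the listed client address may mount this user's volume:
gluster volume set user1vol auth.allow 10.0.0.5
gluster volume info user1vol    # the option appears under "Options Reconfigured"
```

Note this is IP-based, not per-user authentication, which is part of why a several-thousand-user deployment on this model is awkward.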
2013 Sep 05
0
Gluster native client configuration
Hi All, to start with, I am using gluster 3.4.0 (official packages from the gluster site) on CentOS 6.4 machines. Reading some docs, it appears that one can (and it is advisable) set some configuration parameters, like enabling and setting a value for the performance.cache-size translator, on the native client side. At the same time, I can't find any configuration files on the client. My
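The reason there are no client-side files to edit is that since the 3.x series the client volfile is generated by glusterd and fetched at mount time; client-side translators are tuned server-side with `gluster volume set`, and clients pick the change up (a remount guarantees it). A sketch (volume name and values are placeholders):

```shell
gluster volume set myvol performance.cache-size 256MB
gluster volume set myvol performance.io-thread-count 32
gluster volume info myvol   # non-default options listed under "Options Reconfigured"
```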
2013 Oct 31
0
AWS backups of gluster vols
Good Afternoon, can someone point me to some documentation on best practices for backing up gluster volumes created from EBS storage in AWS? We'd be looking to recover the entire volume from a catastrophic failure, to a single point in time. Thanks!
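A common pattern, offered here as a hedged sketch rather than an official procedure, is to quiesce each brick's filesystem, snapshot its EBS volume, then unfreeze; doing one replica at a time keeps the gluster volume serving. Assumes XFS bricks and the aws CLI; the paths and volume ID are placeholders:

```shell
xfs_freeze -f /bricks/brick1                       # flush and block writes to the brick fs
aws ec2 create-snapshot --volume-id vol-0abc123 \
    --description "gluster brick $(date -u +%FT%TZ)"
xfs_freeze -u /bricks/brick1                       # resume writes
```

For a consistent point in time across a multi-brick volume, all bricks need to be frozen over the same window before snapshotting.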
2014 Mar 20
1
Optimizing Gluster (gfapi) for high IOPS
Hey folks, we've been running VMs on qemu using a replicated gluster volume, connecting with gfapi, and things have been going well for the most part. Something we've noticed, though, is that we have problems with many concurrent disk operations and disk latency. The latency gets bad enough that the process eats the CPU and the entire machine stalls. The place where we've seen it
2013 Sep 30
1
Using gluster with ipv6
Hi, I'm starting to use gluster version debian/unstable 3.4.0-4, but I need to use IPv6 in my network and just can't set glusterfs up for it. I tried configuring IPv6 or DNS (resolving only to IPv6) and got errors in both cases. Here is my last test + error: /usr/sbin/glusterfsd -s ipv6.google.com --volfile-id testvol5.XXX.... [2013-09-30 14:09:24.424624] E [common-utils.c:211:gf_resolve_ip6]
2013 Jul 17
0
Gluster 3.4.0 RDMA stops working with more then a small handful of nodes
I was wondering if anyone on this list has run into this problem. When creating/mounting RDMA volumes of about half a dozen or fewer nodes, I am able to successfully create, start, and mount these RDMA-only volumes. However, if I try to scale this to 20, 50, or even 100 nodes, RDMA-only volumes completely fall over on themselves. Some of the basic symptoms I'm seeing are: * Volume create always
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all, I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2. Now the situation is: there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this: Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513 Brick 192.168.64.12:/opt/gluster_data/eccp_glance
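Recreating the directory alone is not enough: glusterd refuses to start a brick whose `trusted.glusterfs.volume-id` xattr is missing. A commonly cited recovery sketch, with the surviving replica supplying the data (the volume name `eccp_glance` is assumed from the brick path and may differ; `<id-without-dashes>` is the real volume ID, deliberately left as a placeholder):

```shell
# 1. On the healthy node, read the volume ID:
gluster volume info eccp_glance | grep 'Volume ID'

# 2. On the node that lost the brick, recreate the directory and restore the xattr
#    (volume ID as hex, dashes removed, 0x-prefixed):
mkdir -p /opt/gluster_data/eccp_glance
setfattr -n trusted.glusterfs.volume-id -v 0x<id-without-dashes> \
    /opt/gluster_data/eccp_glance

# 3. Restart the brick process and resync from the good replica:
gluster volume start eccp_glance force
gluster volume heal eccp_glance full
```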