similar to: CIFS Documentation

Displaying 20 results from an estimated 100 matches similar to: "CIFS Documentation"

2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi, I've set up Gluster geo-replication according to the manual: # sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave config log-level DEBUG # sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start # sudo gluster volume geo-replication flvol ssh://root@
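A hedged sketch of the usual next diagnostic steps for a faulty session, reusing the volume name flvol and the slave URL style from the post; the <slave-host> placeholder and everything else here are assumptions, not the thread's resolution:

    # check the session state and where it is logging
    sudo gluster volume geo-replication flvol ssh://root@<slave-host>:file:///mnt/slave status
    sudo gluster volume geo-replication flvol ssh://root@<slave-host>:file:///mnt/slave config log-file

    # a broken passwordless SSH setup from master to slave is a common cause of "faulty"
    sudo ssh root@<slave-host> true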
2012 Oct 20
1
Gluster download link redirect to redhat
Dear Team, Please note that many download links on gluster.org redirect to redhat.com. Please refer to the links below and correct the download links. http://gluster.org/community/documentation/index.php/Gluster_3.2:_Downloading_and_Installing_the_Gluster_Virtual_Storage_Appliance_for_KVM Click on the link and try to download the Gluster Virtual Storage Appliance for KVM, but it
2011 Jun 22
1
glusterfs 3.2.1 processes in an endless loop?
Hello, I found a new issue with glusterfs 3.2.1 - I'm getting a glusterfs process for each mountpoint and they are consuming all of the CPU time. strace won't show a thing - so no system calls are made. Mounting the same volumes on another server works fine. Has anyone seen such a thing? Or any idea what causes this and how to fix it? The logfiles don't show any information about
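A hedged sketch of how one might gather more data on a spinning glusterfs client process; PIDs, mount paths and the statedump location (often /tmp or /var/run/gluster, depending on version) are assumptions:

    # find the glusterfs client process serving the affected mountpoint
    ps -ef | grep glusterfs

    # SIGUSR1 asks a gluster process to write a statedump of its internal state
    kill -USR1 <pid>

    # the per-mountpoint client log often shows what the process is busy with
    less /var/log/glusterfs/<mountpoint>.log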
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
Hi, I have a lot of trouble when I try to build RPMs from the glusterfs 3.1.3 tgz on my SLES servers (SLES 10.1 & SLES 11.1). Everything runs fine, I guess, until it tries to build the RPMs. Then I always run into this error: RPM build errors: File not found: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd File not found by glob:
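The missing gsyncd usually means the geo-replication component was not built even though the spec file packages it; a hedged sketch of one first attempt, assuming the Python prerequisites for gsyncd are what is missing (package names are SLES guesses, and this is not the thread's confirmed fix):

    # gsyncd is a Python daemon; it is only built when the Python development bits are present
    zypper install python-devel rsync

    # rebuild the RPMs straight from the release tarball (the spec file is embedded in it)
    rpmbuild -ta glusterfs-3.1.3.tar.gz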
2011 Jun 06
2
uninterruptible processes writing to glusterfs share
Hi! Sometimes we have hanging uninterruptible processes on some client servers ("ps aux" shows state "D"), and on one of them the CPU I/O wait grows to 100% within a few minutes. You are not able to kill such processes - "kill -9" doesn't work either - and when you attach "strace" to such a process, you won't see anything and you cannot detach it again. There
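A hedged sketch of generic Linux diagnostics for D-state processes, which at least shows where in the kernel they are blocked; PIDs and paths are placeholders:

    # list processes in uninterruptible sleep together with the kernel wait channel
    ps axo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

    # inspect the kernel stack of one stuck process (requires root)
    cat /proc/<pid>/stack

    # check the gluster client log for the affected mount for disconnect or lock messages
    less /var/log/glusterfs/<mountpoint>.log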
2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup: gluster> volume info Volume Name: myvolume Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: 10.2.218.188:/srv Brick2: 10.116.245.136:/srv Brick3: 10.206.38.103:/srv Brick4: 10.114.41.53:/srv Brick5: 10.68.73.41:/srv Brick6: 10.204.129.91:/srv I *killed* Brick #4 (kill -9 and then shut down instance). My
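A hedged sketch of moving the killed brick onto a replacement host; the volume name and brick path come from the post above, while the new host and the exact replace-brick syntax (which changed between releases) are assumptions:

    # add the replacement host to the trusted pool
    gluster peer probe <new-host>

    # 'commit force' skips copying from the unreachable brick and lets
    # self-heal repopulate the new brick from its surviving replica
    gluster volume replace-brick myvolume 10.114.41.53:/srv <new-host>:/srv commit force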
2012 Jan 05
1
Can't stop or delete volume
Hi, I can't stop or delete a replica volume: # gluster volume info Volume Name: sync1 Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: thinkpad:/gluster/export Brick2: quad:/raid/gluster/export # gluster volume stop sync1 Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y Volume sync1 does not exist # gluster volume
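The "does not exist" answer usually means the peers disagree about the volume's state; a hedged sketch of the usual recovery steps (the hostname is a placeholder, and this is not the thread's confirmed fix):

    # restart the management daemon so it re-reads its volume definitions
    service glusterd restart

    # if one peer's configuration is stale, pull the definitions from the good peer
    gluster volume sync <good-peer-hostname> sync1

    # then retry
    gluster volume stop sync1
    gluster volume delete sync1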
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the gluster volume mount and I turn off one of the nodes, the machine/gluster will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see: [2011-06-06
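The freeze described above is typically the client waiting out the network ping timeout rather than a ucarp problem; a hedged sketch of the two knobs usually adjusted (the value is illustrative, and the backupvolfile-server mount option may not exist in every 3.2-era build):

    # shorten how long clients wait before declaring a brick dead (default 42s)
    gluster volume set <volname> network.ping-timeout 10

    # give the mount a fallback volfile server instead of relying only on the virtual IP
    mount -t glusterfs -o backupvolfile-server=<node2> <node1>:/<volname> /mnt/gluster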
2011 Aug 12
2
Replace brick of a dead node
Hi! Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to. I have a two-node cluster, with two bricks replicated, one on each node. Let's say one of the nodes dies and is unreachable. I want to be able to spin up a new node and replace the dead node's brick with a location on the new node. The command 'gluster volume
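A hedged sketch of the sequence usually suggested for a dead node, with hostnames and brick paths as placeholders; on releases without the heal command, statting every file from a client mount triggers the same self-heal:

    # bring the replacement node into the pool and swap the dead brick for a new one
    gluster peer probe <new-node>
    gluster volume replace-brick <volname> <dead-node>:/<brick> <new-node>:/<brick> commit force

    # force a full self-heal so the new brick is filled from the surviving replica (3.3+)
    gluster volume heal <volname> full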
2014 Mar 06
1
Clarification on cluster quorum
Hi, I'm looking for an option to add an arbiter node to the gluster cluster, but the leads I've been following seem to lead to inconclusive results. The scenario is a 2-node replicated cluster. What I want to do is introduce a fake host/arbiter node which would make it a 3-node cluster, meaning we can meet the over-50% quorum condition for writes (i.e. 2 can write, 1 cannot).
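A hedged sketch of the quorum options available around that time (a dedicated arbiter brick type only arrived in later releases); the extra host just needs to run glusterd and join the pool, it does not need a brick:

    # add the arbiter-only host to the trusted pool
    gluster peer probe <arbiter-host>

    # server-side quorum: bricks are stopped when fewer than the ratio of peers are reachable
    gluster volume set all cluster.server-quorum-ratio 51%
    gluster volume set <volname> cluster.server-quorum-type server

    # client-side quorum for the replica set (for replica 2, 'auto' ties writes
    # to the first brick of the pair being up)
    gluster volume set <volname> cluster.quorum-type auto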
2014 Oct 30
1
Firewall ports with v 3.5.2 grumble time
Hi, I have a requirement to run my gluster hosts within a firewalled section of the network, with the consumer hosts in a different segment due to IP address preservation. Part of our security policy requires that we run local firewalls on every host, so I have to get the network access locked down appropriately. I am running 3.5.2 using the packages provided in the Gluster package repository
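A hedged sketch of iptables rules for a 3.5-era deployment; the brick port range changed between releases (roughly 24009 upwards before 3.4, 49152 upwards afterwards), so verify the actual ports with 'gluster volume status', and the range width below is an assumption:

    # glusterd management ports
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # one port per brick, allocated upwards from 49152 on 3.4+
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT
    # gluster NFS, if used: portmapper plus the gluster NFS/mountd ports
    iptables -A INPUT -p tcp -m multiport --dports 111,38465:38467 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT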
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello, I installed GlusterFS one month ago, and replication has many issues. First of all, our infrastructure: 2 storage arrays of 8 TB in replication mode... We have our backup files on these arrays, so 6 TB of data. I want to replicate the data onto the second storage array, so I used this command: # gluster volume rebalance REP_SVG migrate-data start And gluster started to replicate; in 2 weeks
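For the record, 'rebalance ... migrate-data' moves files between distribute subvolumes; it does not copy data to a replica. A hedged sketch of how replication is normally (re)filled, with the mount path as a placeholder:

    # on 3.2, self-heal is triggered by walking the volume from a client mount
    find /mnt/<mountpoint> -noleaf -print0 | xargs --null stat > /dev/null

    # on 3.3 and later the same thing is a single command
    gluster volume heal REP_SVG full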
2012 Feb 08
1
How to synchronous volume
Hi, all! Have you ever used the command volume sync <HOSTNAME> [all|<VOLNAME>] to synchronize a volume? I haven't used it successfully. When I use it, the system tells me "please delete the volume"; after I delete the volume, the system tells me "the volume doesn't exist!". What's the purpose of this command? Thank you!
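A hedged reading of what the command is for: 'volume sync' copies volume definitions from another peer into the local glusterd, so it is meant to be run on a peer whose configuration is stale or missing, and it refuses to overwrite a volume definition that still exists locally (hence the "please delete the volume" prompt). A sketch, with the hostname as a placeholder:

    # on the out-of-date peer, pull all volume definitions from a healthy peer
    gluster volume sync <good-peer-hostname> all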
2011 Aug 30
2
setfacl <dir>:operation not supported
Dear gluster team, I have installed glusterfs on my servers for storage. Machine: x86_64-redhat-linux. I have created volumes with the RDMA protocol for InfiniBand. I have mounted with the acl option on server and client. When I run setfacl on the glusterfs mount point it works fine, but when I do it on the NFS mount it says: setfacl <dir>: operation not supported. The logs created on the server are as
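A hedged sketch of the pieces usually needed for ACLs over the gluster NFS server, which speaks NFSv3 only; the nfs.acl volume option only exists in later releases, so treat that line as an assumption:

    # request the NFSACL side protocol on the client mount
    mount -t nfs -o vers=3,acl <server>:/<volname> /mnt/nfs

    # on newer gluster releases the NFSACL protocol can be toggled per volume
    gluster volume set <volname> nfs.acl on

    # then retest
    setfacl -m u:<user>:rwx /mnt/nfs/<dir>
    getfacl /mnt/nfs/<dir>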
2012 Nov 14
1
Howto find out volume topology
Hello, I would like to find out the topology of an existing volume. For example, if I have a distributed replicated volume, which bricks are the replication partners? Fred
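A hedged note on reading the topology from the CLI: 'gluster volume info' lists bricks in replica-set order, so for a 'replica 2' volume Brick1/Brick2 form the first replica pair, Brick3/Brick4 the next, and so on. For example:

    gluster volume info <volname>
    # Type: Distributed-Replicate, Number of Bricks: 3 x 2 = 6
    # -> Brick1+Brick2, Brick3+Brick4 and Brick5+Brick6 are the replica pairs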
2011 Jun 03
1
Suggestions welcome for expanded single array and redundancy
Hi, We have started evaluating GlusterFS and I am very impressed with what we have seen and tested. I am hoping someone can point me in the right direction or offer a solution. We have multiple servers which we wish to use for storage. We would like to have a single storage array accessible by client machines on the network. We would like to achieve N+2 redundancy but have the ability to keep
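A hedged sketch of one way to survive any two server failures with the features available then: a distributed-replicated volume with three-way replication (hostnames and brick paths are placeholders; the more space-efficient erasure-coded 'dispersed' volumes arrived only in later releases):

    # consecutive bricks form a replica set, so each file lands on three servers
    gluster volume create bigvol replica 3 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2 server3:/export/brick2
    gluster volume start bigvol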
2004 Nov 17
3
Samba share to access windows folders in linux.
Hi, I am trying to access folders on a Windows system from a Linux system using the command: smbmount '//a.b.c.d/CCViews/abcd/abcd_Linux_dev' '/root/pqrs/LinuxDev' -o username=abcd/<domain>,uid=abcd,gid=abcd This prompts for a password and I enter the correct domain password. It gives me the error: 21896: tree connect failed: ERRDOS - ERRnosuchshare (You specified an
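ERRnosuchshare generally means the share name in the UNC path was not found; the old smbmount only tree-connects to the share itself, so subdirectories have to be reached after mounting. A hedged sketch using the newer mount.cifs, with credentials and IDs as placeholders:

    # mount just the share, then browse into the subfolders
    mount -t cifs //a.b.c.d/CCViews /root/pqrs/LinuxDev \
        -o username=abcd,domain=<domain>,uid=abcd,gid=abcd
    ls /root/pqrs/LinuxDev/abcd/abcd_Linux_dev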
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
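A hedged sketch of the middle ground that is often suggested: aggregate the drives into a couple of RAID sets per server, export those as bricks, and pair bricks across the two servers so every replica spans both machines (the RAID layout and paths are assumptions):

    # e.g. two 12-drive RAID-6 sets per server mounted as /export/brick1 and /export/brick2
    gluster volume create datavol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2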
2011 May 09
1
Gluster text file configuration information?
Where can I find documentation about manual configuration of Gluster peers/volumes? All documentation seems to be about the gluster CLI. I would prefer manual configuration to facilitate automation via scripts (e.g. Puppet/Chef). I also read in this list that it is possible to configure Raid10 via text files... I would also like to experiment with this setup. Any related documents on how to do
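For reference, the CLI keeps its state under /var/lib/glusterd (older releases used /etc/glusterd), and hand-written 'volfiles' are the pre-CLI way of describing a volume. A minimal, hedged server volfile sketch in the legacy syntax, with paths as placeholders; hand-editing is generally discouraged in favour of the CLI:

    volume brick
      type storage/posix
      option directory /data/export
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.brick.allow *
      subvolumes brick
    end-volume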
2011 Oct 19
1
gluster map/reduce performance..
Hi all, I am trying to check the Map/Reduce performance of the Gluster file system. Mapper-side speed is quite good and it is sometimes faster than Hadoop's map job, but the reduce-side job is much slower than Hadoop. I analyzed the result and found that the primary reason for the slow speed is bad performance in the merging stage. Would you have any suggestions for this issue? FYI, check the blog
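No fix for the merge-stage slowness is given in the excerpt; a hedged sketch of the client-side performance options that are usually the first ones tuned for large sequential workloads (the values are illustrative, not recommendations):

    gluster volume set <volname> performance.cache-size 512MB
    gluster volume set <volname> performance.write-behind-window-size 4MB
    gluster volume set <volname> performance.io-thread-count 32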