similar to: NFS can't be used by ESXi with Striped Volume

Displaying 20 results from an estimated 1000 matches similar to: "NFS can't be used by ESXi with Striped Volume"

2013 Sep 13
1
glusterfs-3.4.1qa2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/ SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa2.tar.gz This release is made from jenkins-release-42 -- Gluster Build System
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi, When I upload files into the gluster volume, it replicates all the files to both gluster nodes, but the file size varies slightly (by 4-10 KB), which changes the md5sum of the file. Command used to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size and how can
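A quick way to investigate is to compare size and checksum of the same file directly on each brick; a minimal sketch, where the hostnames and brick path are placeholders, not taken from the thread (note that du -k reports allocated blocks, which can legitimately differ between bricks even when the md5sum of the contents is identical):

  # node1/node2 and /export/brick1/... are hypothetical examples
  ssh node1 "du -k /export/brick1/data/file.bin; md5sum /export/brick1/data/file.bin"
  ssh node2 "du -k /export/brick1/data/file.bin; md5sum /export/brick1/data/file.bin"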
2013 Jun 26
5
[Bug 830] New: Regarding iptables affecting server performance
https://bugzilla.netfilter.org/show_bug.cgi?id=830 Summary: Regarding iptables affecting server performance Product: iptables Version: unspecified Platform: All OS/Version: RedHat Linux Status: NEW Severity: major Priority: P5 Component: iptables AssignedTo: netfilter-buglog at lists.netfilter.org ReportedBy: higkoohk
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers: blade6 and blade7 (there is another peer, blade1, but with no volumes). The volume seems OK, but I cannot mount it over NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
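For reference, Gluster's built-in NFS server speaks NFSv3 over TCP, so a client mount generally looks like the sketch below; the volume name stor1 is only an assumption based on the mount directory shown in the thread:

  # vers=3 is required for the built-in gluster NFS server; stor1 is an assumed volume name
  mount -t nfs -o vers=3,proto=tcp blade6:/stor1 /mnt/stor1
  # confirm the NFS server process is online for the volume
  gluster volume status stor1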
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All, 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0 can be downloaded from [1] and release notes are available at [2]. Upgrade instructions can be found at [3]. If you would like to propose bug fix candidates or minor features for inclusion in 3.4.1, please add them at [4]. 3.3.2 packages can be downloaded from [5]. A big note of thanks to everyone who helped in
2013 Dec 10
4
Structure needs cleaning on some files
Hi All, When reading some files we get this error: "md5sum: /path/to/file.xml: Structure needs cleaning". In /var/log/glusterfs/mnt-sharedfs.log we see these errors: [2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory [2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
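The log prefix 1-testvolume-client-0 suggests a volume named testvolume; if it is a replicated volume, the usual first checks are sketched below (the brick path is a placeholder):

  # list pending heals and any split-brain entries (replicate volumes only)
  gluster volume heal testvolume info
  gluster volume heal testvolume info split-brain
  # inspect the file directly on a brick, including its gluster xattrs
  stat /export/brick/path/to/file.xml
  getfattr -d -m . -e hex /export/brick/path/to/file.xml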
2017 Jul 11
2
Extremely slow du
Hi Kashif, Thank you for your feedback! Do you have some data on the nature of performance improvement observed with 3.11 in the new setup? Adding Raghavendra and Poornima for validation of configuration and help with identifying why certain files disappeared from the mount point after enabling readdir-optimize. Regards, Vijay On 07/11/2017 11:06 AM, mohammad kashif wrote: > Hi Vijay and
2013 Jun 13
2
incomplete listing of a directory, sometimes getdents loops until out of memory
Hello, We're having an issue with our distributed gluster filesystem: * gluster 3.3.1 servers and clients * distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes * xfs backend * nfs clients * nfs.enable-ino32: On * servers: CentOS 6.3, 2.6.32-279.14.1.el6.centos.plus.x86_64 * clients: CentOS 5.7, 2.6.18-274.12.1.el5 We have a directory containing 3,343 subdirectories. On
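For context, nfs.enable-ino32 is set per volume, and the looping readdir can be observed from the client side; a small sketch with a placeholder volume name:

  # myvol is a placeholder volume name
  gluster volume set myvol nfs.enable-ino32 on
  gluster volume info myvol
  # on the NFS client, watch whether a hung ls keeps issuing getdents calls
  strace -e trace=getdents -p <pid-of-ls>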
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have met file permission problems after gluster volume rebalance. Files got sticky permissions T--------- after rebalance, which unexpectedly breaks our clients' normal fops. Is anyone aware of this issue? Thank you for your help.
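Those T--------- entries are normally DHT link files that rebalance leaves behind; a hedged sketch for spotting them on a brick (the brick path is hypothetical):

  # DHT linkto files are zero-byte, sticky-bit-only files carrying this xattr
  find /export/brick -type f -perm -1000 -size 0c \
    -exec getfattr -n trusted.glusterfs.dht.linkto {} \;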
2017 Nov 09
1
Adding a slack for communication?
@Amye +1 for this great idea, I am 100% for it. @Vijay, for archiving purposes maybe it would be possible to use a free service such as https://slackarchive.io/ BR, Martin > On 9 Nov 2017, at 00:09, Vijay Bellur <vbellur at redhat.com> wrote: > > > > On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda <amye at redhat.com <mailto:amye at
2017 Jun 18
1
Extremely slow du
Hi Mohammad, A lot of time is being spent in addressing metadata calls, as expected. Could you consider testing with 3.11, which has the md-cache [1] and readdirp [2] improvements? Adding Poornima and Raghavendra, who worked on these enhancements, to help out further. Thanks, Vijay [1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ [2] https://github.com/gluster/glusterfs/issues/166 On
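The md-cache improvements in [1] are switched on per volume; a minimal sketch, assuming the atlasglust volume from this thread and the commonly documented option names (verify them against the release notes for your version):

  gluster volume set atlasglust features.cache-invalidation on
  gluster volume set atlasglust features.cache-invalidation-timeout 600
  gluster volume set atlasglust performance.stat-prefetch on
  gluster volume set atlasglust performance.cache-invalidation on
  gluster volume set atlasglust performance.md-cache-timeout 600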
2017 Jun 12
2
Extremely slow du
Hi Vijay I have enabled client profiling and used this script https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh to extract data. I am attaching output files. I don't have any reference data to compare with my output. Hopefully you can make some sense out of it. On Sat, Jun 10, 2017 at 10:47 AM, Vijay Bellur <vbellur at redhat.com> wrote: > Would it be
2017 Jun 16
0
Extremely slow du
Hi Vijay, Did you manage to look into the gluster profile logs? Thanks, Kashif On Mon, Jun 12, 2017 at 11:40 AM, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi Vijay > > I have enabled client profiling and used this script > https://github.com/bengland2/gluster-profile-analysis/blob/ > master/gvp-client.sh to extract data. I am attaching output files. I >
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me. How can I keep up with news about new GlusterFS features? On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote: > > Halo replication [1] could be of interest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo
2013 Dec 09
1
[CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel
Hi, I'm using glusterfs version 3.4.0 from gluster-epel[1]. Recently I found out that there's a glusterfs version in the base repo (3.4.0.36rhs). So, is it recommended to use that version instead of the gluster-epel version? If yes, is there a guide for making the switch with no downtime? When running yum update glusterfs, I got the following error[2]. I found a guide[3]: > If you have replicated or
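For a replicated volume, switching packages is usually done one node at a time so the remaining replica keeps serving clients; a rough sketch only, for CentOS 6, with the volume name left as a placeholder:

  # on one node at a time
  service glusterd stop
  pkill glusterfsd   # brick processes are not stopped by glusterd alone
  yum update glusterfs glusterfs-server glusterfs-fuse
  service glusterd start
  # wait for self-heal to finish before touching the next node
  gluster volume heal <volname> info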
2004 Aug 18
0
[Bug 1599] New: copy-unsafe-links doesn't take effect
https://bugzilla.samba.org/show_bug.cgi?id=1599 Summary: copy-unsafe-links doesn't take effect Product: rsync Version: 2.6.2 Platform: x86 OS/Version: Linux Status: NEW Severity: normal Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: faris.xiao@haoxi.com
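For reference, the option in question copies symlinks that point outside the source tree as the files they refer to; a short usage example with illustrative paths:

  # "unsafe" symlinks (targets outside /src/dir) are copied as real files
  rsync -a --copy-unsafe-links /src/dir/ /dst/dir/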
2009 May 19
2
IMAP can't read the mail that Foxmail sends
Hi everyone, I use Dovecot only as the IMAP server; for SMTP and POP3 I use Apache James. The Dovecot version: 1.1.14 And the configuration is: protocols: imap ssl_disable: yes login_dir: /var/run/dovecot/login login_executable: /usr/libexec/dovecot/imap-login mail_location: mbox:/usr/local/edupass/mail/repo/%u:INBOX=/usr/local/edupass/mail/%u auth default: passdb:
2017 Jun 09
2
Extremely slow du
Hi Vijay, Thanks for your quick response. I am using gluster 3.8.11 on CentOS 7 servers (glusterfs-3.8.11-1.el7.x86_64). Clients are CentOS 6, but I tested with a CentOS 7 client as well and the results didn't change. gluster volume info: Volume Name: atlasglust Type: Distribute Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b Status: Started Snapshot Count: 0 Number of Bricks: 5 Transport-type: tcp
2017 Jun 10
0
Extremely slow du
Would it be possible for you to turn on client profiling and then run du? Instructions for turning on client profiling can be found at [1]. Providing the client profile information can help us figure out where the latency could be stemming from. Regards, Vijay [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling On Fri, Jun 9, 2017 at
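The client-side profiling described in [1] goes through the io-stats translator; a sketch of the typical steps, assuming the atlasglust volume from this thread and a FUSE mount at /mnt/atlasglust (the mount path and dump file are placeholders, and the exact semantics of the dump path vary by release):

  gluster volume set atlasglust diagnostics.latency-measurement on
  gluster volume set atlasglust diagnostics.count-fop-hits on
  # run the du workload, then dump the accumulated client-side stats from the mount point
  setfattr -n trusted.io-stats-dump -v /tmp/io-stats-after.txt /mnt/atlasglust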
2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open