similar to: Possible memory leak?

Displaying 20 results from an estimated 3000 matches similar to: "Possible memory leak?"

2013 Aug 22
2
Error when creating volume
Hello, I've removed a volume and I can't re-create it: running gluster volume create gluster-export gluster-6:/export gluster-5:/export gluster-4:/export gluster-3:/export fails with "/export or a prefix of it is already part of a volume". I've formatted the partition and reinstalled the 4 gluster servers and the error still appears. Any idea? Thanks.
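
The usual cause is stale GlusterFS extended attributes left on the brick directory; reformatting does not always remove them when the same directory path is reused. A minimal cleanup sketch, assuming /export is the brick path from the post and holds no data worth keeping:

    # run on each server, with /export being the brick directory:
    setfattr -x trusted.glusterfs.volume-id /export   # stale volume id from the old volume
    setfattr -x trusted.gfid /export                  # stale gfid
    rm -rf /export/.glusterfs                         # old metadata directory, if present
    # then retry the gluster volume create command above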
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi, When I upload files into the gluster volume, it replicates all the files to both gluster nodes. But the file size varies slightly (by 4-10KB) between nodes, which changes the md5sum of the file. Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on both bricks. What is the reason for this changed file size and how can
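
It may help to separate disk usage from content here: du -k reports allocated blocks, which can legitimately differ across bricks for identical data, while md5sum depends only on the bytes in the file. A quick comparison, with file.bin as a stand-in name:

    du -k file.bin                    # allocated blocks; filesystem- and brick-dependent
    du -k --apparent-size file.bin    # logical file size
    stat -c %s file.bin               # logical size in bytes
    md5sum file.bin                   # content checksum; unaffected by block allocation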
2013 Sep 06
1
Gluster native client very slow
Hello, I'm testing a two-node glusterfs distributed cluster (version 3.3.1-1) on Debian 7. The two nodes write on the same iSCSI volume on a SAN. When I try to write a 1G file with dd, I get the following results: NFS: 107 MB/s, Gluster client: 8 MB/s. My /etc/fstab on the client: /etc/glusterfs/cms.vol /data/cms glusterfs defaults 0 0. I'd like to use the gluster
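
For an apples-to-apples test it may help to mount the same volume both ways and force data to disk before timing; a sketch, assuming a server gluster-1 and volume name cms (stand-ins for the poster's volfile setup):

    # gluster native client mount (fstab form as in the post):
    mount -t glusterfs gluster-1:/cms /data/cms
    # NFS mount of the same volume (gluster NFS is NFSv3 over TCP):
    mount -t nfs -o vers=3,proto=tcp gluster-1:/cms /mnt/cms-nfs
    # write test with a final sync so the two numbers are comparable:
    dd if=/dev/zero of=/data/cms/testfile bs=1M count=1024 conv=fdatasync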
2013 Jun 03
2
recovering gluster volume || startup failure
Hello Gluster users: sorry for the long post, I have run out of ideas here; kindly let me know if I am looking at the right places for logs, and any suggested actions are welcome. Thanks. A sudden power loss caused a hard reboot - now the volume does not start. GlusterFS 3.3.1 on CentOS 6.1, transport: TCP, sharing the volume over NFS for VM storage (VHD files). Type: distributed - only 1 node (brick), XFS (LVM)
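
A few places to look and a commonly suggested recovery path, hedged; vol0 is a stand-in for the actual volume name and the device path is an example:

    # glusterd and brick logs:
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    less /var/log/glusterfs/bricks/*.log
    # check state, then force-start if a brick refuses to come up:
    gluster volume status vol0
    gluster volume start vol0 force
    # if the power loss damaged XFS itself, check the unmounted brick:
    xfs_repair -n /dev/vg0/brick_lv    # -n = report only, no modifications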
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes from some clients I can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
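
For reference, the 3.3-series heal commands involved (vol0 stands in for the volume name):

    gluster volume heal vol0 info          # entries still pending heal
    gluster volume heal vol0               # heal only files already known to need it
    gluster volume heal vol0 full          # full crawl, as forced in the post
    gluster volume heal vol0 info healed   # entries healed so far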
2013 Jun 13
2
incomplete listing of a directory, sometimes getdents loops until out of memory
Hello, We're having an issue with our distributed gluster filesystem: * gluster 3.3.1 servers and clients * distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes * xfs backend * nfs clients * nfs.enable-ino32: On * servers: CentOS 6.3, 2.6.32-279.14.1.el6.centos.plus.x86_64 * clients: CentOS 5.7, 2.6.18-274.12.1.el5 We have a directory containing 3,343 subdirectories. On
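
One way to observe the listing loop from an NFS client, a sketch assuming the mount point /mnt/gluster and affected directory dir (both stand-ins):

    # count getdents calls for a single listing; a healthy listing terminates
    # after a bounded number of calls instead of looping
    strace -f -c -e trace=getdents,getdents64 ls -f /mnt/gluster/dir > /dev/null
    # compare the entry count the client sees against the bricks:
    ls -f /mnt/gluster/dir | wc -l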
2013 Aug 30
1
cli & glusterd sm develop guide
Hi, I want to develop a CLI command to create snapshots, but the glusterd op sm & hook machinery looks complex. Are there development guides or docs about the cli & glusterd backend process? Thanks. --terrs
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster with 3 TB storage. 1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes. 2) Due to some network error, the replication stopped for some reason, but the application was accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
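
A hedged note: in 3.3 replica healing is done by the self-heal daemon (glustershd), which glusterd manages; glusterfsd is the brick server process and is not started by hand. Checking along those lines, with vol0 as a stand-in volume name:

    service glusterd start            # glusterd spawns the brick and shd processes
    gluster volume status vol0        # shows a "Self-heal Daemon" entry per node
    gluster volume heal vol0 info     # what still differs between the replicas
    gluster volume heal vol0 full     # force a full crawl after the network error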
2013 Jun 11
1
cluster.min-free-disk working?
Hi, I have a system consisting of four bricks, using 3.3.2qa3. I used the command gluster volume set glusterKumiko cluster.min-free-disk 20% When building the volume, two of the bricks were empty and two were full to just under 80%. Now, when syncing data (from a primary system) and using min-free-disk 20%, I thought new data would go to the two empty bricks, but gluster does not seem
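
If it helps: as far as we can tell, in the 3.3 series cluster.min-free-disk only influences where new file data lands once a brick crosses the threshold; the DHT hash still picks the initial brick. A quick check, with the volume name from the post and a stand-in brick path:

    gluster volume info glusterKumiko | grep min-free-disk   # confirm the option took
    df -h /path/to/brick                                     # fill level the threshold is compared against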
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug: [2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size [2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument) Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --
2013 Sep 13
1
glusterfs-3.4.1qa2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/ SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa2.tar.gz This release is made off jenkins-release-42 -- Gluster Build System
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello, Is it possible to take advantage of the copy-on-write implemented in btrfs if all bricks are stored on it? If not, is there any other mechanism (in glusterfs) which supports CoW? Regards -- Maciej Gałkiewicz Shelly Cloud Sp. z o. o., Sysadmin http://shellycloud.com/, macias at shellycloud.com KRS: 0000440358 REGON: 101504426
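
As far as we know, the gluster client (3.3/3.4) has no reflink/CoW passthrough, so CoW only happens when acting on the btrfs brick directly; a sketch with /data/brick as a stand-in brick path (snapshotting a brick under a live volume should be treated with care):

    # reflink copy works on the brick's btrfs directly:
    cp --reflink=always /data/brick/file /data/brick/file.cow
    # btrfs snapshots of the brick subvolume are also possible:
    btrfs subvolume snapshot /data/brick /data/brick-snap
    # a cp through the gluster mount goes over FUSE and is a full copy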
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All, 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0 can be downloaded from [1] and release notes are available at [2]. Upgrade instructions can be found at [3]. If you would like to propose bug fix candidates or minor features for inclusion in 3.4.1, please add them at [4]. 3.3.2 packages can be downloaded from [5]. A big note of thanks to everyone who helped in
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have met file permission problems after a gluster volume rebalance. Files got sticky permissions T--------- after the rebalance, which breaks our clients' normal fops unexpectedly. Has anyone seen this issue? Thank you for your help.
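
Zero-byte files with mode 1000 (the lone T in ls output) after a rebalance are normally DHT link files, which clients are not supposed to see; one way to inspect them on a brick, with /data/brick as a stand-in path:

    # DHT linkto files: size 0, mode 1000, carrying a dht.linkto xattr
    find /data/brick -type f -perm 1000 -size 0 \
      -exec getfattr -n trusted.glusterfs.dht.linkto {} \;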
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow while turning off a server, in terms of services etc., or can I just shut down the server? Thanks & Regards, Bobby Jacob
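
A commonly suggested sequence, hedged; vol0 stands in for the volume name:

    # make sure the replica is clean before taking the node down:
    gluster volume heal vol0 info
    # stop gluster processes on the node being shut down:
    service glusterd stop          # management daemon
    pkill glusterfsd               # brick processes
    pkill glusterfs                # nfs server / self-heal daemon
    # after boot, glusterd restarts the bricks and self-heal catches up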
2013 Dec 10
4
Structure needs cleaning on some files
Hi All, When reading some files we get this error: md5sum: /path/to/file.xml: Structure needs cleaning. In /var/log/glusterfs/mnt-sharedfs.log we see these errors: [2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory [2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
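
"Structure needs cleaning" is errno EUCLEAN, which usually comes from the brick's XFS rather than from gluster itself; a hedged first check on the server owning the affected brick (device and mount point below are stand-ins):

    dmesg | grep -i xfs                 # XFS errors logged by the kernel
    # with the brick unmounted:
    umount /bricks/brick1
    xfs_repair -n /dev/mapper/brick1    # -n reports problems without modifying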
2013 Jun 20
1
Rev Your (RDMA) Engines for the RDMA GlusterFest
It's that time again: we want to test the GlusterFS 3.4 beta before we unleash it on the world. Like our last test fest, we want you to put the latest GlusterFS beta through real-world usage scenarios that will show you how it compares to previous releases. Unlike last time, we want to focus this round of testing on InfiniBand and RDMA hardware. For a description of how to do this, see
2013 Oct 23
3
Samba vfs_glusterfs Quota Support?
Hi All, I'm setting up a gluster cluster that will be accessed via smb. I was hoping that the quotas would be visible over smb. I've configured a quota on the path itself:

    # gluster volume quota gfsv0 list
    path                     limit_set    size
    ------------------------------------------
    /shares/testsharedave    10GB         8.0KB

And I've
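
For reference, the usual shape of an smb.conf share backed by vfs_glusterfs; with this module the path is interpreted relative to the gluster volume root, and the share name and volfile server below are stand-ins:

    [testsharedave]
        path = /shares/testsharedave
        vfs objects = glusterfs
        glusterfs:volume = gfsv0
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-gfsv0.log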
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, but there are 1-2 crashed clients out of 120 clients in total every day. Below is the gdb result:

    (gdb) where
    #0 0x0000003267432885 in raise () from /lib64/libc.so.6
    #1 0x0000003267434065 in abort () from /lib64/libc.so.6
    #2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
    #3 0x00000032674750c6 in malloc_printerr () from
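
Since the crash is sporadic, a core dump may be the only way to capture it; a minimal sketch for enabling cores on a client host (paths and the PID suffix are stand-ins, and glusterfs-debuginfo should be installed for readable symbols):

    mkdir -p /var/crash
    echo '/var/crash/core.%e.%p' > /proc/sys/kernel/core_pattern
    ulimit -c unlimited               # in the shell that starts the mount
    # after the next crash:
    gdb /usr/sbin/glusterfs /var/crash/core.glusterfs.12345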
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VMs), 2 NAS boxes running Ubuntu (13.04 and 13.10). Since Gluster NFS does server-side replication, I'll put
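
Gluster's built-in NFS server speaks NFSv3 over TCP only, so pinning the client options avoids v4/UDP negotiation surprises; a sketch with nas1 and vmstore as stand-in names:

    # fstab entry on a KVM host:
    # nas1:/vmstore  /var/lib/libvirt/images  nfs  vers=3,proto=tcp,_netdev  0 0
    mount -t nfs -o vers=3,proto=tcp nas1:/vmstore /var/lib/libvirt/images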