Displaying 20 results from an estimated 6000 matches similar to: "Replicated and Non Replicated Bricks on Same Partition"
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello,
I installed GlusterFS one month ago, and replication has many issues:
First of all, our infrastructure: 2 storage arrays of 8Tb in replication
mode... We have our backup files on these arrays, so 6Tb of data.
I want to replicate the data onto the second storage array, so I use this
command:
# gluster volume rebalance REP_SVG migrate-data start
And gluster starts to replicate, in 2 weeks
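A long-running migration like this can be monitored with the rebalance status subcommand; a sketch (REP_SVG is the volume named in the post; exact syntax may differ slightly between gluster releases):

```shell
# Check progress of the data migration started above
gluster volume rebalance REP_SVG status
```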
2013 Jul 13
1
mkfs.btrfs out of memory failure on lvm2 thinp volume
https://bugzilla.redhat.com/show_bug.cgi?id=984236
kernel-3.10.0-1.fc20.x86_64
Should it be possible to mkfs.btrfs on a virtual LV backed by LVM thinp? I can successfully create an XFS fs on the same virtual device. The virtual LV is 16TB in size; I haven't tried anything smaller yet to see if that's the problem. There is a dmesg attached to the bug report.
Just because it
2013 Jun 13
2
incomplete listing of a directory, sometimes getdents loops until out of memory
Hello,
We're having an issue with our distributed gluster filesystem:
* gluster 3.3.1 servers and clients
* distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes
* xfs backend
* nfs clients
* nfs.enable-ino32: On
* servers: CentOS 6.3, 2.6.32-279.14.1.el6.centos.plus.x86_64
* clients: CentOS 5.7, 2.6.18-274.12.1.el5
We have a directory containing 3,343 subdirectories. On
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup, each node with a 100TB brick (total
300TB, usable 100TB due to replica factor 3).
We would like to expand the existing volume by adding another 3 nodes, but
each will only have a 50TB brick. I think this is possible, but will it
affect gluster performance, and if so, by how much? Assuming we run a
rebalance with the force option, will this distribute the existing data
2011 Jun 29
1
Possible new bug in 3.1.5 discovered
"May you live in interesting times"
Is this a curse or a blessing? :)
I've just tested a 3.1.5 GlusterFS native client against a 3.1.3 storage pool using this volume:
Volume Name: pfs-rw1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs13-pfs1:/export/read-write/g01
2009 Apr 28
1
glusterfs and samba (file-max limit reached)
Recently a gluster volume I set up got mounted on a server that exports it
through samba. It appears to work up to a point. Unexpectedly, under heavy
usage the nodes reach the max-open-file-descriptors limit
really easily.
Does anybody else have experience with this? Is that kind of usage supported?
Currently one node seems to have surpassed about 3M open files, even though
the samba server claims to have
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi,
Yes this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume and run rebalance with the start force option.
Which version of gluster are you running (we fixed a bug around this a
while ago)?
Regards,
Nithya
On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote:
> We currently have a 3 node gluster setup each has a 100TB brick (total
> 300TB,
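The steps suggested in this reply can be sketched as follows ('myvol' is a hypothetical volume name; run these on one of the gluster nodes):

```shell
# Weight rebalancing by brick size so the new, smaller bricks
# receive proportionally less data than the larger ones
gluster volume set myvol cluster.weighted-rebalance on

# Redistribute existing data across all bricks
gluster volume rebalance myvol start force
```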
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi,
When I upload files into the gluster volume, it replicates all the files to both gluster nodes. But the file size varies slightly (by 4-10KB), which changes the md5sum of the file.
Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4.
This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size and how can
2012 Sep 18
4
cannot create a new volume with a brick that used to be part of a deleted volume?
Greetings,
I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
volume on two bricks. This morning I deleted it successfully:
########
[root at farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to
continue? (y/n) y
Stopping volume gv0 has been successful
[root at farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian, I'm able to start the volume
and now mount it via mount -t gluster and I can see everything. I am
still seeing the following error in /var/log/glusterfs/nfs.log
[2011-02-04 13:09:16.404851] E
[client-handshake.c:1079:client_query_portmap_cbk]
bhl-volume-client-98: failed to get the port number for remote
subvolume
[2011-02-04 13:09:16.404909] I
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi,
We've got a huge problem on our Sun Grid Engine cluster with glusterfs
3.3.0. Could somebody help me?
Based on my understanding, if a folder is removed and recreated on
another client node, a program that tries to create a new file under the
folder fails very often.
We partially fixed this problem by running "ls" on the folder before doing
anything in our command; however, Sun Grid Engine
2013 Dec 10
4
Structure needs cleaning on some files
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W
[client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote
operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
[client-rpc-fops.c:526:client3_3_stat_cbk]
2015 Nov 10
2
[PATCH] daemon: lvm: Only return public LVs from guestfs_lvs API (RHBZ#1278878).
When a disk image uses LVM thinp (thin provisioning), the guestfs_lvs
API would return the thinp pools. This confused other APIs because
thinp pools don't have corresponding /dev/VG/LV device nodes.
Filter the LVs that are returned using "lv_role=public".
Thanks: Fabian Deutsch
---
daemon/lvm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/daemon/lvm.c
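The effect of the patch can also be checked from the command line with the lvm2 reporting tools; a sketch, assuming a host with an LVM setup that includes a thin pool:

```shell
# Without filtering, thin pools show up alongside normal LVs
lvs -o lv_name,vg_name,lv_role

# With the same selection the patch uses, only public LVs remain
lvs -o lv_name,vg_name,lv_role -S 'lv_role=public'
```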
2012 Jan 10
1
Howto add bricks to a replicated Volume
Hi,
I am running a three-node replicated Volume and need to add more
Bricks. From what I've read so far, this is not really possible, so
the question is how it should/can be done.
Can I create a replicated Volume with 10000 Bricks where 9997 are
missing? Is there a way that I didn't find so far?
The Gluster Volume is used to store images for a Webshop in the Amazon
Cloud and I
2013 Sep 05
1
NFS can't be used by ESXi with a Striped Volume
After some testing, I can confirm that ESXi can't use a Striped-Replicate
volume over GlusterFS's NFS.
But it does succeed on Distributed-Replicate.
Anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks Vijay !
>
> It runs successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use ESXi with GlusterFS's NFS export.
>
> Many
2018 Apr 26
0
Problem adding replicated bricks on FreeBSD
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger <mark.staudinger at nyi.net>
wrote:
> Hi Folks,
> I'm trying to debug an issue that I've found while attempting to qualify
> GlusterFS for potential distributed storage projects on the FreeBSD-11.1
> server platform - using the existing package of GlusterFS v3.11.1_4
> The main issue I've encountered is that I
2012 Jun 01
3
Striped replicated volumes in Gluster 3.3.0
Hi all,
I'm very happy to see the release of 3.3.0. One of the features I was
waiting for are striped replicated volumes. We plan to store KVM
images (from a OpenStack installation) on it.
I read through the docs and found the following phrase: "In this
release, configuration of this volume type is supported only for Map
Reduce workloads."
What does that mean exactly? Hopefully not,
2018 Apr 25
3
Problem adding replicated bricks on FreeBSD
Hi Folks,
I'm trying to debug an issue that I've found while attempting to qualify
GlusterFS for potential distributed storage projects on the FreeBSD-11.1
server platform - using the existing package of GlusterFS v3.11.1_4
The main issue I've encountered is that I cannot add new bricks while
setting/increasing the replica count.
If I create a replicated volume "poc"
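For reference, increasing the replica count while adding a brick is normally done in a single add-brick call; a sketch with hypothetical host and path names (the volume name "poc" is from the post above):

```shell
# Grow volume 'poc' from replica 1 to replica 2 by adding one brick
gluster volume add-brick poc replica 2 server2:/data/brick-poc
```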
2011 Aug 24
1
Adding/Removing bricks/changing replica value of a replicated volume (Gluster 3.2.1, OpenSuse 11.3/11.4)
Hi!
Until now, I use Gluster in a 2-server setup (volumes created with
replica 2).
Upgrading the hardware, it would be helpful to extend to volume to
replica 3 to integrate the new machine and adding the respective brick
and to reduce it later back to 2 and removing the respective brick when
the old machine is cancelled and not used anymore.
But it seems that this requires to delete and
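In later gluster releases, changing the replica count in place is supported through add-brick/remove-brick with an explicit replica argument; a sketch with hypothetical volume, host, and path names:

```shell
# Extend from replica 2 to replica 3 by adding the new machine's brick
gluster volume add-brick myvol replica 3 new-server:/export/brick

# Later, shrink back to replica 2 by removing the old machine's brick
gluster volume remove-brick myvol replica 2 old-server:/export/brick force
```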
2011 Sep 07
2
Gluster-users Digest, Vol 41, Issue 16
Hi Phil,
we had the same problem; try to compile with debug options.
Yes, this sounds strange, but it helps when you are using SLES: the
glusterd works OK and you can start to work with it.
Just put
export CFLAGS='-g3 -O0'
between %build and %configure in the glusterfs spec file.
But be warned: don't use it with important data, especially when you are
planning to use the replication feature,
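In the spec file, the suggested placement would look roughly like this (a sketch of only the relevant fragment, not the full glusterfs spec):

```
%build
export CFLAGS='-g3 -O0'
%configure
make %{?_smp_mflags}
```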