similar to: How to expand Replicated Volume

Displaying 20 results from an estimated 4000 matches similar to: "How to expand Replicated Volume"

2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam, Can I use this command "gluster vol add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1" on the existing file servers 01 and 02, without adding new servers? Is it OK for expanding the volume? Thanks for your support. Regards, Giang 2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>: > Hi, > You can use add-brick
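A minimal sketch of the expansion flow being asked about, reusing the brick paths from the message and assuming the new brick directories already exist and are empty on both existing servers; the rebalance afterwards spreads existing data onto the new replica pair:
# gluster volume add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
# gluster volume rebalance vol1 start
# gluster volume rebalance vol1 status
Once the rebalance completes, df on the mount point should show the added capacity of the new brick pair.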
2017 Sep 13
3
glusterfs expose iSCSI
Hi all, I want to configure glusterfs to expose an iSCSI target. I followed this article https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/ but when I install tcmu-runner it doesn't work. I set this up on CentOS 7 and installed tcmu-runner from rpm. When I run targetcli, it does not show *user:glfs* and *user:gcow* */>* ls o- /
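In this situation the first thing worth checking is whether tcmu-runner is actually running, since targetcli only lists the user: backstores when the daemon and its handlers are loaded; a quick sketch, assuming the default CentOS 7 service name:
# systemctl status tcmu-runner
# systemctl enable tcmu-runner && systemctl start tcmu-runner
# targetcli ls /backstores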
2017 Sep 13
0
glusterfs expose iSCSI
On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr <ltrgiang86 at gmail.com> wrote: > Hi all > Hi GiangCoi, The good news is that we now have gluster-block [1], which lets you configure block storage with gluster very easily. gluster-block takes care of all the targetcli and tcmu-runner configuration for you; all you need as a prerequisite is a gluster volume. And the sad part is
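A rough sketch of the gluster-block workflow mentioned above, assuming a replica volume named blockvol already exists and 192.168.1.11 is one of its nodes; the exact options vary between builds, so check gluster-block help on your version:
# gluster-block create blockvol/block1 ha 1 192.168.1.11 1GiB
# gluster-block list blockvol
# gluster-block info blockvol/block1
The created block device is then exported as an iSCSI target that the initiator side can log into in the usual way.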
2007 Jul 13
1
do we support zonepath on UFS formatted ZFS volume
Hi ZFS experts, From the ZFS release notes: "Solaris 10 6/06 and Solaris 10 11/06: Do Not Place the Root File System of a Non-Global Zone on ZFS. The zonepath of a non-global zone should not reside on ZFS for this release. This action might result in patching problems and possibly prevent the system from being upgraded to a later Solaris 10 update release." So my question is, do we
2018 Apr 10
0
glusterfs disperse volume input output error
Hi, Could you help me? I have a problem with a file on a disperse volume. When I try to read it from the mount point I receive an error: # md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2 md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error Configuration and status of the volume is: # gluster volume info vol1 Volume Name: vol1 Type: Disperse Volume ID:
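For an Input/output error like this on a disperse volume, a reasonable first pass is to confirm that enough bricks are online and check whether the file is pending heal; a short sketch using the volume name from the message:
# gluster volume status vol1
# gluster volume heal vol1 info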
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi, I've run into trouble after a few minutes of glusterfs operations. I set up a 4-node replica 4 storage, with 2 bricks on every server: # gluster volume create vms replica 4 transport tcp 192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1 192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2 192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2 I started copying files with
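When one node starts failing shortly after load like this, a common first step is to see what the surviving peers report and then read the brick logs on the affected server; a sketch with the volume name from the message (log file names follow the usual /var/log/glusterfs layout and may differ on your install):
# gluster peer status
# gluster volume status vms
# tail -n 100 /var/log/glusterfs/bricks/srv-vol1.log /var/log/glusterfs/bricks/srv-vol2.log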
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > I applied the workaround for this bug and now df shows the right size: > > That is good to hear. > [root at stor1 ~]# df -h > Filesystem Size Used Avail Use% Mounted on > /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 > /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > My initial setup was composed of 2 similar nodes: stor1data and stor2data. > A month ago I expanded both volumes with a new node: stor3data (2 bricks > per volume). > Of course, then to add the new peer with the bricks I did the 'balance > force' operation.
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
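One way to gather that check from every node in one go (hostnames taken from this thread, adjust to your peer list); for the bug in [1] the tell-tale sign is a shared-brick-count greater than 1 for bricks that actually sit on separate filesystems:
# for h in stor1data stor2data stor3data; do echo "== $h =="; ssh $h 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'; done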
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size: [root at stor1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1 stor1data:/volumedisk0 101T 3,3T 97T 4% /volumedisk0 stor1data:/volumedisk1
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi, A few days ago my glusterfs configuration was all working fine. Today I realized that the total size reported by the df command has changed and is smaller than the aggregated capacity of all the bricks in the volume. I checked that all volumes are in a healthy state, all the glusterd daemons are running, and there are no errors in the logs; however, df shows a wrong total size. My configuration for one volume:
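To quantify the mismatch it helps to compare the brick list and the per-brick filesystem sizes with what df reports on the gluster mount; a sketch using the volume and mount names that appear later in this thread (for a pure distribute volume the mount size should be roughly the sum of the brick filesystems):
# gluster volume info volumedisk0
# df -h /mnt/glusterfs/vol0
# df -h /volumedisk0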
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Then, to bring the new peer's bricks into use, I ran the 'rebalance force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone config (see the long problem description for details). After the initial boot of the zone
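For reference, delegating every zvol under a dataset to a zone with a device match rule looks roughly like this; the pool, dataset and zone names here are made up for illustration:
# zonecfg -z zone1 'add device; set match=/dev/zvol/dsk/tank/zone1vols/*; end'
# zoneadm -z zone1 reboot
The minor-number problem described above means the device nodes behind that match pattern can change across reboots, which is what breaks the zone's view of its volumes.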
2012 Feb 03
0
[LLVMdev] LLVM version with working Alpha backend
Hi Giang, Given that the community deprecated the Alpha backend, I'm doubtful anyone would be able to point you in the right direction. Have you iteratively tried the different versions of LLVM (i.e., 2.9, 2.8, 2.7, on down the line)? Chad On Feb 3, 2012, at 12:34 PM, Giang Hoang <ghoang84 at gmail.com> wrote: > Hi, > > For my work, I want to use LLVM to compile SPEC 2k
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi, Yes this is possible. Make sure you have cluster.weighted-rebalance enabled for the volume and run rebalance with the start force option. Which version of gluster are you running (we fixed a bug around this a while ago)? Regards, Nithya On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote: > We currently have a 3 node gluster setup each has a 100TB brick (total > 300TB,
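Put together, the reply above corresponds to something like the following (the volume, node and brick names here are placeholders):
# gluster volume set myvol cluster.weighted-rebalance on
# gluster volume add-brick myvol replica 3 node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
# gluster volume rebalance myvol start force
# gluster volume rebalance myvol status
With weighted rebalance enabled, the smaller 50TB bricks should receive proportionally less data than the 100TB ones, so the new replica set should not fill up first.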
2010 Oct 16
1
[LLVMdev] llvm-gcc as Alpha cross compiler
Thanks, Andrew. I would like to clarify what I am trying to do: I want to use llvm-gcc on x86 Linux to compile C programs into Alpha binaries. Giang On Fri, Oct 15, 2010 at 5:41 PM, Andrew Lenharth <andrewl at lenharth.org> wrote: > llvm-gcc doesn't compile *on* alpha (128-bit fp and int issues). I > haven't tried it as a cross compiler. > > Andrew > > On Fri, Oct
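For an LLVM release old enough to still ship the Alpha backend (roughly 2.7 through 2.9), the usual two-step cross-compile flow was along these lines; this is from memory and the backend was already bit-rotting, so treat it only as a starting point:
# llvm-gcc -emit-llvm -c hello.c -o hello.bc
# llc -march=alpha hello.bc -o hello.s
The resulting assembly still needs an Alpha binutils cross toolchain (assembler and linker) to produce a runnable binary.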
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup; each node has a 100TB brick (total 300TB, usable 100TB due to the replica factor of 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if so, by how much? Assuming we run a rebalance with the force option, will this distribute the existing data
2013 Jan 03
0
Resolve brick failed in restore
Hi, I have a lab with 10 machines acting as storage servers for some compute machines, using glusterfs to distribute the data as two volumes. Created using: gluster volume create vol1 192.168.10.{221..230}:/data/vol1 gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2 and mounted on the client and server machines using: mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1 mount
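For the replicated vol2, the usual way to bring back a brick whose data was lost is replace-brick followed by self-heal; a sketch with a hypothetical replacement path on the failed server (exact replace-brick options vary with the gluster version):
# gluster volume status vol2
# gluster volume replace-brick vol2 192.168.10.225:/data/vol2 192.168.10.225:/data/vol2_new commit force
# gluster volume heal vol2 info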