Displaying 20 results from an estimated 1000 matches similar to: "Expand distributed replicated volume with new set of smaller bricks"
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi,
Yes this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume and run rebalance with the start force option.
Which version of gluster are you running (we fixed a bug around this a
while ago)?
Regards,
Nithya
On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote:
> We currently have a 3 node gluster setup each has a 100TB brick (total
> 300TB,
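A minimal sketch of the commands described above, assuming a volume named
myvol (the name is a placeholder):

# gluster volume set myvol cluster.weighted-rebalance on   # weight file placement by brick size
# gluster volume rebalance myvol start force               # force also migrates files onto bricks with less free space
# gluster volume rebalance myvol status                    # watch progress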
2013 Apr 29
1
Replicated and Non Replicated Bricks on Same Partition
Gluster-Users,
We currently have a 30 node Gluster Distributed-Replicate 15 x 2
filesystem. Each node has a ~20TB xfs filesystem mounted to /data and
the bricks live on /data/brick. We have been very happy with this
setup, but are now collecting more data that doesn't need to be
replicated because it can be easily regenerated. Most of the data lives
on our replicated volume and is
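One way to handle that (a sketch, not taken from the thread; the volume
name "scratch" and the brick directory are made up) is a second,
pure-distribute volume whose bricks sit on the same /data filesystem:

# mkdir /data/brick-scratch                    # on every node, next to /data/brick
# gluster volume create scratch transport tcp \
    node{01..30}:/data/brick-scratch           # no "replica" keyword = plain distribute
# gluster volume start scratch

Both volumes then draw on the same underlying filesystem, so their
free-space numbers overlap; quotas or separate partitions are worth
considering.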
2018 May 26
2
glusterfs as VMware datastore in production
> Hi,
>
> Does anyone have glusterfs as a VMware datastore working in production, in a
> real-world case? How do you serve the glusterfs cluster? As iSCSI, NFS?
>
>
Hi,
I am using glusterfs 3.10.x for a VMware ESXi 5.5 NFS datastore.
Our environment is:
- 4-node Supermicro servers (50TB each, 4TB NL-SAS drives, LSI 9260-8i)
- 100TB total service volume
- 10G Storage Network and Service
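For readers wanting to reproduce a setup like this, a rough sketch of
wiring a gluster volume up as an ESXi NFS datastore (the volume name
vmstore, host gluster1 and datastore name are placeholders; the poster's
exact configuration may differ):

# gluster volume set vmstore nfs.disable off   # built-in gNFS server, as used in the 3.10 era
# showmount -e gluster1                        # confirm the export is visible
(then on the ESXi host)
# esxcli storage nfs add --host=gluster1 --share=/vmstore --volume-name=vmstore-ds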
2018 May 28
0
glusterfs as VMware datastore in production
Nice to read this. Any particular reason to *not* run the OS image in the
glusterfs cluster?
Thanks
On 05/26/2018 02:56 PM, ??? wrote:
>
> Hi,
>
> Does anyone have glusterfs as a VMware datastore working in
> production, in a real-world case? How do you serve the glusterfs
> cluster? As iSCSI, NFS?
>
>
> Hi,
>
> I am using glusterfs 3.10.x for VMware ESXi
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi,
We have a cluster with 130 compute nodes and NAS-type central
storage under gluster (3 bricks, ~50TB). When we run a large number
of ocean models we can run into bottlenecks, with many jobs trying
to write to our central storage.
It was suggested to us that we could also use gluster to
unite the disks on the compute nodes into a single "disk"
in which files would be written
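A sketch of what that suggestion amounts to (node names, brick path and
volume name are illustrative):

# gluster volume create scratch transport tcp \
    node{001..130}:/local/gluster-brick        # one brick per compute node, plain distribute
# gluster volume start scratch
# mount -t glusterfs node001:/scratch /scratch # on each compute node

Writes are then spread across the compute nodes' local disks instead of
all landing on the central storage.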
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
Hello,
I am still working on recovering from a few failed OS hard drives on my gluster storage and have been removing and re-adding bricks quite a bit. I noticed last night that some of the directories are not visible when I access them through the client, but are still on the brick. For example:
Client:
# ls /scratch/dw
Ethiopian_imputation HGDP Rolwaling Tibetan_Alignment
Brick:
#
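A common first pass for this kind of client/brick mismatch (a sketch;
VOLNAME, the brick path and <missing_dir> are placeholders):

# getfattr -d -m . -e hex /path/to/brick/scratch/dw   # on the brick: inspect gfid and dht/afr xattrs
# gluster volume heal VOLNAME info                    # any entries pending self-heal?
# stat /scratch/dw/<missing_dir>                      # on the client: an explicit named lookup can make the entry reappear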
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello,
I installed GlusterFS one month ago, and replication has many issues.
First of all, our infrastructure: 2 storage arrays of 8TB in replication
mode... We have our backup files on these arrays, so 6TB of data.
I want to replicate the data onto the second storage array, so I use this
command:
# gluster volume rebalance REP_SVG migrate-data start
And gluster started to replicate; in 2 weeks
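A note for readers of this thread: "rebalance ... migrate-data" moves
files between distribute subvolumes; it does not populate a replica.
Syncing the second array of a replica pair is done by self-heal, roughly
(the mount point is a placeholder; "volume heal" only exists from 3.3 on):

# gluster volume heal REP_SVG full                                   # gluster >= 3.3
# find /mnt/REP_SVG -noleaf -print0 | xargs --null stat > /dev/null  # 3.2-era way to trigger self-heal by crawling the mount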
2012 Oct 22
1
How to add new bricks to a volume?
Hi, dear glfs experts:
I've been using glusterfs (version 3.2.6) for months; so far it works very
well. Now I'm facing the problem of adding two new bricks to an existing
replicated (rep=2) volume, which consists of only two bricks and is
mounted by multiple clients. Can I just use the following commands to add
the new bricks without stopping the services that are using the volume, as
mentioned?
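The usual sequence looks like this (a sketch; server and brick names are
placeholders, and bricks must be added in multiples of the replica count):

# gluster volume add-brick VOLNAME server3:/export/brick server4:/export/brick
# gluster volume rebalance VOLNAME start       # or "fix-layout start" to spread only newly created files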
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below is the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files   size        scanned     failures    skipped     status      run time in h:m:s
---------   ----------------   ---------   ---------   ---------   ---------   ---------   -----------------
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below is the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files   size        scanned     failures    skipped     status      run time in h:m:s
---------   ----------------   ---------   ---------   ---------   ---------   ---------   -----------------
2013 Oct 21
1
DFS share: free space?
Hi,
is it possible to use DFS and show the correct values for free space?
I set up a DFS-share located on filesystem1 (size 50GB) and linked shares of another server to this share
(msdfs:<fs>\share):
share1: size 110TB
share2: size 50TB
share3: size 20TB
But when connecting to the DFS share, the disk size of this network drive is 50GB. Unfortunately files larger than
50GB cannot be copied
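For context, a Samba DFS root is a share with msdfs enabled whose links
are special symlinks, roughly like this (a sketch; server, share and path
names are placeholders):

[global]
    host msdfs = yes
[dfsroot]
    path = /export/dfsroot
    msdfs root = yes

# links inside the root point at the real shares
# ln -s 'msdfs:server2\share1' /export/dfsroot/share1

The drive-level free space a client sees comes from the filesystem behind
the DFS root itself (here 50GB); Samba's "dfree command" parameter is one
way to report a different figure.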
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello,
Thanks to "partner" on IRC who told me about this (quite big) problem.
Apparently in a distributed setup once a brick fills up you start
getting write failures. Is there a way to work around this?
I would have thought gluster would check for free space before writing
to a brick.
It's very easy to test: I created a distributed volume from 2 uneven
bricks and started to
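One knob aimed at exactly this (offered here as a suggestion, not as the
thread's answer) is DHT's cluster.min-free-disk setting, which steers new
files away from bricks that drop below the threshold:

# gluster volume set VOLNAME cluster.min-free-disk 10%   # percentage or absolute size; VOLNAME is a placeholder

It only affects where new files are placed; a file that already lives on
the full brick can still fail to grow.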
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled? If yes, don't do it.
> If no, I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS. And I have a gluster mirror between these 2.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to
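For reference, a generic sketch (not the answer given in this thread): a
replica-2 volume grows by whole replica pairs, and each new pair should
span two servers, e.g. one brick on the new server and one on a server
with spare capacity:

# gluster volume add-brick VOLNAME serverB:/tank/brick2 serverC:/tank/brick1
# gluster volume rebalance VOLNAME start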
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, then to add the new peer with the bricks I did the 'balance
> force' operation.
2013 Mar 20
1
About adding bricks ...
Hi @all,
I've created a Distributed-Replicated Volume consisting of 4 bricks on
2 servers.
# gluster volume create glusterfs replica 2 transport tcp \
gluster0{0..1}:/srv/gluster/exp0 gluster0{0..1}:/srv/gluster/exp1
Now I have the following very nice replication schema:
+-------------+ +-------------+
| gluster00 | | gluster01 |
+-------------+ +-------------+
| exp0 | exp1 |
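Extending that schema later keeps the same pairing; with the naming
above, a further replica pair would be added roughly like this (exp2 is
hypothetical):

# gluster volume add-brick glusterfs gluster0{0..1}:/srv/gluster/exp2
# gluster volume rebalance glusterfs fix-layout start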
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts,
We're running glusterfs 3.3 and we have run into file permission problems after
gluster volume rebalance. Files got sticky permissions T--------- after
rebalance, which breaks our clients' normal fops unexpectedly.
Any one known this issue?
Thank you for your help.
--
???
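Background that may help here: files with mode ---------T on a brick are
normally DHT link files created by rebalance; they carry a linkto xattr
and should not surface with those permissions on clients. A quick check
on a brick (paths are placeholders):

# ls -l /path/to/brick/dir/file                   # DHT link files show mode ---------T and size 0
# getfattr -n trusted.glusterfs.dht.linkto -e text /path/to/brick/dir/file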
2011 Oct 18
2
gluster rebalance taking three months
Hi guys,
we have a rebalance running on eight bricks since July and this is
what the status looks like right now:
===Tue Oct 18 13:45:01 CST 2011 ====
rebalance step 1: layout fix in progress: fixed layout 223623
There are roughly 8T of photos in the storage, so how long should this
rebalance take?
What does the number (in this case 223623) represent?
Our gluster information:
Repository
2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works.
Our use case is fairly straightforward.
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi,
after 2 years of running GlusterFS without major problems, we're facing
some strange errors lately.
After updating to 3.12.7, some users reported at least 4 broken
directories with some invisible files. The files are on the bricks and
don't start with a dot, but aren't visible in "ls". Clients can still
interact with them by using the explicit path.
More information: