2017 Oct 24
0
create volume in two different Data Centers
Hi,
You can, but unless the two datacenters are very close, it'll be slow as
hell. I tried it myself and even a 10ms ping between the bricks is
horrible.
On Tue, Oct 24, 2017 at 01:42:49PM +0330, atris adam wrote:
> Hi
>
> I have two data centers, each of them has 3 servers. These two data centers
> can see each other over the internet.
> I want to create a distributed
2017 Oct 24
2
create volume in two different Data Centers
Hi
I have two data centers, and each of them has 3 servers. These two data centers
can see each other over the internet.
I want to create a distributed glusterfs volume with these 6 servers, but I
have only one valid IP in each data center. Is it possible to create a
glusterfs volume? Can anyone guide me?
Thanks a lot
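For reference, a minimal sketch of the gluster side of this, assuming all six
servers can already reach each other on routable addresses (the hostnames and
brick paths below are placeholders, not taken from this thread):

# from any one node, build the trusted pool
gluster peer probe dc1-srv2
gluster peer probe dc1-srv3
gluster peer probe dc2-srv1
gluster peer probe dc2-srv2
gluster peer probe dc2-srv3

# plain distributed volume across the six bricks
gluster volume create distvol \
  dc1-srv1:/data/brick dc1-srv2:/data/brick dc1-srv3:/data/brick \
  dc2-srv1:/data/brick dc2-srv2:/data/brick dc2-srv3:/data/brick
gluster volume start distvol

Note that every client must be able to reach every brick host directly, which
is why the single-public-IP constraint matters here.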
2017 Oct 14
1
NIC requirement for tiering glusterfs
Hi everybody, I have a question about the network interface used for tiering
in glusterfs. If I have a 1G NIC on the glusterfs servers and clients, can I
get more performance by setting up glusterfs tiering, or does the network
interface need to be 10G?
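For context, tiering attaches a set of (usually faster) bricks as a hot tier on
top of an existing volume, so the gain depends mostly on the hot-tier media and
workload rather than the NIC alone. A rough sketch of attaching a replica-2 hot
tier, with placeholder hosts and paths (the exact tier CLI syntax varies a
little between 3.x releases, so verify it against your version):

gluster volume tier myvol attach replica 2 \
  ssd-node1:/ssd/brick ssd-node2:/ssd/brick
gluster volume tier myvol status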
2017 Oct 24
2
create volume in two different Data Centers
Thanks for answering, but I have to set it up and test it myself and record the
results. Can you guide me a little more? The problem is that only one valid IP
exists for each data center, and each data center has 3 servers. How should I
configure the network so that the brick servers can see each other to create a
glusterfs volume?
On Tue, Oct 24, 2017 at 1:47 PM, <lemonnierk at ulrar.net> wrote:
> Hi,
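One general point, offered as background rather than as the definitive answer:
gluster clients and peers connect to every brick host directly on its own
address and ports, so a single public IP per data center with port forwarding
is normally not enough. The usual approach is a routed overlay (for example a
site-to-site VPN) that gives each server a mutually reachable address. A sketch
assuming such an overlay already exists (all names and addresses are
placeholders):

# /etc/hosts on every node, pointing at the overlay addresses
10.8.1.1  dc1-srv1
10.8.1.2  dc1-srv2
10.8.1.3  dc1-srv3
10.8.2.1  dc2-srv1
10.8.2.2  dc2-srv2
10.8.2.3  dc2-srv3

# then peer probe and create the volume using those names,
# as in the earlier sketch
gluster peer probe dc2-srv1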
2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
Hi Geoffrey,
Since you are saying it happens on all types of volumes,
let's do the following:
1) Create a dist-repl volume
2) Set the options etc. you need.
3) Enable gluster volume profiling using "gluster volume profile <volname>
start"
4) Run the workload
5) Give the output of "gluster volume profile <volname> info" (see the command
sketch below)
Repeat the steps above on new and old
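A sketch of what steps 3-5 look like on the command line, with a placeholder
volume name:

gluster volume profile myvol start
# ... run the workload ...
gluster volume profile myvol info
gluster volume profile myvol stop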
2018 Feb 15
1
CentOS7 1801-01 cloud images
Hello,
There is a new cloud image versioned as 1801-01 available at https://cloud.centos.org/centos/7/images/
However, it is missing from the image-index file. I noticed the Azure version is also absent.
Was there a problem with the image build process? Should this image be used?
Thanks,
Pierre Riteau
Chameleon Lead DevOps Engineer
https://www.chameleoncloud.org
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is
available since 3.11 and the current plan is to have it fully supported in
a 4.x release.
Note that Halo replication is built on existing synchronous replication in
Gluster and differs from the current geo-replication implementation.
Kotresh's response is spot on for the current geo-replication
implementation.
Regards,
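As a rough illustration only - the option names below are the ones the Halo
feature exposes as far as I recall, so verify them against the release notes
for your version:

# mark the volume as halo-enabled
gluster volume set myvol cluster.halo-enabled yes
# bricks within this latency (ms) are treated as part of the local halo
gluster volume set myvol cluster.halo-max-latency 10
# never drop below this many writable replicas, whatever the latency
gluster volume set myvol cluster.halo-min-replicas 2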
2018 Jan 27
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding devs who work on it
On 23 Jan 2018 10:40 pm, "Alan Orth" <alan.orth at gmail.com> wrote:
> Hello,
>
> I saw that parallel-readdir was an experimental feature in GlusterFS
> version 3.10.0, became stable in version 3.11.0, and is now recommended for
> small file workloads in the Red Hat Gluster Storage Server
> documentation[2]. I've successfully
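For reference, turning it on is a single volume option (the volume name is a
placeholder); as far as I know parallel-readdir also requires readdir-ahead to
be enabled, which is worth double-checking for 3.12:

gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on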
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, but there are 1-2 crashed
clients out of 120 clients in total every day.
Below is the gdb result:
(gdb) where
#0 0x0000003267432885 in raise () from /lib64/libc.so.6
#1 0x0000003267434065 in abort () from /lib64/libc.so.6
#2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3 0x00000032674750c6 in malloc_printerr () from
2018 Jan 24
1
fault tolerancy in glusterfs distributed volume
I have created a distributed replica 3 volume with 6 nodes:
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6:
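A minimal sketch of how a 2 x 3 layout like this is created, using placeholder
hosts (bricks are grouped into replica sets in the order they are listed, so
the first three form one set and the last three the other):

gluster volume create testvol replica 3 \
  server1:/brick server2:/brick server3:/brick \
  server4:/brick server5:/brick server6:/brick
gluster volume start testvol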
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me.
How can I keep up with news about new glusterfs features?
On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> Halo replication [1] could be of interest here. This functionality is
> available since 3.11 and the current plan is to have it fully supported in
> a 4.x release.
>
> Note that Halo
2014 Dec 04
3
[LLVMdev] Perf is dead again... :(
On Thu, Dec 4, 2014 at 8:00 AM, Dan Liew <dan at su-root.co.uk> wrote:
> * Who should fund whatever host is used to host the LNT
> infrastructure. Given the commercial interest in LLVM I hope that this
> will be straight forward
>
FWIW, if you can use google's cloud offerings, I can likely fund it. This
isn't about only being willing to fund our platform vs. some other
2017 Oct 24
2
active-active georeplication?
Hi everybody,
Has glusterfs released a feature named active-active geo-replication? If
yes, in which version was it released? If not, is this feature planned?
2016 Dec 15
2
Can't delete or move /home on 7.3 install
Tried this in both AWS and GCE as I thought it may be a specific cloud
vendor issue. SELinux is disabled, lsof | grep home shows nothing,
lsattr /home shows nothing. Simply get "Device or resource busy."
Works just fine on 7.2 so I'm kinda at a loss. Scanned over the RHEL
release notes and didn't see anything. Anyone else have this issue? We
move our /home to another mount point
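A few generic checks that usually show what is keeping a mount point busy -
nothing 7.3-specific, just the standard tools. Note that a directory which is
an active mount point cannot be removed, which produces exactly this error:

findmnt /home                   # is /home itself a mount point (separate partition or automount)?
fuser -vm /home                 # processes holding files open under it
systemctl list-units '*home*'   # a home.mount or automount unit can also keep it busy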
2017 Jun 20
1
[Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June
On Tue, Jun 6, 2017 at 6:54 PM, Shyam <srangana at redhat.com> wrote:
> Hi,
>
> It's time to prepare the 3.11.1 release, which falls on the 20th of
> each month [4], and hence would be June-20th-2017 this time around.
>
> This mail is to call out the following,
>
> 1) Are there any pending *blocker* bugs that need to be tracked for
> 3.11.1? If so mark them
2018 Jan 12
0
Integration of GPU with glusterfs
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
>On 12/01/2018 3:14 AM, Darrell Budic wrote:
>> It would also add physical resource requirements to future client
>> deploys, requiring more than 1U for the server (most likely), and I'm
>
>> not likely to want to do this if I'm trying to optimize for client
>>
2018 Feb 04
2
halo not work as desired!!!
I have 2 data centers in two different regions, and each DC has 3 servers. I
have created a glusterfs volume with 4 replicas; this is the glusterfs volume
info output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/mnt/test1
Brick2: 10.0.0.3:/mnt/test2
Brick3: 10.0.0.5:/mnt/test3
Brick4: 10.0.0.6:/mnt/test4
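When Halo does not behave as expected, a first step is to dump what the volume
actually has set for the halo options (option names as I recall them - verify
against your release):

gluster volume get test-halo all | grep -i halo
# in particular cluster.halo-enabled, cluster.halo-max-latency and
# cluster.halo-min-replicas determine which bricks count as local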
2016 Dec 15
2
Can't delete or move /home on 7.3 install
On 12/15/2016 01:47 AM, Gianluca Cecchi wrote:
> On Thu, Dec 15, 2016 at 2:49 AM, Glenn E. Bailey III <
> replicant at dallaslamers.org> wrote:
>
>> Tried this in both AWS and GCE as I thought it may be a specific cloud
>> vendor issue. SELinux is disabled, lsof | grep home shows nothing,
>> lsattr /home shows nothing. Simply get "Device or resource
2018 Feb 05
0
halo not work as desired!!!
I have mounted the halo glusterfs volume in debug mode, and the output is
as follows:
.
.
.
[2018-02-05 11:42:48.282473] D [rpc-clnt-ping.c:211:rpc_clnt_ping_cbk]
0-test-halo-client-1: Ping latency is 0ms
[2018-02-05 11:42:48.282502] D [MSGID: 0]
[afr-common.c:5025:afr_get_halo_latency] 0-test-halo-replicate-0: Using
halo latency 10
[2018-02-05 11:42:48.282525] D [MSGID: 0]
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message -----
> From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> To: "Alan Orth" <alan.orth at gmail.com>
> Cc: "gluster-users" <gluster-users at gluster.org>
> Sent: Saturday, January 27, 2018 7:31:30 AM
> Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4
>
> Adding