Displaying 20 results from an estimated 4000 matches similar to: "create volume in two different Data Centers"
2017 Oct 24
0
create volume in two different Data Centers
Hi,
You can, but unless the two datacenters are very close, it'll be slow as
hell. I tried it myself and even a 10ms ping between the bricks is
horrible.
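To put a rough number on why that ping matters: with synchronous replication every write waits on the slowest brick, so round-trip latency alone caps serial throughput. A back-of-the-envelope sketch (the 0.2 ms LAN figure is an assumption, added for comparison):

```python
# Each synchronous write costs at least one network round trip to the
# remote brick, so serial writes are bounded by 1000 ms / RTT.
def max_sync_writes_per_sec(rtt_ms: float) -> float:
    """Upper bound on serial write operations per second."""
    return 1000.0 / rtt_ms

print(max_sync_writes_per_sec(0.2))   # LAN-like latency (assumed)
print(max_sync_writes_per_sec(10.0))  # the 10 ms inter-DC ping above
```

Real workloads overlap writes, but the per-operation floor is still set by the wire.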
On Tue, Oct 24, 2017 at 01:42:49PM +0330, atris adam wrote:
> Hi
>
> I have two data centers, each of them with 3 servers. These two data centers
> can see each other over the internet.
> I want to create a distributed
2017 Oct 24
2
create volume in two different Data Centers
Thanks for answering. But I have to set it up and test it myself and record the
results. Can you guide me a little more? The problem is, only one valid IP
exists for each data center, and each data center has 3 servers. How should I
configure the network so that the server bricks can see each other to create a
glusterfs volume?
On Tue, Oct 24, 2017 at 1:47 PM, <lemonnierk at ulrar.net> wrote:
> Hi,
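For what it's worth, once all six servers can reach each other (e.g. over a VPN, since each site has only one public IP), the volume creation itself is the ordinary procedure. A minimal sketch with hypothetical hostnames and brick paths:

```shell
# Run from dc1-server1; the dc2-* names are placeholders that must
# resolve to addresses routable from every brick (e.g. VPN-internal IPs).
gluster peer probe dc2-server1
gluster peer probe dc2-server2
gluster peer probe dc2-server3

# One replica per site, distributed over three pairs (2 x 3 = 6 bricks)
gluster volume create gv0 replica 2 \
  dc1-server1:/data/brick dc2-server1:/data/brick \
  dc1-server2:/data/brick dc2-server2:/data/brick \
  dc1-server3:/data/brick dc2-server3:/data/brick
gluster volume start gv0
```

Note that replica 2 across exactly two sites is prone to split-brain, and the latency caveat above still applies.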
2017 Oct 24
0
create volume in two different Data Centers
On 24/10/2017 12:45, atris adam wrote:
> Thanks for answering. But I have to set it up and test it myself and
> record the results. Can you guide me a little more? The problem is,
> only one valid IP exists for each data center, and each data center has
> 3 servers. How should I configure the network so that the server bricks
> can see each other to create a glusterfs volume?
>
I would
2017 Oct 24
2
active-active georeplication?
hi everybody,
Has glusterfs released a feature named active-active georeplication? If
yes, in which version was it released? If not, is this feature planned?
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is
available since 3.11 and the current plan is to have it fully supported in
a 4.x release.
Note that Halo replication is built on existing synchronous replication in
Gluster and differs from the current geo-replication implementation.
Kotresh's response is spot on for the current geo-replication
implementation.
Regards,
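For anyone wanting to experiment, Halo is switched on through volume options on an existing replicated volume (option names as introduced with the 3.11 feature; the values here are illustrative):

```shell
# Bricks whose latency exceeds halo-max-latency (ms) are written to
# asynchronously, while at least halo-min-replicas stay synchronous.
gluster volume set gv0 cluster.halo-enabled yes
gluster volume set gv0 cluster.halo-max-latency 10
gluster volume set gv0 cluster.halo-min-replicas 2
```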
2017 Aug 06
4
Volume hacked
Hi,
This morning one of our clusters was hacked; all the VM disks were
deleted and a file README.txt was left behind containing just
"http://virtualisan.net/contactus.php :D"
I don't speak the language but with google translate it looks like it's
just a webdev company or something like that, a bit surprised ..
In any case, we'd really like to know how that happened.
I realised
2017 Aug 07
2
Volume hacked
Interesting problem...
Did you consider an inside job? (http://verelox.com's recent troubles
come to mind)
On Mon, Aug 7, 2017 at 3:30 AM, W Kern <wkmail at bneit.com> wrote:
>
>
> On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote:
>
>
> Gluster already uses a vlan, the problem is that there is no easy way
> that I know of to tell
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me.
How can I keep up with news like this about new glusterfs features?
On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> Halo replication [1] could be of interest here. This functionality is
> available since 3.11 and the current plan is to have it fully supported in
> a 4.x release.
>
> Note that Halo
2017 Aug 06
0
Volume hacked
Thinking about it, is it even normal that they managed to delete the VM disks?
Shouldn't they have gotten "file in use" errors? Or does libgfapi not
lock the files it accesses?
On Sun, Aug 06, 2017 at 03:57:06PM +0100, lemonnierk at ulrar.net wrote:
> Hi,
>
> This morning one of our clusters was hacked; all the VM disks were
> deleted and a file README.txt was left behind containing
2017 Oct 24
0
active-active georeplication?
Hi,
No, gluster doesn't support active-active geo-replication, and it's not planned
for the near future. We will let you know when it is planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote:
> hi everybody,
>
> Has glusterfs released a feature named active-active georeplication? If
> yes, in which version was it released?
2017 Aug 07
0
Volume hacked
On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote:
> Interesting problem...
> Did you considered an insider job?( comes to mind http://verelox.com
> <https://t.co/dt1c78VRxA> recent troubles)
I would be really, really surprised; only 5 or 6 of us have access, and as
far as I know no one has a problem with the company.
The last person to leave did so last year, and we
2017 Aug 06
2
Volume hacked
> You should add VLANS, and/or overlay networks and/or Mac Address
> filtering/locking/security which raises the bar quite a bit for hackers.
> Perhaps your provider can help you with that.
>
Gluster already uses a vlan, the problem is that there is no easy way
that I know of to tell gluster not to listen on an interface, and I
can't not have a public IP on the server. I really
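One partially related knob that does exist is glusterd's bind address. It covers the management daemon; how much it helps with the brick ports has varied by version, so treat this as a sketch:

```
# /etc/glusterfs/glusterd.vol -- 10.0.0.1 is a placeholder for the
# private VLAN address the daemon should listen on.
volume management
    type mgmt/glusterd
    option transport.socket.bind-address 10.0.0.1
end-volume
```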
2017 Jun 07
2
NFS-Ganesha packages for debian aren't installing
Although looking at it I see .service files for systemd but nothing for SysV.
Is there no support for SysV? I guess I'll have to write that myself.
On Wed, Jun 07, 2017 at 11:36:05AM +0100, lemonnierk at ulrar.net wrote:
> Wait, ignore that.
> I added the stretch repo .. I think I got mind flooded by the broken link for the key before that,
> sorry about the noise.
>
> On Wed,
2017 Aug 07
0
Volume hacked
On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote:
>
> Gluster already uses a vlan, the problem is that there is no easy way
> that I know of to tell gluster not to listen on an interface, and I
> can't not have a public IP on the server. I really wish there was a
> simple "listen only on this IP/interface" option for this
What about this?
2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock
> or sanlock)?
Yes. Gluster has its own quorum; you can disable it, but that's just a
recipe for disaster.
> Also, on a two-node setup it is *guaranteed* for updates to one node to
> put offline the whole volume?
I think so, but I never took the chance so who knows.
> On the other hand, a 3-way
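The 3-way setup being alluded to avoids the two-node quorum problem; a sketch of creating one (hostnames and brick paths hypothetical):

```shell
# Two full data copies plus a metadata-only arbiter brick on a third
# host; quorum survives the loss of any single node.
gluster volume create gv0 replica 3 arbiter 1 \
  host1:/data/brick host2:/data/brick host3:/data/arbiter
gluster volume start gv0
```

The arbiter brick stores only file metadata, so the third host needs far less disk than the other two.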
2017 Jun 07
0
NFS-Ganesha packages for debian aren't installing
On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote:
> Although looking at it I see .service files for systemd but nothing for SysV.
> Is there no support for SysV ? Guess I'll have to write that myself
The packaging for packages provided by the Gluster Community (not in the
standard Debian repos) is maintained here:
https://github.com/gluster/glusterfs-debian
2017 Aug 06
0
Volume hacked
I'm not sure what you mean by saying "NFS is available by anyone"?
Are your gluster nodes physically isolated on their own network/switch?
In other words can an outsider access them directly without having to
compromise a NFS client machine first?
-bill
On 8/6/2017 7:57 AM, lemonnierk at ulrar.net wrote:
> Hi,
>
> This morning one of our clusters was hacked, all the VM
2017 Jun 07
1
NFS-Ganesha packages for debian aren't installing
On 06/07/2017 06:03 PM, Niels de Vos wrote:
> On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote:
>> Although looking at it I see .service files for systemd but nothing for SysV.
>> Is there no support for SysV ? Guess I'll have to write that myself
>
> The packaging for packages provided by the Gluster Community (not in the
> standard Debian
2017 Oct 11
2
data corruption - any update?
> corruption happens only in this cases:
>
> - volume with shard enabled
> AND
> - rebalance operation
>
I believe so
> So, what If I have to replace a failed brick/disks ? Will this trigger
> a rebalance and then corruption?
>
> rebalance is only needed when you have to expand a volume, i.e. by
> adding more bricks?
That's correct, replacing a brick
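In other words, replacing a failed brick goes through the replace-brick path, which triggers a self-heal rather than a rebalance; a hypothetical example:

```shell
# Swap the failed brick for a fresh one; self-heal copies the data back
# from the surviving replicas.
gluster volume replace-brick gv0 \
  host2:/data/brick-old host2:/data/brick-new commit force
gluster volume heal gv0 full
```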
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem as read-only (a device error will occur) unless you
tune the mount options in the VM's fstab.
-ps
On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote:
> What he is saying is that, on a two node volume, upgrading a node will
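The timeout in question defaults to 42 seconds, long enough for a guest to hit its own disk timeout first; it can be lowered per volume (the value here is illustrative):

```shell
# A lower ping-timeout makes clients give up on a dead brick sooner, at
# the cost of more spurious disconnects on flaky networks.
gluster volume set gv0 network.ping-timeout 10
```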