Displaying 20 results from an estimated 1000 matches similar to: "NFS-Ganesha packages for debian aren't installing"
2017 Jun 07
0
NFS-Ganesha packages for debian aren't installing
Wait, ignore that.
I added the stretch repo... I think I got confused by the broken link for the key before that,
sorry about the noise.
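For anyone else hitting this, the working sequence was roughly the following. This is a hedged sketch: the key file name, suite path and package names are assumptions based on the usual download.gluster.org layout, so check the directory listing first.

  # Key and repo paths are assumptions -- verify against
  # https://download.gluster.org/pub/gluster/nfs-ganesha/2.4.5/Debian/
  wget -O - https://download.gluster.org/pub/gluster/nfs-ganesha/2.4.5/rsa.pub | apt-key add -
  echo "deb https://download.gluster.org/pub/gluster/nfs-ganesha/2.4.5/Debian/stretch/apt stretch main" \
    > /etc/apt/sources.list.d/nfs-ganesha.list
  apt-get update
  apt-get install nfs-ganesha nfs-ganesha-gluster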
On Wed, Jun 07, 2017 at 11:31:22AM +0100, lemonnierk at ulrar.net wrote:
> Hi,
>
> I finally have the opportunity to give NFS-Ganesha a try, so I followed that :
> https://download.gluster.org/pub/gluster/nfs-ganesha/2.4.5/Debian/
>
> But
2017 Jun 07
2
NFS-Ganesha packages for debian aren't installing
Looking at it though, I see .service files for systemd but nothing for SysV.
Is there no support for SysV? Guess I'll have to write that myself.
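In case someone else needs it too, a minimal LSB wrapper is about all that's required. A sketch only: the daemon path, config path and flags are assumptions, so check what the package actually installs.

  #!/bin/sh
  ### BEGIN INIT INFO
  # Provides:          nfs-ganesha
  # Required-Start:    $network $remote_fs
  # Required-Stop:     $network $remote_fs
  # Default-Start:     2 3 4 5
  # Default-Stop:      0 1 6
  # Short-Description: NFS-Ganesha userspace NFS server
  ### END INIT INFO

  # Assumed paths -- adjust to what the .deb ships
  DAEMON=/usr/bin/ganesha.nfsd
  CONFIG=/etc/ganesha/ganesha.conf
  PIDFILE=/var/run/ganesha.pid

  case "$1" in
    start)
      start-stop-daemon --start --pidfile "$PIDFILE" \
        --exec "$DAEMON" -- -f "$CONFIG" -p "$PIDFILE"
      ;;
    stop)
      start-stop-daemon --stop --pidfile "$PIDFILE" --retry 10
      ;;
    restart)
      "$0" stop
      "$0" start
      ;;
    *)
      echo "Usage: $0 {start|stop|restart}" >&2
      exit 1
      ;;
  esac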
On Wed, Jun 07, 2017 at 11:36:05AM +0100, lemonnierk at ulrar.net wrote:
> Wait, ignore that.
> I added the stretch repo... I think I got confused by the broken link for the key before that,
> sorry about the noise.
>
> On Wed,
2017 Jun 07
0
NFS-Ganesha packages for debian aren't installing
On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote:
> Looking at it though, I see .service files for systemd but nothing for SysV.
> Is there no support for SysV? Guess I'll have to write that myself.
The packaging for packages provided by the Gluster Community (not in the
standard Debian repos) is maintained here:
https://github.com/gluster/glusterfs-debian
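So if the published binaries don't fit (missing init scripts, older release), rebuilding from that repo with the standard Debian tooling is an option. A rough sketch; the branch layout and the need for the matching upstream tarball are assumptions, so check the repo's README:

  git clone https://github.com/gluster/glusterfs-debian.git
  cd glusterfs-debian
  git branch -r                     # pick the branch for your series
  sudo apt-get install build-essential devscripts debhelper
  dpkg-buildpackage -us -uc -b      # unsigned binary-only build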
2017 Jun 07
1
NFS-Ganesha packages for debian aren't installing
On 06/07/2017 06:03 PM, Niels de Vos wrote:
> On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote:
>> Looking at it though, I see .service files for systemd but nothing for SysV.
>> Is there no support for SysV? Guess I'll have to write that myself.
>
> The packaging for packages provided by the Gluster Community (not in the
> standard Debian
2017 Oct 02
0
nfs-ganesha locking problems
Hi
On 09/29/2017 09:09 PM, Bernhard Dübi wrote:
> Hi,
>
> I have a problem with nfs-ganesha serving gluster volumes
>
> I can read and write files but then one of the DBAs tried to dump an
> Oracle DB onto the NFS share and got the following errors:
>
>
> Export: Release 11.2.0.4.0 - Production on Wed Sep 27 23:27:48 2017
>
> Copyright (c) 1982, 2011, Oracle
2017 Sep 29
2
nfs-ganesha locking problems
Hi,
I have a problem with nfs-ganesha serving gluster volumes
I can read and write files but then one of the DBAs tried to dump an
Oracle DB onto the NFS share and got the following errors:
Export: Release 11.2.0.4.0 - Production on Wed Sep 27 23:27:48 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition
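Oracle's dump tools presumably take POSIX locks on their output files, so a quick first check is whether locking works on that mount at all. A minimal test with flock(1) from two shells; the mount point is a placeholder:

  # Shell 1: take and hold an exclusive lock on a file on the NFS mount
  flock -x /mnt/oradump/locktest -c 'echo locked; sleep 30'

  # Shell 2: try the same lock without blocking; an immediate failure
  # means locking works, a hang or error points at NLM/lockd trouble
  flock -xn /mnt/oradump/locktest -c 'echo got it'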
2017 Aug 06
2
Volume hacked
On Sun, Aug 06, 2017 at 01:01:56PM -0700, wk wrote:
> I'm not sure what you mean by saying "NFS is available by anyone"?
>
> Are your gluster nodes physically isolated on their own network/switch?
Nope, impossible to do for us
>
> In other words can an outsider access them directly without having to
> compromise a NFS client machine first?
>
Yes, but we
2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock
> or sanlock)?
Yes. Gluster has its own quorum; you can disable it, but that's just a
recipe for disaster.
> Also, on a two-node setup, is it *guaranteed* that updates to one node
> will take the whole volume offline?
I think so, but I never took the chance so who knows.
> On the other hand, a 3-way
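For reference, the quorum being discussed is controlled by volume options; a sketch of inspecting and (against the advice above) relaxing it, with VOLNAME as a placeholder:

  # Inspect the current quorum settings
  gluster volume get VOLNAME cluster.quorum-type
  gluster volume get VOLNAME cluster.server-quorum-type

  # Disable client-side quorum -- the "recipe for disaster" above
  gluster volume set VOLNAME cluster.quorum-type none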
2017 Aug 06
4
Volume hacked
Hi,
This morning one of our cluster was hacked, all the VM disks were
deleted and a file README.txt was left with inside just
"http://virtualisan.net/contactus.php :D"
I don't speak the language, but with Google Translate it looks like it's
just a webdev company or something like that; a bit surprising...
In any case, we'd really like to know how that happened.
I realised
2017 Aug 06
2
Volume hacked
> You should add VLANS, and/or overlay networks and/or Mac Address
> filtering/locking/security which raises the bar quite a bit for hackers.
> Perhaps your provider can help you with that.
>
Gluster already uses a VLAN; the problem is that there is no easy way
that I know of to tell gluster not to listen on an interface, and I
can't not have a public IP on the server. I really
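Two partial mitigations that do exist: gluster's auth.allow option (which only covers gluster protocol clients, not the NFS path) and firewalling the gluster ports on the public interface. A sketch; addresses, interface name and brick port range are placeholders:

  # Restrict the volume to the cluster's own addresses
  gluster volume set VOLNAME auth.allow 10.0.0.1,10.0.0.2,10.0.0.3

  # Drop gluster traffic on the public interface
  # (24007 is glusterd; bricks use 49152 and up on recent releases)
  iptables -A INPUT -i eth0 -p tcp --dport 24007:24008 -j DROP
  iptables -A INPUT -i eth0 -p tcp --dport 49152:49251 -j DROP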
2017 Oct 11
2
data corruption - any update?
> corruption happens only in these cases:
>
> - volume with shard enabled
> AND
> - rebalance operation
>
I believe so
> So, what if I have to replace a failed brick/disks? Will this trigger
> a rebalance and then corruption?
>
> rebalance is only needed when you have to expand a volume, i.e. by
> adding more bricks?
That's correct, replacing a brick
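For reference, replacing a brick goes through replace-brick plus self-heal rather than the rebalance crawler, so the shard corruption case above is not triggered. A sketch, with hostnames and brick paths as placeholders:

  # Swap the dead brick for a new one; data is copied by self-heal
  gluster volume replace-brick VOLNAME \
    server1:/bricks/old server1:/bricks/new commit force

  # Watch the new brick catch up
  gluster volume heal VOLNAME info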
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya,
What I can say so far:
it is working on a standalone system but not on the clustered system.
From reading the ganesha wiki I have the impression that it is
possible to change the log level without restarting ganesha. I was
playing with dbus-send but so far was unsuccessful. If you can help me
with that, that would be great.
Here are some details about the tested machines. The NFS client
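For the dbus-send part: ganesha reportedly exposes per-component log levels as D-Bus properties on its admin object. Something along these lines has been documented, but the object path and property names below are assumptions to verify against your version's wiki:

  # Assumed destination, path and property names -- verify first
  dbus-send --system --print-reply --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/admin \
    org.freedesktop.DBus.Properties.Set \
    string:org.ganesha.nfsd.log.component \
    string:COMPONENT_ALL variant:string:FULL_DEBUG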
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem as read-only (a device error will occur) unless
you tune the mount options in the VM's fstab.
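Concretely, the tuning in question is on both sides; the values below are illustrative, not recommendations:

  # Gluster side: fail over before the guest's block layer gives up
  # (the default network.ping-timeout is 42 seconds)
  gluster volume set VOLNAME network.ping-timeout 10

  # Guest side: give the virtual disk more patience instead
  echo 180 > /sys/block/sda/device/timeout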
-ps
On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote:
> What he is saying is that, on a two node volume, upgrading a node will
2017 Aug 07
2
Volume hacked
Interesting problem...
Did you consider an insider job? (http://verelox.com's recent troubles
come to mind.)
On Mon, Aug 7, 2017 at 3:30 AM, W Kern <wkmail at bneit.com> wrote:
>
>
> On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote:
>
>
> Gluster already uses a VLAN; the problem is that there is no easy way
> that I know of to tell
2017 Aug 23
0
GlusterFS as virtual machine storage
Really? I can't see why. But I've never used arbiter so you probably
know more about this than I do.
In any case, with replica 3, never had a problem.
On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote:
> Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
> with the default network.ping-timeout will cause the underlying VM to
> remount its filesystem as
2018 May 03
3
@devel - Why no inotify?
There is the ability to notify the client already. If you developed against libgfapi you could do it (I think).
On May 3, 2018 9:28:43 AM PDT, lemonnierk at ulrar.net wrote:
>Hey,
>
>I thought about it a while back, haven't actually done it but I assume
>using inotify on the brick should work, at least in replica volumes
>(disperse probably wouldn't, you wouldn't get
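The brick-side idea from the quoted mail can be tried with inotifywait from inotify-tools; a sketch, assuming a brick path of /bricks/brick1 and skipping gluster's internal .glusterfs tree:

  # Watch one replica's brick for changes (path is a placeholder)
  inotifywait -m -r --exclude '/\.glusterfs/' \
    -e create -e modify -e delete \
    --format '%w%f %e' /bricks/brick1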
2017 Aug 25
3
GlusterFS as virtual machine storage
Il 25-08-2017 14:22 Lindsay Mathieson ha scritto:
> On 25/08/2017 6:50 PM, lemonnierk at ulrar.net wrote:
>
> I run Replica 3 VM hosting (gfapi) via a 3 node proxmox cluster. Have
> done a lot of rolling node updates, power failures etc, never had a
> problem. Performance is better than any other DFS I've tried (Ceph,
> lizard/moose).
Hi, very interesting! Are you using
2017 Aug 06
0
Volume hacked
Thinking about it, is it even normal that they managed to delete the VM disks?
Shouldn't they have gotten "file in use" errors? Or does libgfapi not
lock the files it accesses?
On Sun, Aug 06, 2017 at 03:57:06PM +0100, lemonnierk at ulrar.net wrote:
> Hi,
>
> This morning one of our cluster was hacked, all the VM disks were
> deleted and a file README.txt was left with inside
2019 Mar 18
1
llvm symbolizer not able to parse debuginfo files
I am trying to run NFS-Ganesha with ASAN in our setup. I am having
difficulty making llvm-symbolizer print symbol names from the
.debug binaries/libraries once ASAN reports an error:
bash-4.2# /opt/rh/llvm-toolset-7/root/usr/bin/llvm-symbolizer --version
LLVM (http://llvm.org/):
LLVM version 5.0.1
Optimized build.
Default target: x86_64-unknown-linux-gnu
Host CPU: nocona
I am getting
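For what it's worth, the knobs ASAN itself offers are an explicit symbolizer path and the symbolize switch; the symbolizer path below is the one from the message:

  export ASAN_SYMBOLIZER_PATH=/opt/rh/llvm-toolset-7/root/usr/bin/llvm-symbolizer
  export ASAN_OPTIONS=symbolize=1
  # The separate .debug files are only found via .gnu_debuglink/build-id,
  # so they must sit where those links point (e.g. under /usr/lib/debug)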
2017 Aug 25
0
GlusterFS as virtual machine storage
>
> This surprise me: I found DRBD quite simple to use, albeit I mostly use
> active/passive setup in production (with manual failover)
>
I think you are talking about DRBD 8, which is indeed very easy. DRBD 9,
on the other hand, which is the one that compares to gluster (more or
less), is a whole other story. I never managed to make it work correctly
either.