
Displaying 20 results from an estimated 6000 matches similar to: "Volume hacked"

2017 Aug 06
0
Volume hacked
Thinking about it, is it even normal they managed to delete the VM disks? Shouldn't they have gotten "file in use" errors ? Or does libgfapi not lock the files it accesses ? On Sun, Aug 06, 2017 at 03:57:06PM +0100, lemonnierk at ulrar.net wrote: > Hi, > > This morning one of our clusters was hacked, all the VM disks were > deleted and a file README.txt was left with inside
2017 Aug 06
0
Volume hacked
I'm not sure what you mean by saying "NFS is available by anyone"? Are your gluster nodes physically isolated on their own network/switch? In other words, can an outsider access them directly without having to compromise a NFS client machine first? -bill On 8/6/2017 7:57 AM, lemonnierk at ulrar.net wrote: > Hi, > > This morning one of our clusters was hacked, all the VM
2017 Aug 07
2
Volume hacked
On Sun, Aug 06, 2017 at 08:54:33PM +0100, lemonnierk at ulrar.net wrote: > Thinking about it, is it even normal they managed to delete the VM disks? > Shouldn't they have gotten "file in use" errors ? Or does libgfapi not > lock the files it accesses ? It really depends on the application whether locks are used. Most (Linux) applications will use advisory locks. This means that
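To illustrate what advisory locking means in practice: a lock taken with flock() only restrains processes that also ask for it. A minimal Python sketch (the temp file is a stand-in for a VM disk image, not anything from the thread):

    import fcntl
    import os
    import tempfile

    # Throwaway file standing in for a disk image.
    fd, path = tempfile.mkstemp(suffix=".img")
    os.close(fd)

    # Cooperative access: take an exclusive advisory lock before touching the file.
    with open(path, "r+b") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is granted

        # Non-cooperative access: this call never asked for the lock, so nothing
        # stops it from removing the file even while the lock is still held.
        os.remove(path)                  # succeeds despite the advisory lock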
2017 Aug 06
2
Volume hacked
On Sun, Aug 06, 2017 at 01:01:56PM -0700, wk wrote: > I'm not sure what you mean by saying "NFS is available by anyone"? > > Are your gluster nodes physically isolated on their own network/switch? Nope, impossible to do for us > > In other words can an outsider access them directly without having to > compromise a NFS client machine first? > Yes, but we
2018 May 03
3
@devel - Why no inotify?
There is the ability to notify the client already. If you developed against libgfapi you could do it (I think). On May 3, 2018 9:28:43 AM PDT, lemonnierk at ulrar.net wrote: >Hey, > >I thought about it a while back, haven't actually done it but I assume >using inotify on the brick should work, at least in replica volumes >(disperse probably wouldn't, you wouldn't get
2017 Sep 09
2
GlusterFS as virtual machine storage
Mh, not so sure really; we're using libgfapi and it's been working perfectly fine. And trust me, there have been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing ? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: > Well, that makes me feel better. > > I've seen all these
2017 Aug 07
0
Volume hacked
> It really depends on the application whether locks are used. Most (Linux) > applications will use advisory locks. This means that locking is only > effective when all participating applications use and honour the locks. > If one application uses (advisory) locks, and another application does not, > well, then all bets are off. > > It is also possible to delete files that are in
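On the "delete files that are in use" point: POSIX unlink() removes only the directory entry, so a file can be deleted out from under a process that still has it open. A small self-contained Python illustration (not from the thread):

    import os
    import tempfile

    # Throwaway file standing in for a disk image.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"disk contents")

    # unlink() drops the name immediately; the inode and its data survive
    # for as long as some process still holds the file open.
    os.unlink(path)
    print(os.path.exists(path))     # False: the name is gone

    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 13))          # b'disk contents': the open fd still works
    os.close(fd)                    # only now is the data actually freed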
2017 Aug 06
2
Volume hacked
> You should add VLANS, and/or overlay networks and/or Mac Address > filtering/locking/security which raises the bar quite a bit for hackers. > Perhaps your provider can help you with that. > Gluster already uses a vlan, the problem is that there is no easy way that I know of to tell gluster not to listen on an interface, and I can't not have a public IP on the server. I really
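For what it's worth, a "listen only on this IP/interface" option boils down to the address passed to bind(). A generic Python sketch of the difference (24007 is glusterd's management port, used here purely as an example; 127.0.0.1 stands in for the private VLAN address so the snippet runs anywhere):

    import socket

    PORT = 24007  # pick a free port if glusterd is actually running locally

    def listen_on(address):
        """Open a listening TCP socket bound to the given address."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((address, PORT))
        s.listen()
        return s

    # Wildcard bind: answers on every interface, public ones included.
    s = listen_on("0.0.0.0")
    s.close()

    # Bind to one address only: the other interfaces never answer on this port.
    s = listen_on("127.0.0.1")
    s.close()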
2017 Sep 10
0
GlusterFS as virtual machine storage
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>). We are also typically on a somewhat slower GlusterFS LAN network (bonded 2x1G, jumbo frames), so that may be a factor. I'll try to set up a trusted pool to test libgfapi soon. I'm curious as to how much faster it is, but the fuse mount is fast enough, dirt simple to use, and just works on all VM ops such as
2017 Aug 07
2
Volume hacked
Interesting problem... Did you consider an insider job? (The recent troubles at http://verelox.com <https://t.co/dt1c78VRxA> come to mind.) On Mon, Aug 7, 2017 at 3:30 AM, W Kern <wkmail at bneit.com> wrote: > > > On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote: > > > Gluster already uses a vlan, the problem is that there is no easy way > that I know of to tell
2018 May 22
0
@devel - Why no inotify?
How about gluster's own client(s)? You mount the volume (locally on the server) via autofs/fstab and watch for inotify events on that mountpoint (or a path inside it). That is something I expected to work out of the box. On 03/05/18 17:44, Joe Julian wrote: > There is the ability to notify the client already. If you > developed against libgfapi you could do it (I think). > > On May 3, 2018 9:28:43 AM
2017 Aug 07
0
Volume hacked
On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote: > > Gluster already uses a vlan, the problem is that there is no easy way > that I know of to tell gluster not to listen on an interface, and I > can't not have a public IP on the server. I really wish there was a > simple "listen only on this IP/interface" option for this What about this?
2017 Aug 07
0
Volume hacked
On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote: > Interesting problem... > Did you consider an insider job? (The recent troubles at http://verelox.com > <https://t.co/dt1c78VRxA> come to mind.) I would be really, really surprised; there are only 5 or 6 of us with access, and as far as I know no one has a problem with the company. The last person to leave did so last year, and we
2017 Jun 07
2
NFS-Ganesha packages for debian aren't installing
Although looking at it I see .service files for systemd but nothing for SysV. Is there no support for SysV ? Guess I'll have to write that myself On Wed, Jun 07, 2017 at 11:36:05AM +0100, lemonnierk at ulrar.net wrote: > Wait, ignore that. > I added the stretch repo .. I think I got mind flooded by the broken link for the key before that, > sorry about the noise. > > On Wed,
2018 May 03
0
@devel - Why no inotify?
Hey, I thought about it a while back, haven't actually done it but I assume using inotify on the brick should work, at least in replica volumes (disperse probably wouldn't, you wouldn't get all events or you'd need to make sure your inotify runs on every brick). Then from there you could notify your clients, not ideal, but that should work. I agree that adding support for inotify
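A rough sketch of the brick-side half of that idea, assuming the third-party pyinotify package and a hypothetical brick path; pushing the events out to clients is left as a stub, since the thread doesn't settle on a transport:

    import pyinotify  # third-party package, not part of gluster

    BRICK_PATH = "/bricks/brick1/data"   # hypothetical brick directory

    # Events worth forwarding: new, changed, deleted and moved-in files.
    MASK = (pyinotify.IN_CREATE | pyinotify.IN_DELETE |
            pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO)

    class BrickWatcher(pyinotify.ProcessEvent):
        def process_default(self, event):
            # Stub: replace with whatever pushes the event to clients
            # (message queue, HTTP callback, ...). Only this brick's events
            # show up here, hence the caveat about disperse volumes above.
            print(event.maskname, event.pathname)

    wm = pyinotify.WatchManager()
    wm.add_watch(BRICK_PATH, MASK, rec=True, auto_add=True)
    notifier = pyinotify.Notifier(wm, BrickWatcher())
    notifier.loop()   # blocks, dispatching each event to BrickWatcher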
2018 May 03
3
@devel - Why no inotify?
hi guys, will we have gluster with inotify? At some point, or never? thanks, L.
2017 Jun 07
0
NFS-Ganesha packages for debian aren't installing
On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote: > Although looking at it I see .service files for systemd but nothing for SysV. > Is there no support for SysV ? Guess I'll have to write that myself The packaging for packages provided by the Gluster Community (not in the standard Debian repos) is maintained here: https://github.com/gluster/glusterfs-debian
2017 Jun 07
1
NFS-Ganesha packages for debian aren't installing
On 06/07/2017 06:03 PM, Niels de Vos wrote: > On Wed, Jun 07, 2017 at 11:59:14AM +0100, lemonnierk at ulrar.net wrote: >> Although looking at it I see .service files for systemd but nothing for SysV. >> Is there no support for SysV ? Guess I'll have to write that myself > > The packaging for packages provided by the Gluster Community (not in the > standard Debian
2017 Oct 11
2
data corruption - any update?
> corruption happens only in these cases: > > - volume with shard enabled > AND > - rebalance operation > I believe so > So, what if I have to replace a failed brick/disks ? Will this trigger > a rebalance and then corruption? > > rebalance is only needed when you have to expand a volume, i.e. by > adding more bricks ? That's correct, replacing a brick
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume with the default network.ping-timeout will cause the underlying VM to remount its filesystem as read-only (a device error will occur) unless you tune the mount options in the VM's fstab. -ps On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote: > What he is saying is that, on a two node volume, upgrading a node will