2018 Feb 14
1
Different volumes on different interfaces
Hi,
I run a Proxmox system with a Gluster volume spanning three nodes.
I am thinking about setting up a second volume, but I want to use the other interfaces on
the nodes.
Is this recommended or possible?
Bye
Gregor
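A minimal sketch of the usual approach, assuming each node's second interface gets its own
resolvable hostname (the node*-net2 names and brick paths below are placeholders, not from the
original post): put the second-interface names in /etc/hosts or DNS on every node, then create
the new volume with its bricks addressed by those names.

gluster volume create vol2 replica 3 \
    node1-net2:/bricks/vol2/brick \
    node2-net2:/bricks/vol2/brick \
    node3-net2:/bricks/vol2/brick
gluster volume start vol2

Brick traffic for that volume then flows over whichever interface those names resolve to.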
2018 Mar 06
0
Multiple Volumes over different Interfaces
Hi,
I'm trying to create two gluster volumes over two nodes with two
separate networks:
The names are in the hosts file of each node:
root@gluster01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 gluster01.peiker-cee.de gluster01
10.0.2.54 gluster02g1.peiker-cee.de gluster02g1
10.0.7.54 gluster02g2.peiker-cee.de gluster02g2
10.0.2.53
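A rough sketch of how that can be wired up, following the naming pattern above (the gluster01g1/
gluster01g2 names and the brick paths are assumptions, since the hosts file above is cut off):
probe the second node once per network name so that both names map to the same peer, then address
each volume's bricks via the network it should use.

gluster peer probe gluster02g1.peiker-cee.de
gluster peer probe gluster02g2.peiker-cee.de   # adds the second address to the already-known peer

gluster volume create vol-net1 replica 2 \
    gluster01g1.peiker-cee.de:/bricks/vol1 gluster02g1.peiker-cee.de:/bricks/vol1
gluster volume create vol-net2 replica 2 \
    gluster01g2.peiker-cee.de:/bricks/vol2 gluster02g2.peiker-cee.de:/bricks/vol2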
2023 Apr 03
1
bind glusterd to a specified interface
Hi,
after a bit of time I am playing around with glusterfs again.
For now I want to bind glusterd to a specified interface/IP address.
I want to have a management net where the service is not reachable
and a cluster net where the service runs.
I read something about defining it in the
/etc/glusterfs/glusterfsd.vol file but found no valid description of it.
Even on
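For what it's worth, the bind option is usually set in the management daemon's own volfile,
/etc/glusterfs/glusterd.vol rather than glusterfsd.vol; a sketch, with 192.0.2.10 standing in
for the cluster-net address:

# /etc/glusterfs/glusterd.vol - add the bind-address option inside the existing management block
volume management
    type mgmt/glusterd
    ...
    option transport.socket.bind-address 192.0.2.10
end-volume

systemctl restart glusterd   # restart the management daemon afterwards

This covers glusterd itself; whether the brick processes (glusterfsd) follow the same address
depends on the version, so test on the cluster net before relying on it.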
2017 May 30
1
Gluster client mount fails in mid flight with signum 15
Hello All
We've got a problem where gluster client mounts fail in mid run with this in the log
glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f640df4bdfb] ) 0-: received signum (15), shutting down.
We've tried running debug but
2013 Dec 17
1
Project pre planning
Hello GlusterFS users,
can anybody please give me their opinion on the following facts and
questions:
4 storage servers with 16 SATA bays, connected by GigE:
Q1:
Volume will be set up as distributed-replicated.
Maildir, FTP Dir, htdocs, file store directory => as subdirs in one big
GlusterVolume or each dir in its own GlusterVolume?
Q2: Set up the bricks as a collection of
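Regarding Q1, a sketch of what the distributed-replicated volume itself would look like across
the four servers (hostnames and brick paths are assumptions): with replica 2 and four bricks,
srv1/srv2 and srv3/srv4 form two replica pairs and files are distributed across the pairs.

gluster volume create bigvol replica 2 \
    srv1:/bricks/b1 srv2:/bricks/b1 \
    srv3:/bricks/b1 srv4:/bricks/b1
gluster volume start bigvol

Whether Maildir, FTP, htdocs and the file store then live as subdirectories of one such volume
or as separate volumes is mainly a trade-off in quota handling, per-use tuning and failure
isolation.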
2013 Sep 22
2
Problem with glusterfs-server 3.4
Hi all!
I'm trying to use glusterfs for the first time and have the following problem:
I want to have two nodes.
On node1 I have a raid1 system running in /raid/storage
Both nodes can see each other, and now I try to create a volume.
When I create the first volume on a fresh system (node1) for the first time, gluster says:
volume create: glustervolume: failed: /raid/storage/ or a prefix of it
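The full error is usually "... or a prefix of it is already part of a volume"; the commonly
suggested cleanup, assuming /raid/storage really is not in use by another volume, is to strip
the leftover gluster metadata from the brick path and retry:

setfattr -x trusted.glusterfs.volume-id /raid/storage
setfattr -x trusted.gfid /raid/storage
rm -rf /raid/storage/.glusterfs
systemctl restart glusterd   # or: service glusterd restart on older systems

After that the 'gluster volume create' should go through.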
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
Is that just the file entry in "/var/lib/glusterd/peers" ?
e.g. I have:
gluster peer status
Number of Peers: 3
Hostname: vnh.proxmox.softlog
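A sketch of the manual cleanup being discussed, assuming the dead peer really hosts no bricks
(<uuid-of-dead-peer> is a placeholder for the UUID that 'gluster peer status' shows for the dead
node):

# on every surviving node
ls /var/lib/glusterd/peers/            # one file per known peer, named after its UUID
grep -l vna /var/lib/glusterd/peers/*  # find the file whose hostname entry names the dead node
systemctl stop glusterd
rm /var/lib/glusterd/peers/<uuid-of-dead-peer>
systemctl start glusterd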
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:26, Lindsay Mathieson <lindsay.mathieson@gmail.com>
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
>
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
>
> Is that just the file entry in
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage
Or I should say we *had* a 3 node cluster, one node died today. Possibly I
can recover it, in which case no issues, we just let it heal itself. For
now it's running happily on 2 nodes with no data loss - gluster for the win!
But it's looking like I might have to replace the
2013 May 12
0
Glusterfs with Infiniband tips
Hello guys,
I was wondering if someone could share their glusterfs volume and system settings if you are running glusterfs with InfiniBand networking. In particular I am interested in using glusterfs + InfiniBand + KVM for virtualisation. However, any other implementation would also be useful for me.
I've tried various versions of glusterfs (versions 3.2, 3.3 and 3.4beta) over the past
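Not performance numbers, but a sketch of the InfiniBand-specific bits for reference (volume and
host names are placeholders): the transport is chosen when the volume is created, e.g.

gluster volume create vmvol replica 2 transport tcp,rdma \
    ib-node1:/bricks/vmvol ib-node2:/bricks/vmvol
gluster volume start vmvol

With 'transport tcp,rdma' both transports are exported, and clients can typically select RDMA by
mounting the volume under the name vmvol.rdma; plain IPoIB over the default tcp transport is the
simpler fallback if the rdma transport gives trouble.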
2018 Apr 26
2
cluster of 3 nodes and san
Hi list, I need a little help. I currently have a cluster with VMware
and 3 nodes, and I have storage (a Dell PowerVault) connected by FC in
redundancy. I'm thinking of migrating it to Proxmox since the
maintenance costs are very expensive, but the question is whether I can use
glusterfs with a SAN connected by FC. Is it advisable? One more piece of
information: at another site I have another cluster
2018 Apr 27
0
cluster of 3 nodes and san
Hi, any advice?
On Wed., Apr 25, 2018, 19:56, Ricky Gutierrez <xserverlinux@gmail.com>
wrote:
> Hi list, I need a little help. I currently have a cluster with VMware
> and 3 nodes, and I have storage (a Dell PowerVault) connected by FC in
> redundancy. I'm thinking of migrating it to Proxmox since the
> maintenance costs are very expensive, but the question is whether I can
2018 Apr 27
1
cluster of 3 nodes and san
>but the question is whether I can use glusterfs with a SAN connected by FC?
Yes, just format the volumes with xfs and you're ready to go.
For a replica across different DCs, be careful about latency. What is the
connection between the DCs?
It can be doable if latency is low.
On Fri, Apr 27, 2018 at 4:02 PM, Ricky Gutierrez <xserverlinux@gmail.com> wrote:
> Hi, any advice?
>
> On Wed., Apr 25, 2018
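A sketch of what "format the volumes with xfs" amounts to for an FC LUN used as a brick (device,
mount point, host and volume names are assumptions):

mkfs.xfs -f -i size=512 /dev/mapper/mpatha        # multipath device for the FC LUN
mkdir -p /data/brick1
mount /dev/mapper/mpatha /data/brick1
echo '/dev/mapper/mpatha /data/brick1 xfs defaults 0 0' >> /etc/fstab
gluster volume create vmstore replica 3 \
    node1:/data/brick1/vmstore node2:/data/brick1/vmstore node3:/data/brick1/vmstore

Each node needs its own LUN (or its own slice of the SAN): gluster replicates between the bricks,
it does not share a single LUN between nodes.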
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:01 AM, WK wrote:
> You replaced vna with vnd but it is probably not fully healed yet cuz
> you had 3.8T worth of chunks to copy.
No, the heal had completed. Finished about 9 hours before I rebooted.
>
> So you had two good nodes (vnb and vng) working and you rebooted one
> of them?
Three good nodes - vnb, vng, vnh and one dead - vna
from node vng:
root@
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk@ulrar.net> wrote:
> > I'm thinking the following:
> >
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work
2017 Sep 21
2
Performance drop from 3.8 to 3.10
Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly
substantial drop in read/write performance
env:
- 3 node, replica 3 cluster
- Private dedicated Network: 1Gx3, bond: balance-alb
- was able to down the volume for the upgrade and reboot each node
- Usage: VM Hosting (qemu)
- Sharded Volume
- sequential read performance in VMs has dropped from 700Mbps to 300Mbps
- Seq Write
2017 Sep 22
0
Performance drop from 3.8 to 3.10
Could you disable cluster.eager-lock and try again?
-Krutika
On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson <
lindsay.mathieson@gmail.com> wrote:
> Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial
> drop in read/write performance
>
> env:
>
> - 3 node, replica 3 cluster
>
> - Private dedicated Network: 1Gx3, bond: balance-alb
>
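The eager-lock toggle suggested above is an ordinary volume option; a sketch with a placeholder
volume name:

gluster volume set <volname> cluster.eager-lock off
# rerun the benchmark, and switch it back on if it makes no difference
gluster volume set <volname> cluster.eager-lock on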
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following:
>
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4
I think that should work perfectly fine yes, either that
or directly use replace-brick ?
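A sketch of the replace-brick alternative mentioned above, using the brick paths from the thread
(run after the replacement node vnd has been peer-probed):

gluster volume replace-brick datastore4 \
    vna.proxmox.softlog:/tank/vmdata/datastore4 \
    vnd.proxmox.softlog:/tank/vmdata/datastore4 \
    commit force
gluster volume heal datastore4 info   # watch the new brick heal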
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
>
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work perfectly fine yes, either that
> or
2024 Sep 21
1
GlusterFS Replica over ZFS
I assume you will be using the volumes for VM workload. There is a 'virt' group of settings optimized for virtualization (located at /var/lib/glusterd/groups/virt) which is also used by oVirt. It guarantees that VMs can live-migrate without breaking.
Best Regards,
Strahil Nikolov
On Fri, Sep 20, 2024 at 19:00, Gilberto Ferreira <gilberto.nunes32@gmail.com> wrote: Hi there.
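A sketch of applying that group to a volume (the volume name vmvol is a placeholder):

gluster volume set vmvol group virt
cat /var/lib/glusterd/groups/virt   # lists the individual options the group applies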