Displaying 20 results from an estimated 1000 matches similar to: "active/active failover"
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without an outage, and re-synchronize when it comes back up.
Chances are, if you weren't using the SAN volumes, you could have purchased
two servers
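For illustration, a 2-way replicated volume of the kind suggested here could be created like this (hostnames and brick paths are hypothetical):

# one brick on each of two servers; every file is stored on both
gluster volume create gv_repl replica 2 \
  server1:/data/brick1/gv_repl server2:/data/brick1/gv_repl
gluster volume start gv_repl

Either server can then go down without an outage, and self-heal re-synchronizes it when it returns.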
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me it more or less evens out. Moreover, I have more SAN storage that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
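For reference, a replace-brick of the sort being tested generally takes this shape (volume and brick names hypothetical):

# migrate a brick from a failed server to a replacement
gluster volume replace-brick gv0 \
  oldserver:/data/brick1/gv0 newserver:/data/brick1/gv0 commit force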
2018 Jan 21
1
mkdir -p, cp -R fails
Dear all,
I have problem with glusterfs 3.12.4
mkdir -p fails with "no data available" when umask is 0022, but works when umask is 0002.
Also recursive copy (cp -R or cp -r) fails with "no data available", independently of the umask.
See below for an example to reproduce the error. I already tried to change transport from rdma to tcp. (Changing the transport works, but
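A minimal sketch of such a reproduction, assuming the volume is FUSE-mounted at /mnt/gv0 (the path is hypothetical):

umask 0022
mkdir -p /mnt/gv0/a/b/c     # fails with "no data available"
umask 0002
mkdir -p /mnt/gv0/a/b/c     # succeeds
cp -R /etc/skel /mnt/gv0/   # fails regardless of umask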
2002 Jun 11
1
another oops, this time with 2.4.18-4
Back again with another oops, which looks suspiciously similar to the one I
posted some days ago
(https://listman.redhat.com/pipermail/ext3-users/2002-May/003587.html).
Jun 11 12:11:30 castor kernel: Assertion failure in journal_write_metadata_buffer() at journal.c:406: "buffer_jdirty(jh2bh(jh_in))"
ksymoops 0.7c on i686 2.4.18-4custom. Options used
-V (default)
-k
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
Yes, but I want to add one to an existing volume. Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume
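The command presumably being quoted runs along these lines (hostnames and paths hypothetical):

gluster volume create testvol replica 3 arbiter 1 \
  server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1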
2006 Apr 18
4
Managed to make some progress, stuck again.
Hi,
An update on my work to integrate my Linux server (CentOS 4.3) into AD
2003.
Sorry about the long post :)
Found this page
(http://www.enterprisenetworkingplanet.com/netos/article.php/3487081)
and followed the instructions on it.
First, I made sure that the Samba installation supports Kerberos,
LDAP, AD and Winbind. That was OK.
I made sure that /etc/hosts contains the name of the AD
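One common way to check for that support is to inspect Samba's compiled-in build options (a sketch; the exact flag names vary by version):

# print build options and filter for the relevant features
smbd -b | egrep -i "KRB5|LDAP|ADS|WINBIND"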
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the brick
sizes are now reporting the correct size of all bricks combined instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small
(<1 MB) files and thousands of files larger than 1 GB.
Attached is the tier log for gluster1 and gluster2. These are full of
"demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node Promoted files Demoted files
2024 Jun 11
1
[EXT] Replace broken host, keeping the existing bricks
Hi,
The method depends a bit on whether you use a distributed-only system (like me) or a replicated setting.
I'm using a distributed-only setting (many bricks on different servers, but no replication). All my servers boot via network, i.e., on each start it's like a new host.
To rescue the old bricks, just set up a new server with the same OS, the same IP and the same hostname (!very
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is
held up; IO is not interrupted for existing transfers. I think this points
to the heat metadata in the sqlite DB for the tier: is it possible that a
table is temporarily locked while the promotion daemon runs, so that the
calls to update the access count on files are blocked?
On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
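As a generic illustration of the suspected mechanism (this is not Gluster's actual tier schema; the table name is made up): in SQLite's default rollback-journal mode, an open write transaction blocks other writers until it finishes.

# terminal 1: open a write transaction and keep it open
sqlite3 /tmp/demo.db
sqlite> CREATE TABLE heat(path TEXT, hits INTEGER);
sqlite> BEGIN IMMEDIATE;   -- takes the write lock
sqlite> UPDATE heat SET hits = hits + 1;

# terminal 2: a concurrent write errors out once busy_timeout expires
sqlite3 /tmp/demo.db "INSERT INTO heat VALUES('/f', 1);"
# -> Error: database is locked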
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I
grabbed that from a file, not realizing it was out of date. Here's a current
configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom,
The volume info doesn't show the hot bricks. I think you took the
volume info output before attaching the hot tier.
Can you send the volume info of the current setup where you see this issue?
The logs you sent are from a later point in time, but the issue is hit
before the earliest entries available in them. I need the logs
from an earlier time.
And along with the entire tier
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi,
I've got this strange problem where a striped endpoint will crash when
I try to use cp to copy files off of it but not when I use rsync to
copy files off:
[user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py':
Software caused connection abort
cp: closing
2000 May 03
2
NT SMB Support parameter questions
Hi,
I've recently had the following experience, which may be of help to
people here, so I'm mentioning my tests and conclusions.
I've recently been observing this behaviour from Samba - this is an
easily reproducible problem:-
1/ Open a share
2/ Create a new document in Word.
3/ Make some changes.
4/ Save the document using the save icon on the toolbar, this works as
expected.
5/
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
If you create a volume with replica 2 arbiter 1
you create 2 data bricks that are mirrored (makes 2 file copies)
+
you create 1 arbiter that holds metadata of all files on these bricks.
You "can" create them all on the same server, but this makes no sense,
because when that server goes down, no files on these disks are
accessible anymore,
which is why best practice is to spread out over 3
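A sketch of that spread-out layout, with two replica sets across three servers (volume name, hostnames, and paths all hypothetical):

gluster volume create myvol replica 3 arbiter 1 \
  srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/arb1 \
  srv1:/bricks/b2 srv2:/bricks/b2 srv3:/bricks/arb2

Every third brick in the list becomes the arbiter of its replica set.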
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command-lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
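For reference, the command-lines being asked about are typically of this shape (hostnames hypothetical):

# server side: add a peer to the trusted pool
gluster peer probe gluster2

# client side: FUSE-mount the volume
mount -t glusterfs gluster1:/gv0 /mnt/gv0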
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
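After an add-brick like this, the new arbiter bricks are populated by self-heal; progress can be checked with, for example:

gluster volume heal VMS info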
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
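For reference, the settings in that group-virt.example file can be applied in one step via the predefined volume group (volume name hypothetical):

gluster volume set myvol group virt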