Displaying 20 results from an estimated 7000 matches similar to: "LVM and thin snapshots"
2019 Sep 27
0
LVM and thin snapshots
Hi there!
I'm new here.
I'm trying to install CentOS 8 on my laptop. I also want to have a
contingency plan in case something breaks.
My first thoughts were to use LVM.
The idea is that I would take multiple snapshots of the system during
the day/week and keep them for a short time.
I read the RHEL documentation about how LVM works and pretty much understand
how to manage the above scenario
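The plan above maps onto LVM thin provisioning roughly like this (a minimal sketch; the device, volume group, and LV names are hypothetical and the sizes are placeholders):

```shell
# Hypothetical device and names; adjust sizes for your disk.
pvcreate /dev/sda2
vgcreate vg_sys /dev/sda2
lvcreate --type thin-pool -L 100G -n pool0 vg_sys
lvcreate --thin -V 60G -n root vg_sys/pool0

# Later, take a cheap copy-on-write snapshot of the thin LV:
lvcreate -s -n root_pre_update vg_sys/root
# Thin snapshots are created with activation skipped; use -K to activate one:
lvchange -ay -K vg_sys/root_pre_update
# Drop it once it is no longer needed:
lvremove vg_sys/root_pre_update
```

Unlike classic LVM snapshots, thin snapshots need no preallocated size, which suits the "many short-lived snapshots" plan.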
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Dear Strahil,
Thank you very much for pointing me to the Red Hat documentation. I wasn't aware of it and it is much more detailed. I will have to read it carefully.
Now as I have a single disk (no RAID) based on that documentation I understand that I should use a data alignment value of 256kB.
Best regards,
Mabi
------- Original Message -------
On Wednesday, June 7th, 2023 at 6:56 AM,
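For a single disk, that 256K value would be passed to pvcreate directly (the device name is an assumption):

```shell
# Single-disk (JBOD) brick: 256K alignment per the Red Hat guidance.
pvcreate --dataalignment 256K /dev/sdb
# Verify: pe_start should be a multiple of 256 KiB.
pvs -o +pe_start /dev/sdb
```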
2019 Dec 20
1
GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases).
In such way, when the primary is lost, your client can reach a backup one without disruption.
P.S.: Client may 'hang' - if the primary server got
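The option goes on the glusterfs mount itself, e.g. (server names, volume name, and mount point are hypothetical):

```shell
# Ad-hoc mount with fallback servers:
mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/myvol /mnt/gluster

# Or persisted in /etc/fstab:
# server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0
```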
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Have you checked this page: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration ?
The alignment depends on the HW raid stripe unit size.
Best Regards,
Strahil Nikolov
On Tue, Jun 6, 2023 at 2:35, mabi <mabi at protonmail.ch> wrote:
Hello,
I am preparing a brick as LVM thin volume for a test slave node using this
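The rule of thumb in that guide is that the alignment equals the full stripe: the stripe unit size times the number of data disks. A tiny helper to illustrate the arithmetic (the RAID 6 example values are assumptions):

```python
def data_alignment_kib(stripe_unit_kib: int, data_disks: int) -> int:
    """Full-stripe size in KiB: the --dataalignment value for a RAID brick."""
    return stripe_unit_kib * data_disks

# RAID 6 over 12 disks has 10 data disks; with a 128 KiB stripe unit:
print(data_alignment_kib(128, 10))  # 1280, i.e. --dataalignment 1280K
# Single disk (no RAID) degenerates to the stripe unit itself:
print(data_alignment_kib(256, 1))   # 256, i.e. --dataalignment 256K
```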
2019 Dec 25
2
Raspberri PI 4B 4GB install image
Thanks a lot.
I will give it a try , once the RPi is here.
Best Regards,
Strahil Nikolov
On Dec 25, 2019 18:46, Akemi Yagi <amyagi at gmail.com> wrote:
>
> On Wed, Dec 25, 2019 at 7:53 AM Strahil via CentOS <centos at centos.org> wrote:
> >
> > Hello Community,
> >
> > I'm waiting for my first ARM-based toy - a 4GB Raspberry Pi 4B and I was
2019 Dec 27
0
GFS performance under heavy traffic
Hi David,
Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
Also, the gluster client should remount in order to bump the gluster op-version.
What kind of workload do you have?
I'm asking as there are predefined (and recommended) settings located at /var/lib/glusterd/groups.
You
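Checking and bumping the op-version, and applying one of those predefined groups, looks roughly like this (the op-version number and volume name are examples):

```shell
# Current and maximum op-version supported by all nodes in the cluster:
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
# After all nodes are upgraded, bump the cluster op-version (example value):
gluster volume set all cluster.op-version 70200
# Apply a predefined tuning profile, e.g. the one for VM workloads:
gluster volume set myvol group virt
```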
2019 Dec 28
1
GFS performance under heavy traffic
Hi David,
It seems that I have misread your quorum options, so just ignore that from my previous e-mail.
Best Regards,
Strahil Nikolov
On Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote:
>
> Hi David,
>
> Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
2019 Oct 01
1
Centos 8: Multiple bugs with email/calendar
First of all, thanks for your fast reply.
Second:
The server I'm trying to use is https://dav.mailbox.org/caldav. I don't
think it's a problem with my configuration (although I can always be
wrong). I suspect a bug in Evolution because sometimes it works.
Sometimes I manage to create an event, sometimes I can't.
I started Evolution from a terminal and I noticed that when I fail to
create an event I
2019 Oct 01
0
Centos 8: Multiple bugs with email/calendar
On Tue, 1 Oct 2019 at 03:40, Georgios <gpdsbe+centos at mailbox.org> wrote:
>
> Hi there!
> I recently installed centos 8 on my laptop.
>
> I have the following problems
>
> 1. I tried to use the default Evolution 3.28.
> The problem with 3.28 is that when I try to create an event I get the error
> "Failed to create an event in the calendar 'CalDAV :
2019 Oct 13
0
Browser doesnt work
On Oct 13, 2019, at 11:43 AM, Georgios <gpdsbe+centos at mailbox.org> wrote:
>
> Hello.
>
> I'm new on CentOS
> I recently installed CentOS 8 and I have the following problem with my
> browser:
>
> I can't play media in the browser.
>
> For example, I can't play:
> https://twitter.com/Ffs_OMG/status/1183397555914366976
CentOS doesn't ship with certain
2023 Feb 14
1
File\Directory not healing
I guess you didn't receive my last e-mail.
Use getfattr and identify whether the gfids mismatch. If yes, move away the mismatched one.
In order for a dir to heal, you have to fix all files inside it before it can be healed.
Best Regards,
Strahil Nikolov
On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote:
I've touched the directory one
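A sketch of the gfid check (the brick path and volume name are hypothetical):

```shell
# Run on each brick hosting a replica of the file and compare trusted.gfid:
getfattr -d -m . -e hex /data/brick1/vol/path/to/file
# If one brick reports a different trusted.gfid, move that copy (and its
# hard link under the brick's .glusterfs/ tree) out of the brick, then:
gluster volume heal myvol
```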
2023 Jun 05
1
How to find out data alignment for LVM thin volume brick
Hello,
I am preparing a brick as LVM thin volume for a test slave node using this documentation:
https://docs.gluster.org/en/main/Administrator-Guide/formatting-and-mounting-bricks/
but I am confused regarding the right "--dataalignment" option to be used for pvcreate. The documentation mentions the following under point 1:
"Create a physical volume(PV) by using the pvcreate
2023 Feb 14
1
File\Directory not healing
I've touched the directory one level above the directory with the I/O issue,
as the one above that is the one showing as dirty.
It hasn't healed. Should the self heal daemon automatically kick in here?
Is there anything else I can do?
Thanks
David
On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> You can always mount it locally on any of the
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well,
you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks?
Best Regards,
Strahil Nikolov
On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to
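One way to express that ordering is a systemd drop-in that ties the container unit to the gluster mount, so the container is stopped before the mount on shutdown (the unit name and mount path are hypothetical):

```ini
# /etc/systemd/system/my-container.service.d/order.conf (sketch)
[Unit]
# Start after the mount unit for the fuse mount point and stop before it:
RequiresMountsFor=/mnt/gluster
After=glusterd.service
```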
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to provide access to the container.
The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via bind.
Thanks,
Anant
________________________________
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards
2018 Apr 08
1
Wiki update
Hello Community,
my name is Strahil Nikolov (hunter86_bg) and I would like to update the
following wiki page.
In section "Create the New Initramfs or Initrd" there should be an
additional line for CentOS7:
mount --bind /run /mnt/sysimage/run
The 'run' directory is needed especially if you need to start the
multipathd.service before recreating the initramfs ('/' is on
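In context, the rescue-mode sequence would look like this (the kernel version is an example; inside the chroot, `uname -r` would report the rescue kernel, so name the installed version explicitly):

```shell
# From the rescue shell, after the installer mounts the system at /mnt/sysimage:
mount --bind /run /mnt/sysimage/run
chroot /mnt/sysimage
# Rebuild the initramfs for the *installed* kernel:
KVER=3.10.0-862.el7.x86_64   # example; pick your version from /boot
dracut -f /boot/initramfs-$KVER.img $KVER
```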
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant,
Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse and then use bind mount to provide access to the container?
Best Regards,
Strahil Nikolov
On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
2024 Feb 26
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
In our setup, the Gluster brick comes from an iSCSI SAN storage and is then used as a brick on the Gluster server. To extend the brick, we stop the Gluster server, extend the logical volume (LV) on the SAN server, resize it on the host, mount the brick with the extended size, and finally start the Gluster server.
Please let me know if this process can be optimized, I will be happy to
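If the PV sits directly on the iSCSI LUN, the resize can often be done online, which may avoid stopping Gluster at all (the device, VG/LV names, and mount point are assumptions):

```shell
# After growing the LUN on the SAN, rescan it on the host:
echo 1 > /sys/block/sdb/device/rescan
# Grow the PV, the LV, and the mounted XFS filesystem, all online:
pvresize /dev/sdb
lvextend -L +100G /dev/vg_brick/brick1
xfs_growfs /bricks/brick1
```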