similar to: Building a NFS server with a mix of HDD and SSD (for caching)

Displaying 13 results from an estimated 13 matches similar to: "Building a NFS server with a mix of HDD and SSD (for caching)"

2020 Mar 24
0
Building a NFS server with a mix of HDD and SSD (for caching)
Hi, > Hi list, > > I'm building an NFS server on top of CentOS 8. > It has 8 x 8 TB HDDs and 2 x 500GB SSDs. > The spinning drives are in a RAID-6 array. They are 4K sector size. > The SSDs are in a RAID-1 array with a 512-byte sector size. > > > I want to use the SSDs as a cache using dm-cache. So here is what I've done > so far: > /dev/sdb ==> SSD raid1
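
A minimal sketch of the lvmcache steps the poster appears to be following, based on lvmcache(7); the volume group name vg_nfs, the origin LV lv_export, the cache LV sizes, and /dev/sdb as the SSD RAID-1 device are all assumptions, not details from the thread:

    # add the SSD device to the existing volume group
    pvcreate /dev/sdb
    vgextend vg_nfs /dev/sdb

    # create cache data and cache metadata LVs on the SSD only
    lvcreate -n lv_cache -L 450G vg_nfs /dev/sdb
    lvcreate -n lv_cache_meta -L 1G vg_nfs /dev/sdb

    # combine them into a cache pool, then attach it to the origin LV
    lvconvert --type cache-pool --poolmetadata vg_nfs/lv_cache_meta vg_nfs/lv_cache
    lvconvert --type cache --cachepool vg_nfs/lv_cache vg_nfs/lv_export

The mismatch between the 4K-sector HDD array and the 512-byte-sector SSD array is the detail the thread goes on to discuss; the sketch above does not address it.
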
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu, or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
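
For reference, a hedged sketch of how the cache state can be inspected while reproducing a problem like this, assuming a cached LV named vmdisk in a VG named vg (hypothetical names); the lvs reporting fields are standard lvm2 report options in releases of this era:

    # show cache mode, policy and how many cache blocks are used/dirty
    lvs -a -o lv_name,segtype,cache_mode,cache_policy,cache_used_blocks,cache_dirty_blocks vg

    # the raw device-mapper status line for the cached LV
    dmsetup status vg-vmdisk
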
2016 Jan 22
4
LVM mirror database to ramdisk
I'm still running CentOS 5 with Xen. We recently replaced a virtual host system board with an Intel S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core Xeon with 48G RAM, max 96G. The drives are SSD. I was recently asked to move an InterBase server from Windows 7 to Windows Server. The database is 30G. I'm speculating that if I put the database on a 35G
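
A rough sketch of the idea being floated, assuming the brd ramdisk module, a VG named vg0 and a database LV named lv_db (all hypothetical); this is speculation about the setup, not a recommendation:

    # create a ~35G ramdisk (brd's rd_size is in KiB)
    modprobe brd rd_nr=1 rd_size=36700160

    # add it to the volume group and mirror the database LV onto it
    pvcreate /dev/ram0
    vgextend vg0 /dev/ram0
    lvconvert -m1 --mirrorlog core vg0/lv_db /dev/ram0

The ramdisk leg vanishes on every reboot, so the mirror would have to be re-added and resynced each time, which is one obvious drawback of the idea.
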
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to set up a mirrored logical volume. lvconvert tells me I don't have enough free space, even though I have hundreds of gigabytes free on both physical volumes. Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data Insufficient suitable allocatable extents for logical volume : 10240 more required Any ideas? Thanks! Gordon Here's the output from the
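
A short sketch of the usual things to check in this situation, reusing the vg1 path from the command above; note that an -m1 conversion needs free extents for the mirror image plus a small mirror log, and the log normally has to land on a device that does not hold either image:

    # how much unallocated space each PV really has, and where
    pvs -o pv_name,vg_name,pv_size,pv_free
    vgs -o vg_name,vg_size,vg_free,vg_free_count

    # retry, keeping the mirror log in memory instead of on a third device
    # (newer lvm2 spells this --mirrorlog core)
    lvconvert -m1 --corelog /dev/vg1/iscsi_deeds_data
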
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl > wrote: > Hello, > > I would really appreciate some help/guidance with this problem. First of > all, sorry for the long message. I would file a bug, but do not know if it > is my fault, dm-cache, qemu or (probably) a combination of both. And I can > imagine some of
2015 Jun 26
1
LVM hatred, was Re: /boot on a separate partition?
On Fri, Jun 26, 2015 at 10:51 AM, Gordon Messmer <gordon.messmer at gmail.com> wrote: >> , or alternatively making the LVs >> redundant after install is a single command (each) and you can choose >> whether it should be mere mirroring or some MD managed RAID level (modulo >> the LVM RAID MD monitoring issue). > > > I hadn't realized that. That's an
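
The "single command (each)" being referred to is presumably along these lines; a hedged sketch assuming a VG named vg and an LV named root (both hypothetical):

    # classic LVM mirror segment type
    lvconvert -m1 vg/root

    # or an MD-managed RAID level (dm-raid), e.g. raid1
    lvconvert --type raid1 -m1 vg/root
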
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone, has anybody had the chance to test out this setup and reproduce the problem? I assumed it would be something that's used often these days and a solution would benefit a lot of users. If I can be of any assistance, please contact me. -- Kind regards, Richard Landsman http://rimote.nl T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00), 24/7 for outages: +31 (0)6 - 4388
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
Hi guys and gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says "...between striped/raid0 and raid10." but gives no details; I could not find documentation or a howto anywhere. many thanks, L.
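
For what it's worth, a sketch of what the lvmraid(7) takeover wording seems to point at, assuming a recent lvm2 and an LV vg/lv10 of segment type raid10 (names hypothetical); whether this actually succeeds may depend on the lvm2 version and the image layout:

    # confirm the current segment type, stripe count and devices
    lvs -a -o lv_name,segtype,stripes,devices vg

    # attempt the takeover to raid0, per lvmraid(7)
    lvconvert --type raid0 vg/lv10
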
2015 Nov 24
0
LVM - how to change lv from linear to striped? Is it possible?
Hi All. Currently I am trying to change a logical volume from linear to striped because I would like to have better write throughput. I would like to perform this change "live" without stopping access to this LV. I have found two interesting examples: http://community.hpe.com/t5/System-Administration/Need-to-move-the-data-from-Linear-LV-to-stripped-LV-on-RHEL-5-7/td-p/6134323
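
One path for doing this online, roughly as described in lvmraid(7) for recent lvm2 (likely newer than what shipped with the releases current when this was asked), goes through the raid personalities rather than converting directly; a rough sketch, assuming a VG vg, an LV lv_data and enough PVs for the target stripe count (all names hypothetical):

    # 1. mirror the linear LV so it becomes a 2-image raid1
    lvconvert --type raid1 -m1 vg/lv_data

    # 2. convert to raid5_n, then reshape to the desired number of stripes
    lvconvert --type raid5_n vg/lv_data
    lvconvert --stripes 2 vg/lv_data

    # 3. once the reshape has finished, drop the parity to end up striped
    lvconvert --type striped vg/lv_data
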
2016 Jan 22
0
LVM mirror database to ramdisk
On Fri, Jan 22, 2016 at 11:02 AM, Ed Heron <Ed at heron-ent.com> wrote: > I'm still running CentOS 5 with Xen. > > We recently replaced a virtual host system board with an Intel > S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core > Xeon with 48G RAM, max 96G. The drives are SSD. > > I was recently asked to move an InterBase server from
2016 Jan 22
2
LVM mirror database to ramdisk
On Fri, 2016-01-22 at 14:56 -0600, NightLightHosts Admin wrote: > On Fri, Jan 22, 2016 at 11:02 AM, Ed Heron <Ed at heron-ent.com> wrote: > > I'm still running CentOS 5 with Xen. > > > > We recently replaced a virtual host system board with an Intel > > S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core > > Xeon with 48G RAM, max
2012 May 30
0
LVM superblock version
Is there an lvm command to print out any kind of version information for the LVM superblock, similar to what "mdadm -E" does for RAID? How can I tell whether a mountable device with LVM volumes on it can be safely moved between CentOS 5 and CentOS 6 and/or potentially other Linux distributions? I know that CentOS 6 supports 'lvconvert --merge'. Is this implementation purely in
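
A hedged sketch of two things that can be checked here: the on-disk metadata format of each PV, and a plain-text dump of the VG metadata for inspection (the VG name myvg is hypothetical):

    # the "Fmt" field shows whether a PV uses the lvm1 or lvm2 on-disk format
    pvs -o pv_name,vg_name,pv_fmt

    # dump the current VG metadata as text
    vgcfgbackup -f /tmp/myvg-metadata.txt myvg
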
2023 Nov 09
0
can I convert a linear thin pool to raid1?
In the past, I've used LVM on MD RAID, and I'd like to try using LVM RAID in order to also add dm-integrity data to some LVs. I've added new PVs to my VG, and I've converted some of my LVs to raid1 types, but I also have one thin pool that I use for VMs with multiple layers of snapshots. That pool can't be converted directly: # lvconvert --type raid1 -m 1
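
A heavily hedged sketch of the usual direction for this: a thin pool is a stacked LV, so the conversion is generally attempted on its hidden data and metadata sub-LVs rather than on the pool itself. Whether lvconvert accepts this directly depends on the lvm2 version, and the names below (vg, pool_tdata, pool_tmeta) are the conventional sub-LV names, assumed here rather than taken from the thread:

    # the hidden _tdata and _tmeta sub-LVs show up with -a
    lvs -a -o lv_name,segtype,devices vg

    # attempt raid1 on the sub-LVs (may be refused by older lvm2)
    lvconvert --type raid1 -m1 vg/pool_tdata
    lvconvert --type raid1 -m1 vg/pool_tmeta
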