Displaying 20 results from an estimated 40000 matches similar to: "ZFS and encryption on CentOS 7"
2023 Jan 11
1
Upgrading system from non-RAID to RAID1
I plan to upgrade an existing C7 computer, which currently has one 256 GB SSD, to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest of the system remaining the same. The system also has one additional internal and one external hard disk, but these should not be touched. The system will continue to run C7.
If I remember correctly, the existing SSD does not use an M.2 slot so they
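As a rough illustration of the mdadm side of such an upgrade (device names /dev/nvme0n1 and /dev/nvme1n1 are hypothetical, and the migration of the existing data is not shown), the mirror could be created and made persistent roughly like this:

    # create a RAID1 array from the two new M.2 SSDs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    # record the array so it is assembled at boot, then rebuild the initramfs
    mdadm --detail --scan >> /etc/mdadm.conf
    dracut -f
    # watch the initial resync
    cat /proc/mdstat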
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results using an SSD as LVM cache for gluster bricks
(http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on
bricks.
On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote:
> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> > Has anyone made a performance comparison between XFS and ZFS with ZIL
> > on
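For reference, a minimal lvmcache setup for a brick, assuming a volume group named gluster, a brick LV gluster/brick1 and an SSD at /dev/sdf (all hypothetical names), might look roughly like:

    # add the SSD to the brick's volume group
    pvcreate /dev/sdf
    vgextend gluster /dev/sdf
    # create a cache pool on the SSD and attach it to the brick LV
    lvcreate --type cache-pool -L 200G -n brick1_cache gluster /dev/sdf
    lvconvert --type cache --cachepool gluster/brick1_cache gluster/brick1

See lvmcache(7) for the choice of cache mode (writethrough vs. writeback) and for detaching the cache again.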
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> Has anyone made a performance comparison between XFS and ZFS with ZIL
> on SSD in a gluster environment?
>
> I've tried to compare both on another SDS (LizardFS) and I haven't
> seen any tangible performance improvement.
>
> Is gluster different?
Probably not. If there is, it would probably favor
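For context, adding an SSD as a dedicated ZIL (slog) device to an existing pool is a one-liner; the pool name tank and the partitions /dev/sdf1 and /dev/sdg1 below are hypothetical:

    # single slog device
    zpool add tank log /dev/sdf1
    # or, safer, a mirrored slog
    zpool add tank log mirror /dev/sdf1 /dev/sdg1
    zpool status tank

A slog only accelerates synchronous writes; asynchronous workloads see little or no difference, which may explain why such comparisons often show none.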
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four
partitions of 16 GB each versus dedicating the entire SSD to a single
pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
versus
Scenario B:
2 TB Mirror w/ 64 GB read cache SSD
2 TB
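A sketch of what the two scenarios look like in zpool terms (pool and device names are hypothetical; on Solaris the devices would be c#t#d# slices rather than /dev/sdX partitions):

    # Scenario A: one 16 GB partition per pool
    zpool add pool1 cache /dev/sdf1
    zpool add pool2 cache /dev/sdf2
    zpool add pool3 cache /dev/sdf3
    zpool add pool4 cache /dev/sdf4
    # Scenario B: the whole 64 GB SSD as cache for a single pool
    zpool add pool1 cache /dev/sdf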
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote:
> Is it RAID 0 (striped) or RAID 1 (mirrored)?
>
> If you wrote on half of a RAID 0 stripe set, you basically trashed it.
> Blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd, then 16k back on the first, repeat (replace 16k with
> whatever your RAID stripe size is).
>
> If it's a RAID 1
2015 Mar 08
1
LVM encryption and new volume group
I'm sorry, but grep -i crypt /var/log/anaconda/anaconda.program.log
returns nothing. But I have got an entry in /etc/crypttab.
I only found this with grep -i luks /var/log/anaconda/anaconda.*:
/var/log/anaconda/anaconda.storage.log:20:47:55,959 DEBUG blivet:
LUKS.__init__:
/var/log/anaconda/anaconda.storage.log:20:49:25,009 DEBUG storage.ui:
LUKS.__init__:
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
The last time I read about tiering in Gluster, there wasn't any performance
gain with VM workloads, and moreover it doesn't speed up writes...
On 10 Oct 2017, 9:27 PM, "Bartosz Zięba" <kontakt at avatat.pl> wrote:
> Hi,
>
> Have you thought about using an SSD as a GlusterFS hot tier?
>
> Regards,
> Bartosz
>
>
> On 10.10.2017 19:59, Gandalf
2015 Mar 06
0
LVM encryption and new volume group
On Thu, Mar 5, 2015 at 10:25 PM, Tim <lists at kiuni.de> wrote:
> Hi Chris,
>
> thanks for your answer.
>
> It is the first time I decided to encrypt my LVM. I chose to encrypt the
> volume group, not each logical volume itself, because if I take LVM
> snapshots in that group they will be encrypted too?
Yes, anything that's COW'd is also encrypted in
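In other words, encrypting at the physical-volume level means every LV, and any snapshot LV, sits on the same encrypted device. A minimal sketch, with /dev/sda2 and the names cryptpv/vg_crypt chosen purely for illustration:

    cryptsetup luksFormat /dev/sda2
    cryptsetup open /dev/sda2 cryptpv      # 'cryptsetup luksOpen' on older releases
    pvcreate /dev/mapper/cryptpv
    vgcreate vg_crypt /dev/mapper/cryptpv
    lvcreate -L 50G -n root vg_crypt
    # a snapshot of any LV in this group lands on the same encrypted PV
    lvcreate -s -L 5G -n root_snap vg_crypt/root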
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs which are also encrypted. I had a crash the other day, and due to a bug in the operating system update I am unable to boot the system in RAID mode since dracut does not recognize the disks in GRUB. After modifying the GRUB command line I am able to boot the system from one of the hard disks after entering the
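Assuming the BIOS RAID is Intel firmware RAID (IMSM) handled by mdadm, as is typical on CentOS 7, the metadata dracut relies on can be inspected from a rescue shell; /dev/sda is a hypothetical member:

    # platform / firmware RAID capabilities
    mdadm --detail-platform
    # per-disk metadata (RAID level, member state)
    mdadm --examine /dev/sda
    mdadm --examine --scan
    # what, if anything, got assembled
    cat /proc/mdstat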
2020 Feb 16
1
Encrypted container on CentOS VPS
On 16.02.20 at 16:46, Subscriber wrote:
>
> ----- On Feb 16, 2020, at 5:18 PM, H agents at meddatainc.com wrote:
>
>> I wonder if it is possible to set up an encrypted "file container" on a CentOS
>> VPS?
>
> Yes. You can create a LUKS container on a CentOS VPS.
>
>> I am the root user of the VPS but the hosting company also has access to
>> the
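A LUKS file container is straightforward to sketch; the paths and names below are hypothetical, and older cryptsetup versions may need an explicit losetup on the image file first:

    fallocate -l 10G /root/container.img
    cryptsetup luksFormat /root/container.img
    cryptsetup open /root/container.img secretvol
    mkfs.xfs /dev/mapper/secretvol
    mkdir -p /mnt/secret
    mount /dev/mapper/secretvol /mnt/secret
    # when done
    umount /mnt/secret
    cryptsetup close secretvol

Keep in mind that while the container is open the key sits in RAM and the data is mounted in clear text, so a hosting provider with access to the hypervisor can still read it.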
2009 Apr 08
2
ZFS data loss
Hi,
I have lost a ZFS volume and I am hoping to get some help to recover the
information (a couple of months' worth of work :( ).
I have been using ZFS for more than 6 months on this project. Yesterday
I ran a "zvol status" command, the system froze and rebooted. When it
came back the discs were not available.
See below the output of "zpool status", "format"
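Depending on the ZFS version in use, the usual escalation when a pool disappears looks roughly like this (the pool name tank is hypothetical; the -F rewind and read-only imports only exist on newer releases):

    # does the system see the pool at all?
    zpool import
    # forced import, then a rewind/recovery import if that fails
    zpool import -f tank
    zpool import -F -f tank
    # a read-only import can sometimes still get the data off
    zpool import -o readonly=on -f tank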
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the pro version) as an L2ARC to the single mirrored pair. I'm running b134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
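For reference, attaching the SSD as L2ARC and checking how large the dedup table actually is can be done with (pool name tank and the device name are hypothetical):

    zpool add tank cache c7t0d0
    # dedup table statistics, to judge whether the DDT fits in ARC/L2ARC
    zpool status -D tank
    zdb -DD tank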
2009 Jul 29
0
LVM and ZFS
I'm curious about whether there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where using a device directly with ZFS causes errors on the console about "Bus" and lots of "stalled" I/O. But as soon as I wrap that device inside an LVM metadevice and then use it in the ZFS zpool, things work perfectly fine and smoothly (no
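A sketch of the wrapping described above, written with Solaris Volume Manager commands since "LVM metadevices" on Solaris usually means SVM (the slice names are hypothetical, and the metadb replicas only need to be created once):

    # state database replicas (once per system)
    metadb -a -f c1t0d0s7 c1t1d0s7
    # a simple one-way concat metadevice over the troublesome device
    metainit d10 1 1 c2t0d0s0
    # hand the metadevice to ZFS instead of the raw disk
    zpool create datapool /dev/md/dsk/d10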
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Has anyone made a performance comparison between XFS and ZFS with ZIL
on SSD in a gluster environment?
I've tried to compare both on another SDS (LizardFS) and I haven't
seen any tangible performance improvement.
Is gluster different?
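One way to make such a comparison concrete is a small synchronous-write test run on both brick filesystems; the fio invocation below is only a sketch, with the directory path hypothetical:

    fio --name=syncwrite --directory=/bricks/test --rw=randwrite --bs=4k \
        --size=1G --ioengine=psync --fsync=1 --numjobs=4 --group_reporting

Small synchronous writes are where an SSD slog should show up, if it shows up anywhere.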
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact situation.
I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800 GB) and
- p1 is used as part of a zfs mirror of rpool
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>
> Is it RAID 0 (striped) or RAID 1 (mirrored)?
>
> If you wrote on half of a RAID 0 stripe set, you basically trashed it.
> Blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd, then 16k back on the first, repeat (replace 16k with
> whatever your RAID
2007 May 12
3
zfs and jbod-storage
Hi.
I'm managing an HDS storage system which is slightly larger than 100 TB,
and we have used approx. 3/4 of it. We use VxFS. The storage system is
attached to a Solaris 9 host on SPARC via a fibre switch. The storage is
shared via NFS to our web servers.
If I were to replace VxFS with ZFS I could use raidz(2) instead of
the built-in hardware RAID controller.
Are there any JBOD-only storage
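The ZFS side of such a replacement is compact; the disk names and dataset layout below are hypothetical:

    # six JBOD disks in a raidz2 vdev
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    zfs create tank/web
    zfs set sharenfs=on tank/web
    zfs set compression=on tank/web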
2020 Nov 05
0
BIOS RAID0 and differences between disks
Is it RAID 0 (striped) or RAID 1 (mirrored)?
If you wrote on half of a RAID 0 stripe set, you basically trashed it.
Blocks are striped across both drives, so like 16k on the first disk, then
16k on the 2nd, then 16k back on the first, repeat (replace 16k with
whatever your RAID stripe size is).
If it's a RAID 1 mirror, then either disk by itself has the complete file
system on it, so you should be
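To answer the "which is it" question on a running system, the array level can be read directly from the kernel or from the member disks (/dev/md127 and /dev/sda are hypothetical):

    cat /proc/mdstat
    mdadm --detail /dev/md127
    # or read the on-disk metadata of one member
    mdadm --examine /dev/sda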
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
I think it may have something to do with my disks being encrypted. This
issue happened after updating systemd to version 236, ZoL to 0.7.4 and
kernel to 4.14.
I have always mounted the pool manually by first opening LUKS-encrypted
disks and after that issuing zpool import tank.
Is there any way to still use systemd to manage smbd or do I have to
just always start it manually?
This is how the
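One possibility, sketched here with hypothetical unit and path names, is a drop-in so that smbd is only ordered after the ZFS mount machinery, and is then started by hand once the pool has been unlocked and imported:

    # systemctl edit smbd.service  -> creates a drop-in such as
    # /etc/systemd/system/smbd.service.d/after-zfs.conf with:
    [Unit]
    After=zfs-mount.service zfs.target

    # after 'cryptsetup open ...' and 'zpool import tank':
    systemctl start smbd

Since the pool is imported manually, an automatic dependency cannot do much more than ordering; the start itself still has to be triggered after the import.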
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists - workaround found
Hi,
I was suffering for weeks from the following problem:
a zfs dataset contained an automatic (monthly) snapshot that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely empty apart from the snapshot, which still locked 2.8 TB on the pool.
'zfs destroy -r pool/dataset'
hung the machine within seconds
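One common approach, not necessarily the workaround referred to in the subject, is to get rid of the snapshot separately before destroying the dataset; the snapshot name below is hypothetical:

    # see which snapshots are holding the space
    zfs list -t snapshot -r pool/dataset
    # destroy the snapshot first, then the (now small) dataset
    zfs destroy pool/dataset@monthly-snap
    zfs destroy pool/dataset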