similar to: LVM RAID0 and SSD discards/TRIM

Displaying 20 results from an estimated 4000 matches similar to: "LVM RAID0 and SSD discards/TRIM"

2013 Aug 21
2
fsck.ext4 Failed to optimize directory
I had a rather large ext4 partition on an Areca RAID shut down uncleanly while it was writing. When I mount it again, it recommends fsck, which I do, and I get the following error: Failed to optimize directory ... EXT2 directory corrupted. This error shows up every time I run fsck.ext4 on this partition. How can I fix this? The file system seems to work OK otherwise; I can mount it and it
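A hedged sketch of the usual repair sequence (the device name is a placeholder); the -D pass is the one that rebuilds the directories the error complains about:

  # umount /dev/sdb1        # the filesystem must be offline for repair
  # e2fsck -fy /dev/sdb1    # force a full pass, answer yes to every fix
  # e2fsck -fD /dev/sdb1    # -D rebuilds and re-optimizes directories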
2013 Mar 24
5
How to make a network interface come up automatically on link up?
I have a recently installed Mellanox VPI interface in my server. This is an InfiniBand interface which, through the use of adapters, can also do 10GbE over fiber. I have one of the adapter's two ports configured for 10GbE in this way, with a point-to-point link to a Mac workstation with a Myricom 10GbE card. I've configured this interface on the Linux box (eth2) using
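For reference, a minimal /etc/sysconfig/network-scripts/ifcfg-eth2 for such a point-to-point link might look like this (addresses are placeholders; ONBOOT/HOTPLUG cover boot-time and device hotplug, while reacting to cable link state generally needs a udev rule or NetworkManager on top):

  DEVICE=eth2
  BOOTPROTO=none
  ONBOOT=yes
  HOTPLUG=yes
  NM_CONTROLLED=no
  IPADDR=192.168.10.1
  NETMASK=255.255.255.0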
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) with a couple of disk arrays connected to an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, but I'm not sure about that one yet). So, if I use
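A quick way to separate page-cache effects from real array throughput when testing writes (the target path is a placeholder):

  # dd if=/dev/zero of=/array/testfile bs=1M count=4096 oflag=direct    # bypass the page cache entirely
  # dd if=/dev/zero of=/array/testfile bs=1M count=4096 conv=fdatasync  # include the final flush in the timing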
2013 Mar 26
1
ext4 deadlock issue
I'm having an occasional problem with a box. It's a Supermicro 16-core Xeon, running CentOS 6.3 with kernel 2.6.32-279.el6.x86_64, 96 gigs of RAM, and an Areca 1882ix-24 RAID controller with 24 disks, 23 in RAID6 plus a hot spare. The RAID is divided into 3 partitions, two of 25 TB plus one for the rest. Lately, I've noticed sporadic hangs on writing to the RAID, which
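When a write hangs like this, the kernel can be asked where every blocked task is stuck; a small diagnostic sketch:

  # echo w > /proc/sysrq-trigger    # dump all blocked (D-state) tasks to the kernel log
  # dmesg | tail -n 60              # look for the stuck call chains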
2013 Apr 26
1
Why is my default DISPLAY suddenly :3.0?
I'm on CentOS 6.3. After a reboot, some proprietary software didn't want to run. I found out that the startup script for said software manually sets DISPLAY to :0.0, which I know is not a good idea, and I can fix. However, this still doesn't explain why my default X DISPLAY is suddenly :3.0. -- Joakim Ziegler - Post-production Supervisor - Terminal joakim at terminalmx.com
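To see why :3.0 exists at all, it can help to first list the X servers that are actually running; a small diagnostic sketch:

  $ ls /tmp/.X11-unix/      # one socket per running X server: X0, X3, ...
  $ ps aux | grep [X]org    # shows the display number each server was started with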
2013 Mar 23
2
"Can't find root device" with lvm root after moving drive on CentOS 6.3
I have an 8-core SuperMicro Xeon server with CentOS 6.3. The OS is installed on a 120 GB SSD connected by SATA; the machine also contains an Areca SAS controller with 24 drives connected. The motherboard is a SuperMicro X9DA7. When I installed the OS, I used the default options, which create an LVM volume group to contain / and /home, and keep /boot and /boot/efi outside the volume group.
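A hedged sketch of what a CentOS 6 grub.conf kernel line usually needs for an LVM root (volume group and LV names here are placeholders), plus the command to rebuild the initramfs after the hardware change:

  kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_sys-lv_root rd_LVM_LV=vg_sys/lv_root rd_LVM_LV=vg_sys/lv_swap
  # dracut -f /boot/initramfs-$(uname -r).img $(uname -r)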
2014 Oct 14
2
CentOS 6.4 kernel panic on boot after upgrading kernel to 2.6.32-431.29.2
I'm on a Supermicro server, X9DA7 motherboard, Intel C602 chipset, 2x 2.4GHz Intel Xeon E5-2665 8-core CPU, 96GB RAM, and I'm running CentOS 6.4. I just tried to use yum to upgrade the kernel from 2.6.32-358 to 2.6.32-431.29.2. However, I get a kernel panic on boot. The first kernel panic I got included stuff about acpi, so I tried adding noacpi noapic to the kernel boot parameters,
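Until the panic is diagnosed, the machine can be pinned to the still-working kernel; a sketch assuming the old 2.6.32-358 kernel is the second grub entry:

  # grep ^title /etc/grub.conf                       # list installed kernel entries; the top entry is 0
  # sed -i 's/^default=0/default=1/' /etc/grub.conf  # boot the older kernel by default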
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs, which are also encrypted. I had a crash the other day and, due to a bug in the operating system update, I am unable to boot the system in RAID mode, since dracut does not recognize the disks in grub. After modifying the grub command line I am able to boot the system from one of the hard disks after entering the
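Assuming the BIOS RAID is Intel firmware RAID (IMSM) handled by mdadm, these commands show whether the two members still agree (device names are placeholders):

  # mdadm --examine /dev/sda    # dump the firmware RAID metadata on a member disk
  # mdadm --detail --scan       # see what, if anything, was assembled at boot
  # lsblk -o NAME,TYPE,SIZE,MOUNTPOINT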
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote: > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid stripe size is). > > if its a raid 1
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote: > > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux? I assume raid0 means striped activity in a parallel mode, at least similar to raid0 in mdadm. How can I measure the btrfs read speed, since it is copy-on-write, which is not the norm in mdadm raid0? Perhaps I cannot use the same approach in btrfs to determine the performance. Secondly, I see a methodology for raid10 using
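One common way to measure sequential read throughput through the filesystem regardless of copy-on-write (file and mount point are placeholders):

  # sync; echo 3 > /proc/sys/vm/drop_caches      # drop the page cache so reads hit the disks
  # dd if=/mnt/btrfs/bigfile of=/dev/null bs=1M  # sequential read through the filesystem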
2009 Sep 24
1
Problem with raid0
Hey! I have a big problem with my CentOS 5.1. I have two hard disks, 500GB and 40GB, and they are in RAID0. I would like to remove the small 40GB HD, put in a new 500GB HD, and I don't want RAID0 anymore. How can I copy data from the 40GB HD to the 500GB HD and switch the 40GB HD to the new 500GB HD? Another question: Can I just copy all my files to a Windows laptop and if I want
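Assuming the array can still be mounted, a plain file-level copy sidesteps RAID entirely; a sketch with placeholder device names:

  # mount /dev/sdc1 /mnt/new       # the new 500GB disk
  # rsync -aHAXv /data/ /mnt/new/  # copy preserving owners, hardlinks, ACLs and xattrs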
2020 Nov 05
0
BIOS RAID0 and differences between disks
Is it RAID0 (striped) or RAID1 (mirrored)? If you wrote on half of a raid0 stripe set, you basically trashed it. Blocks are striped across both drives, so like 16k on the first disk, then 16k on the 2nd, then 16k back on the first, repeat (replace 16k with whatever your raid stripe size is). If it's a RAID1 mirror, then either disk by itself has the complete file system on it, so you should be
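The chunk-to-disk mapping described above can be checked with a line of shell arithmetic (a 16k chunk size and 2 disks are assumed):

  $ chunk=$((16*1024)); ndisks=2
  $ offset=$((16*1024))                     # byte offset into the array
  $ echo $(( (offset / chunk) % ndisks ))   # which disk holds this chunk
  1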
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
Hi guys, gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says: "...between striped/raid0 and raid10." but no details; nowhere could I find documentation, nor a howto. Many thanks, L.
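If the installed lvm2 is new enough to support takeover, the documented syntax would be the following (vg/lv is a placeholder); whether raid10 converts to raid0 directly or needs an intermediate step depends on the lvm2 version, per lvmraid(7):

  # lvconvert --type raid0 vg/lv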
2012 Feb 07
2
Understanding Default RAID Behavior
The wiki does not make it clear why adding a secondary device defaults to RAID1 metadata and RAID0 data. I bought two SSDs with the intention of doing a BTRFS RAID0 for my root. What is the difference between forcing RAID0 on metadata and data as opposed to the default behavior? Can anyone clarify that? Thank you for your time, Mario
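Stating both profiles explicitly removes the guesswork; a sketch (device names and mount point are placeholders; the balance convert filters need kernel 3.3 or later):

  # mkfs.btrfs -d raid0 -m raid1 /dev/sda /dev/sdb             # explicit profiles at creation time
  # btrfs balance start -dconvert=raid0 -mconvert=raid0 /mnt   # convert an already-created filesystem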
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
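For what it's worth, ZFS's dynamic striping across top-level vdevs is not classic RAID0: each block is written whole to a single vdev rather than being split across all of them, so the command from the post behaves closer to a load-balanced concat than to another striping layer:

  # zpool create myPool lun-1 lun-2 lun-3
  # zpool status myPool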
2007 Apr 28
1
Problems with RAID0 array on new server
Hello, I recently installed CentOS 5 on a new server with a single SCSI disk. After the installation, I added 2 additional disks that were once the components of a raid0 array on another server. I get some errors and am unable to start the array; the following is an extract from dmesg output:
md: Autodetecting RAID arrays.
md: could not open unknown-block(8,17).
md: could not open
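Kernel autodetect only picks up partitions of type 0xfd with valid 0.90 superblocks; assembling by hand usually tells you more (partition names are placeholders):

  # mdadm --examine /dev/sdb1 /dev/sdc1    # check the superblocks on the moved disks
  # mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1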
2012 Mar 10
4
Any recommendations on Perc H700 controller on Dell Rx10 ?
Hi folks: At work, I have an R510, an R610 and an R710, all with the H700 PERC controller. Based on experiments, it seems there is no way to bypass the PERC controller; one can only access the individual disks if each is set up as a single-disk RAID0. This brings me to ask some questions: a. Is it fine (in terms of an intelligent controller getting in the way of ZFS) to have the
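Since the H700 is LSI-based, the usual workaround is one single-disk RAID0 per drive; a sketch assuming the LSI megacli tool works against this firmware (the [enclosure:slot] value is a placeholder):

  # megacli -PDList -aALL            # list enclosure:slot IDs of the physical disks
  # megacli -CfgLdAdd -r0 [32:2] -a0 # one single-disk RAID0 logical drive per disk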
2011 Oct 23
2
ssd quandry
On a CentOS 6 64bit system, I added a couple of prototype SAS SSDs on an HP P411 RAID controller (I believe this is a rebranded LSI megaraid with HP firmware) and am trying to format them for best random IO performance with something like postgresql. So I used the raid command tool to build a raid0 with 2 SAS SSDs:
# hpacucli ctrl slot=1 logicaldrive 3 show detail
Smart Array P410 in Slot 1
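For reference, creating such a logical drive looks roughly like this (drive IDs are placeholders):

  # hpacucli ctrl slot=1 create type=ld drives=1I:1:3,1I:1:4 raid=0
  # hpacucli ctrl slot=1 logicaldrive 3 show detail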
2010 Oct 28
0
RAID0 limiting disk utilization
I noticed that if I have single-device allocation for data in a multi-device btrfs filesystem, a balance operation will convert the data to RAID0. This is true even if '-d single' is specified explicitly when creating the filesystem. Then it wants to continue using RAID0 for future data allocations, and I run out of space once there are no longer two drives with space
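On kernels new enough to have balance filters (3.3 and later), the conversion can be forced back; a sketch with a placeholder mount point:

  # btrfs balance start -dconvert=single /mnt    # convert the data chunks back to single
  # btrfs filesystem df /mnt                     # confirm the allocation profiles afterwards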