Displaying 20 results from an estimated 9000 matches similar to: "Ext3: Faster deletion for big files?"
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs?
- Is it possible to add/remove drives to a RAID6 array?
Regards,
Hans-Kristian
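For reference, the operations asked about here map onto the btrfs tools roughly as follows; a minimal sketch, assuming a filesystem mounted at /mnt and a spare device /dev/sdx (both hypothetical), with a kernel and btrfs-progs new enough to support RAID6:
# convert data and metadata of an existing filesystem to RAID6 via rebalance
btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt
# grow the array with an extra device, or shrink it again
btrfs device add /dev/sdx /mnt
btrfs device delete /dev/sdx /mnt
# scrub the filesystem and check the result
btrfs scrub start /mnt
btrfs scrub status /mnt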
2013 Dec 10
2
gentoo linux, problem starting vm´s when cache=none
Hello mailing list,
On a Gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7 and
virt-manager 0.10.0-r1:
when I set "cache=none" for a virtual machine in the disk menu, the machine
fails to start with:
<<
Error starting domain: internal error: process exited while connecting
to the monitor: qemu-system-x86_64: -drive
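For context, "cache=none" makes qemu open the disk image with O_DIRECT, which the filesystem holding the image has to support; a small troubleshooting sketch, with the image path and domain name being hypothetical:
# check whether the image's filesystem supports O_DIRECT at all
dd if=/var/lib/libvirt/images/test.img of=/dev/null bs=4k count=1 iflag=direct
# inspect which cache mode the domain XML actually ended up with
virsh dumpxml gentoo-vm | grep -i 'driver name'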
2006 May 04
1
Opinions about Thunder K8HM
Dear list,
Is anyone using a Tyan Thunder K8HM (S3892) with Xen? Or can you recommend a
(different) mainboard for:
- dual dual-core opteron
- about 16GB of RAM
- two Gbit ports (for GNBD)
- one (or more) SATA ports (system disk)
- at least one PCI-X 133 slot.
Thanks a lot.
--
/"\ Goetz Bock at blacknet dot de -- secure mobile Linux everNETting
\ / (c) 2006 Creative Commons,
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T
disks) and making one big RAID60 out of it, which Linux cannot make a
filesystem on, I've created 4 x RAID6 arrays which are each 3.64T.
I then do :
sfdisk /dev/sd{b,c,d,e} <<EOF
,,8e
EOF
to make a big LVM partition on each one.
But then when I do :
pvcreate /dev/sd{b,c,d,e}1
and then
pvdisplay
It shows each one as
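A DOS partition table written by sfdisk tops out at 2TiB, so if pvdisplay reports roughly 2T instead of 3.64T that is the usual suspect; a minimal sketch of the alternatives, with /dev/sdb standing in for each array:
# option 1: hand LVM the whole device, no partition table needed
pvcreate /dev/sdb
# option 2: GPT label plus a single LVM partition
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 lvm on
pvcreate /dev/sdb1
pvdisplay /dev/sdb1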
2013 Jun 11
1
cluster.min-free-disk working?
Hi,
I have a system consisting of four bricks, using 3.3.2qa3. I used the
command
gluster volume set glusterKumiko cluster.min-free-disk 20%
Two of the bricks were empty, and two were full to just under 80% when
building the volume.
Now, when syncing data (from a primary system) with min-free-disk set to
20%, I thought new data would go to the two empty bricks, but gluster
does not seem
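A quick way to confirm the option actually took effect and to watch per-brick usage while the sync runs; a small sketch, with the brick paths being hypothetical:
# reconfigured options are listed at the end of the volume info output
gluster volume info glusterKumiko
# watch how full each brick gets during the sync
df -h /export/brick1 /export/brick2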
2007 Dec 11
2
Ext3 Performance Tuning - the journal
Hello,
I have some performance problems in a file server system. It is used
as a Samba and NFS file server. I have some ideas about what might cause the
problems, and I want to try them step by step, but first I have to learn more
about these areas.
To start with, I have some questions about tuning/sizing the ext3 journal.
The most extensive list I found on ext3 performance tuning is
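For the journal specifically, the usual knobs are the journal size (set at mkfs time, or by recreating the journal with tune2fs) and the data journalling mode at mount time; a minimal sketch, with /dev/sdb1 and /srv/export as stand-ins:
# inspect the current journal parameters
dumpe2fs -h /dev/sdb1 | grep -i journal
# recreate the journal with a larger size (filesystem must be unmounted and clean)
tune2fs -O ^has_journal /dev/sdb1
tune2fs -J size=128 /dev/sdb1
# try a different data journalling mode at mount time
mount -o data=writeback /dev/sdb1 /srv/export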
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store a
number of redundant disks -- so instead of RAID5, RAID6, etc., we end up
with a single 'RAID56' flag, and the amount of redundancy is stored
elsewhere.
This attempts it, but I hate it and don't really want to do it. The type
field is designed as a bitmask, and _used_ as a bitmask in a number of
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
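For what it's worth, mismatch_cnt is only meaningful after a completed check pass; a small sketch of how it is usually read once the initial resync has finished, assuming the array really is /dev/md11:
# wait for the initial resync to finish, then run an explicit check
cat /proc/mdstat
echo check > /sys/block/md11/md/sync_action
# read the count once the check completes
cat /sys/block/md11/md/mismatch_cnt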
2014 Aug 28
2
Random Disk I/O Tests
I have two OpenVZ servers running CentOS 6.x, both with 32GB of RAM.
One is an Intel Xeon E3-1230 quad core with two 4TB 7200 SATA drives
in software RAID1. The other is an old HP DL380 dual quad core with 8
750GB 2.5" SATA drives in hardware RAID6. I want to figure out which
one has better random I/O performance to host a busy container. The
DL380 currently has one failed drive in the
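One way to get a comparable random-I/O number from both boxes is a short fio run against each host's container storage; a rough sketch, where the directory, size and runtime are arbitrary choices:
fio --name=randrw --directory=/vz/test --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=70 --bs=4k --size=4g --iodepth=32 \
    --numjobs=4 --runtime=60 --time_based --group_reporting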
2014 Jun 11
2
Re: libguestfs supermin error
On Wed, Jun 11, 2014 at 01:00:16PM +0530, abhishek jain wrote:
> Hi Rich
>
> Below are the updated logs of libguestfs-test-tool on ubuntu powerpc...
>
> libguestfs-test-tool
> ************************************************************
> * IMPORTANT NOTICE
> *
> * When reporting bugs, include the COMPLETE, UNEDITED
>
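When reporting this kind of failure, the complete output is easiest to capture by running the test tool with debugging enabled; a minimal sketch:
export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
libguestfs-test-tool 2>&1 | tee libguestfs-test-tool.log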
2009 Sep 24
4
mdadm size issues
Hi,
I am trying to create a 10 drive raid6 array. OS is Centos 5.3 (64 Bit)
All 10 drives are 2T in size.
device sd{a,b,c,d,e,f} are on my motherboard
device sd{i,j,k,l} are on a pci express areca card (relevant lspci info below)
#lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller
The controller is set to JBOD mode for the drives.
All
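For reference, the array would normally be assembled along these lines, after which the component and array sizes the kernel sees can be compared; the device letters follow the ones given above, the rest is a sketch:
mdadm --create /dev/md0 --level=6 --raid-devices=10 \
    /dev/sd{a,b,c,d,e,f,i,j,k,l}
# compare what the kernel thinks each component and the array are sized at
mdadm --detail /dev/md0
blockdev --getsize64 /dev/sda /dev/sdi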
2006 Nov 21
3
RAID benchmarks
We (a small college with about 3000 active accounts) are currently in
the process of moving from UW IMAP running on linux to dovecot running
on a cluster of 3 or 4 new faster Linux machines. (Initially using
perdition to split the load.)
As we are building and designing the system, I'm attempting to take (or
find) benchmarks everywhere I can in order to make informed decisions
and so
2016 Oct 26
1
"Shortcut" for creating a software RAID 60?
On 10/25/2016 11:54 AM, Gordon Messmer wrote:
> If you built a RAID0 array of RAID6 arrays, then you'd fail a disk by
> marking it failed and removing it from whichever RAID6 array it was a
> member of, in the same fashion as you'd remove it from any other array
> type.
FWIW, what I've done in the past is build the raid 6's with mdraid, then
use LVM to stripe them
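The mdraid-plus-LVM layering described above might look roughly like this; a sketch with hypothetical device and volume names:
# two underlying RAID6 arrays (md1 and md2 assumed to exist already)
pvcreate /dev/md1 /dev/md2
vgcreate vg_data /dev/md1 /dev/md2
# stripe the logical volume across both PVs, i.e. RAID0 on top of RAID6
lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_data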
2009 May 25
1
raid5 or raid6 level cluster
Hello,
Is there any way to create a raid6 or raid5 level glusterfs installation?
From the docs I understood that I can do a raid1-based glusterfs installation or
raid0 (striping data to all servers) and a raid10-based solution, but the raid10-
based solution is not cost effective because it needs too many servers.
Do you have a plan to keep one or two servers as parity for the whole
glusterfs system
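For what it's worth, parity-style redundancy later arrived in GlusterFS as dispersed (erasure-coded) volumes, which spread data plus redundancy across bricks much like raid5/6; a sketch of the syntax from newer releases, with hypothetical hosts and brick paths:
gluster volume create dispvol disperse 4 redundancy 1 \
    server{1..4}:/export/brick1
gluster volume start dispvol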
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual
3ware 7500-8 based systems to md, I decided I'd like to go with RAID6
(since md is less tolerant of marginal drives than is 3ware). I did some
benchmarking and was getting decent speeds with a 128KiB chunksize.
So the next step was failure testing. First, I fired off memtest.sh as
found at
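The failure-testing step usually amounts to failing and removing members by hand and watching the rebuild; a minimal sketch against a hypothetical /dev/md0:
# fail and pull one member, then a second one (RAID6 should survive both)
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
# put a drive back and watch the rebuild
mdadm /dev/md0 --add /dev/sdc
watch cat /proc/mdstat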
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote:
> On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote:
>
>> 3 - Can additional drive(s) be added later with a change in RAID level
>> without current data loss?
>
> Only some systems support that sort of restriping, and it's a dangerous
> activity (if the power fails or system crashes midway through
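On md at least, that kind of restriping is done with mdadm --grow, and the usual precaution is a backup file (plus a real backup) because an interruption mid-reshape can be fatal; a sketch, device names hypothetical:
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
    --backup-file=/root/md0-reshape.backup
cat /proc/mdstat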
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it would probably be much better than either of the two if
it didn't read all the time during the writes. Is this a known issue? This
is with linux-3.9.2.
Thanks,
Bernd
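One way to see whether writes really are turning into read-modify-write cycles is to watch per-device reads while writing a large sequential file; a small sketch, with the mount point and file assumed:
# start a large sequential write on the btrfs raid6 mount
dd if=/dev/zero of=/mnt/btrfs/testfile bs=1M count=20000 oflag=direct &
# read columns (r/s, rkB/s) staying high during the write point at RMW
iostat -x 1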
2007 Mar 21
1
Ext3 behavior on power failure
Hi all,
We are building a new system which is going to use ext3 FS. We would like to know more about the behavior of ext3 in the case of failure. But before I proceed, I would like to share more information about our future system.
* Our application always does an fsync on files
* When symbolic links (more specifically fast symlink) are created, the host directory is also fsync'ed.
* Our
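With that kind of fsync-heavy workload, the main ext3 knobs that influence what survives a power cut are the data journalling mode and write barriers; a minimal sketch of the mount options usually compared, device and mount point hypothetical:
# ordered data mode plus barriers, so journal commits reach the platter
mount -o data=ordered,barrier=1 /dev/sdb1 /data
# or full data journalling, slower but journals file data as well
mount -o data=journal,barrier=1 /dev/sdb1 /data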
2014 Aug 21
3
HP ProLiant DL380 G5
I have CentOS 6.x installed on a "HP ProLiant DL380 G5" server. It
has eight 750GB drives in a hardware RAID6 array. It's acting as a
host for a number of OpenVZ containers.
Seems like every time I reboot this server, which is not very often, it
sits for hours running a disk check or something on boot. The server
is located 200+ miles away, so it's not very convenient to look at. Is
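If the delay turns out to be the periodic ext filesystem check, the mount-count and interval triggers can be inspected and switched off per filesystem; a sketch with a hypothetical device node:
# see how many mounts / how much time triggers the boot-time check
tune2fs -l /dev/cciss/c0d0p2 | grep -Ei 'mount count|check'
# disable the mount-count and time-based checks
tune2fs -c 0 -i 0 /dev/cciss/c0d0p2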
2012 Sep 13
5
Partition large disk
Hi,
I have a 24TB RAID6 disk with a GPT partition table on it. I need to
partition it into 2 partitions, one of 16TB and one of 8TB, to put ext4
filesystems on both. But I really need to do this remotely. (If I could
get to the site I would use gparted.)
Now fdisk doesn't understand GPT partition tables and pat
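parted (or gdisk) handles GPT fine from a remote shell, so gparted is not needed; a minimal sketch, assuming the 24TB device is /dev/sdb and it is safe to (re)label it:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext4 0% 67%
parted -s /dev/sdb mkpart primary ext4 67% 100%
mkfs.ext4 -L data16t /dev/sdb1
mkfs.ext4 -L data8t /dev/sdb2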