Displaying 20 results from an estimated 10000 matches similar to: "Disk array format for CentOS virtual host"
2011 Apr 03
3
KVM Host Disk Performance
Hello all,
I'm having quite an interesting time getting up to speed with KVM/QEMU
and the various ways of creating virtual Guest VMs. But disk I/O
performance remains a bit of a question mark for me. I'm looking for
suggestions and opinions ....
This new machine has tons of disk space, lots of CPU cores and loads of
RAM, so those are not issues.
I currently have several software
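A minimal sketch of the kind of setup being weighed here, assuming libvirt/virt-install; the image paths, sizes and guest name are invented for illustration and do not come from the original post:

  # Two common ways to provision a guest disk (raw vs. qcow2):
  qemu-img create -f raw /var/lib/libvirt/images/guest1.img 100G
  qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/guest2.qcow2 100G

  # Attach with cache=none so guest benchmarks measure the disks, not the host page cache:
  virt-install --name guest1 --ram 4096 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/guest1.img,format=raw,cache=none,io=native \
      --import --os-variant rhel6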
2015 Jun 01
2
Native ZFS on Linux
On 06/01/2015 06:42 AM, Joerg Schilling wrote:
> Chuck Munro <chuckm at seafoam.net> wrote:
>
>> I have a question that has been puzzling me for some time ... what is
>> the reason RedHat chose to go with btrfs rather than working with the
>> ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue, political, etc?
>
> There is no licensing issue, but
2015 May 29
7
Native ZFS on Linux
I have a question that has been puzzling me for some time ... what is
the reason RedHat chose to go with btrfs rather than working with the
ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue, political, etc?
Although btrfs is making progress, ZFS is far more mature, has a few
more stable features (especially Raid-z3) and has worked flawlessly for
me on CentOS-6 and Scientific Linux-6.
2011 Apr 10
4
A round of applause!
Hello All,
Just a short note to add my vote for a HUGE round of applause to the
CentOS team for their untiring efforts in getting releases out the door.
I've just upgraded several servers to 5.6 and it all "just works".
None of the team's work is easy to accomplish, especially when
less-than-useful complaints keep popping up from thoughtless users who
don't appreciate
2013 Dec 18
1
ZFS on Linux testing
On 12/18/2013 04:00, lists at benjamindsmith.com wrote:
> I may be being presumptuous, and if so, I apologize in advance...
>
> It sounds to me like you might consider a disk-to-disk backup solution.
> I could suggest dirvish, BackupPC, or our own home-rolled rsync-based
> solution that works rather well: http://www.effortlessis.com/backupbuddy/
>
> Note that with these
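A minimal sketch of the rsync-plus-hard-links rotation scheme that tools of this kind are built on; the host name and paths below are invented for illustration and are not taken from backupbuddy itself:

  TODAY=$(date +%F)
  rsync -aH --delete \
        --link-dest=/backup/host1/latest \
        root@host1:/srv/data/ \
        "/backup/host1/$TODAY/"
  ln -sfn "$TODAY" /backup/host1/latest   # unchanged files are hard-linked, not re-copied

Each dated directory looks like a full backup, but only changed files consume new space.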
2015 May 29
0
Native ZFS on Linux
Once upon a time, Chuck Munro <chuckm at seafoam.net> said:
> I have a question that has been puzzling me for some time ... what
> is the reason RedHat chose to go with btrfs rather than working with
> the ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue,
> political, etc?
Licensing. Sun chose an Open Source license that is incompatible with
the GPLv2 as used by the
2015 Jun 01
0
Native ZFS on Linux
Chuck Munro <chuckm at seafoam.net> wrote:
> I have a question that has been puzzling me for some time ... what is
> the reason RedHat chose to go with btrfs rather than working with the
> ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue, political, etc?
There is no licensing issue, but there are OpenSource enemies that spread a
fairy tale about an alleged licensing
2015 Jun 01
0
Native ZFS on Linux
Johnny Hughes <johnny at centos.org> wrote:
> On 06/01/2015 06:42 AM, Joerg Schilling wrote:
> > Chuck Munro <chuckm at seafoam.net> wrote:
> >
> >> I have a question that has been puzzling me for some time ... what is
> >> the reason RedHat chose to go with btrfs rather than working with the
> >> ZFS-on-Linux folks (now OpenZFS)? Is it a
2010 Nov 09
2
time for "balance"
Hello, linux-btrfs,
I've been working with btrfs for a few days now.
btrfs-progs-20101101, kernel 2.6.35.8 (both self-compiled).
First step:
mkfs.btrfs /dev/sdd1
mount /dev/sdd1 /srv/MM
for a 2 TByte partition, worked well.
Copying about 1.5 TByte of data to this partition worked well.
Second step:
btrfs device add /dev/sdc1 /srv/MM
btrfs filesystem balance
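An assumed continuation of the commands above: the balance argument is taken to be the mount point (newer btrfs-progs spell it "btrfs balance start"), and the follow-up commands simply show how the data ends up spread across both devices:

  btrfs filesystem balance /srv/MM    # rewrite extents so both devices carry data
  btrfs filesystem show               # per-device usage of all btrfs filesystems
  btrfs filesystem df /srv/MM         # data/metadata allocation on the mount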
2015 Jun 01
2
Native ZFS on Linux
On 06/01/2015 07:42 AM, Joerg Schilling wrote:
> Johnny Hughes <johnny at centos.org> wrote:
>
>> On 06/01/2015 06:42 AM, Joerg Schilling wrote:
>>> Chuck Munro <chuckm at seafoam.net> wrote:
>>>
>>>> I have a question that has been puzzling me for some time ... what is
>>>> the reason RedHat chose to go with btrfs rather than
2017 Jun 26
1
mirror block devices
Hi folks,
I have to migrate a set of iscsi backstores to a new target via network.
To reduce downtime I would like to mirror the active volumes first, next
stop the initiators, and then do a final incremental sync.
The backstores have a size between 256 GByte and 1 TByte each. In toto
it's about 8 TByte.
Of course I have found the --copy-devices patch, but I wonder if this
works as expected? Is
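A sketch of the two-pass approach, assuming both ends run an rsync built with the --copy-devices patch; the device paths and target host name are invented:

  # First pass while the backstores are still in use (moves the bulk of the blocks):
  rsync -av --copy-devices --inplace /dev/vg0/lun1 newtarget:/dev/vg0/lun1
  # ... stop the initiators ...
  # Final pass: only changed blocks are sent, thanks to rsync's delta transfer:
  rsync -av --copy-devices --inplace /dev/vg0/lun1 newtarget:/dev/vg0/lun1

--inplace matters here because rsync cannot create a temporary copy of a block device on the receiving side.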
2010 Jun 28
1
ACE does not work for me at all.
Hello, all.
1) ACE does not work for me
I am in a VoIP project using Speex, and have failed to get the Speex ACE to work. Here
is how I initialize it:
/**
* Configurations :
* #define BITS_PER_SAMPLE (16)
* #define SAMPLE_RATE (8000)
* #define CHANNEL_NB (1)
* #define DURATION (20)
* SPEEX_MODEID_NB
*/
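/* speex_echo_state_init(frame_size, filter_length): the echo tail here is
   10 frames, i.e. roughly 10 * DURATION = 200 ms at SAMPLE_RATE, assuming
   _encframe_size is the 20 ms frame size in samples. */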
_eco_state = speex_echo_state_init(_encframe_size, 10*_encframe_size);
speex_echo_ctl(_eco_state,
2012 Sep 19
0
OT: does the LSI 9211-8i fit into the HP N40L?
Hi again,
thanks for all the replies to the all-in-one with ESXi, it was
most illuminating. I will use this setup at my day job.
Now for a slight variation on a theme: N40L with ESXi with raw drive
passthrough, with OpenIndiana/napp-it NFS or iSCSI export of underlying
devices. This particular setup is for a home VMWare lab, using
spare hardware parts I have around.
I'm trying to do
2011 Jan 27
3
Static assignment of SCSI device names?
Hello list members,
In CentOS-5.5 I'm trying to achieve static assignment of SCSI device
names for a bunch of RAID-60 drives on a Supermicro motherboard. The
"scsi_id" command identifies all drives ok.
The board has one SATA controller and three SAS/SATA controllers ...
standard on-board ICH-10 ATA channels, an on-board LSI SAS/SATA
controller, and two add-on SAS/SATA
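As a starting point, a sketch using the EL5-era scsi_id syntax; the sysfs path is just an example:

  /sbin/scsi_id -g -u -s /block/sda
  # the WWID this prints can then be matched in a udev RESULT== rule that
  # adds a persistent SYMLINK+= name for that particular drive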
2012 Nov 21
5
mixing WD20EFRX and WD2002FYPS in one pool
Hi,
after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth
of my data at home (conveniently just before I could make
a backup) I've decided to both go full redundancy as well as
all zfs at home.
A couple of questions: is there a way to make WD20EFRX (2 TByte, 4k
sectors) and WD2002FYPS (4k internally, reported as 512 Bytes?)
work well together on a current OpenIndiana? Which
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a
RAIDZ2 exported over fibre channel) but there's no such thing as too much
speed, and these other two drive bays are just begging for drives in
them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, or will the extra parity writes reduce speed, or will the two factors
offset and leave things
2013 Dec 02
2
backup mdbox best strategy
Hello,
I have to back up (to a tape library) a mail system with about 300,000
mailboxes on 2 backends. The total size of all mailboxes is 2 TByte.
The mailstore is mdbox.
Is it safe to do a simple filesystem backup (full and incremental) with
backup software?
What is the preferred strategy for a disaster-recovery backup
(mail system crash) and for restoring single user mailboxes?
Regards,
Claus
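Alongside a plain filesystem dump, one common per-user path is a dsync-based export; a sketch assuming Dovecot 2.x, with an invented user and staging path:

  # One-way copy of a single account into a staging area the tape software can pick up:
  doveadm backup -u claus@example.com mdbox:/backup/staging/claus
  # A later single-user restore reverses the direction:
  doveadm backup -R -u claus@example.com mdbox:/backup/staging/claus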
2011 Jan 30
5
RHEL-6 vs. CentOS-5.5 (was: Static assignment of SCSI device names?)
Hello list members,
My adventure into udev rules has taken an interesting turn. I did
discover a stupid error in the way I was attempting to assign static
disk device names on CentOS-5.5, so that's out of the way.
But in the process of exploring, I installed a trial copy of RHEL-6 on
the new machine to see if anything had changed (since I intend this box
to run CentOS-6 anyway).
Lots
2011 Nov 08
2
Multiple Patitions with with mdbox
With a mailstore of more than 10 TByte, filesystem checks take too much time.
At the moment we have four different partitions, but I don't like to set
symlinks or LDAP flags to sort customers and their domains onto their
individual mount points. I'd like to work with mdbox:/mail/%d/%n to calculate
the path automatically.
How do you handle a mailstore of well over 10 TB?
I'm very interested in the
2010 Jan 09
1
Moving LVM from one machine to another
My CentOS4 machine died (CPU cooler failure, which killed the CPU).
In this machine I had 5 Tbyte disks in a RAID5, and LVM structures
on that.
Now I've moved those 5 disks onto a CentOS5 machine and the RAID array
is being rebuilt. However, the LVM structures weren't detected at boot
time. I was able to "vgscan" and 'vgchange -a y' to bring the volume
online and then
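Spelled out, the recovery sequence described above is roughly the following; the volume group and logical volume names are placeholders:

  mdadm --assemble --scan                     # reassemble the RAID5 array on the new host
  vgscan                                      # rescan block devices for volume groups
  vgchange -a y                               # activate the logical volumes that were found
  mount /dev/VolGroup00/LogVol00 /mnt/data    # then mount the filesystems as usual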