similar to: Physical disks + Software Raid + lvm in domU

Displaying 20 results from an estimated 2000 matches similar to: "Physical disks + Software Raid + lvm in domU"

2006 Sep 13
4
benchmarking large RAID arrays
I'm just wondering what folks are using to benchmark/tune large arrays these days. I've always used bonnie with file sizes 2-3 times physical RAM. Maybe there's a better way? Cheers,
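A minimal sketch of that approach, assuming 4GB of physical RAM and an illustrative mount point (both are assumptions, not from the post), so the file size is set to twice memory to defeat the page cache:

    # hypothetical run: 8GB of data against 4GB RAM; paths are examples
    bonnie++ -d /mnt/raid/test -s 8192 -u nobody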
2009 Oct 01
1
3-layer structure and the bonnie rewrite problem
Hello list. First of all: good work and thanks for GlusterFS! I'm totally new to GlusterFS, but I like it a lot and am thinking about migrating my NFS setup completely to it. But I ran into some problems with my chosen structure; hopefully someone can help out. The first question: I ran into some performance issues with a certain structure/setup and would like to know (before I continue testing)
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a few benchmark programs on it: bonnie++, tiobench, dbench and a few generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs, all with standard mkfs/mount options plus noatime for each. Here are the results, no graphs - sorry: http://nerdbynature.de/benchmarks/v40z/2009-12-22/ Reiserfs
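A hedged sketch of one such run as described; the device name, sizes, and client count are assumptions, not taken from the posted results:

    # example: default mkfs, mounted noatime, then two of the listed benchmarks
    mkfs.xfs /dev/sdb1
    mount -o noatime /dev/sdb1 /mnt/test
    bonnie++ -d /mnt/test -s 16384 -u root   # file size should exceed RAM
    dbench -D /mnt/test 8                    # 8 simulated clients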
2009 Nov 03
8
recommend benchmarking SW
Hey folks, we've got some new hardware and are trying to figure out what best to do with it: run CentOS right on the bare metal, virtualize, or one of several combinations. Mainly looking at: CentOS on bare metal; CentOS on ESXi 4.0 with local disk; CentOS on ESXi with one VM running Openfiler to serve disk to the other VMs. We want to benchmark these three scenarios. So far all we
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it. It has the following spec: Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s. The onboard RAID controller is enabled, but at the moment I have used mdadm to configure the array. RAID bus controller: Intel Corporation 82801 SATA RAID Controller. For a simple
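For context, a plausible mdadm setup matching that description; the RAID level and device names are assumptions, since the post does not state them:

    # hypothetical 4-disk software RAID5 on the SATA drives
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1
    cat /proc/mdstat   # the initial resync runs here; write performance
                       # is reduced until it completes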
2008 Apr 17
3
Samba 3: bad read performance
Hi all! We use a Samba 3 server for video work (editing, rendering, and so on), so performance is critical. We've tried a lot of smb.conf options, but Samba can't satisfy our requirements. Our server configuration is as follows: * Hard drives: RAID5 (8 x Seagate 7200.10) on a 3ware 9550SX-8LP controller * NICs (trunked): 2 x Broadcom NetXtreme BCM5704 * Processor: Opteron
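Typical Samba 3 tuning attempts for large sequential reads look like the sketch below; the specific values are illustrative, not the poster's actual smb.conf:

    [global]
       # illustrative options often tried for streaming workloads
       socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
       read raw = yes
       write raw = yes
       max xmit = 65535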
2006 Nov 21
3
RAID benchmarks
We (a small college with about 3000 active accounts) are currently in the process of moving from UW IMAP running on Linux to Dovecot running on a cluster of 3 or 4 new, faster Linux machines (initially using Perdition to split the load). As we build and design the system, I'm attempting to take (or find) benchmarks everywhere I can in order to make informed decisions, and so
2008 Jul 15
1
Much higher disk usage in OCFS2 than in XFS
Hi all, I created an OCFS2 volume with a block size of 4 kB and a cluster size of 4 kB. After I mounted the volume for the first time, over 600 MB were already in use. Then I started to copy a directory with an overall size of 1784 MB (measured on XFS with "du"). After an hour, about 4 GB(!) had been used on the OCFS2 volume, so I stopped the copy. The files I copied are mostly
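One way to separate file content from allocation overhead when comparing filesystems like this; the path is illustrative:

    du -sh --apparent-size /mnt/ocfs2/data   # bytes the files contain
    du -sh /mnt/ocfs2/data                   # blocks actually allocated;
                                             # the gap is per-file overhead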
2008 Apr 20
6
creating domUs consumes 100% of system resources
Hello, I have noticed that when I use xen-create-image to generate a domU, the whole server (dom0 and domUs) basically hangs until it is finished. This happens primarily during the creation of the ext3 filesystem on an LVM partition, which can take 4 or 5 minutes. During that time, any other domUs are basically paused... TCP connections time
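A workaround often suggested for this symptom is to run the image creation at idle I/O priority; a sketch, assuming the CFQ scheduler in dom0 and illustrative xen-create-image arguments:

    # idle-class I/O so running domUs keep getting serviced (CFQ only)
    ionice -c3 xen-create-image --hostname=vm01 --lvm=vg0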
2005 Jul 14
1
a comparison of ext3, jfs, and xfs on hardware raid
I'm setting up a new file server and I just can't seem to get the expected performance from ext3. Unfortunately I'm stuck with ext3 due to my use of Lustre, so I'm hoping you, dear readers, will send me some tips for increasing ext3 performance. The system is using an Areca hardware RAID controller with five 7200 RPM SATA disks. The RAID controller has 128MB of cache and the disks
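Commonly suggested ext3 tweaks for this situation look like the following sketch; the device name and values are assumptions, and data=writeback trades journaling safety for speed:

    mkfs.ext3 -b 4096 -J size=128 /dev/sdb1              # larger journal
    mount -o noatime,data=writeback /dev/sdb1 /mnt/data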
2012 Feb 20
4
Really bad KVM disk performance
Hi Gang, I recently rented a server at a datacenter with CentOS 5.7 x64, a Q9550 processor, 8GB RAM, and dual 250GB SATA HDs (with 16MB cache). They had loaded it with KVM and installed a 30-day trial of Virtualizor as the front-end for KVM. I was so impressed with how fast the guests ran that I want to build a few of these machines for myself. I just installed one: same Q9550 processor, 4GB
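The usual first thing to check in this situation is the disk bus and cache mode the front-end configured; a sketch, assuming libvirt and a hypothetical guest name:

    # virtio bus + cache='none' is the common fix for slow KVM disk I/O
    virsh dumpxml guest1 | grep -A2 '<driver'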
2010 Sep 10
11
Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling I/O testing (-s 0) and just creating, reading,
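A metadata-only run like the one described would look roughly like this; the mount point and file count are assumptions:

    # -s 0 skips the I/O tests; -n 40 creates 40*1024 files in one directory
    bonnie++ -d /mnt/lustre/test -s 0 -n 40 -u nobody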
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi, I've got a new server with a 3ware 9550SXU with the battery backup unit (BBU). I am using FreeBSD 7.0-RELEASE (tried both 4BSD and ULE) on AMD64, and the 3ware write performance is just plain horrible. Something is obviously wrong but I'm not sure what. I've got a 4-disk RAID 10 array, and according to 3dm2 the cache is on. I even tried setting the StorSave preference to
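The usual diagnostic for this controller family is tw_cli; the controller and unit numbers below are assumptions:

    tw_cli /c0/u0 show all               # confirm cache and StorSave policy
    tw_cli /c0/u0 set storsave=perform   # favors speed; reasonable with a BBU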
2010 Jul 05
21
AoE or iSCSI?
Hi people... Here we use Xen 4 with Debian Lenny, running a 2.6.31.13 pvops kernel. As a storage system we use AoE devices, so the VMs are installed on AoE partitions. The "NAS" server is an Intel-based bare-metal box with SATA hard disks. However, the VMs sometimes feel slow, even though they all have GPLPV drivers installed. So, I am thinking about
2008 Feb 28
4
Gluster / DRBD Anyone using either?
Anyone using either GlusterFS or DRBD in their mail setup? How is performance and manageability? Problems? Tips? Ed W
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than approximately 40MB/s on an ext2 file system. IMO, this is horrible performance for a 6-drive hardware RAID 5 array. Please have a look at what I'm doing and let me know if anybody has any suggestions on how to improve the performance... System specs: 2 x 2.8GHz Xeons, 6GB RAM, one 3ware 9500S-12, 2 x 6-drive,
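A stride-aligned mkfs is the standard suggestion for RAID5; a sketch assuming a 64 kB controller stripe and 4 kB blocks (64/4 = 16 filesystem blocks per stripe unit), with the device name hypothetical:

    # -E stride is the newer e2fsprogs spelling; older versions used -R stride=16
    mkfs.ext2 -b 4096 -E stride=16 /dev/sda1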
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. Naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Mar 30
7
FTP DNAT not working - "Server sent passive reply with unroutable address"
Hi all! I am a long-time lurker, but have not posted until now. My old trusted firewall machine broke a couple of weeks ago and I replaced it with a Xen domU that is using DNAT and has two interfaces. The firewall domU and the FTP server domU are both guests on the same dom0. All three machines are running Debian etch (stable) and Shorewall is version 3.2.6. I can't get FTP to work
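The usual first check for passive FTP through a NATing firewall is whether the FTP connection-tracking helpers are loaded; the module names below are for the 2.6.18-era etch kernel:

    modprobe ip_conntrack_ftp   # track the FTP control channel
    modprobe ip_nat_ftp         # rewrite addresses in PASV replies
    lsmod | grep ftp            # verify both are present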
2012 Jun 07
1
[virt-tools-list] virt-make-fs: Partition 1 has different physical/logical beginnings and endings
On Thu, Jun 07, 2012 at 02:49:14PM +0200, Sebastien Douche wrote: > On Ubuntu 12.04, I'm trying to create a second disk for a VM. The disk > seems to work well (and mounts) but I don't like the cfdisk / fdisk > message. > > # virt-make-fs --partition --size=+300M --type=ext3 --format=qcow2 > srv-2912.tar.gz datadisk-test.qcow2 > Formatting 'datadisk-test.qcow2',
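One way to sanity-check such an image without fdisk's geometry warnings is libguestfs's own tooling; the image name follows the quoted command:

    virt-filesystems -a datadisk-test.qcow2 --all --long -h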