Displaying 20 results from an estimated 40000 matches similar to: "ZFS bechmarks w/8 disk raid - Quirky results, any thoughts?"
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2018 May 03
0
Finding performance bottlenecks
It worries me how many threads talk about low performance. I'm about to
build out a replica 3 setup and run Ovirt with a bunch of Windows VMs.
Are the issues Tony is experiencing "normal" for Gluster? Does anyone here
have a system with windows VMs and have good performance?
*Vincent Royer*
*778-825-1057*
<http://www.epicenergy.ca/>
*SUSTAINABLE MOBILE ENERGY SOLUTIONS*
2008 Aug 10
15
corrupt zfs stream? checksum mismatch
Hi Folks,
I'm in the very unsettling position of fearing that I've lost all of my data via a zfs send/receive operation, despite ZFS's legendary integrity.
The error that I'm getting on restore is:
receiving full stream of faith/home@09-08-08 into Z/faith/home@09-08-08
cannot receive: invalid stream (checksum mismatch)
Background:
I was running snv_91,
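A minimal precaution against this failure mode, sketched here with the snapshot name from the error above but with a made-up file path and scratch dataset (none of this is from the original thread): fingerprint the stream file as soon as it is written, and prove it can still be received into a throwaway dataset before the source data is given up.

zfs send faith/home@09-08-08 > /backup/home.zstream
digest -a sha256 /backup/home.zstream        # record the value somewhere safe
# later, before trusting the file as the only copy, do a test restore
zfs receive -v scratch/restore-test < /backup/home.zstream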
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly sub-par from my experience. I did some testing with gluster 3.12 and Ovirt 3.9 on my running production cluster when I enabled the gfapi, and even my "before" numbers are significantly better than what Tony is reporting:
-------------------
Before using gfapi:
]# dd if=/dev/urandom of=test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824
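Two caveats about that dd line, with an alternative sketched below (standard GNU dd flags, not taken from the thread): /dev/urandom is CPU-bound and can cap the result well below what the storage can deliver, and without a sync or direct-I/O flag dd largely measures the page cache rather than Gluster.

# avoid the RNG bottleneck and force the data to storage before timing stops
dd if=/dev/zero of=test.file bs=1M count=1024 conv=fdatasync
# or bypass the page cache entirely (only if the mount supports O_DIRECT)
dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct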
2018 May 01
3
Finding performance bottlenecks
On 01/05/2018 02:27, Thing wrote:
> Hi,
>
> So is it KVM or VMware as the host(s)? I basically have the same setup
> ie 3 x 1TB "raid1" nodes and VMs, but 1gb networking. I do notice with
> vmware using NFS disk was pretty slow (40% of a single disk) but this
> was over 1gb networking which was clearly saturating. Hence I am moving
> to KVM to use glusterfs
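Rough arithmetic behind that observation: 1 Gbit/s is about 125 MB/s on the wire, and after TCP and NFS overhead sustained rates of roughly 100-115 MB/s are typical, while a single modern 7,200 rpm disk can stream well over 150 MB/s, so a saturated 1 GbE link delivering around 40% of single-disk throughput is about what the numbers predict.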
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
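A worked example of that 128K / M rule of thumb, using the 8-disk raidz1 from the thread title (a sketch only; real allocations are rounded to sector boundaries and parity is interleaved):

# 8-disk raidz1: M = 7 data disks + 1 parity disk, 128K default recordsize
echo $((128 * 1024 / 7))    # about 18724 bytes, i.e. roughly 18K per data disk per record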
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi,
I've got a new server with a 3ware 9550SXU with the
Battery. I am using FreeBSD 7.0-Release (tried both
4BSD and ULE) using AMD64 and the 3ware performance
for writes is just plain horrible. Something is
obviously wrong but I'm not sure what.
I've got a 4 disk RAID 10 array.
According to 3dm2 the cache is on. I even tried
setting the StorSave preference to
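The truncated sentence refers to the unit's StorSave policy; on 9550-series cards that is normally inspected and changed with 3ware's tw_cli tool. A sketch, assuming the tool is installed and the array is unit u0 on controller c0 (numbers will differ per system):

tw_cli /c0/u0 show                   # current cache, storsave and rebuild settings
tw_cli /c0/u0 set cache=on           # write cache; reasonable here given the BBU
tw_cli /c0/u0 set storsave=balance   # or 'perform' to trade protection for speed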
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation for "static" data can be currently combatted
only by zfs send-ing existing
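The send-based workaround the post alludes to amounts to rewriting the dataset wholesale. A minimal sketch with invented pool and dataset names (it needs enough free space for a second copy, and writers should be quiesced while the copy is verified and swapped in):

zfs snapshot tank/data@rewrite
zfs send tank/data@rewrite | zfs receive tank/data-rewritten
# after verifying the new copy:
zfs destroy -r tank/data
zfs rename tank/data-rewritten tank/data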
2007 Nov 26
4
Filesystem for Maildir
Hi all,
Last year I did some research and benchmarks based on CentOS 4
to know which filesystem is better for
2006 Dec 08
22
ZFS Usage in Warehousing (lengthy intro)
Dear all,
we're currently looking to restructure our hardware environment for
our datawarehousing product/suite/solution/whatever.
We're currently running the database side on various SF V440s attached via
dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is
(obviously in a SAN) shared between many systems. Performance is mediocre
in terms
2006 Oct 12
18
Write performance with 3ware 9550
I have two identical servers. The only difference is that the first
one has Maxtor 250G drives and the second one has Seagate 320G drives.
OS: CentOS-4.4 (fully patched)
CPU: dual Opteron 280
Memory: 16GB
Raid card: 3ware 9550Sx-8LP
Raid volume: 4-disk Raid 5 with NCQ and Write Cache enabled
On the first server I have decent performance. Nothing spectacular,
but good enough. The second one
2009 Feb 13
3
Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Hi folks,
For people who might be interested, here is how btrfs performs
with two partitions on a single SSD drive in a RAID-1 mirror.
This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM
and a Samsung SSD (128GB Thin uSATA SSD).
Version 1.03c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
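For reference, a setup like the one described can be reproduced with stock btrfs tooling; a sketch with assumed device names and mount point (not taken from the post):

# mirror both data and metadata across two partitions of the same SSD
mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sda3
mount /dev/sda2 /mnt/test
bonnie++ -d /mnt/test -s 4G -u root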
2003 May 26
1
bad performance on ATA promise controllers
Hello friends,
I'm having a problem with my home server (ASUS A7V133 motherboard) which has
a horrible performance with ATA disks connected to an integrated promise
controller.
Below you can see iozone results of the same disk connected to the primary/
secondary controller versus the promise one.
Promise ATA100 controller:
=========================
Version 1.02a ------Sequential
2008 Feb 13
1
Strange performance issues under CentOS 5.1
I am still running CentOS 4.6 on our production systems, but I am
starting to plan the upgrade to CentOS 5.1. I have one test system
running 5.1 that is the exact same hardware configuration as my 4.6
test system. One of our builds runs about 6 times slower on the 5.1
system, even though it uses less overall CPU time. I first suspected
something wrong with the disk, but the results
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca raid controller, the driver being arcmsr. Quad core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134.
The zpool
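With a pool that wide, the usual first step is to see whether one vdev, one disk or the slog is lagging the rest; a sketch (the pool name is assumed):

zpool iostat -v tank 5     # per-vdev / per-disk throughput, including the log device
iostat -xn 5               # per-disk service times; one drive with a high asvc_t is a common culprit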
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than
approximately 40MB/s on an ext2 file system. IMO, this is horrible
performance for a 6-drive, hardware RAID 5 array. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 3ware 9500S-12
2 x 6-drive,
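The suggestion that usually comes back for ext2/ext3 on hardware RAID-5 is to tell mke2fs the array's chunk size so block and bitmap layout line up with the stripes. A sketch assuming a 64 KiB controller chunk size and 4 KiB filesystem blocks (the exact option spelling varies with the e2fsprogs version; older releases used -R stride= instead of -E stride=):

# stride = chunk size / block size = 64 KiB / 4 KiB = 16
mkfs.ext2 -b 4096 -E stride=16 /dev/sda1
# mounting with noatime also removes a steady trickle of metadata writes
mount -o noatime /dev/sda1 /mnt/array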
2008 Apr 20
6
creating domU's consumes 100% of system resources
Hello,
I have noticed that when I use xen-create-image to generate a domU, the
whole server (dom0 and domU's) basically hangs until it is finished. This
happens primarily during the creation of the ext3 filesystem on an LVM
partition.
This creation of the file system can take up to 4 or 5 minutes. At which
point, any other domU's are basically paused... tcp connections time
2009 May 01
2
current zfs tuning in RELENG_7 (AMD64) suggestions ?
I gave the AMD64 version of 7.2 RC2 a spin and all installed as
expected off the dvd
INTEL S3200SHV MB, Core2Duo, 4G of RAM
In the past it had been suggested that for zfs tuning, something like
vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1
However doing a simple test with bonnie and dd, there does not seem
to be very much difference in
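For anyone trying to reproduce this, the quoted tunables go in /boot/loader.conf and take effect on reboot; a crude before/after comparison along the lines the poster describes might look like this (pool name and sizes are made up):

# /boot/loader.conf
vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1

# after rebooting, a simple sequential write then read, run with and without the tunables
dd if=/dev/zero of=/tank/testfile bs=1M count=4096
dd if=/tank/testfile of=/dev/null bs=1M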
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal one physical disk's IOPS. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
--
This message posted from opensolaris.org
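For illustration, assume disks good for roughly 150 random IOPS each and a 5-disk group. A raidz1 vdev spreads every record across all of its disks, so small random reads still land at roughly one disk's worth, about 150 IOPS, regardless of width. A traditional 5-disk RAID-5 can serve independent small reads from different disks in parallel, approaching 5 x 150 = 750 read IOPS, but each small random write costs about four I/Os (read data, read parity, write data, write parity), giving roughly 5 x 150 / 4 ≈ 190 write IOPS. Raidz1 avoids that read-modify-write penalty by always writing full records with fresh parity, so the two layouts are not equivalent: raidz1 trades random-read concurrency for cheaper writes and end-to-end checksums.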