Displaying 20 results from an estimated 2000 matches similar to: "Stress test zfs"
2005 May 11
5
Xen reboots on dom-U disk stress...
Hi all
I tried to run the bonnie++ disk stresser in dom-U, whose disk is backed
by a non-local (on NFS) loop-back file.
The machine rebooted pretty quickly.
So, how do I tell what's barfing? Is it Xen? Is it dom-0 (NFS or loop-back)?
I looked in dom-0's /var/log/messages and didn't see any obvious record
of a dom-0 whoopsie (but is that the right place to look
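A first place to look, assuming a fairly stock Xen/xend setup (log paths vary a little between Xen versions), is the hypervisor's own message ring and xend's logs rather than dom-0's syslog; a sketch:
# hypervisor console ring -- Xen crashes usually land here, not in /var/log/messages
xm dmesg | tail -50
# xend's logs in dom-0 (default locations)
less /var/log/xen/xend.log /var/log/xen/xend-debug.log
# loop-device and NFS complaints from the dom-0 kernel, if any
dmesg | grep -Ei 'loop|nfs'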
2009 Feb 13
3
Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Hi folks,
For people who might be interested, here is how btrfs performs
with two partitions on a single SSD drive in a RAID-1 mirror.
This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM
and a Samsung SSD (128GB Thin uSATA SSD).
Version 1.03c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
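For anyone wanting to reproduce a run like this, a minimal sketch (partition names are placeholders; the bonnie++ file size is set to twice the 2GB of RAM so the page cache cannot absorb the test):
# two partitions on the same SSD, mirrored for both data and metadata
mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sda3
mount /dev/sda2 /mnt/btrfs-test
# 4GB working set, run as root against the test mount
bonnie++ -d /mnt/btrfs-test -s 4g -u root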
2007 Sep 28
4
Sun 6120 array again
Greetings,
Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable
zfs_nocacheflush is not going to serve us well (there is a ZFS
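For reference, the tunable in question is set host-wide from /etc/system on Solaris 10, which is exactly why it is unattractive here; a sketch:
# /etc/system -- disables ZFS cache-flush (SYNCHRONIZE CACHE) requests for
# every pool on this host; only safe if all attached storage has
# battery/NVRAM protected write cache
set zfs:zfs_nocacheflush = 1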
2018 May 01
3
Finding performance bottlenecks
On 01/05/2018 02:27, Thing wrote:
> Hi,
>
> So is it KVM or VMware as the host(s)? I basically have the same setup,
> i.e. 3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I do notice with
> VMware using NFS the disk was pretty slow (40% of a single disk), but this
> was over 1Gb networking which was clearly saturating. Hence I am moving
> to KVM to use glusterfs
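Before blaming Gluster or the hypervisor, it is worth confirming how much of the 1Gb link is actually usable; a quick sketch, assuming iperf3 can be run on both ends ("storage1" is a placeholder hostname):
# on the storage/gluster node
iperf3 -s
# on the hypervisor, 30-second test against the placeholder host
iperf3 -c storage1 -t 30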
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
And a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over other RAID or
large filesystem configurations?
2) What hindrance, if any, is Z-Raid to this configuration, given the
complete randomness and size of these accesses?
Would
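For comparison, the usual rule of thumb is that each raidz vdev behaves like roughly one disk for small random reads, while a pool of mirrors gets one vdev per disk pair; a sketch with placeholder Solaris disk names:
# one 6-disk raidz vdev: random-read IOPS of roughly a single disk
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# three 2-way mirrors from the same disks: about 3x the random-read IOPS,
# at the cost of usable capacity
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0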
2018 May 03
0
Finding performance bottlenecks
It worries me how many threads talk about low performance. I'm about to
build out a replica 3 setup and run Ovirt with a bunch of Windows VMs.
Are the issues Tony is experiencing "normal" for Gluster? Does anyone here
have a system with Windows VMs and good performance?
*Vincent Royer*
*778-825-1057*
<http://www.epicenergy.ca/>
*SUSTAINABLE MOBILE ENERGY SOLUTIONS*
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly subpar compared to my experience. I did some testing with gluster 3.12 and Ovirt 3.9 on my running production cluster when I enabled gfapi; even my pre-gfapi numbers are significantly better than what Tony is reporting:
-------------------
Before using gfapi:
]# dd if=/dev/urandom of=test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824
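One caveat with numbers from a test like the one above: /dev/urandom is CPU-bound, and a plain buffered dd partly measures the page cache. A variant that isolates the storage path (assuming the filesystem accepts O_DIRECT) might look like:
# generate the payload once so the RNG isn't the bottleneck
dd if=/dev/urandom of=/var/tmp/payload bs=1M count=1024
# write with O_DIRECT and flush at the end
dd if=/var/tmp/payload of=test.file bs=1M count=1024 oflag=direct conv=fsync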
2008 Feb 27
8
Physical disks + Software Raid + lvm in domU
Hi,
I'm trying to set up my box as described below:
- The Dom0 exports two disks as physical devices to the DomU
- The DomU uses the two devices to assemble a software RAID device (/dev/md0)
- On /dev/md0 I create an LVM volume group and LVM volumes on it.
Everything seems to work fine if the LVM volumes in the DomU are lightly
used. Under heavy load the DomU freezes up immediately.
The
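For reference, a minimal version of the layout being described, with placeholder volume and device names:
# dom-0: the two exported block devices in the domU config file
disk = [ 'phy:/dev/vg0/domu-d1,sda1,w',
         'phy:/dev/vg0/domu-d2,sdb1,w' ]
# inside the domU: RAID-1 across the exported devices, LVM on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg_domu /dev/md0
lvcreate -L 10G -n data vg_domu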
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed
CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
4GB ECC memory
4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm
to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple
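Before digging further it usually helps to capture the array geometry and per-disk throughput, so the md layer can be compared against the raw disks; a sketch with assumed device names:
# array layout, chunk size and resync state
cat /proc/mdstat
mdadm --detail /dev/md0
# raw sequential reads: one member disk vs. the assembled array
hdparm -t /dev/sda
hdparm -t /dev/md0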
2006 Oct 04
2
server disk subsystem benchmarks, bonnie++ and/or others?
Greetings
I've searched to no avail so far... there is bound to be something more
intelligible out there...???
I am playing with bonnie++ for the first time...
May I please get some advice and list experience on using this or other disk
subsystem benchmark programs properly, with or without a GUI?
Test system in this case is a Compaq DL360 with 2 to 4 Gig DRAM and qty (2)
36Gig 10k drives
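The usual starting point with bonnie++ is to make the working set larger than RAM so the results reflect the disks rather than the cache; a minimal sketch for a 4GB box (mount point and user are placeholders):
# 8GB working set on a 4GB machine, run as an unprivileged user
bonnie++ -d /mnt/test -s 8g -u someuser
# add a small-file create/stat/delete pass (counts are in multiples of 1024 files)
bonnie++ -d /mnt/test -s 8g -n 128 -u someuser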
2007 Nov 26
4
Filesystem for Maildir
Hi all,
Last year I did some research and benchmarks based on CentOS 4
to know which filesystem is better for
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared
disk and either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (but we also tried DRBD83 from testing).
Setting up a single primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, only mounting it on one node (ext3) and
running bonnie++ works
When setting up ocfs2 on the /dev/drbd0
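For context, dual-primary has to be enabled explicitly in the DRBD resource definition before OCFS2 can mount on both nodes at once; a trimmed sketch (host names, devices and addresses are placeholders):
# /etc/drbd.conf, resource section only
resource r0 {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
  on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
}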
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi,
I've got a new server with a 3ware 9550SXU with the
Battery. I am using FreeBSD 7.0-Release (tried both
4BSD and ULE) using AMD64 and the 3ware performance
for writes is just plain horrible. Something is
obviously wrong but I'm not sure what.
I've got a 4 disk RAID 10 array.
According to 3dm2 the cache is on. I even tried
setting the StorSave preference to
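The same settings can be checked and changed from the command line as well as from 3dm2; a sketch, assuming controller c0 and unit u0:
# current unit settings, including write cache and StorSave policy
tw_cli /c0/u0 show
# enable the write cache and relax StorSave (reasonable only with a healthy BBU)
tw_cli /c0/u0 set cache=on
tw_cli /c0/u0 set storsave=perform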
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than
approximately 40MB/s on an ext2 file system. IMO, this is horrible
performance for a 6-drive, hardware RAID 5 array. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 3ware 9500S-12
2 x 6-drive,
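One knob that often matters on striped arrays is telling mke2fs about the stripe geometry; a sketch assuming a 64KB chunk size, 4KB blocks and a placeholder device:
# stride = chunk size / block size = 64KB / 4KB = 16
# (older mke2fs releases spell this -R stride=16 instead of -E)
mke2fs -b 4096 -E stride=16 /dev/sda1
# noatime avoids a metadata write on every read
mount -o noatime /dev/sda1 /mnt/array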
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a
few benchmark programs on them: bonnie++, tiobench, dbench and a few
generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
All with standard mkfs/mount options and +noatime for all of them.
Here are the results, no graphs - sorry:
http://nerdbynature.de/benchmarks/v40z/2009-12-22/
Reiserfs
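For anyone wanting to repeat a run like this, a single filesystem's pass might look roughly like the following (device, mount point and sizes are assumptions):
mkfs.xfs -f /dev/sdb1
mount -o noatime /dev/sdb1 /mnt/test
bonnie++ -d /mnt/test -s 4g -u root    # sized above the test box's RAM
dbench -D /mnt/test 8
umount /mnt/test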
2013 Sep 04
4
Linux tool to check random I/O performance
We just purchased a new I/O card and would like to check its I/O performance.
For a sequential I/O performance test we can use "hdparm -t /dev/xxx",
but for a random I/O performance test, which Linux command can I use?
** Our environment does NOT allow installing third-party software.
Thanks
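With only stock tools available, one crude but workable option is a shell loop around dd reading 4KB blocks from random offsets with O_DIRECT, so the page cache cannot mask the device; a rough sketch (device name and region size are placeholders, and the offset distribution is only approximately uniform):
#!/bin/bash
DEV=/dev/sdb                                # placeholder device
BS=4096                                     # 4KB reads
REGION=$((100 * 1024 * 1024 * 1024 / BS))   # blocks in a 100GB test region
START=$(date +%s)
for i in $(seq 1 1000); do
    OFF=$(( (RANDOM * RANDOM) % REGION ))
    dd if=$DEV of=/dev/null bs=$BS count=1 skip=$OFF iflag=direct 2>/dev/null
done
echo "1000 random 4KB reads in $(( $(date +%s) - START )) seconds"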
2005 Jul 14
1
a comparison of ext3, jfs, and xfs on hardware raid
I'm setting up a new file server and I just can't seem to get the
expected performance from ext3. Unfortunately I'm stuck with ext3 due
to my use of Lustre. So I'm hoping you dear readers will send me some
tips for increasing ext3 performance.
The system is using an Areca hardware raid controller with 5 7200RPM
SATA disks. The RAID controller has 128MB of cache and the disks
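The usual ext3-on-hardware-RAID knobs, as a sketch (stripe geometry, device name and the choice of journal mode are assumptions to adjust):
# tell ext3 about the stripe geometry (64KB chunk / 4KB block = stride 16;
# older mke2fs uses -R stride=16) and give it a larger journal
mke2fs -j -J size=128 -b 4096 -E stride=16 /dev/sda1
# writeback journaling often helps streaming writes, but it weakens ordering
# guarantees, so weigh that against the gain
mount -o noatime,data=writeback /dev/sda1 /mnt/data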
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now