similar to: Help understanding some benchmark results

Displaying 20 results from an estimated 10000 matches similar to: "Help understanding some benchmark results"

2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
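A minimal sketch of the commands usually used to inspect an existing pool's layout (the pool name tank below is a placeholder):

    # show the vdev layout, per-device state and any error counters
    zpool status -v tank
    # raw pool capacity and usage
    zpool list tank
    # usable space at the filesystem level, which differs from zpool list on raidz pools
    zfs list -r tank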
2007 Nov 26
4
Filesystem for Maildir
Hi all, Last year I did some research and benchmarks based on CentOS 4 to find out which filesystem is better for
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool. bash-3.00# zpool import pool: data1 id: 7539031628606861598 state: FAULTED status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-3C config: data1 UNAVAIL insufficient replicas
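A hedged sketch of clearing a stale configuration, assuming the old pool's devices are known; zpool labelclear is not present on older releases, so its availability here is an assumption:

    # list pools visible for import, including stale ones
    zpool import
    # wipe the ZFS label from a device that belonged to the old pool
    # (c1t2d0 is a placeholder device name)
    zpool labelclear -f c1t2d0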
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran: zpool offline home/c0t6d0 zpool replace home c0t6d0 c8t1d0 and after the resilvering finished the pool still reports a degraded state. Hopefully this is incorrect. At this point, does the vdev in question now have
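A sketch of the usual follow-up after such a replace, assuming the pool is named home as in the post; whether each step applies depends on what zpool status actually shows:

    zpool status -v home      # confirm the resilver finished and see which device is still listed
    zpool detach home c0t6d0  # drop the old drive if it is still hanging off a replacing vdev
    zpool clear home          # reset error counters left over from the failure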
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp clients for about 200 users. The logical drive was created with the following settings: RAID = 5 stripe size = 32kb write policy = wrback read policy =
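A hedged sketch of aligning ext3 to that 32 KB stripe; with 4 KB filesystem blocks the stride works out to 32/4 = 8, and the device name below is a placeholder:

    # stride = stripe size / block size = 32 KB / 4 KB = 8 filesystem blocks
    mkfs.ext3 -b 4096 -E stride=8 /dev/sdb1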
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly sub-par from my experience. I did some testing with Gluster 3.12 and oVirt 3.9; on my running production cluster, when I enabled the glfsapi, even my pre-gfapi numbers are significantly better than what Tony is reporting. Before using gfapi: ]# dd if=/dev/urandom of=test.file bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824
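One caveat with that test: /dev/urandom is CPU-bound, and without a sync the page cache absorbs much of the write. A hedged variant of the same dd test that isolates storage throughput (the file name is a placeholder):

    # /dev/zero avoids the PRNG bottleneck; conv=fdatasync forces the data to disk before dd exits
    dd if=/dev/zero of=test.file bs=1M count=1024 conv=fdatasync
    # or bypass the page cache entirely
    dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct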
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely. Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1: Apr 11 16:07:42 thumper-9.srv port 6: device reset Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27): Apr 11 16:07:42 thumper-9.srv
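A sketch of the Solaris commands typically used to dig into resets like these, assuming a stock Solaris/Thumper install:

    iostat -En        # per-device soft/hard/transport error counters
    fmdump -eV        # fault management telemetry recorded for the resets
    zpool status -x   # whether ZFS has started reacting to the device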
2008 Sep 25
4
Help with b97 HVM zvol-backed DomU disk performance
Hi Folks, I was wondering if anyone has any pointers/suggestions on how I might increase disk performance of an HVM zvol-backed DomU? - this is my first DomU, so hopefully it's something obvious. Running bonnie++ shows the DomU's performance to be 3 orders of magnitude worse than Dom0's, which itself is half as good as when not running xVM at all (see bottom for bonnie++ results)
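One knob worth checking is the zvol block size; a minimal sketch, assuming the dataset names below are placeholders and that the guest filesystem uses 8 KB blocks:

    # volblocksize is fixed at creation time; matching the guest's block size
    # avoids read-modify-write on small I/O
    zfs create -V 20G -o volblocksize=8k tank/domu1-disk0
    zfs get volblocksize tank/domu1-disk0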
2010 Jan 04
0
two nodes with different performance metrics
Two-node AFR with two clients, all four machines identical, v2.0.9, etc. I notice that one client is able to read and write much more quickly than the other and I'm wondering how I go about finding why that is. In a two-node AFR, is one node always elected to be the primary node for writes and does that apply for reads too? That would explain my situation (client0 and server0 share a
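A hedged sketch of pinning reads to one replica in the volfile-based configuration of that era, assuming the AFR translator's read-subvolume option is available in 2.0.9 and that the subvolume names below are placeholders:

    volume afr0
      type cluster/afr
      # choose which replica serves reads; by default the first responding child tends to win
      option read-subvolume server0
      subvolumes server0 server1
    end-volume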
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple days on SATA channel 8. The disk finally gave up last night at 17:40. I got to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
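A minimal sketch of the usual recovery path once a replacement drive is in place (pool and device names are placeholders):

    zpool status -x             # identify the failed device and the affected pool
    zpool replace tank c4t8d0   # swap in the replacement disk
    zpool status tank           # watch the resilver progress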
2008 Feb 13
1
Strange performance issues under CentOS 5.1
I am still running CentOS 4.6 on our production systems, but I am starting to plan the upgrade to CentOS 5.1. I have one test system running 5.1 that is the exact same hardware configuration as my 4.6 test system. One of our builds runs about 6 times slower on the 5.1 system, even though it uses less overall CPU time. I first suspected something wrong with the disk, but the results
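A hedged sketch of narrowing down where the extra time goes, run identically on the 4.6 and 5.1 boxes (make here is a placeholder for the real build command):

    /usr/bin/time -v make     # wall clock vs CPU time vs page faults
    vmstat 5                  # watch for swapping or I/O wait while the build runs
    strace -c -f make         # per-syscall counts and latencies to diff between the two systems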
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity: # zpool status pool: bigpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
2006 Jul 17
11
ZFS bechmarks w/8 disk raid - Quirky results, any thoughts?
Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than approximately 40MB/s on an ext2 file system. IMO, this is horrible performance for a 6-drive, hardware RAID 5 array. Please have a look at what I'm doing and let me know if anybody has any suggestions on how to improve the performance... System specs: ----------------- 2 x 2.8GHz Xeons 6GB RAM 1 3ware 9500S-12 2 x 6-drive,
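Beyond the mkfs options, readahead is often the first thing tuned on a 3ware RAID5 volume; a hedged sketch, with /dev/sda as a placeholder for the array device:

    blockdev --getra /dev/sda       # current readahead, in 512-byte sectors
    blockdev --setra 16384 /dev/sda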
2019 Aug 02
2
Re: nbdkit random seek performance
On Thu, Aug 01, 2019 at 03:44:31PM -0700, ivo welch wrote: > hi richard---arthur and I are working with nbdkit v1.12.3 on qemu/kvm. > > we found that our linux (ubuntu 16.04 32-bit) boot time from a local .img > file went from about 10 seconds to about 3 minutes when using the nbdkit > file plugin instead of directly connecting qemu to the file. on further > inspection with
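For context, a minimal sketch of the setup being compared, assuming nbdkit's default port and a raw image named ubuntu.img (a placeholder):

    # export the image over NBD (port 10809 by default)
    nbdkit file file=ubuntu.img
    # boot the guest from the NBD export instead of the local file
    qemu-system-x86_64 -m 1024 -drive file=nbd:localhost:10809,format=raw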
2007 Apr 18
0
3Ware ext3 performance with CentOS 5
Oddly enough, the performance of a 3Ware 9550SX controller seems to have improved when switching from 64-bit to 32-bit CentOS 5. This is a RAID0 device with 8 x 500gig barracudas and the noatime and data=writeback tweaks to that device in /etc/fstab. It's also quite a bit better than results on the same machine using CentOS 4.4 (64-bit). This partition is used to store scratch data
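The tweaks mentioned amount to an fstab entry along these lines (device and mount point are placeholders):

    /dev/sdb1   /scratch   ext3   noatime,data=writeback   0 2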
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
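The arithmetic is consistent with only the first figure including no parity; a worked example, assuming the reported sizes are binary TiB and that a 1.5 TB drive is roughly 1.36 TiB:

    4 x 1.36 TiB ~ 5.46 TiB   -> close to the 5.3 TB figure, i.e. no parity subtracted
    3 x 1.36 TiB ~ 4.09 TiB   -> close to the 4.0 TB raidz1 figure, one disk of parity
    2 x 1.36 TiB ~ 2.73 TiB   -> close to the 2.67 TB raidz2 figure, two disks of parity

One possibility (an assumption, not stated in the snippet) is that the first figure came from zpool list, which for raidz pools reports raw capacity including parity, while the others reflect usable space as shown by zfs list.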
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure: $ zpool status data pool: data state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scrub:
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz zvols but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM > NAME STATE READ WRITE CKSUM > > chipool1 ONLINE 0 0 0 > >
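For what it's worth, zpool resists mixing replication levels like this; a hedged sketch of how such a layout typically comes about, with placeholder device names:

    # adding a raidz2 vdev to a pool built from raidz1 vdevs triggers a
    # "mismatched replication level" warning and only proceeds with -f
    zpool add chipool1 raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0
    zpool add -f chipool1 raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0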
2006 Oct 20
0
3ware 9550SXU-4LP performance
Hi, I used xen-3.0.2-2 (2.6.16) before and the performance of my 3ware 9550SXU-4LP wasn''t too bad, but now with 3.0.3.0 (2.6.16.29) throughput decreased by about 10MB/sec in write performance. sync; ./bonnie++ -n 0 -r 512 -s 20480 -f -b -d /mnt/blabla -u someuser now (2.6.16.29): Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per