similar to: Unexpectedly poor 10-disk RAID-Z2 performance?

Displaying 20 results from an estimated 3000 matches similar to: "Unexpectedly poor 10-disk RAID-Z2 performance?"

2009 May 13
2
With RAID-Z2 under load, machine stops responding to local or remote login
Hi world, I have a 10-disk RAID-Z2 system with 4 GB of DDR2 RAM and a 3 GHz Core 2 Duo. It's exporting ~280 filesystems over NFS to about half a dozen machines. Under some loads (in particular, any attempts to rsync between another machine and this one over SSH), the machine's load average sometimes goes insane (27+), and it appears to all be in kernel-land (as nothing in
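A rough triage sketch (not from the thread above, assuming an OpenSolaris/illumos host like the one described): separate kernel CPU time from run-queue pressure before blaming ZFS or NFS.

    mpstat 5 3             # per-CPU usr vs sys split; a high sys column points into the kernel
    prstat -mL -n 20 5 3   # per-thread microstate accounting; look for time spent in SYS and LCK
    echo "::arc" | mdb -k  # ARC size and hit rates, in case memory pressure is the trigger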
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi, These patches try to support multiple virtual queues (multi-vq) in one virtio-blk device, and map each virtual queue (vq) to a blk-mq hardware queue. With this approach, both the scalability and the performance of the virtio-blk device can be improved. For verifying the improvement, I implemented virtio-blk multi-vq over qemu's dataplane feature, and both handling host notification from each vq and
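Not part of the patch series itself, but a sketch of how the feature is typically exercised once both host and guest support it; the num-queues property and the blk-mq sysfs layout below are assumptions based on later QEMU and kernel releases.

    qemu-system-x86_64 -enable-kvm -smp 4 -m 2048 \
      -drive file=disk.img,if=none,id=d0,format=raw \
      -device virtio-blk-pci,drive=d0,num-queues=4
    # inside the guest, each vq should show up as one blk-mq hardware context:
    ls /sys/block/vda/mq/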
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be.) A RAID-Z2 pool with one missing disk offers the same failure resilience as a healthy RAID-Z1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
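A minimal sketch of the sparse-file experiment hinted at above (file and pool names are invented): build a 5-disk raidz2 from sparse files, offline one, and compare the reported redundancy with a healthy raidz1.

    mkdir /var/tmp/zt && cd /var/tmp/zt
    mkfile -n 256m d1 d2 d3 d4 d5   # sparse files standing in for disks
    zpool create ztest raidz2 $PWD/d1 $PWD/d2 $PWD/d3 $PWD/d4 $PWD/d5
    zpool offline ztest $PWD/d5     # degraded raidz2: one level of parity left
    zpool status ztest              # DEGRADED, but still survives one more disk failure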
2014 Jun 13
6
[RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi, These patches try to support multiple virtual queues (multi-vq) in one virtio-blk device, and map each virtual queue (vq) to a blk-mq hardware queue. With this approach, both the scalability and performance problems of the virtio-blk device are addressed. For verifying the improvement, I implemented virtio-blk multi-vq over qemu's dataplane feature, and both handling host notification from each vq
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi Jens and Rusty, On Thu, Jun 26, 2014 at 8:04 PM, Ming Lei <ming.lei at canonical.com> wrote: > On Thu, Jun 26, 2014 at 5:41 PM, Ming Lei <ming.lei at canonical.com> wrote: >> Hi, >> >> These patches try to support multiple virtual queues (multi-vq) in one >> virtio-blk device, and map each virtual queue (vq) to a blk-mq >> hardware queue. >>
2009 Apr 17
4
unable to find any probes from the nfs provider
I want to list/use the nfs probes but I get the error "dtrace: failed to match nfs*:::: No probe matches description". Is there a way to enable nfs provider probes? My system is running snv_112 (bfu'ed from the gate archive) # dtrace -lP nfs* ID PROVIDER MODULE FUNCTION NAME dtrace: failed to match nfs*:::: No probe matches
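Not from the original post, but a common explanation is that the NFS providers' probes only register once the NFS server module is loaded and serving; a hedged checklist (the exported path is a placeholder):

    svcs nfs/server                  # is the NFS server even online?
    svcadm enable -r nfs/server      # enable it along with its dependencies
    share -F nfs -o ro /export/test  # export something so the server code loads
    dtrace -lP 'nfsv3' | head        # probes such as nfsv3:::op-read-start should now list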
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi, These patches try to support multiple virtual queues (multi-vq) in one virtio-blk device, and map each virtual queue (vq) to a blk-mq hardware queue. With this approach, both the scalability and the performance of the virtio-blk device can be improved. For verifying the improvement, I implemented virtio-blk multi-vq over qemu's dataplane feature, and both handling host notification from each vq and
2009 Jan 12
1
ZFS size is different ?
Hi all, I have 2 questions about ZFS. 1. I have created a snapshot in my pool1/data1, and zfs send/recv it to pool2/data2, but I found that the USED in zfs list is different: NAME USED AVAIL REFER MOUNTPOINT pool2/data2 160G 1.44T 159G /pool2/data2 pool1/data 176G 638G 175G /pool1/data1 It keeps about 30,000,000 files. The content of p_pool/p1 and backup/p_backup
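A hedged follow-up sketch using the dataset names from the post: the usual suspects for a USED mismatch after send/recv are compression, the copies property, and snapshots that exist only on the source.

    zfs get used,referenced,compressratio,copies,recordsize pool1/data1
    zfs get used,referenced,compressratio,copies,recordsize pool2/data2
    zfs list -t snapshot -r pool1/data1   # source-only snapshots count toward USED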
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
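A hedged sketch of how four such SSDs are commonly split (pool and device names are placeholders): mirror two as a dedicated log device for synchronous writes, and add the other two as L2ARC, which needs no redundancy.

    zpool add tank log mirror c4t0d0 c4t1d0   # slog: absorbs synchronous (ZIL) writes
    zpool add tank cache c4t2d0 c4t3d0        # L2ARC: second-level read cache
    zpool iostat -v tank 5                    # confirm the new vdevs are taking traffic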
2018 Jun 08
2
C7, encryption, and clevis
> > so if it would work, replace shortname with short and short1? With all of this hokey-pokey surrounding licensing and MAC addresses, I wonder if this outfit is actually still in compliance with the terms of their license for this software, whatever it may be? If the software is licensed to run only on Machine X, and Machine X has now been junked and replaced by Machine Y, then isn't the
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and have the impression that a lot of folks here are (like me) interested in getting a viable solution for a cheap, fast and reliable ZIL device. I think I can provide such a solution for about $200, but it involves a lot of development work. The basic idea: the main problem when using an HDD as a ZIL device is the cache flushes
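A hedged way to see how much flush traffic the ZIL actually generates before building anything (fbt is an unstable provider, so these function names may differ between builds):

    dtrace -n 'fbt::zil_commit:entry { @commits = count(); }
               fbt::zio_flush:entry  { @flushes = count(); }
               tick-30s { exit(0); }'   # aggregations print on exit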
2017 Sep 12
2
SMB data transfer performance on AD mode
On Tue, 2017-09-12 at 09:11 -0700, Jeremy Allison via samba wrote: > On Tue, Sep 12, 2017 at 12:52:29PM -0300, Dante Colo via samba wrote: > > Hi Everyone! > > > > I note that all of the Samba AD servers that I maintain are not so fast in terms of data transfer; more specifically, none of them go over 40 MB/s, and one in particular, which I'm trying to find out why, doesn't go
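Not from the thread, but a quick way to separate client-side copy tools from the server itself: push a large file with smbclient and read off the average rate it reports (share name and credentials below are placeholders).

    dd if=/dev/zero of=/tmp/1g.bin bs=1M count=1024
    smbclient //dc1/testshare -U testuser%secret -c 'put /tmp/1g.bin 1g.bin'
    # smbclient prints the average KB/s for the transfer when the put finishes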
2014 Oct 04
2
massive load caused by smartd
Hey all, I noticed that my puppet server running CentOS 6.5 was acting a little pokey. So I logged in and did what, well, just about anyone would've done, and ran the uptime command to have a look at the load. And it was astonishingly high! [root at puppet:~] # uptime 21:28:01 up 1:26, 3 users, load average: 107.37, 72.06, 75.52 So then I had a look at top and saw a LOT of processes
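A hedged first pass for that kind of number (not from the original post): on Linux the load average also counts tasks in uninterruptible sleep, so triple-digit values usually mean stuck I/O rather than CPU.

    ps -eo state,pid,wchan:32,cmd | awk '$1 == "D"'  # tasks blocked in the kernel on I/O
    iostat -x 5 3                                    # per-device utilisation and await times
    top -b -n 1 | head -n 20                         # check %wa and the top offenders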
2006 Jun 29
8
Is This a Performance Concern?
I'm running on a brand new MacBook Pro with a relatively clean working set, using Mongrel in production mode on port 3000. The home page does not hit the database and I'm getting: Processing HomeController#index (for 127.0.0.1 at 2006-06-29 14:59:02) [GET] Session ID: e11f7df52bffff304ca7c88e672ef71a Parameters: {"action"=>"index",
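A hedged sanity check (URL and request counts are just an example): hit the same action repeatedly from the command line so a single warm-up request doesn't dominate the numbers.

    ab -n 200 -c 1 http://127.0.0.1:3000/
    # compare the mean time per request against the per-request time Rails logs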
2007 Sep 13
26
hardware sizing for a zfs-based system?
Hi all, I'm putting together an OpenSolaris ZFS-based system and need help picking hardware. I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS & 4*(4+2) RAIDZ2 for the SAN] http://rackmountpro.com/productpage.php?prodid=2418 Regarding the mobo, CPUs, and memory - I searched Google and the ZFS site and all I came up with so far is that, for a
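A hedged sketch of the data layout described above (controller/target names are invented): one pool built from four 6-disk raidz2 vdevs, i.e. 4 x (4+2), with the 2-disk OS mirror kept outside the pool.

    zpool create san \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
    zpool status san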