similar to: set zfs:zfs_vdev_max_pending

Displaying 20 results from an estimated 900 matches similar to: "set zfs:zfs_vdev_max_pending"
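For context on the tunable named in the query: on Solaris/OpenSolaris it is normally set in /etc/system (taking effect at the next boot) or changed on a live kernel with mdb. A minimal sketch, assuming a target value of 10; the right value is workload-dependent:
    /etc/system entry:     set zfs:zfs_vdev_max_pending = 10
    live change via mdb:   echo zfs_vdev_max_pending/W0t10 | mdb -kw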

2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
2009 Jan 13
12
OpenSolaris better than Solaris10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML Raid card, I got errors on all drives that result from SCSI timeout errors.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2006 Jul 01
1
The ZFS Read / Write roundabout
Hey all - I was playing a little with zfs today and noticed that when I was untarring a 2.5gb archive both from and onto the same spindle in my laptop, the bytes read and written over time were seesawing between approximately 23MB/s and 0MB/s. It seemed like we read and read and read till we were all full up, then wrote until we were empty, and so the cycle went. Now: as it happens,
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:
    zfs_raid      ONLINE  0  0  0
      raidz1      ONLINE  0  0  0
        c7t2d0    ONLINE  0  0  0
        c7t3d0    ONLINE  0  0  0
        c7t4d0    ONLINE  0  0
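For reference, a pool laid out like the one above is typically created along these lines; this is only a sketch, and the fourth disk name (c7t5d0) is a placeholder since the snippet cuts off before it:
    zpool create zfs_raid raidz1 c7t2d0 c7t3d0 c7t4d0 c7t5d0
    zpool status zfs_raid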
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby writing a directory of files, including some large ones 100+MB in size, can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
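For readers new to slog testing: a separate intent-log device is attached to an existing pool roughly as follows (a sketch; pool and device names are placeholders), after which it shows up under a logs section in zpool status:
    zpool add tank log c4t0d0
    zpool status tank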
2008 Jan 10
2
NCQ
Fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]
                          extended device statistics
    device     r/s    w/s     kr/s   kw/s  wait  actv  svc_t  %w  %b
    sd2      454.7    0.0  47168.0    0.0   0.0   5.7   12.6   0  74
    sd4      440.7    0.0  45825.9    0.0   0.0   5.5   12.4   0  78
    sd6      445.7    0.0
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single zfs drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
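The comparison described is usually run with commands along these lines (a sketch; the device path, file name, and block counts are placeholders, not taken from the post):
    dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=2000    (raw disk)
    dd if=/tank/testfile of=/dev/null bs=1024k count=2000        (file on ZFS)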
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is an SMI label... cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
    # zpool get bootfs rpool
    NAME   PROPERTY  VALUE  SOURCE
    rpool  bootfs    -      default
    # zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
    cannot set property for 'rpool': property
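If the disk really does carry an EFI label, the usual remedy is to rewrite it as an SMI (VTOC) label before setting bootfs; a rough sketch of the steps (the format prompts vary by release, so treat this as an outline):
    format -e          (select the disk, run the label command, choose the SMI label type)
    zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool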
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I did in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year:
          KB  reclen    write  rewrite     read   reread
    33554432
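For comparison purposes, a 32GB iozone run like the one quoted is typically invoked along these lines (a sketch; the record size, test selection, and file path are assumptions, not taken from the post):
    iozone -i 0 -i 1 -r 128k -s 32g -f /pool/fs/iozone.tmp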
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive times to completion for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
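One quick way to check whether the SSD log device is actually absorbing the NFS COMMIT traffic is to watch per-vdev statistics while the load runs; a sketch, with the pool name as a placeholder:
    zpool iostat -v pool 1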
2011 Apr 07
0
Update LDOM bootdisk (ZFS) from control domain
Hello, I am trying to update files on an LDOM boot disk from the control domain, but I can't get it working fully. I have a setup that allows me to quickly deploy LDOM guest domains. The golden OS image of the guest domain boot disk is on a ZFS volume. This ZFS volume has been snapshotted and cloned for quick deployment of guest domains. The filesystem in both the control domain and guest domain is
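The deployment flow described (golden image on a ZFS volume, snapshotted and cloned per guest) looks roughly like this; dataset names are placeholders:
    zfs snapshot tank/ldom/golden@gold
    zfs clone tank/ldom/golden@gold tank/ldom/guest1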
2010 Sep 20
5
create mirror copy of existing zfs stack
Hi, I have a mirrored pool, tank, with two devices underneath, created this way:
    # zpool create tank mirror c3t500507630E020CEAd1 c3t500507630E020CEAd0
Created file system tank/home:
    # zfs create tank/home
Created another file system tank/home/sridhar:
    # zfs create tank/home/sridhar
After that I created files and directories under tank/home and tank/home/sridhar. Now I detached the 2nd device, i.e
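The detach step mentioned at the end would have been done roughly like this, assuming the "2nd device" means the second disk listed in the create command:
    zpool detach tank c3t500507630E020CEAd0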
2008 Dec 17
12
disk utilization is over 200%
Hello, I use Brendan's sysperfstat script to see the overall system performance and found that the disk utilization is over 100:
                 ------ Utilisation ------   ------ Saturation ------
        Time     %CPU   %Mem    %Disk   %Net     CPU    Mem
    15:51:38    14.52  15.01   200.00  24.42    0.00   0.00  83.53  0.00
    15:51:42    11.37  15.01   200.00  25.48    0.00   0.00  88.43  0.00
2008 Jun 05
0
[LLVMdev] Linux x86 testers needed!
On Thu, Jun 5, 2008 at 2:50 PM, Tanya M. Lattner <tonic at nondot.org> wrote: > > >> llvm-gcc4.0 is no longer supported, use llvm-gcc4.2. Please keep in mind > >> that you need to keep llvm-gcc in sync with llvm (same revision number). > > > > This basically means llvm-gcc needs to be rebuilt every time llvm > > is built and the test run.
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: are 700-800 I/Ops reasonable for a 7200 RPM SATA drive (1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
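Per-disk write rates like the 700-800 writes/sec figure are usually read from iostat while the resilver runs, alongside zpool status for progress; a sketch, with the pool name as a placeholder:
    iostat -xn 1
    zpool status pool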
2008 Jun 05
1
[LLVMdev] Linux x86 testers needed!
>>>> llvm-gcc4.0 is no longer supported, use llvm-gcc4.2. Please keep in mind >>>> that you need to keep llvm-gcc in sync with llvm (same revision number). >>> >>> This basically means llvm-gcc needs to be rebuilt every time llvm >>> is built and the test run. Shouldn't this be part of >>> NewNightlyTest.pl then? >> >>
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and most likely the SATA PCI-X controllers. I run OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with 3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis which each hold 4 SATA disks manufactured by Seagate, model ES.2 (500 and 750), for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general symptoms are that the number of imapd, pop3d, and lmtpd processes increases, the CPU load average increases, but the ZFS I/O bandwidth decreases. At the same time, ZFS
2012 Jul 18
4
asterisk 1.8 on Solaris/sparc
I've got the latest asterisk 1.8 running on a Netra X1 with Solaris 10 u10. The system itself is happy and phone calls (between two parties) seem fine. Unfortunately, when a caller listens to a Playback recording, there seem to be moments of stutter - perhaps 1 second of stutter for every 10 seconds of Playback. The stutter is not consistent at the same point of the playback file. To