Displaying 20 results from an estimated 300 matches similar to: "Seriously degraded SAS multipathing performance"
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk
devices provided by a NetApp filer. Both the T2000 and the NetApp
have two Ethernet interfaces for iSCSI, going to separate switches on
separate private networks. The scsi_vhci devices look like this in
`format':
1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
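For anyone checking the same thing: the load-balancing policy that scsi_vhci applies to a multipathed LUN can be read with mpathadm. A minimal sketch, reusing the LUN name shown above (output layout varies by release):
mpathadm show lu /dev/rdsk/c4t60A98000433469764E4A413571444B63d0s2   # reports "Current Load Balance: round-robin" along with the path states
mpathadm list lu                                                     # list every multipathed logical unit and its path count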
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The messages in /var/adm/messages for the disks were 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
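The usual sequence for bringing a repaired disk back and releasing the hot spare looks roughly like the following; pool and device names here are placeholders, not taken from the post:
zpool online tank c2t3d0   # bring the repaired disk back online; a resilver starts if the pool needs one
zpool detach tank c5t0d0   # once the resilver finishes, detach the spare that had been pulled in
zpool status -v tank       # confirm the vdev is ONLINE again with no spare attached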
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
Sorry for the long post, but I know people trying to decide on hardware often
want to see details about what others are using.
I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am
starting to use.
I successfully transferred a deduped zpool with 1.x TB of files and 60 or so
ZFS filesystems using mbuffer from an old build 134 system with 6 drives - it ran at
about 50MB/s or
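For reference, a send/receive over mbuffer along the lines described typically looks like this; host name, port, pool and snapshot names are made up for illustration:
# on the receiving system: listen on a TCP port and feed the stream into zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F newpool/backup
# on the sending system: snapshot everything recursively, then stream it to the receiver
zfs snapshot -r oldpool@move
zfs send -R oldpool@move | mbuffer -s 128k -m 1G -O receiver:9090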
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance.
I've had this read problem for the past 2 months now and just can't get to the bottom of it. I have a home snv_111b server with a ZFS RAID pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this:
> # zpool status
> pool: pool01
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> pool01 ONLINE 0 0 0
> mirror
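As background, attaching a second device to an existing mirror (or turning a single disk into a two-way mirror) is done with zpool attach; a minimal sketch with placeholder device names:
zpool attach pool01 c1t0d0 c1t1d0   # mirror c1t1d0 onto the existing disk c1t0d0; a resilver starts automatically
zpool status pool01                 # watch the resilver progress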
2011 Aug 16
2
Solaris 10u8 hangs with message "Disconnected command timeout for Target 0"
Hi,
My Solaris storage box hangs. I go to the console and there are messages[1]
displayed on it.
I can't log in at the console, and it seems the I/O is totally blocked.
The system is Solaris 10u8 on a Dell R710 with a Dell MD3000 disk array; two HBA
cables connect the server and the MD3000.
The symptom is random.
It would be much appreciated if anyone could help me out.
Regards,
Ding
[1]
Aug 16
2011 Aug 18
0
zfs-discuss Digest, Vol 70, Issue 37
Please check whether you have the latest 'MPT' patch installed on your server.
If not, please install the MPT patch; it should fix the issue.
Regards,
Gowrisankar .
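If it helps, the driver revision that is actually loaded can be checked before hunting for a patch; a quick sketch for Solaris 10 (the exact patch ID to compare against depends on the update level, so it is not listed here):
modinfo | grep -i mpt      # loaded mpt / mpt_sas driver modules and their version strings
prtconf -D | grep -i mpt   # which devices are currently bound to the mpt driver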
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS?
Keith McAndrew
Senior Systems Engineer
Northern California
SUN Microsystems - Data Management Group
<mailto:Keith.McAndrew at SUN.com> Keith.McAndrew at SUN.com
916 715 8352 Cell
2011 Jul 22
4
add device to mirror rpool in sol11exp
My new Oracle server, running sol11exp, is using multipath device names...
Presently I have two disks attached: (I removed the other 10 disks for now,
because these device names are so confusing. This way I can focus on *just*
the OS disks.)
0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2
hd 255 sec 252>
/scsi_vhci/disk at g5000c5003424396b
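For what it's worth, mirroring the root pool onto a second disk generally comes down to a zpool attach plus installing a boot block on the new half. A sketch assuming SPARC; the second device name below is invented for illustration:
zpool attach rpool c0t5000C5003424396Bd0s0 c0t5000C5003424396Ad0s0   # attach the second disk to the root mirror
# on SPARC, install the ZFS boot block on the new disk once the resilver completes
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000C5003424396Ad0s0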
2006 Oct 31
0
6349232 vhci cache may not contain iscsi device information when cache is rebuilt
Author: ramat
Repository: /hg/zfs-crypto/gate
Revision: 4b26d77cdea7130b1da9746af6ad53939d24d297
Log message:
6349232 vhci cache may not contain iscsi device information when cache is rebuilt
Files:
update: usr/src/uts/common/os/sunmdi.c
update: usr/src/uts/common/sys/mdi_impldefs.h
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a Sun X2200 M2 (4GB / 2 Opteron 2214 / 2 disks 250GB
SATA2, Solaris 10 Update 4)
and a Sun STK 2540 FC array (8 SAS disks of 146 GB, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (stripe of 128 KB) presented to the host
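For the record, once the two array LUNs are visible to the host, the ZFS side of the mirror is a one-liner; a sketch with made-up device names:
zpool create datapool mirror c2t0d0 c2t1d0   # ZFS writes every block to both LUNs and can self-heal from either copy
zpool status datapool                        # verify both sides of the mirror are ONLINE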
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run OpenSolaris 2008.11 (snv_98) and my hardware is a Sun Netra X4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks manufactured by Seagate, model ES.2
(500 and 750), for a total of 12 disks. Every disk has its own eSATA cable
connected to the ports on the PCI-X
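A couple of standard first steps when chasing per-disk performance on a setup like this, nothing controller-specific:
iostat -xnz 5       # per-device service times and %busy; a single slow or erroring disk tends to stand out here
zpool iostat -v 5   # per-vdev and per-disk bandwidth and IOPS as ZFS sees them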
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching on the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms).
I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
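For anyone reproducing this tuning, the knobs described map roughly onto the following; the /etc/system tunable names are the usual ones for ZFS of that era, the values are examples, and the dataset name is a placeholder:
# /etc/system entries (take effect after a reboot)
set zfs:zfs_prefetch_disable = 1   # disable file-level prefetch
set zfs:zfs_vdev_cache_size = 0    # disable the device-level read-ahead cache
set zfs:zfs_vdev_max_pending = 1   # limit the concurrent I/Os queued per vdev (old default was 35)
# per-dataset setting, effective immediately
zfs set primarycache=metadata pool/oradata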
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured:
T2000 with OBP 4.28.6 2008/05/23 12:07, with two 72 GB disks as the root disks
OpenSolaris Nevada Build 91
Solaris Express Community Edition snv_91 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 03 June 2008
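For context, MPxIO on these controllers is normally toggled with stmsboot, which also takes care of rewriting the device paths the system boots from; a minimal sketch:
stmsboot -e   # enable MPxIO on all supported controllers and update the boot configuration; prompts for a reboot
stmsboot -L   # after the reboot, list the mapping from the old device names to the new scsi_vhci names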
2008 Jun 18
1
snv_81 domU: long delay during boot when dom0 has been up for a long time
This one happened today:
Gentoo Linux dom0, Xen 3.2.0 hypervisor (32-bit), Intel quad-core CPU, 8GB memory.
This Linux box / dom0 has been up for 40 days.
OpenSolaris SXCE snv_81 PV domU (32-bit)
Problem: when starting the snv_81 domU, we're stuck for more than 60 seconds
early during bootstrap:
v3.2.0 chgset 'unavailable'
SunOS Release 5.11 Version snv_81 32-bit
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
Command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
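When an administrative command spins in the kernel like this, the usual next step on these lists is to grab its kernel stack with mdb; a sketch (run as root; the stack is only a point-in-time snapshot):
echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k   # print the kernel stack of every thread in the zpool process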
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the L2ARC configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD.
According to our tester, Oracle writes are extremely slow (high latency).
Below is a snippet of iostat:
r/s w/s
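When more memory for the ARC is the question, the current ARC size and its limits can be read straight from the kstats; a quick sketch:
kstat -p zfs:0:arcstats:size    # current ARC size in bytes
kstat -p zfs:0:arcstats:c       # current ARC target size
kstat -p zfs:0:arcstats:c_max   # upper limit the ARC is allowed to grow to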
2008 May 29
1
>1TB ZFS thin-provisioned volume prevents OpenSolaris from booting.
Not sure where to put this, but I am cc'ing the ZFS discussion board.
I was successful in creating iSCSI shares using zfs set shareiscsi=on with 2 thin-provisioned volumes of 1TB each (zfs create -s -V 1tb idrive/d1). Access to the shares with an iSCSI initiator was successful; all was smooth until the reboot.
Upon reboot, the console reports the following errors.
WARNING:
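For reference, the setup described boils down to two commands, with the sharing step made explicit; the legacy shareiscsi property only exists on the older OpenSolaris/SXCE builds, before COMSTAR:
zfs create -s -V 1tb idrive/d1    # sparse (thin-provisioned) 1 TB zvol
zfs set shareiscsi=on idrive/d1   # export the zvol as an iSCSI target via the legacy iscsitgt daemon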
2013 Jan 07
5
mpt_sas multipath problem?
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working,
and we could use some feedback and/or troubleshooting advice.
The OS is oi151a7, running on an existing server with a 54TB pool
of internal drives. I believe the server hardware is not relevant
to the JBOD issue, although the internal drives do appear to the
OS with multipath device names (despite the fact
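For anyone hitting the same thing: mpxio is enabled per driver class, and for SAS HBAs driven by mpt_sas it is usually switched on like this (a sketch; the JBOD's drives must also be accepted by scsi_vhci, which may require a scsi_vhci.conf entry):
stmsboot -D mpt_sas -e   # enable multipathing for devices attached through the mpt_sas driver; requires a reboot
mpathadm list lu         # after the reboot, multipathed LUNs should appear here with two paths each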