Displaying 20 results from an estimated 600 matches similar to: "zfs hanging during reads"
2007 Jul 31
0
controller number mismatch
Hi,
I just noticed something interesting ... don't know whether it's
relevant or not (two commands run in succession during a 'nightly' run):
$ iostat -xnz 6
[...]
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.3 0.0 0.8 0.0 0.0 0.2 0.2 0 0 c2t0d0
2.2
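A minimal sketch of how the cXtYdZ names reported by iostat can be cross-checked against the physical controllers, assuming stock Solaris tools; the device name is just the one from the output above:
$ ls -l /dev/dsk/c2t0d0s0    # the symlink target shows the /devices physical path
$ cfgadm -al                 # lists attachment points grouped by controller
$ format </dev/null          # prints every disk together with its controller number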
2008 Dec 17
12
disk utilization is over 200%
Hello,
I use Brendan's sysperfstat script to see the overall system performance and
found that the disk utilization is over 100:
------ Utilisation ------ ------ Saturation ------
Time %CPU %Mem %Disk %Net CPU Mem
15:51:38 14.52 15.01 200.00 24.42 0.00 0.00 83.53 0.00
15:51:42 11.37 15.01 200.00 25.48 0.00 0.00 88.43 0.00
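One possibility for a %Disk figure above 100 is that the script sums utilisation across several disks; a quick sanity check is to look at the per-device %b column straight from iostat. A minimal sketch, assuming the stock Solaris iostat:
$ iostat -xnz 5    # %b is the per-disk busy percentage; two disks near 100 would add up to the ~200 shown above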
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching on the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms).
I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
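For reference, the primarycache change described above can be applied and verified per dataset; a minimal sketch with a made-up pool/dataset name:
# zfs set primarycache=metadata tank/oradata        # keep only metadata, not file data, in the ARC
$ zfs get primarycache,secondarycache tank/oradata  # confirm the settings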
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run
OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks manufactured by Seagate, model ES.2
(500 and 750) for a total of 12 disks. Every disk has its own eSATA
cable
connected to the ports on the PCI-X
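When flaky SATA controllers or cabling are suspected, the per-device error counters are often a quicker first check than throughput tests; a minimal sketch using stock Solaris tools:
$ iostat -En                       # per-device Soft, Hard and Transport error counts
$ grep -i sata /var/adm/messages   # any resets or timeouts logged against the ports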
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
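The usual sequence for bringing a replaced disk back and releasing the hot spare looks roughly like the sketch below; the pool and device names are made up, and behaviour can differ between releases:
# zpool status -v tank       # identify the faulted device and the spare currently in use
# zpool online tank c3t2d0   # ask ZFS to bring the original device back online
# zpool detach tank c7t0d0   # once the resilver completes, manually release the spare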
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell if this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies?
The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
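A couple of low-effort starting points, assuming stock Solaris 10 tools and nothing Oracle-specific:
$ iostat -xnz 5    # watch asvc_t and %b on the disks holding the datafiles
$ prstat -mL 5     # microstate accounting; high SLP/LAT shows processes waiting rather than computing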
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
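To see where a CPU that is pegged at 100% in SYS is actually spending its time, kernel profiling with lockstat is a common first step; a minimal sketch, run as root while the export is hung:
# lockstat -kIW -D 20 sleep 30    # sample kernel PCs for 30 seconds and print the top 20 entries
# echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k    # kernel stack of the hung zpool command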
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks and physical free memory is fluctuating between 200MB and 450MB out of 16GB total. We have the L2ARC configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD.
According to our tester, Oracle writes are extremely slow (high latency).
Below is a snippet of iostat:
r/s w/s
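To see whether the ARC is being squeezed by the low free memory, the raw ARC statistics can be read straight from kstats; a minimal sketch using stock tools:
$ kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max   # current ARC size versus its target and maximum
$ kstat -p zfs:0:arcstats | egrep 'hits|misses'                        # ARC and L2ARC hit/miss counters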
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
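When the sata framework logs device resets like the above, it is worth checking whether ZFS and FMA recorded matching errors; a minimal sketch:
$ zpool status -xv     # pools with problems, including per-device read/write/cksum error counts
$ fmdump -eV | tail    # recent error telemetry (ereports) for the misbehaving disk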
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
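For what it is worth, one common suggestion for this kind of comparison is to use the same large block size on both paths, since dd defaults to 512-byte reads; a minimal sketch with made-up device and file names:
$ dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=1000   # ~1GB straight from the raw device
$ dd if=/tank/testfile of=/dev/null bs=1024k count=1000       # the same amount through a file on ZFS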
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when
running iostat:
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.5 0.0 10.0 0.0 0.0 0.0 0.5 0 0 c0t0d0
0.0 0.5 0.0 10.0 0.0 0.0 0.0 0.6 0 0 c0t1d0
0.0 65.1 0.0 119640001.5 0.0 0.0 0.0 0.3 0 2 c0t2d0
0.0 65.1 0.0 119640090.2 0.0
2008 Jan 17
9
ATA UDMA data parity error
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
Following contents include:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Level Info About My System
---------------------------------------------
- fresh
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
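To confirm that the mount really is sitting in zil_replay, the kernel stack of the zfs mount process can be inspected with mdb; a minimal sketch using pid 254 from the ptree output above:
# echo "0t254::pid2proc | ::walk thread | ::findstack -v" | mdb -k   # print the kernel stacks for pid 254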
2009 Jan 21
8
cifs performance
Hello!
I am setting up a ZFS/CIFS home storage server and now have low performance when playing movies stored on this ZFS from a Windows client. The server hardware is not new, but on Windows its performance was normal.
The CPU is an AMD Athlon Burton Thunderbird 2500 running at 1.7GHz, with 1024MB of RAM, and the storage is:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
2009 Jan 12
1
ZFS size is different ?
Hi all,
I have 2 questions about ZFS.
1. I have created a snapshot in my pool1/data1 and zfs send/recv'd it to pool2/data2, but I found that the USED shown by zfs list is different:
NAME USED AVAIL REFER MOUNTPOINT
pool2/data2 160G 1.44T 159G /pool2/data2
pool1/data 176G 638G 175G /pool1/data1
It keeps about 30,000,000 files.
The content of p_pool/p1 and backup/p_backup
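The gap between USED and REFER usually comes down to snapshots, compression or the copies setting, so comparing those properties on both sides is a reasonable first step; a minimal sketch using the dataset names from the post (usedbysnapshots only exists on newer builds):
$ zfs get used,referenced,compressratio,copies,usedbysnapshots pool1/data1 pool2/data2
$ zfs list -t snapshot -r pool1/data1    # snapshots on the source count toward its USED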
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi,
Some time ago I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones.
Thanks,
budy
--
This message posted from opensolaris.org
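This can be done without recreating the pool: each existing single-disk top-level vdev can be converted into a mirror by attaching a second disk to it, one vdev at a time; a minimal sketch with made-up pool and device names:
$ zpool status tank                 # note the existing single-disk vdevs
# zpool attach tank c0t0d0 c0t4d0   # new disk c0t4d0 becomes a mirror of c0t0d0 and resilvers
# zpool attach tank c0t1d0 c0t5d0   # repeat for each remaining single-disk vdev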
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello, all.
I have constrained disk space (only 8GB) while running the OS inside a VM. Now I
want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS?
I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right?
I did the following:
o added new virtual HDD (it becomes
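On builds without autoexpand, one common workaround is to add a second, larger virtual disk, mirror onto it and then drop the small one; the pool can then grow to the new size (on some builds only after an export/import or reboot). A minimal sketch with made-up pool and device names; for a boot disk the boot blocks also have to be installed on the new disk (installgrub on x86):
# zpool attach rpool c0d0s0 c1d0s0   # attach the new, larger virtual disk as a mirror
# zpool status rpool                 # wait until the resilver has finished
# zpool detach rpool c0d0s0          # detach the small disk so the pool can use the larger one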
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
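sharenfs=on exports read/write, but remote root is mapped to nobody and ordinary users still need matching UIDs and suitable permissions on the files themselves; a minimal sketch of the kinds of options that can help, with made-up hosts and networks (the option string follows share_nfs syntax):
# zfs set sharenfs='rw,anon=0' tank                        # quick and dirty: map unknown and root users to uid 0
# zfs set sharenfs='rw=@192.168.1.0/24,root=client1' tank  # or grant rw and root access only to specific clients
# chmod -R a+rwX /tank                                     # and make sure the on-disk permissions allow writing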
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.
I just loaded build 27a on a w1100z with a single AMD 150 CPU (2Gb RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).
My initial ZFS test was to