Displaying 20 results from an estimated 1000 matches similar to: "slow sync on zfs"
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
Many operations from NFS clients to that server are really slow (e.g. 90 seconds for unlink()).
It's not a network problem; there's also plenty of CPU available.
Storage isn''t saturated either.
First strange thing - normally on that server nfsd has about 1500-2500 number of threads.
I did
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU being 100% in SYS like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
2006 Dec 12
23
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNs. Now, how should ZFS be configured for the best performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in [b]/etc/system[/b] I put:
[b]set zfs:zfs_nocacheflush = 1[/b]
And after rebooting, I get the message:
[b]sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module[/b]
So is this variable not available in the Solaris kernel?
I'm getting really poor
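On builds where the tunable exists, its presence and current value can be checked from the live kernel before touching /etc/system — a minimal diagnostic sketch (the tunable was only added in later Solaris 10 updates, so the symbol may simply be absent on this kernel, which is what the boot message indicates):

```shell
# Print the current value of zfs_nocacheflush from the running kernel.
# Requires root; mdb reports an error if the symbol does not exist,
# matching the "not defined in the 'zfs' module" boot message.
echo "zfs_nocacheflush/D" | mdb -k

# The /etc/system line itself, applied at next boot:
#   set zfs:zfs_nocacheflush = 1
```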
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent "VTrak" SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
http://www.promise.com/product/product_detail_eng.asp?product_id=175
E310s SAS-connected RAID:
2007 Mar 16
8
ZFS checksum error detection
Hi all.
A quick question about the checksum error detection routines in ZFS.
ZFS can clearly act on checksum errors in a redundant configuration, but
what about a non-redundant one? We connected a single RAID5 array to a
v440 as a NFS server and while doing backups and the like we see the
"zpool status -v" checksum error counters increment once in a while.
Nevertheless the
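The detection half of the mechanism — a checksum recorded at write time compared against freshly read data — can be illustrated with ordinary userland tools. A minimal sketch, using sha256sum as a stand-in for the per-block checksum (fletcher or sha256) that ZFS stores in the block pointer:

```shell
# A checksum recorded at write time detects silent corruption at read
# time, even when there is no redundant copy to repair from.
block=$(mktemp)

printf 'important data' > "$block"
written=$(sha256sum "$block" | awk '{print $1}')    # checksum at write

printf 'imp0rtant data' > "$block"                  # silent bit damage
read_back=$(sha256sum "$block" | awk '{print $1}')  # checksum at read

if [ "$written" != "$read_back" ]; then
    # Without redundancy ZFS can only report this (the CKSUM column
    # in 'zpool status -v'), not self-heal it.
    result="checksum mismatch detected"
    echo "$result"
fi
rm -f "$block"
```

With a mirror or raidz vdev, the same mismatch would instead trigger a read of a redundant copy and a rewrite of the damaged block.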
2006 Jun 22
2
ZFS throttling - how does it work?
Hi zfs-discuss,
I have some questions about throttling on ZFS
1) I know that throttling kicks in while one sync is waiting for another. (http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs)
Is it possible to throttle only selected processes (e.g. nfsd) ?
2) How can I obtain some statistics about it? I want to know how often throttling kicks in on my host, etc.
3) Is it
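Question 2 above can be approximated with DTrace — a sketch assuming the fbt provider exposes the txg entry points on this build (kernel function names vary between releases, so verify the probe exists first):

```shell
# Count, per process, how often a thread has to wait for the currently
# open transaction group -- a rough proxy for ZFS write throttling.
# Check that the probe exists on this build before relying on it:
#   dtrace -l -n 'fbt:zfs:txg_wait_open:entry'
dtrace -n 'fbt:zfs:txg_wait_open:entry { @waits[execname] = count(); }'
```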
2008 Mar 13
4
Disabling zfs xattr in S10u4
Hi,
I want to disable extended attributes on my ZFS filesystems on s10u4. I found out
that the command to do this is 'zfs set xattr=off <poolname>'. But I do not
see this option in s10u4.
How can I disable zfs extended attributes on s10u4?
I'm not on the zfs-discuss alias. Please respond to me directly.
Thanks
Balaji
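On releases where the property is exposed, the commands look like this — a sketch with a hypothetical dataset name (on s10u4 the property is simply absent, so there is no supported switch to flip):

```shell
# 'tank/fs' is a placeholder dataset. Where supported, xattr is a
# per-dataset property rather than a pool-wide one:
zfs set xattr=off tank/fs
zfs get xattr tank/fs    # confirm the setting took effect
```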
2007 Mar 16
21
ZFS memory and swap usage
Greetings, all.
Does anyone have a good whitepaper or three on how ZFS uses memory and swap? I did some Googling, but found nothing that was useful.
The reason I ask is that we have a small issue with some of our DBAs. We have a server with 16GB of memory, and they are looking at moving databases over to it from a smaller system. The catch is that they are moving to 10g. Oracle
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss,
If you do 'zpool create -f test A B C spare D E' and D or E contains a
UFS filesystem, then despite -f the zpool command will complain that
there is a UFS file system on D.
Workaround: create a test pool with -f on D and E, destroy it, and
then create the first pool with D and E as hot spares.
I've tested it on s10u3 + patches - can someone confirm
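The workaround above, spelled out as commands (the device names are placeholders for A through E):

```shell
# Step 1: force-create a throwaway pool on the would-be spares, so
# that -f actually overwrites their old UFS labels.
zpool create -f scratch c1t3d0 c1t4d0
zpool destroy scratch

# Step 2: the real create no longer finds a UFS signature on the
# spare devices, so it succeeds.
zpool create -f test c1t0d0 c1t1d0 c1t2d0 spare c1t3d0 c1t4d0
```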
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list,
someone (actually neil perrin (CC)) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss?).
> Has the following error no consequences?
>
> Bug ID 6538021
> Synopsis Need a way to force pool startup when
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi.
I'm all set for doing a performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is a 1x quad-core DELL PowerEdge 1950, 2GB
RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unfortunately the
links to the disks are the bottleneck, so I'm going to use no more than 4
disks, probably.
2006 Aug 18
4
ZFS Filesystem Corruption
Hi,
I have been seeing data corruption on a ZFS filesystem. Here are
some details. The machine is running s10 on the x86 platform with a single
160GB SATA disk (root on s0 and zfs on s7).
...Sanjaya
--------- /etc/release ----------
-bash-3.00# cat /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
Copyright 2006 Sun Microsystems, Inc. All Rights
2007 Sep 25
2
ZFS speed degraded in S10U4 ?
Hi Guys,
I'm playing with a Blade 6300 to check the performance of compressed ZFS with an Oracle database.
After some really simple tests I noticed that a default installation of S10U3 (well, not really default: some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) is actually faster than S10U4, and a lot faster. Actually it's even faster on
2007 Sep 17
1
Strange behavior with ZFS and Solaris Cluster
Hi All,
Two and three-node clusters with SC3.2 and S10u3 (120011-14).
If a node is rebooted when using SCSI3-PGR, the node is not
able to take over the zpool via HAStoragePlus due to a reservation conflict.
SCSI2-PGRE is okay.
Using the same SAN-LUN:s in a metaset (SVM) and HAStoragePlus
works okay with PGR and PGRE (both SMI and EFI-labeled disks).
If using scshutdown and restart all nodes then it will
2007 Jul 05
4
ZFS receive issue running multiple receives and rollbacks
Hi, all,
Environment: S10U3 running as VMWare Workstation 6 guest; Fedora 7 is
the VMWare host, 1 GB RAM
I'm creating a solution in which I need to be able to save off state on
one host, then restore it on another. I'm using ZFS snapshots with ZFS
receive and it's all working fine, except for some strange behavior when
I perform multiple rollbacks and receives.
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object?
Say I have a pool of 3 100GB drives, dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used?
Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
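For comparison, reservations are normally meaningful on datasets below the pool's root, where they carve out guaranteed space from the shared pool — a sketch with hypothetical dataset names:

```shell
# 'tank' and 'tank/db' are placeholders. Guarantee tank/db 200GB of
# the shared pool space; sibling datasets can no longer consume it.
zfs create tank/db
zfs set reservation=200G tank/db
zfs get reservation tank/db
```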
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0, used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06), and 'zpool status' reported this:
# zpool status
pool: pool01
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the
2009 Mar 25
3
anonymous dtrace?
Hello experts,
I heard that there is something called anonymous DTrace that would still
be running when I do a reboot.
Basically, I have the following problem:
The /boot/solaris/bootenv.rc file in my alternate boot environment is
getting modified when I reboot the machine after doing luactivate <ABE>.
It happens only on init 6; it doesn't happen when I do a simple reboot.
The set