Similar to: Occasional storm of xcalls on segkmem_zio_free

Displaying 20 results from an estimated 2000 matches similar to: "Occasional storm of xcalls on segkmem_zio_free"

2011 Jun 24
13
Fixing txg commit frequency
Hi All, I'd like to ask whether there is a method to enforce a certain txg commit frequency on ZFS. I'm doing a large amount of video streaming from a storage pool while also slowly, continuously writing a constant volume of data to it (using a normal file descriptor, *not* in O_SYNC). When reading volume goes over a certain threshold (and average pool load over ~50%), ZFS
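For context, the tunable that governed txg commit frequency on Solaris-era ZFS was zfs_txg_timeout. A minimal sketch of adjusting it, assuming an OpenSolaris/Solaris 10 kernel; the 1-second value is purely illustrative:

    # persistent: add to /etc/system and reboot
    set zfs:zfs_txg_timeout = 1

    # live change on a running kernel via mdb
    echo "zfs_txg_timeout/W 0t1" | mdb -kw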
2012 May 07
14
Has anyone used a Dell with a PERC H310?
I'm trying to configure a Dell R720 (not a pleasant experience), which has an H710p card fitted. The H710p definitely doesn't support JBOD, but the H310 looks like it might (the data sheet mentions non-RAID). Has anyone used one with ZFS? Thanks, -- Ian.
2013 Jan 07
8
Has anyone used a Dell with a PERC H310?
Hello Sašo! I found you here: http://mail.opensolaris.org/pipermail/zfs-discuss/2012-May/051546.html "How about reflashing LSI firmware to the card? I read on Dell's spec sheets that the card runs an LSISAS2008 chip, so chances are that standard LSI firmware will work on it. I can send you all the required bits to do the reflash, if you like." I got a Dell Perc H310 controller
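The reflash being discussed here is the widely posted H310-to-LSI-9211-8i IT-mode crossflash. A hedged outline only, assuming a DOS boot disk with the LSI tools; adapter index 0 and the firmware file names are illustrative and vary by release:

    sas2flsh -listall                     # confirm the adapter and note its SAS address
    megarec -writesbr 0 sbrempty.bin 0    # blank the Dell SBR
    megarec -cleanflash 0                 # erase the existing flash
    sas2flsh -o -f 2118it.bin             # flash LSI 9211-8i IT-mode firmware
    sas2flsh -o -sasadd 500xxxxxxxxxxxxx  # restore the SAS address from the card label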
2012 Jul 02
14
HP Proliant DL360 G7
Hello, Has anyone out there been able to qualify the Proliant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous generation HP servers) would be greatly appreciated. Thanks in advance! -Anh
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days, does anyone have any insight into a next-gen X4500 / X4540? Does some other Oracle / Sun partner make a comparable system that is fully supported by Oracle / Sun? http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html What do X4500 / X4540 owners use if they'd like more
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
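gzip compression in ZFS runs in kernel context, and at higher levels it can monopolize the CPUs during large sequential writes, which matches the symptoms above. A sketch of the usual comparison, with the dataset name illustrative:

    zfs set compression=gzip-9 tank/media   # heaviest CPU load, best ratio
    zfs set compression=gzip   tank/media   # gzip-6, the default gzip level
    zfs set compression=lzjb   tank/media   # much lighter; rarely stalls the box
    # note: only blocks written after the change are (re)compressed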
2013 Jan 31
4
zfs + NFS + FreeBSD with performance prob
Hi all, I'm not sure if the problem is with FreeBSD or ZFS or both, so I'm cross-posting (I know it's bad). I've a server running FreeBSD 9.0 with (not counting /, which is on different disks) a zfs pool with 36 disks. The performance is very, very good on the server. I've one NFS client running FreeBSD 8.3 and the performance over NFS is very good: For example
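A common explanation for good local numbers and poor NFS numbers on ZFS is that NFS forces synchronous writes through the ZIL. A hedged diagnostic, with pool and mount names illustrative; sync=disabled is for testing only, since it risks losing recent writes on power failure:

    dd if=/dev/zero of=/tank/test.bin bs=1M count=1024       # on the server
    dd if=/dev/zero of=/mnt/tank/test.bin bs=1M count=1024   # on the client, over NFS
    zfs set sync=disabled tank   # retest; if NFS speeds up, sync writes are the bottleneck
    zfs set sync=standard tank   # restore immediately after the test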
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with: - 4x600GB Toshiba 10K SAS2 disks - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the intent log needs quite a small amount of the disks (10GB?), I was wondering
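One common layout for this, sketched with illustrative Solaris slice names: a small mirrored slog on the first slice of each SSD, and the remainder as L2ARC (cache vdevs cannot be mirrored, but losing one is harmless):

    zpool add tank log mirror c2t0d0s0 c2t1d0s0   # ~10GB slice on each SSD
    zpool add tank cache c2t0d0s1 c2t1d0s1        # the rest of both SSDs as L2ARC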
2008 Mar 14
8
xcalls - mpstat vs dtrace
Hi, T5220, S10U4 + patches.

    mdb -k
    > ::memstat

While the above is working (it takes some time; ideally a ::memstat -n 4 to use 4 threads would be useful), mpstat 1 shows:

    CPU minf mjf xcal    intr ithr csw icsw migr smtx srw syscl usr sys wt idl
     48    0   0 1922112    9    0   0    8    0    0   0 15254   6  94  0   0

So about 2 million xcalls per second. Let's check with dtrace:
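The excerpt cuts off at the dtrace step; a one-liner in the spirit of what usually follows, using the standard sysinfo provider idiom rather than the author's exact script:

    dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'
    # Ctrl-C after a few seconds; the hottest kernel stacks show who is
    # generating the cross-calls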
2009 Apr 02
1
Re: Links 2003 & Grand Prix 4
WINE 1.1.18 has a regression from WINE 1.1.14. Once again, GP4 no longer installs from CD... the console outputs the following ole error repeatedly:

    err:ole:xCall Failed to serialize param, hres 80040155
    err:ole:deserialize_param Failed to read integer 4 byte
    err:ole:TMStubImpl_Invoke Failed to deserialize param State, hres 80004005
    err:rpc:I_RpcReceive we got fault packet with status 0x80004005
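Regressions like this are normally narrowed down with a git bisect between the last good and first bad Wine release; a sketch of the standard procedure (repository URL as historically used by WineHQ):

    git clone git://source.winehq.org/git/wine.git && cd wine
    git bisect start
    git bisect bad  wine-1.1.18
    git bisect good wine-1.1.14
    ./configure && make    # rebuild, then retry the GP4 install
    git bisect good        # or "git bisect bad", repeating until the culprit commit appears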
2012 Jul 25
8
online increase of zfs after LUN increase ?
Hello, There is a feature of zfs (autoexpand, or zpool online -e) by which it can consume an increased LUN immediately and grow the zpool. That would be a very useful (vital) feature in an enterprise environment. However, when I tried to use it, it did not work: the LUN expanded and is visible in format, but the zpool did not grow. I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically
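For reference, the sequence that is supposed to work once the underlying bug is addressed, with pool and device names illustrative:

    zpool set autoexpand=on tank            # grow automatically when the LUN grows
    zpool online -e tank c4t600A0B80xxxxd0  # or one-shot expansion of a given device
    zpool list tank                         # SIZE should reflect the new LUN size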
2006 Jan 11
1
InstallShield Setup.exe hang
Hi, InstallShield (setup.exe) seems to hang just after extracting files with a freshly built and installed wine 0.9.5 from source on Fedora Core 4. Could you tell me a workaround or hint? Do I have to put the ole32.dll from an original MS WinXP into drive_c/windows/system32? The InstallShield output is as follows. I think setup.exe extracts the files (OK) and forks the co-process (iKernel.exe; OK), then tries to
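The workaround being asked about is usually done with a DLL override rather than by copying files by hand. A sketch using Wine's standard override mechanism (assumes you are entitled to use the native DLL):

    # one-off from the shell: prefer native ole32, fall back to builtin
    WINEDLLOVERRIDES="ole32=n,b" wine setup.exe
    # or persistently: winecfg -> Libraries -> add ole32 as "native, builtin"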
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS related crash/bug described below. How do I go about reporting the crash, and what additional information is needed? I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64 bit. OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
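The test app itself is not shown; a minimal stand-in that exercises the same pattern (directory counts, file sizes, and paths are all illustrative):

    #!/bin/sh
    # create 100 directories, each with 10 files of 1MB of random data
    d=0
    while [ $d -lt 100 ]; do
        mkdir -p /tank/stress/dir$d
        f=0
        while [ $f -lt 10 ]; do
            dd if=/dev/urandom of=/tank/stress/dir$d/file$f bs=1024k count=1 2>/dev/null
            f=$((f+1))
        done
        d=$((d+1))
    done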
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142
2007 Oct 10
6
server-reboot
Hi. Just migrated to zfs on opensolaris. I copied data to the server using rsync and got this message: Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80: Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000 Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice] Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive side. The system reboots immediately. Here is the log in /var/adm/messages: Feb 8 16:07:09 amber unix: [ID 836849 kern.notice] Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40: Feb 8
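For context, -R on the send side serializes the dataset and its children, and -d on the receive side grafts the source path (minus the pool name) under the target; the pattern being attempted presumably resembled (names illustrative):

    zfs snapshot -r tank/home@migrate
    zfs send -R tank/home@migrate | zfs receive -d backup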
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors, and got so bad that I started to see trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics. I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14). But still, when trying to do zpool import
2008 Dec 28
2
zfs mount hangs
Hi, System: Netra 1405, 4x450MHz, 4GB RAM, 2x146GB (root pool) and 2x146GB (space pool), snv_98. After a panic the system hangs on boot, and manual attempts to mount (at least) one dataset in single-user mode also hang. The panic: Dec 27 04:42:11 base ^Mpanic[cpu0]/thread=300021c1a20: Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID 0x00167f73.1c737868 UE Error(s) Dec 27
2008 Jul 05
4
iostat and monitoring
Hi gurus, I like zpool iostat and I like system monitoring, so I set up a script within sma to let me get the zpool iostat figures through snmp. The problem is that as zpool iostat is only run once for each snmp query, it always reports a static set of figures, like so: root at exodus:snmp # zpool iostat -v capacity operations bandwidth pool used avail read
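The static figures are expected: zpool iostat without an interval prints averages since boot. The usual fix is to sample with an interval and keep only the second report; a sketch, with the pool name illustrative:

    # first report = averages since boot; second = activity over the last 10s
    zpool iostat -v tank 10 2
    # an snmp wrapper should parse the final sample, not the first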
2010 Mar 06
3
Monitoring my disk activity
Recently I've been benchmarking all kinds of stuff on my systems, and one question I can't intelligently answer is what block size I should use in these tests. I assume there is something that monitors current disk activity, which I could run on my production servers to get some statistics on the block sizes of the I/O the users are actually performing on the production server.
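On Solaris the io provider answers exactly this; a one-liner that histograms the I/O sizes actually reaching the disks (run as root; the aggregation key is illustrative):

    dtrace -n 'io:::start { @[execname] = quantize(args[0]->b_bcount); }'
    # Ctrl-C after a representative workload; the power-of-two buckets show
    # which block sizes dominate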