similar to: 6604198 - single thread for compression

Displaying 20 results from an estimated 200 matches similar to: "6604198 - single thread for compression"

2008 May 14
2
vdev cache - comments in the source
Hello zfs-code, http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c

 72  * All i/os smaller than zfs_vdev_cache_max will be turned into
 73  * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
 74  * track buffer).  At most zfs_vdev_cache_size bytes will be kept in each
 75  * vdev's vdev_cache.  While it
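The three tunables named in that comment can be inspected on a live system and, if desired, pinned in /etc/system. A minimal sketch, assuming an OpenSolaris-era kernel that still exports these variables (the 0x4000 value is purely illustrative, not a recommendation):

    # Inspect current values with mdb (as root)
    echo "zfs_vdev_cache_max/D"    | mdb -k
    echo "zfs_vdev_cache_bshift/D" | mdb -k
    echo "zfs_vdev_cache_size/D"   | mdb -k

    # To persist an override, add a line like this to /etc/system and reboot:
    #   set zfs:zfs_vdev_cache_max = 0x4000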
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed zpool / zfs on a single notebook hdd p-ata drive). After upgrading to 2007-05-25 opensolaris release bits
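For reference, the lzjb-compressed dataset the poster describes is normally configured per filesystem; a sketch with an assumed pool/dataset name (compression=on selects lzjb on builds of this vintage):

    zfs set compression=on tank/ws
    zfs get compression,compressratio tank/ws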
2007 Jan 13
2
Extremely poor ZFS perf and other observations
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...) - I've a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...) but for whatever reason the timestamp on the file doesn't change nor does the file size change. The same is
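One way to check whether data is actually reaching the pool while the directory listing appears frozen is to watch pool-level statistics alongside the tar process; a sketch, with the pool name assumed:

    # Per-vdev write bandwidth, sampled every 5 seconds
    zpool iostat -v tank 5
    # ZFS-level operation counts (where fsstat is available on this update)
    fsstat zfs 5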
2007 Sep 16
3
PLOGI errors
Hello, today we made some tests with failed drives on a zpool. (SNV60, 2xHBA, 4xJBOD connected through 2 Brocade 2800) In the log we found hundreds of the following errors:

Sep 16 12:04:23 svrt12 fp: [ID 517869 kern.info] NOTICE: fp(0): PLOGI to 11dca failed state=Timeout, reason=Hardware Error
Sep 16 12:04:23 svrt12 fctl: [ID 517869 kern.warning] WARNING: fp(0)::PLOGI to 11dca failed. state=c
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS related crash/bug described below. How do I go about reporting the crash and what additional information is needed? I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64 bit. OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
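The poster's test app is not shown; as a rough stand-in, a shell sketch that generates a similar load might look like the following (directory and file counts, file sizes, and the target path are all made up):

    #!/bin/sh
    # Create many directories, each holding small files of random data.
    BASE=/tank/stress
    i=0
    while [ $i -lt 1000 ]; do
        d=$BASE/dir.$i
        mkdir -p "$d"
        j=0
        while [ $j -lt 100 ]; do
            dd if=/dev/urandom of="$d/file.$j" bs=8k count=16 2>/dev/null
            j=`expr $j + 1`
        done
        i=`expr $i + 1`
    done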
2008 Sep 28
4
S10 (05/08) vs SNV_98 stubdom install at Xen 3.3 CentOS 5.2 Dom0 (64-bit)
[This email is either empty or too large to be displayed at this time]
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors. The disks were so bad that I started to get trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics. I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14). But still when trying to do zpool import
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well amazingly enough. Reads are a
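For reference, recordsize (128K by default) is an upper bound on block size, not a packing unit: a file smaller than the recordsize is written as a single block roughly the size of the file, so 1858-byte files are not padded out to 128K. A quick way to check and adjust the property, with a hypothetical dataset name:

    zfs get recordsize,used,compression tank/spectra
    zfs set recordsize=8k tank/spectra   # affects newly written files only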
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. I did some simple mkfile 512G tests and found out that, on average, ~ 500 MB/s seems to be the maximum one can reach (tried initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
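The streaming-write test described there can be reproduced and observed per vdev with standard tools; a sketch, with the pool name assumed:

    # Large sequential write, as in the post
    mkfile 512g /tank/bigfile &
    # Watch aggregate and per-vdev write bandwidth while it runs
    zpool iostat -v tank 10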
2007 Apr 23
14
concatenation & stripe - zfs?
I want to configure my zfs like this:

    concatenation_stripe_pool:
        concatenation  lun0_controller0  lun1_controller0
        concatenation  lun2_controller1  lun3_controller1

1. Is there any option to implement this in ZFS?
2. Is there another way to get the same configuration?
Thanks.
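For what it's worth, ZFS does not expose plain concatenation: writes are dynamically striped across all top-level vdevs. The closest equivalent to the layout above is simply to give the pool all four LUNs as top-level vdevs; a sketch, treating the LUN names above as placeholders for real device paths:

    zpool create concatenation_stripe_pool lun0_controller0 lun1_controller0 \
        lun2_controller1 lun3_controller1
    zpool status concatenation_stripe_pool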
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and opensolaris snv79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1) we just started over with new hardware and opensolaris 2008.05 upgraded to snv94. We again used a supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get
2007 Dec 12
2
Xend will not start, prints ominous sounding error at boot
I'm currently running snv_76 on my laptop to run a windows XP hvm domU. Everything has been working great until recently when I started getting the following messages during boot (from dmesg):

Dec 11 16:31:19 pdlaptop xenstored: [ID 702911 daemon.error] Checking store ...
Dec 11 16:31:19 pdlaptop xenstored: [ID 702911 daemon.error] Checking store complete.
Dec 11 16:31:19 pdlaptop
2007 Aug 30
4
Samba with ZFS ACL
Hi, I'm looking for a Samba that works with native ZFS ACLs. With ZFS almost everything works except native ZFS ACLs. I have learned on the samba mailing list that it won't work until samba-3.2.0 is released. Does anyone know of a solution that works with samba-3.0.25? If you have any ideas, please let me know. Thanks.
2007 Jan 22
0
Recent JDK vulnerability
On Wed, Jan 17, 2007 at 10:18:42AM +0100, Martin Blapp wrote:
> Hi all,
>
> I just read
>
> http://www.sunsolve.sun.com/search/document.do?assetkey=1-26-102760-1
>
> Will the freebsd foundation release new jdk binaries ?
There are new binaries planned. They are built even. Just being held up by lack of testing time. -- Greg Lewis Email :
2011 May 21
1
OpenVAS Vulnerability
Hi, Please advise me about the vulnerability reported below.

High: OpenSSH X Connections Session Hijacking Vulnerability
Risk: High
Application: ssh
Port: 22
Protocol: tcp
ScriptID: 100584

Overview: OpenSSH is prone to a vulnerability that allows attackers to hijack forwarded X connections. Successfully exploiting this issue may allow an attacker to run arbitrary shell commands with the privileges
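The usual mitigations for this class of issue, short of upgrading OpenSSH, are to disable X11 forwarding on the server or to avoid trusted forwarding on the client; a sketch of the relevant stock OpenSSH settings (general hardening, not specific to this advisory):

    # /etc/ssh/sshd_config (server): disable X11 forwarding entirely
    X11Forwarding no

    # /etc/ssh/ssh_config (client): do not grant forwarded clients trusted access
    ForwardX11Trusted no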
2007 Feb 26
15
Efficiency when reading the same file blocks
If you have N processes reading the same file sequentially (where file size is much greater than physical memory) from the same starting position, should I expect that all N processes finish in the same time as if it were a single process? In other words, if you have one process that reads blocks from a file, is it "free" (meaning no additional total I/O cost) to have another process
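A crude way to test this empirically is to time one sequential reader against several concurrent readers of the same file; a sketch, assuming a test file that already exists and is much larger than RAM:

    #!/bin/sh
    F=/tank/bigfile
    # One sequential reader
    time dd if=$F of=/dev/null bs=128k
    # Four concurrent readers of the same file, timed as a batch
    time sh -c "for i in 1 2 3 4; do dd if=$F of=/dev/null bs=128k & done; wait"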
2007 Oct 10
6
server-reboot
Hi. Just migrated to zfs on opensolaris. I copied data to the server using rsync and got this message:

Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000
Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2009 Dec 08
1
Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?
I have a Solaris 10 U5 system massively patched so that it supports ZFS pool version 15 (similar to U8, kernel Generic_141445-09), live upgrade components have been updated to Solaris 10 U8 versions from the DVD, and GRUB has been updated to support redundant menus across the UFS boot environments. I have studied the Solaris 10 Live Upgrade manual (821-0438) and am unable to find any
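For context, the documented UFS-to-ZFS migration path with Live Upgrade is to create the root pool first and then point lucreate at it; a sketch with placeholder disk and boot-environment names (it shows the general flow rather than answering the prerequisite question):

    # Root pool must live on an SMI-labeled slice (device name is a placeholder)
    zpool create rpool c1t0d0s0
    # Create the new boot environment in the pool, then activate and reboot
    lucreate -n zfsBE -p rpool
    luactivate zfsBE
    init 6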
2010 Mar 05
4
j4500 cache flush
Since the j4500 doesn't have an internal SAS controller, would it be safe to say that ZFS cache flushes would be handled by the host's SAS hba?
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
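For readers who want to reproduce or soften the behaviour described here, the compression algorithm and level are per-dataset properties on builds that support gzip; a sketch with an assumed dataset name:

    # gzip defaults to level 6; gzip-1 is much cheaper on CPU, lzjb cheaper still
    zfs set compression=gzip tank/media
    zfs set compression=gzip-1 tank/media
    zfs get compression,compressratio tank/media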