similar to: Sun 6120 array again

Displaying 20 results from an estimated 400 matches similar to: "Sun 6120 array again"

2008 Feb 01
2
Un/Expected ZFS performance?
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
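For what it's worth, when moving PostgreSQL data onto ZFS the usual starting point is a dedicated dataset whose recordsize matches PostgreSQL's 8K block size, delegated into the zone. A rough sketch (the pool name "tank" and zone name "dbzone" are placeholders, not from the original post):

  # create a dataset tuned for 8K database blocks
  zfs create -o recordsize=8k tank/pgdata
  # delegate it to the non-global zone
  zonecfg -z dbzone "add dataset; set name=tank/pgdata; end"
  # from inside the zone, mount it where the data directory lives, e.g.
  #   zfs set mountpoint=/var/local/pgsql/data tank/pgdata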
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in [b]/etc/system[/b] I put: [b]set zfs:zfs_nocacheflush = 1[/b] And after rebooting, I get the message: [b]sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module[/b] So is this variable not available in the Solaris kernel? I'm getting really poor
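If the variable isn't defined, the tunable simply doesn't exist in that kernel's zfs module, and the /etc/system line is silently useless. A quick way to check before relying on it (assuming mdb is available on the box):

  # /etc/system entry, only honoured where the zfs module defines the symbol
  set zfs:zfs_nocacheflush = 1

  # ask the running kernel whether the symbol exists at all:
  echo "zfs_nocacheflush/D" | mdb -k
  # "unknown symbol name" means this release does not have the tunable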
2007 Apr 19
14
Experience with Promise Tech. arrays/jbod's?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
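A common first check with a new mpt_sas JBOD is whether MPxIO is actually enabled for that driver at all; stmsboot can report and toggle it per driver. A minimal sketch, not a diagnosis of this particular box (note -e rewrites device paths in vfstab and needs a reboot):

  stmsboot -D mpt_sas -L   # list devices the mpt_sas driver has under MPxIO control
  stmsboot -D mpt_sas -e   # enable MPxIO for mpt_sas, then reboot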
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it @ once, it can
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi, I read some articles on solarisinternals.com like "ZFS_Evil_Tuning_Guide" on http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest to disable cache flush http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH . It seems to be the only serious article on the net about this subject. Could someone here state on this
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent Open Solaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS on ZFS performance is affected before spending the $'s. Anyone
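Whether U4 has them is exactly the question, but for reference this is what moving the ZIL looks like on a release whose pool version does support separate log devices (the pool and device names below are made up):

  zpool upgrade -v | grep -i log   # does this release advertise separate intent log devices?
  zpool add tank log c3t0d0        # put the ZIL on a dedicated SSD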
2012 Jan 04
9
Stress test zfs
Hi all, I've got a Solaris 10 running 9/10 on a T3. It's an Oracle box with 128GB memory. Right now oracle . I've been trying to load test the box with bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more than a couple K for writes. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
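One thing worth ruling out on a 128GB box: bonnie++ needs a working set well past RAM, otherwise the ARC absorbs most of the I/O and the numbers say little about the disks. A sketch of an invocation sized for that (the mount point is a placeholder):

  # -s roughly 2x RAM so caching can't hide the disks; -u because bonnie++ refuses to run as root
  bonnie++ -d /tank/bench -s 256g -n 128 -u nobody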
2008 Jul 30
2
zfs_nocacheflush
A question regarding zfs_nocacheflush: The Evil Tuning Guide says to only enable this if every device is protected by NVRAM. However, is it safe to enable zfs_nocacheflush when I also have local drives (the internal system drives) using ZFS, in particular if the write cache is disabled on those drives? What I have is a local zfs pool from the free space on the internal drives, so I'm
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
Hi, I have a system connected to an external DAS (SCSI) array, using ZFS. The array has an nvram write cache, but it honours SCSI cache flush commands by flushing the nvram to disk. The array has no way to disable this behaviour. A well-known behaviour of ZFS is that it often issues cache flush commands to storage in order to ensure data
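One device-specific alternative to the global tunable, supported by newer sd drivers and worth verifying against the release in use, is declaring that this particular array's cache is non-volatile, so only its flush requests become no-ops while other disks keep flushing. Roughly this form in /kernel/drv/sd.conf; the vendor/product strings below are placeholders and the vendor ID must be padded to 8 characters:

  sd-config-list = "VENDOR  PRODUCTID", "cache-nonvolatile:true";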
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck? Also, if
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
Hello, I am a final year computer engineering student and I am planning to implement ZFS on Linux. I have gone through the articles posted on solaris. Please let me know about the feasibility of implementing ZFS on Linux. Waiting for valuable replies. Thanks in advance. On 9/14/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote: > Send
2013 Mar 18
0
Re: zfs-discuss Digest, Vol 89, Issue 12
You could always use 40-gigabit between the two storage systems which would speed things dramatically, or back to back 56-gigabit IB.
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
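One way people make the per-disk write-cache change repeatable is to script format's expert cache menu with a command file and loop it over every disk. A rough sketch only: the menus and prompts vary with disk type and firmware, the parsing is ad hoc, and whether the setting survives a power cycle depends on the drive, so try it on a single disk first.

  # wce-off.cmd: menu commands for 'format -e'
  cache
  write_cache
  disable
  quit
  quit

  # run it against every disk format knows about
  for d in $(format < /dev/null | awk '/^ *[0-9]+\./ {print $2}'); do
      format -e -d "$d" -f wce-off.cmd
  done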
2009 Feb 17
2
DO NOT REPLY [Bug 6120] New: Default exclude file
https://bugzilla.samba.org/show_bug.cgi?id=6120 Summary: Default exclude file Product: rsync Version: 3.0.5 Platform: Other OS/Version: Linux Status: NEW Severity: enhancement Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: wooptoo@gmail.com QAContact:
2007 Sep 13
4
How to delegate filesystems from different pools to non-global zone
I'm trying to add filesystems from two different pools to a zone but can't seem to find any mention of how to do this in the docs. I tried this but the second set overwrites the first one. add dataset set name=pool1/fs1 set name=pool2/fs2 end Is this possible or do I need to use different syntax? -Robert This message posted from opensolaris.org
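The syntax issue is that each delegated dataset needs its own add/end block; two set name lines inside a single block simply overwrite each other, which matches the behaviour described. A sketch (the zone name is a placeholder):

  zonecfg -z myzone
    add dataset
    set name=pool1/fs1
    end
    add dataset
    set name=pool2/fs2
    end
    commit
    exit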
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days, does anyone have any insight into a next-gen X4500 / X4540? Does some other Oracle / Sun partner make a comparable system that is fully supported by Oracle / Sun? http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html What do X4500 / X4540 owners use if they'd like more
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on a fc-al jbod with 14 drives. Suddenly the zpool using that jbod stopped responding to I/O requests and we get tons of the following messages on /var/adm/messages: Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g20000004cfd81b9f (sd52): Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2016 Apr 01
2
[PATCH v3 5/6] virt, sched: add cpu pinning to smp_call_sync_on_phys_cpu()
On Fri, Apr 01, 2016 at 09:14:33AM +0200, Juergen Gross wrote: > --- a/kernel/smp.c > +++ b/kernel/smp.c > @@ -14,6 +14,7 @@ > #include <linux/smp.h> > #include <linux/cpu.h> > #include <linux/sched.h> > +#include <linux/hypervisor.h> > > #include "smpboot.h" > > @@ -758,9 +759,14 @@ struct smp_sync_call_struct { >