similar to: Solaris 10 default caching segmap/vpm size

Displaying 20 results from an estimated 1000 matches similar to: "Solaris 10 default caching segmap/vpm size"

2008 Feb 19
1
ZFS and small block random I/O
Hi, We're doing some benchmarking at a customer site (using IOzone), and for some specific small-block random tests, performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test with an 8GB file size and 8KB record size, with the server physmem'd to 2GB. I noticed a couple of peculiar
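For readers wanting to reproduce something similar, here is a hedged sketch of an IOzone invocation along the lines described above; the thread count, per-thread size, and target paths are placeholders, not the poster's exact command:

    # multithreaded throughput test: sequential write then random read/write,
    # 8 KB records, one file per thread, results reported in ops/sec (-O);
    # 4 threads x 2 GB gives the 8 GB aggregate file size mentioned above
    iozone -t 4 -s 2g -r 8k -i 0 -i 2 -O \
        -F /tank/fs/f1 /tank/fs/f2 /tank/fs/f3 /tank/fs/f4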
2007 Sep 17
4
ZFS Evil Tuning Guide
Tuning should not be done in general, and best practices should be followed. So get well acquainted with this first: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Then, if you must, this could soothe or sting: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide So drive carefully. -r
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box, hence I'd like to simply concatenate those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understand, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get RAID0 striping where each data block is split across all "n" LUNs. If that's
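For reference, a minimal sketch using the poster's placeholder LUN names: ZFS has no plain concatenation mode; every top-level vdev in a pool participates in dynamic striping, where each block lands on one vdev and allocations are balanced across them rather than every block being split across all LUNs.

    # creates a pool that dynamically stripes new writes across both LUNs
    zpool create myPool lun-1 lun-2
    # adding another vdev later widens the stripe rather than appending a concat
    zpool add myPool lun-3
    zpool status myPool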
2007 Feb 13
4
Best Practises => Keep Pool Below 80%?
In the ZFS Best Practices Guide here: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide it says: "Currently, pool performance can degrade when a pool is very full and file systems are updated frequently, such as on a busy mail server. Under these circumstances, keep pool space under 80% utilization to maintain pool performance."
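A hedged sketch of how one might watch and enforce that guideline; the pool name and the quota value are illustrative placeholders only:

    # the CAP column shows the percentage of pool space in use
    zpool list tank
    # optionally reserve headroom by capping the top-level dataset
    zfs set quota=8T tank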
2006 Oct 31
0
6256083 Need a lightweight file page mapping mechanism to substitute segmap
Author: praks Repository: /hg/zfs-crypto/gate Revision: 4c3b7ab574cc73502effa96c11c293e04fd54309 Log message: 6256083 Need a lightweight file page mapping mechanism to substitute segmap 6387639 segkpm segment set to incorrect size for amd64 Files: create: usr/src/uts/common/vm/vpm.c create: usr/src/uts/common/vm/vpm.h update: usr/src/pkgdefs/SUNWhea/prototype_com update:
2008 Dec 19
4
ZFS boot and data on same disk - is this supported?
I have read the ZFS Best Practices Guide located at http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide However, I have questions about whether we support using slices for data on the same disk as we use for ZFS boot. What issues does this create if we have a disk failure in a mirrored environment? Does anyone have examples of customers doing this in production environments? I
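A minimal sketch of the layout being asked about, assuming an SMI-labelled boot disk split into slices; the device names are hypothetical:

    # root pool on slice 0, data pool on another slice of the same disk
    zpool create rpool c0t0d0s0
    zpool create datapool c0t0d0s7
    # note: when given slices rather than whole disks, ZFS does not
    # enable the disk write cache automatically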
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations> says that the number of disks in a RAIDZ should be (N+P) with N = {2,4,8} and P = {1,2}. But if you go down the page just a little further to the thumper configuration examples, none of the 3 examples follow this recommendation! I will have 10 disks to put into a
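One hedged way to lay out 10 disks while still following the (N+P) guidance quoted above is two 4+1 raidz1 vdevs; the device names are hypothetical:

    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0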
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks. I want to
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all... I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That was the best performance I got, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I can have a performance loss, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
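A hedged sketch of matching the dataset record size to the clients' 8 KB NFS I/O; the dataset names, export path, and mount options are assumptions, not a tested recommendation:

    # on the Solaris server
    zfs set recordsize=8k tank/export
    zfs set sharenfs=on tank/export
    # on a Linux client (NFSv3)
    mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/tank/export /mnt/export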
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a single rpool on cXtYd0s2, if I can even do that, and improved performance compared to having two
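A hedged sketch of the two cases being compared; device names are hypothetical, and the cache check via format's expert mode is offered as a pointer rather than a guaranteed menu path:

    # whole disk: ZFS applies an EFI label and enables the disk write cache
    zpool create dpool c1t1d0
    # slice of an SMI-labelled disk (required for boot): cache is left as-is
    zpool create rpool c0t0d0s0
    # inspect the cache state interactively
    format -e        # select disk -> cache -> write_cache -> display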
2009 Mar 04
5
Oracle database on zfs
Hi, I am wondering if there is a guideline on how to configure ZFS on a server with an Oracle database. We are experiencing some slowness on writes to the ZFS filesystem; it takes about 530 ms to write 2 KB of data. We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5 EMC EMX. This is a small database with about 18 GB of storage allocated. Are there tunable parameters that we can apply to
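A common starting point, offered here only as a hedged sketch (the dataset names and the 8 KB db_block_size are assumptions): match the dataset recordsize to the database block size and keep redo logs on their own filesystem.

    # datafiles: a recordsize matching db_block_size avoids read-modify-write
    zfs create -o recordsize=8k tank/oradata
    # redo logs are written sequentially; the default 128K recordsize is fine
    zfs create tank/oralog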
2008 Sep 14
10
ZFS system requirements
Hi, this says that OpenSolaris only requires 512MB RAM: http://dlc.sun.com/osol/docs/content/IPS/sysreq.html This says 1GB RAM and a 64-bit processor are recommended: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Swap_Space Am I going to have problems if I run OpenSolaris and ZFS at the minimum requirements?
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database back ends. So now we are looking for a new database server to give us a big performance boost, and also the possibility of scalability. Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
2007 Sep 26
9
Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?
I'm trying to get maybe 200 MB/sec over NFS for large movie files (I need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with ZFS, and how large a server should be used? The throughput required is not so large, so I am thinking an X4100 M2 or X4150 should be plenty.
2009 Jul 29
2
HPEC > VPM ?
Hi - I had a client recently move their asterisk system (asterisk 1.4.26, dahdi 2.2.0.1, aex800 w/vpm module) to a "new" location, a building that's nearly 150 years old. I was not personally able to go there, but the person who did the move said the building's demarc room was "scary"-- water leaks, jumbled and frayed wiring, and all sorts of other fun. The echo on
2006 Mar 01
1
TE411P VPM
Does anyone know how to disable the VPM in software rather than removing the card altogether? The canceler isn't working as well as the software cancelers were. Aaron
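If memory serves, Digium's drivers of that era accepted a module option to leave the on-board echo-canceller module unused, so the software cancellers take over; treat the parameter name below as an assumption and check the driver documentation shipped with your zaptel/dahdi version:

    # /etc/modprobe.conf (or a modprobe.d file) -- assumed wct4xxp option
    options wct4xxp vpmsupport=0
    # then unload/reload the card driver and restart zaptel and Asterisk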
2008 Sep 11
4
ZFS Panicing System Cluster Crash effect
Issues with ZFS and Sun Cluster: if a cluster node crashes and the HAStoragePlus resource group containing the ZFS structure (i.e. the zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic. The zpool was obviously not exported in a controlled fashion because of the hard crash. The storage structure is an HW-RAID-protected LUN from the array, with the zpool built on that single HW LUN. Zpool created
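For context, a hedged sketch of what the failover amounts to when the node dies without exporting the pool (pool name hypothetical); the forced import is the step described above as panicking the surviving node:

    # on the surviving node, the failover is roughly equivalent to:
    zpool import             # list pools visible on the shared LUN
    zpool import -f zpool01  # force import, since the dead node never exported it
    zpool status zpool01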
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello, We have a new Thor here with 24TB of disk in it (the first of many, hopefully). We are trying to determine the best practices with respect to file system management and sizing. Previously, we have tried to keep each file system to a max size of 500GB to make sure we could fit it all on a single tape, and to minimise restore times and impact should we experience some kind of volume
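A hedged sketch of keeping the old 500 GB ceiling per filesystem inside one large pool; the pool and dataset names are placeholders:

    # many filesystems in one 24 TB pool, each capped for backup/restore reasons
    zfs create tank/projects
    zfs create -o quota=500g tank/projects/a
    zfs create -o quota=500g tank/projects/b
    zfs list -o name,quota,used,available -r tank/projects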
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I'm struggling with proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I've got it figured out, but
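The raw counters in question can be dumped with kstat; a short sketch, using the statistic names quoted above:

    # full dump of the ZFS ARC statistics
    kstat -m zfs -n arcstats
    # or pick out individual values in parseable form
    kstat -p zfs:0:arcstats:p zfs:0:arcstats:memory_throttle_count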
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there. I just got a new Adaptec RAID 51645 controller in because the old (other type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used with hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64GB of memory and will be running
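For reference, the usual way to cap the ARC on that vintage of OpenSolaris is an /etc/system entry followed by a reboot; the 16 GB value below is only an illustrative placeholder, not a recommendation for this machine:

    # /etc/system -- limit the ARC to 16 GB (0x400000000 bytes)
    set zfs:zfs_arc_max = 0x400000000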