Displaying 3 results from an estimated 3 matches for "85gb".
2007 Mar 14
4
What's the best way to convert a whole set of file systems?
...to (probably) Reiserfs or maybe ext3,
but I need to do them one at a time because I only have enough transfer
space to accommodate the largest one, or at least that's my belief.
That would mean at least two copies per converted partition, and I have
six partitions to convert, ranging from ~14Gb to over 85Gb (only one is
that large; the rest are 30Gb or smaller).
1) Is there a good way to do whole fs conversions, specifically
from NTFS to reiserfs or ext3?
2) Do I even need to do this (i.e., do any of the CentOS/Linux
kernels support read AND write to NTFS)?
3) Is there, by any cha...
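The "two copies per partition" workflow the poster describes can be sketched as below. This is a hypothetical illustration, not a tested procedure: the device name `/dev/sdb1` and the mount points are invented for the example, and it assumes the old in-kernel NTFS driver (read-only, which is all that's needed here) plus enough transfer space for one partition's data at a time.

```shell
# Hypothetical sketch: convert one NTFS partition to ext3, one at a time.
# /dev/sdb1 and the mount points are example names only.

# 1) First copy: read the data off the NTFS partition (mounted read-only).
mount -t ntfs -o ro /dev/sdb1 /mnt/ntfs
cp -a /mnt/ntfs/. /mnt/transfer/
umount /mnt/ntfs

# 2) Re-create the filesystem as ext3 (this destroys the old contents).
mkfs.ext3 /dev/sdb1

# 3) Second copy: move the data back onto the new filesystem.
mount -t ext3 /dev/sdb1 /mnt/new
cp -a /mnt/transfer/. /mnt/new/
umount /mnt/new
```

Repeating this for each of the six partitions, largest first, keeps the peak transfer-space requirement at the size of the biggest partition, which matches the constraint described above.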
2010 Dec 09
3
ZFS Prefetch Tuning
...ata, 50-70% on prefetch and 0% on
metadata. I am thinking that the majority of the prefetch misses are
due to the tablespace data files.
The configuration of the system is as follows
Sun Fire X4600 M2 8 x 2.3 GHz Quad Core Processor, 256GB Memory
Solaris 10 Update 7
ZFS Arc cache max set to 85GB
4 Zpools configured from a 6540 Storage array
* apps - single LUN (raid 5) recordsize set to 128k, from the array,
pool contains binaries and application files
* backup - 8 LUNs (varying sizes all from a 6180 array with SATA
disks) used for storing oracle dumps
* data - 5 L...
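For reference, an ARC cap like the 85GB one mentioned above is set on Solaris 10 via `/etc/system`; the byte value below is simply 85 × 1024³ and is shown only as an illustration of the mechanism, not a recommendation for this workload.

```shell
# Illustrative ARC cap for Solaris 10 (value = 85 * 1024^3 bytes).
# Add to /etc/system and reboot for it to take effect:
#   set zfs:zfs_arc_max = 91268055040

# The current ARC size and ceiling can be inspected with kstat:
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
```

Comparing `size` against `c_max` over time shows whether the ARC is actually pressing against the configured limit while the prefetch misses occur.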
2012 Dec 01
3
6Tb Database with ZFS
...he better, but I can't set too much memory.
Has anyone successfully implemented something similar?
We ran some tests and the memory usage was as follows:
(With Arc_max at 30Gb)
Kernel = 18Gb
ZFS DATA = 55Gb
Anon = 90Gb
Page Cache = 10Gb
Free = 25Gb
My system has 192Gb of RAM and the database SGA = 85Gb.
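A breakdown like the one above can be reproduced on Solaris with the kernel debugger's `::memstat` dcmd, which reports kernel, ZFS, anonymous, page-cache, and free memory directly (root privileges assumed):

```shell
# Print the kernel's view of physical memory usage by category
# (kernel, ZFS data, anon, page cache, free), in pages, bytes, and %.
echo ::memstat | mdb -k
```

Sampling this before and after raising `zfs_arc_max` makes it easy to see how much of the growth comes out of the free and page-cache columns.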
I would appreciate if someone could tell me about their experience.
Best Regards