Displaying 20 results from an estimated 1000 matches similar to: "6Tb Database with ZFS"
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
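One practical payoff of such a split is per-dataset tuning: the filesystem holding the databases can get its own recordsize and atime settings without touching the mailbox filesystems. A minimal sketch, with hypothetical pool/dataset names and values:
# zfs create -o recordsize=16k mailpool/imap-db
# zfs set atime=off mailpool/imap-db
# zfs get recordsize,atime mailpool/imap-db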
2013 May 09
4
recommended memory for zfs
Hello, a ZFS question about memory.
I've heard ZFS is very RAM hungry.
Services I'm looking to run:
- nginx
- postgres
- php-fpm
- python
I have a machine with two quad-core CPUs but only 4 GB of memory.
I'm looking to buy more RAM now.
What would be the recommended amount of memory for ZFS across 6 drives on
this setup?
Also, can FreeBSD 9.1 now boot to ZFS from the installer?
(no tricks for post install)
Thanks
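A common answer here is to cap the ARC so ZFS leaves headroom for postgres and php-fpm. A minimal sketch for FreeBSD, assuming a 2GB cap on the 4GB box (the value is only an example); in /boot/loader.conf:
vfs.zfs.arc_max="2147483648"
then verify after reboot:
# sysctl vfs.zfs.arc_max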
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, a 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an 8-port Areca ARC-1220 controller
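Once the SSDs are chosen, attaching them is the easy part. A sketch assuming a pool named tank and hypothetical device names; the slog is mirrored because losing it can cost in-flight sync writes, while the L2ARC needs no redundancy:
# zpool add tank log mirror c4t0d0 c4t1d0
# zpool add tank cache c4t2d0
# zpool status tank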
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi,
I have two servers running: FreeBSD with a zpool v28, and a Nexenta box (OpenSolaris b134) running zpool v26.
Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic.
check the panic @
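For reference, the replication itself has this shape (snapshot and host names hypothetical); note the stream flows from the older pool version (v26) to the newer one (v28), which is the supported direction:
nexenta# zfs snapshot -r remotepool/users@migrate
nexenta# zfs send -R remotepool/users@migrate | ssh freebsd-box zfs receive -F remotepool/users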
2011 Aug 11
6
unable to mount zfs file system.. please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 120K 228G 21K /pool1
pool1/fs1 21K 228G 21K /vik
[root at
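A first diagnostic pass for a dataset that will not mount might look like this, checking whether pool1/fs1 is actually mountable and whether something already occupies /vik (a sketch, not a verified fix):
# zfs get mountpoint,canmount,mounted pool1/fs1
# ls -la /vik
# zfs mount pool1/fs1
# zfs mount -a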
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool switches
over to the other node, ZFS would pick up that node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
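ZFS does not do this automatically today, since cache devices are ordinary pool members; it would have to be scripted into the failover. A sketch with hypothetical device names, on the node taking over the pool:
# zpool import -f sharedpool
# zpool add sharedpool cache c1t4d0 c1t5d0
and before releasing it:
# zpool remove sharedpool c1t4d0 c1t5d0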
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to
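One way to settle the question empirically: add a spare disk as slog, run a synchronous-write workload, and compare. A rough sketch with a hypothetical device; a cache device can be tried and removed the same way:
# zpool add tank log c5t0d0
(run the sync-heavy workload, note the numbers)
# zpool remove tank c5t0d0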
2008 Jun 24
1
zfs primarycache and secondarycache properties
Moved from PSARC to zfs-code... this discussion is separate from the case.
Eric kustarz wrote:
>
> On Jun 23, 2008, at 1:20 PM, Darren Reed wrote:
>
>> eric kustarz wrote:
>>>
>>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote:
>>>
>>>> Tim Haley wrote:
>>>>> ....
>>>>> primarycache=all | none | metadata
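For context, these became per-dataset properties controlling what the ARC (primary) and L2ARC (secondary) will cache for that dataset, e.g.:
# zfs set primarycache=metadata tank/db
# zfs set secondarycache=all tank/db
# zfs get primarycache,secondarycache tank/db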
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (Writes are done via write().) We're talking 100TB of
data in files that are 100k->50G in size (the files have headers to tell
the app what segment to map, so mapped chunks
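mmap() is the awkward case here: mapped pages live in the page cache while ZFS keeps its own copy in the ARC, so hot data can effectively be cached twice. One way to watch the ARC side of it on Solaris (a sketch):
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max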
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat-based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77 minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of
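For anyone wanting to reproduce the measurement, the timed invocation has this shape (snapshot names hypothetical):
# ptime zfs diff tank/fs@yesterday tank/fs@today > /tmp/changes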
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
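ZFS dedup is in fact block-level, but blocks only deduplicate when they are byte-identical on the same recordsize boundaries, so merely similar files rarely match; that is consistent with the 1.00x ratio above. The ratio and dedup-table statistics can be inspected with:
# zpool list data
# zdb -DD data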
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduce its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to the spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any number in their spec.
Why do I
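The usual compromise, for what it's worth, is to slice the SSD: a small slice for the slog (a few GB is plenty, since the ZIL only holds seconds of writes) and the remainder for L2ARC. A sketch with a hypothetical device:
# zpool add tank log c3t0d0s0
# zpool add tank cache c3t0d0s1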
2007 Sep 25
1
ZFS ARC & DNLC Limitation
Hello All,
Awhile back (Feb '07) when we noticed ZFS was hogging all the memory
on the system, y'all were kind enough to help us use the arc_max
tunable to attempt to limit that usage to a hard value. Unfortunately,
at the time a sticky problem was that the hard limit did not include
DNLC entries generated by ZFS.
I've been watching the list since then and trying to
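For the archives, the tunable goes in /etc/system and takes effect at boot; a sketch capping the ARC at 4GB (the value is only an example):
set zfs:zfs_arc_max = 0x100000000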
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions?
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2010 Apr 15
6
ZFS for iSCSI NTFS backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected through 10G for iSCSI to the storage. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data.
The physical box is a Sun Fire X4240, single AMD 2435 processor, 16GB RAM, LSI 3801E HBA, ixgbe 10G card.
I'm looking for suggestions
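The usual shape of such a setup is a zvol exported through COMSTAR; a minimal sketch with hypothetical names and sizes (the GUID placeholder comes from the sbdadm output):
# zfs create -V 4T tank/ntfs-lun
# sbdadm create-lu /dev/zvol/rdsk/tank/ntfs-lun
# stmfadm add-view <GUID from sbdadm>
# itadm create-target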
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
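A slog only accelerates synchronous writes, so before buying it is worth confirming where the time is actually going; the per-vdev and per-device views give a first read (a sketch):
# zpool iostat -v tank 5
# iostat -xn 5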
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
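The closest ZFS approximation of UFS directio is to stop caching file data in the ARC and match the recordsize to the DB2 page size; a sketch assuming an 8k page size and a hypothetical dataset name:
# zfs set recordsize=8k tank/db2
# zfs set primarycache=metadata tank/db2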
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
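A log vdev belongs to exactly one pool, so a single pair cannot be shared directly; the usual workaround is to slice each SSD and give every pool its own mirrored slice pair. A sketch with hypothetical devices (s2 is skipped because it is the whole-disk overlap slice on Solaris):
# zpool add pool1 log mirror c6t0d0s0 c6t1d0s0
# zpool add pool2 log mirror c6t0d0s1 c6t1d0s1
# zpool add pool3 log mirror c6t0d0s3 c6t1d0s3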
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user at host:/var/pkg$ time
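Useful first data points when comparing the two machines (a sketch; dataset names hypothetical, and note that pools much past ~80% full tend to slow down):
# zpool list rpool
# zfs get compression,atime rpool/export
# zpool status -v rpool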
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads & writes within 10 files; each file is 256M, i.e.,
(10 * 256M = 2.5GB total dataset size).
I have set the ARC max size to 1GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working-set size = 2.5GB.
My SSD flash size = 8GB and it is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
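One constraint worth checking up front: every block cached in the L2ARC pins a header in the ARC. With an 8K recordsize, a fully populated 8GB L2ARC holds about a million blocks, and at very roughly 200 bytes of header per block (the exact size varies by release) that is on the order of 200MB of the 1GB ARC consumed just to index the cache device:
8GB / 8K recordsize = ~1,000,000 L2ARC blocks
1,000,000 x ~200 bytes/header = ~200MB of ARC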