Displaying 20 results from an estimated 10000 matches similar to: "Adding to arc_buf_hdr_t"
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system) this can make a
pleasant improvement over a HDD-only implementation."
For the upcoming
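A minimal sketch of the split being suggested, assuming the SSD is sliced so that slice 0 (~20 GB) carries the root pool and slice 1 becomes L2ARC for a data pool named tank; device, slice, and pool names are illustrative, and in practice the root-pool side is normally laid out by the installer rather than by hand:
  # slice 0 holds the root pool, slice 1 the cache
  zpool create rpool c1t0d0s0
  zpool add tank cache c1t0d0s1
  zpool status tank    # the slice shows up under a separate "cache" section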
2008 Jun 24
1
zfs primarycache and secondarycache properties
Moved from PSARC to zfs-code... this discussion is separate from the case.
Eric kustarz wrote:
>
> On Jun 23, 2008, at 1:20 PM, Darren Reed wrote:
>
>> eric kustarz wrote:
>>>
>>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote:
>>>
>>>> Tim Haley wrote:
>>>>> ....
>>>>> primarycache=all | none | metadata
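For reference, these ended up as per-dataset properties, each accepting all | none | metadata. A quick illustration, assuming a dataset named tank/db (name made up for the example):
  # keep only metadata in the ARC for this dataset, but let the L2ARC cache everything
  zfs set primarycache=metadata tank/db
  zfs set secondarycache=all tank/db
  zfs get primarycache,secondarycache tank/db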
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (writes are done via write()). We're talking 100TB of
data in files that are 100k->50G in size (the files have headers to tell
the app what segment to map, so mapped chunks
2007 Aug 14
2
IO error on mount for encrypted dataset
Does the ARC get flushed for a dataset when it is unmounted ?
What does change when a dataset is unmounted ?
The context of the problem is this:
create a pool,
provide the pool encryption key,
create a dataset with encryption turned on,
put data into that dataset
I see it getting encrypted and written to disk by zio_write,
zfs umount -a
zfs mount -a
I can read the data back - yeah!.
However
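Roughly the sequence being described, sketched with zfs-crypto style syntax; the pool, dataset, and device names are made up, and the exact key-loading step may differ from what the poster used:
  zpool create tank c0t1d0
  zfs create -o encryption=on tank/secret   # wrapping key supplied/prompted at create time
  cp /var/tmp/data* /tank/secret/           # data visibly encrypted on its way through zio_write
  zfs umount -a
  zfs mount -a                              # the point at which the IO error appears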
2012 Oct 22
2
What is L2ARC write pattern?
Hello all,
A few months ago I saw a statement that L2ARC writes are simplistic
in nature, and I got the (mis?)understanding that some sort of ring
buffer may be in use, like for ZIL. Is this true, and the only metric
of write-performance important for L2ARC SSD device is the sequential
write bandwidth (and IOPS)? In particular, there are some SD/MMC/CF
cards for professional photography and
2010 Feb 20
6
l2arc current usage (population size)
Hello,
How do you tell how much of your l2arc is populated? I've been looking for a while now, can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/brendan/entry/l2arc_screenshots
And as a follow-up, can you tell how much of each dataset is in the ARC or L2ARC?
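One way to read this straight from the ARC kstats: the l2_size statistic is the number of bytes currently held in the L2ARC (the per-dataset breakdown asked about at the end is not exposed this way):
  # current L2ARC payload in bytes
  kstat -p zfs:0:arcstats:l2_size
  # sample every 10 seconds to watch the device fill
  kstat -p zfs:0:arcstats:l2_size 10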
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads and writes within 10 files, each file 256M in size, i.e.
(10 * 256M = 2.5GB total dataset size).
I have set the ARC max size to 1 GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
workingset size= 2.5GB
My SSD flash size = 8GB and being used for L2ARC
No slog is used in the pool
My File system record size = 8K ,
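For reference, capping the ARC at 1 GB via /etc/system is typically done with the zfs_arc_max tunable (value in bytes, effective after a reboot); a sketch of one way to set and verify it:
  # append the tunable to /etc/system (0x40000000 bytes = 1 GB)
  echo "set zfs:zfs_arc_max = 0x40000000" >> /etc/system
  # after the reboot, confirm the running limit
  kstat -p zfs:0:arcstats:c_max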
2011 Apr 25
3
arcstat updates
Hi ZFSers,
I've been working on merging the Joyent arcstat enhancements with some of my own
and am now to the point where it is time to broaden the requirements gathering. The result
is to be merged into the illumos tree.
arcstat is a perl script to show the value of ARC kstats as they change over time. This is
similar to the ideas behind mpstat, iostat, vmstat, and friends.
The current
2010 Jun 24
1
Gonna be stupid here...
But it's early (for me), and I can't remember the answer here.
I'm sizing an Oracle database appliance. I'd like to get one of the
F20 96GB flash accelerators to play with, but I can't imagine I'd be
using the whole thing for ZIL. The DB is likely to be a couple TB in size.
Couple of questions:
(a) since everything is going to be
2006 Nov 02
11
ZFS and memory usage.
ZFS works really stably on FreeBSD, but my biggest problem is how to
control ZFS memory usage. I've no idea how to leash that beast.
FreeBSD has a backpressure mechanism: I can register my function so it
will be called when there are memory problems, which I do. I am using it
for the ARC layer.
Even with this in place under heavy load the kernel panics, because
memory with KM_SLEEP
2009 Nov 20
1
Using local disk for cache on an iSCSI zvol...
I'm just wondering if anyone has tried this, and what the performance
has been like.
Scenario:
I've got a bunch of v20z machines, with 2 disks. One has the OS on it,
and the other is free. As these are disposable client machines, I'm not
going to mirror the OS disk.
I have a disk server with a striped mirror zpool, carved into a bunch of
zvols, each exported via
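A rough sketch of the layout being considered, assuming the iSCSI LUN is already visible on the client and the spare local disk is c1t1d0 (both device names illustrative):
  # pool built on the iSCSI-backed zvol/LUN
  zpool create tank c2t1d0
  # spare local disk becomes a node-local L2ARC for that pool
  zpool add tank cache c1t1d0
  zpool iostat -v tank 5   # watch how much read traffic the cache absorbs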
2010 Mar 05
17
why L2ARC device is used to store files ?
Greeting All
I have created a pool that consists of a hard disk and an SSD as a cache
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0 - cache device
I ran an OLTP benchmark to emulate a DBMS.
Once I ran the benchmark, the pool started creating the database files on the
SSD cache device?
Can anyone explain why this is happening?
Isn't the L2ARC supposed to absorb the evicted data
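Before anything else it is worth confirming the SSD really was attached as a cache vdev rather than as a normal top-level vdev:
  # the SSD should appear under a separate "cache" heading, not alongside c11t0d0p3
  zpool status hdd
  # per-vdev I/O makes it obvious where the database files are being written
  zpool iostat -v hdd 5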
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
2008 Jan 16
1
Understanding the ARC - arc_buf_hdr and arc_buf_t
I'm trying to understand the inner workings of the adaptive replacement cache (arc). I see there are arc_bufs and arc_buf_hdrs. Each arc_buf_hdr points to an arc_buf_t. The arc_buf_t is really one entry in a list of arc_buf_t entries. The multiple entries are accessed through the arc_buf_t's b_next member.
Why does the arc_buf_hdr point to a list of arc_buf_ts and not just one arc_buf_t, i.e.,
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
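A log vdev belongs to exactly one pool, so sharing the same physical pair of SSDs across three pools means slicing them and giving each pool its own mirrored pair of slices. A sketch with made-up pool, device, and slice names (s2 is skipped because it is conventionally the whole-disk overlap slice):
  zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
  zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
  zpool add pool3 log mirror c4t0d0s3 c4t1d0s3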
2008 Mar 27
4
dsl_dataset_t pointer during 'zfs create' changes
I've noticed that the dsl_dataset_t that points to a given dataset
changes during the lifetime of a 'zfs create' command. We start out
with one dsl_dataset_t* during dmu_objset_create_sync(), but by the time
we are later mounting the dataset we have a different in-memory
dsl_dataset_t* referring to the same dataset.
This causes me a big issue with per dataset
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi,
I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA
problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I want to use the 2 OCZ SSDs as mirrored intent log devices, but as
the intent log only needs a small part of each disk (10GB?), I was
wondering
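One common way to carve this up, assuming a ~10 GB slice on each SSD for the mirrored log and the remainder for cache (pool, device, and slice names illustrative):
  # mirrored intent log across the first slice of each SSD
  zpool add tank log mirror c3t0d0s0 c3t1d0s0
  # the rest of each SSD as L2ARC - cache devices need no redundancy
  zpool add tank cache c3t0d0s1 c3t1d0s1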
2008 Sep 08
1
6745678 zio->io_checksum == ZIO_CHECKSUM_SHA256_CCM_MAC (0x5 == 0x9), file: zio.c, line: 1498
Author: Darren Moffat <Darren.Moffat at Sun.COM>
Repository: /hg/zfs-crypto/gate
Latest revision: 32a041998ab168dc335d487020fc0cb59c85d81f
Total changesets: 1
Log message:
6745678 zio->io_checksum == ZIO_CHECKSUM_SHA256_CCM_MAC (0x5 == 0x9), file: zio.c, line: 1498
Files:
update: usr/src/uts/common/fs/zfs/zio.c
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool switches
over to the other node ZFS would pick up that node's local disk drives as
L2ARC.
To clarify what I mean, let's assume there is a 2-node cluster with
one 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77 minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of
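For context, the Solaris 11 command being measured compares a snapshot against a later snapshot or against the live filesystem; a quick illustration with made-up dataset and snapshot names:
  # list files created/modified/removed/renamed between two snapshots
  zfs diff tank/data@monday tank/data@tuesday
  # or against the current state of the filesystem
  zfs diff tank/data@monday tank/data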