Displaying 20 results from an estimated 4000 matches similar to: "Tracking down the causes of a mysteriously shrinking ARC cache?"
2006 Mar 20
1
ARC cache issues with b35/b36; Bugs 6397610 / 6398177
> Bug ID: 6398177
> Synopsis: zfs: poor nightly build performance in 32-bit mode (high disk activity)
Part of the problem appears to be these kmem_caches:
# mdb -k
...
> ::kmastat
cache                        buf    buf    buf    memory     alloc alloc
name                        size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
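As an aside, the per-cache counters that ::kmastat prints are also exported as kstats (module "unix", class "kmem_cache"), so they can be polled from a program instead of an mdb session. A minimal libkstat sketch follows; the cache name "zio_buf_512" is only a placeholder for whichever cache the report flags:

    /* Poll one kmem cache's counters via libkstat (illustrative sketch).
     * Build: cc -o kmemstat kmemstat.c -lkstat
     * "zio_buf_512" is a placeholder; use any cache name ::kmastat lists. */
    #include <stdio.h>
    #include <kstat.h>

    int
    main(void)
    {
            kstat_ctl_t *kc = kstat_open();
            kstat_t *ksp;
            kstat_named_t *inuse, *total;

            if (kc == NULL)
                    return (1);
            /* kmem caches export named kstats under module "unix". */
            ksp = kstat_lookup(kc, "unix", -1, "zio_buf_512");
            if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
                    (void) kstat_close(kc);
                    return (1);
            }
            inuse = kstat_data_lookup(ksp, "buf_inuse");
            total = kstat_data_lookup(ksp, "buf_total");
            if (inuse != NULL && total != NULL)
                    (void) printf("buf_inuse=%llu buf_total=%llu\n",
                        (unsigned long long)inuse->value.ui64,
                        (unsigned long long)total->value.ui64);
            (void) kstat_close(kc);
            return (0);
    }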
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
2011 Apr 25
3
arcstat updates
Hi ZFSers,
I've been working on merging the Joyent arcstat enhancements with some of my own
and am now at the point where it is time to broaden the requirements gathering. The result
is to be merged into the illumos tree.
arcstat is a Perl script to show the value of ARC kstats as they change over time. This is
similar to the ideas behind mpstat, iostat, vmstat, and friends.
The current
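For reference, the kstats such a tool samples live under zfs:0:arcstats and the core of the job is a small polling loop. A hedged C/libkstat sketch of the same idea (the field selection and output format are illustrative, not the script's actual behavior):

    /* Sample ARC kstats once per second, arcstat-style (sketch).
     * Build: cc -o arcpoll arcpoll.c -lkstat
     * The first line shows totals since boot; later lines show deltas. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <kstat.h>

    static uint64_t
    named_ui64(kstat_t *ksp, char *name)
    {
            kstat_named_t *kn = kstat_data_lookup(ksp, name);
            return (kn == NULL ? 0 : kn->value.ui64);
    }

    int
    main(void)
    {
            kstat_ctl_t *kc = kstat_open();
            kstat_t *ksp;
            uint64_t ohits = 0, omiss = 0;

            if (kc == NULL ||
                (ksp = kstat_lookup(kc, "zfs", 0, "arcstats")) == NULL)
                    return (1);

            for (;;) {
                    if (kstat_read(kc, ksp, NULL) == -1)
                            break;
                    uint64_t hits = named_ui64(ksp, "hits");
                    uint64_t miss = named_ui64(ksp, "misses");
                    (void) printf("hits %llu  miss %llu  size %llu\n",
                        (unsigned long long)(hits - ohits),
                        (unsigned long long)(miss - omiss),
                        (unsigned long long)named_ui64(ksp, "size"));
                    ohits = hits;
                    omiss = miss;
                    (void) sleep(1);
            }
            (void) kstat_close(kc);
            return (0);
    }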
2009 Nov 20
0
hung pool on iscsi
Hi,
Can anyone identify whether this is a known issue (perhaps 6667208) and
if the fix is going to be pushed out to Solaris 10 anytime soon? I'm
getting badly beaten up over this weekly, essentially anytime we drop a
packet between our twenty-odd iscsi-backed zones and the filer.
Chris was kind enough to provide his synopsis here (thanks Chris):
2007 Sep 18
0
arcstat - a tool to print ARC statistics
I wrote a simple tool to print out the ARC statistics exported via
kstat. Details at
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
-neel
--
---
Neelakanth Nadgir PAE Performance And Availability Eng
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that 'zpool import' takes a fairly long time; about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI-based
storage (for failover, cost, ease of expansion, and so on). As part of
this we would like to use multipathing for extra reliability, and I am
not sure how we want to configure it.
Our iSCSI backend only supports multiple sessions per target, not
multiple connections per session (and my understanding is that the
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated.
My current setup is :
OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing)
iSCSI Storage Array that is capable of
20 MB/s random writes @ 4k and 70 MB/s random reads
2003 Feb 24
1
(fwd from johanhusselman@cks.co.za) Please help with smb printing
----- Forwarded message from Johan Husselmann <johanhusselman@cks.co.za> -----
From: "Johan Husselmann" <johanhusselman@cks.co.za>
Subject: Please help with smb printing
Date: Sat, 22 Feb 2003 19:17:40 +0200
To: <samba@samba.org>
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
Can you please help me to set up smb printing from Red Hat 7.3 to a
Windows 2000
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user at host:/var/pkg$ time
2020 Feb 06
0
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Thu, Feb 06, 2020 at 04:01:47PM +0800, Wei Wang wrote:
> There are cases where users want to shrink balloon pages after the
> pagecache is depleted. The conservative_shrinker lets the shrinker
> shrink balloon pages when all the pagecache has been reclaimed.
>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
I'd rather avoid module parameters, but otherwise looks
like
2020 Feb 06
0
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On 06.02.20 09:01, Wei Wang wrote:
> There are cases where users want to shrink balloon pages after the
> pagecache is depleted. The conservative_shrinker lets the shrinker
> shrink balloon pages when all the pagecache has been reclaimed.
>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
> ---
> drivers/virtio/virtio_balloon.c | 14 +++++++++++++-
> 1 file changed,
2020 Feb 08
0
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On 2020/02/06 17:01, Wei Wang wrote:
> There are cases where users want to shrink balloon pages after the
> pagecache is depleted. The conservative_shrinker lets the shrinker
> shrink balloon pages when all the pagecache has been reclaimed.
>
> @@ -796,6 +800,10 @@ static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
> {
> unsigned long pages_freed = 0;
>
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All,
Is there a way to tune the ZFS prefetch on a per-pool basis? I have a
customer that is seeing slow performance on a pool that contains multiple
tablespaces from an Oracle database; looking at the LUNs associated with
that pool, they are constantly at 80% - 100% busy. Looking at the output
from arcstat for the miss % on data, prefetch and metadata, we are
getting around 5 - 10 % on data,
2020 Mar 05
0
[PATCH 01/22] drm/arc: Use simple encoder
The arc driver uses empty implementations for its encoders. Replace
the code with the generic simple encoder.
Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
---
drivers/gpu/drm/arc/arcpgu_hdmi.c | 10 +++-------
drivers/gpu/drm/arc/arcpgu_sim.c | 8 ++------
2 files changed, 5 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/arc/arcpgu_hdmi.c
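For readers who haven't seen the helper, the conversion this series performs is mechanical: a funcs table whose only callback is the default destroy, plus drm_encoder_init(), collapses into one drm_simple_encoder_init() call. A hedged sketch of the pattern (the identifier names and encoder type are illustrative, not the actual arcpgu hunks):

    #include <drm/drm_encoder.h>
    #include <drm/drm_simple_kms_helper.h>

    /*
     * Before: boilerplate an empty-encoder driver used to carry.
     *
     *     static const struct drm_encoder_funcs example_encoder_funcs = {
     *             .destroy = drm_encoder_cleanup,
     *     };
     *     ret = drm_encoder_init(drm, encoder, &example_encoder_funcs,
     *                            DRM_MODE_ENCODER_TMDS, NULL);
     *
     * After: the simple-encoder helper supplies the same default behavior.
     */
    static int example_encoder_create(struct drm_device *drm,
                                      struct drm_encoder *encoder)
    {
            return drm_simple_encoder_init(drm, encoder,
                                           DRM_MODE_ENCODER_TMDS);
    }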
2020 Feb 06
6
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
There are cases where users want to shrink balloon pages after the
pagecache is depleted. The conservative_shrinker lets the shrinker
shrink balloon pages when all the pagecache has been reclaimed.
Signed-off-by: Wei Wang <wei.w.wang at intel.com>
---
drivers/virtio/virtio_balloon.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git
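The proposed behavior reduces to a gate at the top of the shrinker path: report nothing reclaimable while file-backed pagecache remains, so reclaim drains the pagecache before deflating the balloon. A hedged sketch of that policy (the function context comes from the hunk quoted in the replies above; the parameter's default and the gate's body are an illustration, not the patch itself):

    #include <linux/module.h>
    #include <linux/vmstat.h>

    /* Named in the commit message; the default value here is a guess. */
    static bool conservative_shrinker = true;
    module_param(conservative_shrinker, bool, 0644);

    /* Fragment: struct virtio_balloon and leak_balloon() are defined
     * elsewhere in drivers/virtio/virtio_balloon.c. */
    static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
                                              unsigned long pages_to_free)
    {
            unsigned long pages_freed = 0;

            /*
             * Conservative policy: while file-backed pagecache remains,
             * claim nothing is reclaimable here so memory pressure falls
             * on the pagecache first.
             */
            if (conservative_shrinker &&
                global_node_page_state(NR_FILE_PAGES))
                    return 0;

            /* ... existing leak_balloon() loop accumulates pages_freed ... */
            return pages_freed;
    }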
2000 Feb 08
0
Bug report and PATCH in ssh-agent in openssh 1.2.2
Dear folks,
system: RH 6.1 Linux on a PIII
software: installed binaries resulting from rpm --rebuild
openssh-1.2.2-1.src.rpm, downloaded from
http://the.wiretapped.net/security/cryptography/ssh/OpenSSH/files/openssh-1.2.2-1.src.rpm
problem program: ssh-agent
problem description:
When executing
ssh-agent startx -- -bpp 32
ssh-agent does not pass the -bpp 32 to startx.
Why problem exists:
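The excerpt cuts off before the explanation, but a common cause for this class of bug is a wrapper parsing its arguments with GNU getopt, which by default permutes argv and keeps consuming option-looking words meant for the child command. A minimal sketch of the robust pattern (the leading '+' in the optstring is a GNU extension that stops parsing at the first non-option; -c/-s mirror ssh-agent's real flags, the rest is illustrative):

    /* Minimal wrapper sketch: stop option parsing at the first
     * non-option so the child's own options (e.g. "-- -bpp 32")
     * survive intact. */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
            int c;

            /* '+' disables GNU argv permutation. */
            while ((c = getopt(argc, argv, "+cs")) != -1) {
                    switch (c) {
                    case 'c':
                    case 's':
                            break;  /* handle the wrapper's own flags */
                    default:
                            return 1;
                    }
            }

            if (optind >= argc) {
                    fprintf(stderr,
                        "usage: wrapper [-cs] command [args...]\n");
                    return 1;
            }

            /* Hand everything from the command name onward to the child. */
            execvp(argv[optind], argv + optind);
            perror("execvp");
            return 1;
    }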
2010 Jul 24
0
ARC/VM question
I have a semi-theoretical question about the following code in the
arc_reclaim_needed() function in arc.c:
/*
* take 'desfree' extra pages, so we reclaim sooner, rather than later
*/
extra = desfree;
/*
* check that we're out of range of the pageout scanner. It starts to
* schedule paging if freemem is less than lotsfree and needfree.
* lotsfree is the high-water mark
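Reading the two comments together, the test they describe is: treat memory as tight whenever freemem falls within 'desfree' pages of the point where the pageout scanner would start. A simplified reconstruction of that check (the surrounding function body is abridged; see the full arc_reclaim_needed() in arc.c):

    #include <sys/types.h>

    /* Kernel VM globals the check consults (all in units of pages). */
    extern pgcnt_t freemem, lotsfree, needfree, desfree;

    static int
    arc_reclaim_needed_sketch(void)
    {
            /* reclaim 'desfree' pages early, sooner rather than later */
            pgcnt_t extra = desfree;

            /*
             * Stay out of range of the pageout scanner, which starts
             * scheduling paging once freemem drops below
             * lotsfree + needfree.
             */
            if (freemem < lotsfree + needfree + extra)
                    return (1);

            return (0);
    }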
1999 Apr 30
0
Latest stable version 2.0.3 with Arc/info
Hello,
First I must say, I am not an expert with Arc/Info, but one of our users uses it regularly and wants to work with files directly on our Samba server, which runs under Linux 2.0.36. It seems there are two modules that she regularly uses, arc and arc edit. When she does complex, heavy operations such as a 'clean' using arc, it just sticks there permanently after sorting; however, doing