2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the ARC size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure if c_max could be smaller than
c_min. Anyway, I set c_max to 1GB.
After a workload run....:
> arc::print -tad
{
. . .
ffffffffc02e29e8
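For reference, the live-tuning route on Solaris 10 of that vintage went through mdb: print the address of the ARC's c_max field, then write a new 64-bit value into it. A minimal sketch; <addr> is a placeholder for whatever address ::print reports, which varies per boot:

# print the address and current value of arc.c_max
echo "arc::print -a c_max" | mdb -k
# write 1GB (0x40000000) into that address; needs write mode (-w)
echo "<addr>/Z 0x40000000" | mdb -kw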
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads and writes within 10 files, each file 256M, i.e.
10 * 256M = 2.5GB total dataset size.
I have set the ARC max size to 1GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working-set size = 2.5GB.
My SSD flash size = 8GB and is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
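For a persistent cap like that 1GB, the usual vehicle is /etc/system with the zfs_arc_max tunable; a minimal sketch (takes effect at the next reboot):

* cap the ZFS ARC target at 1GB (0x40000000 bytes)
set zfs:zfs_arc_max = 0x40000000

Whether the 2.5GB working set actually spills into the 8GB L2ARC can then be watched via the l2_* arcstats, e.g. kstat -p zfs:0:arcstats | grep l2_size.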
2007 May 24
3
Problem with numerical integration and optimization with BFGS
Hi R users,
I have a couple of questions about some problems that I am facing with
regard to numerical integration and optimization of likelihood
functions. Let me provide a little background information: I am trying
to do maximum likelihood estimation of an econometric model that I have
developed recently. I estimate the parameters of the model using the
monthly US unemployment rate series
2011 Feb 03
1
ZFS Write Performance Issues
We seem to be having write issues with ZFS; does anyone see anything in the
following:
bash-3.00# kstat -p -n arcstats
zfs:0:arcstats:c 655251456
zfs:0:arcstats:c_max 5242011648
zfs:0:arcstats:c_min 655251456
zfs:0:arcstats:class misc
zfs:0:arcstats:crtime 5699201.4918501
zfs:0:arcstats:data_size 331404288
zfs:0:arcstats:deleted 408216
zfs:0:arcstats:demand_data_hits
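One thing that stands out in this output: c has collapsed to c_min (both 655251456) while c_max is ~5GB, i.e. the ARC target has been shrunk to its floor. A quick way to watch those fields together, sampling every 5 seconds (same kstat interface as above):

kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_min zfs:0:arcstats:size 5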
2009 May 30
1
Problems with power management
I do not seem to be able to get anywhere with the power management
functions. I have had a look at the xenpm Wiki page, but it hasn't
helped. It's probably something completely obvious, but I can't see it.
I'm running Xen 3.4.0 on CentOS 5.3 x86_64 using the gitco RPMs on an
Intel S5000PSL motherboard with 2 x Xeon 5410s. Dom0 is running the
latest CentOS 5.3 kernel.
My xm
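For what it's worth, a couple of xenpm queries usually show whether the hypervisor sees the C-/P-state drivers at all (assuming the xenpm shipped with Xen 3.4; subcommand names vary a little across versions):

# frequency-scaling (P-state) parameters per CPU
xenpm get-cpufreq-para
# idle (C-state) information per CPU
xenpm get-cpuidle-states

On some versions the hypervisor also needs cpufreq=xen on its boot line before these report anything.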
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello.
We're currently using a Sun Blade1000 (2x750MHz, 1G ram, 2x160MB/s mpt
scsi buses, skge GigE network) as a NFS backend with ZFS for
distribution of free software like Debian (cdimage.debian.org,
ftp.se.debian.org) and have run into some performance issues.
We are running SX snv_48 and have run with a raidz2 with 7x300G for a
while now, just added another 7x300G raidz2 today but
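Growing the pool that way is a single command; a sketch with illustrative pool and device names, since the message doesn't list them:

# add a second 7-disk raidz2 top-level vdev to the pool
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0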
2010 Mar 18
2
increase memory size to more than 4Gb
Hello again,
Here is my session information:
R version 2.10.1 (2009-12-14)
i386-pc-mingw32
locale:
[1] LC_COLLATE=Spanish_Spain.1252 LC_CTYPE=Spanish_Spain.1252
[3] LC_MONETARY=Spanish_Spain.1252 LC_NUMERIC=C
[5] LC_TIME=Spanish_Spain.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
What I am trying to do is
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello.
I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago).
For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC is shrunk to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system.
Memory
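Since arc_no_grow is a kernel variable, its current value can be read directly while the ARC shrinks; a minimal sketch, assuming the arc_no_grow symbol is visible on this Solaris 11 build:

# print arc_no_grow (0 or 1) as a 32-bit decimal
echo "arc_no_grow/D" | mdb -k
# watch ARC size against its target every 10 seconds
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 10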
2019 Nov 15
4
[PATCH 0/2] drm/nouveau: remove some set but not used variables
zhengbin (2):
drm/nouveau: remove set but not used variable 'pclks','width'
drm/nouveau: remove set but not used variable 'mem'
drivers/gpu/drm/nouveau/dispnv04/arb.c | 6 ++----
drivers/gpu/drm/nouveau/nouveau_ttm.c | 4 ----
2 files changed, 2 insertions(+), 8 deletions(-)
--
2.7.4
2009 Nov 23
2
[PATCH 1/3] drm/nouveau: Update the CRTC arbitration parameters on FB depth switch.
Signed-off-by: Francisco Jerez <currojerez at riseup.net>
---
drivers/gpu/drm/nouveau/nv04_crtc.c | 37 +++++++++++++++++++++-------------
1 files changed, 23 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nv04_crtc.c b/drivers/gpu/drm/nouveau/nv04_crtc.c
index 2ab9f30..0a5cfc1 100644
--- a/drivers/gpu/drm/nouveau/nv04_crtc.c
+++ b/drivers/gpu/drm/nouveau/nv04_crtc.c
2019 Dec 31
2
[PATCH] drm/nouveau: remove set but unused variable.
The local variable `pclks` is defined and set but not used and can
therefore be removed.
Issue found by coccinelle.
Signed-off-by: Wambui Karuga <wambui.karugax at gmail.com>
---
drivers/gpu/drm/nouveau/dispnv04/arb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/dispnv04/arb.c b/drivers/gpu/drm/nouveau/dispnv04/arb.c
index
2018 Sep 19
1
[PATCH] drm: nouveau: remove a redundant local variable 'pclks'
The local variable 'pclks' is never used after being assigned,
hence it is redundant and can be removed.
Signed-off-by: zhong jiang <zhongjiang at huawei.com>
---
drivers/gpu/drm/nouveau/dispnv04/arb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/dispnv04/arb.c b/drivers/gpu/drm/nouveau/dispnv04/arb.c
index c79160c..cae8f71
2012 Apr 15
7
[Bug 48742] New: fbotexture -arb misrenders on nv43
https://bugs.freedesktop.org/show_bug.cgi?id=48742
Bug #: 48742
Summary: fbotexture -arb misrenders on nv43
Classification: Unclassified
Product: Mesa
Version: git
Platform: x86 (IA32)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/DRI/nouveau
2007 Mar 16
21
ZFS memory and swap usage
Greetings, all.
Does anyone have a good whitepaper or three on how ZFS uses memory and swap? I did some Googling, but found nothing useful.
The reason I ask is that we have a small issue with some of our DBAs. We have a server with 16GB of memory, and they are looking at moving databases over to it from a smaller system. The catch is that they are moving to 10g. Oracle
2003 Feb 06
1
Trouble with include/exclude patterns
I'm using rsync 2.5.4 on my RedHat 7.3 client laptop and rsync 2.5.5
on my RedHat 8.0 server. On the client, I have a directory "rpm" with
5 subdirectories, out of which I only want to copy the one called SRPMS
across. I also have another directory ".mozilla" out of which I want to
copy across 2 files. I have constructed the following rsync invocation:
rsync -e ssh -av
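The invocation is cut off here, but the stated intent maps onto a standard include/exclude pattern: include the wanted entries first, then exclude everything else at the top level with an anchored '/*'. A sketch (the two .mozilla file names are placeholders, since the original doesn't name them):

# copy only rpm/SRPMS out of rpm/
rsync -e ssh -av --include='/SRPMS/' --exclude='/*' rpm/ server:rpm/
# copy just two files out of .mozilla/
rsync -e ssh -av --include='/prefs.js' --include='/bookmarks.html' \
    --exclude='/*' .mozilla/ server:.mozilla/

The anchored '/*' matches only top-level entries, so files below SRPMS are still transferred.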
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses imaxpct=25, which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
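A before-and-after check along those lines (the brick path is the one mentioned later in this thread):

df -i /data/glusterfs/gv1/brick1/brick
xfs_growfs -m 80 /data/glusterfs/gv1/brick1/brick
df -i /data/glusterfs/gv1/brick1/brick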
Of course, always
2008 May 20
1
how to save many trees within a loop?
Hi,
I would like to save the many trees created in a loop as
separate objects. I can plot them, but I cannot get the
whole tree properties saved. I paste the script below.
Also, how can I perform multivariate trees? That is,
predicting a vector of values, not a single scalar
observation.
Thanks in advance
Angel
for (i in 1:7){ # loop to build the trees
arb=arb[i] # counter
2023 Jul 03
1
remove_me files building up
Hi,
You mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards,
Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the server's
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're archived off overnight.
The issue we're seeing isn't with the inodes running out, but with the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%