similar to: ZFS slows down over a couple of days

Displaying 20 results from an estimated 1000 matches similar to: "ZFS slows down over a couple of days"

2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, once I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy -- This message posted from opensolaris.org
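For reference, a pool like this can usually be converted in place: zpool attach turns a single-disk vdev into a mirror. A minimal sketch, assuming the pool is named tank and the device names are hypothetical:

# zpool attach tank c0t0d0 c0t4d0    (attach a new disk to the existing single-disk vdev c0t0d0)
# zpool status tank                  (watch the resilver complete, then repeat for each remaining vdev)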
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I''m struggling with proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I''ve got it figured out, but
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi, I want to move all the ZFS fs from one pool to another, but I don't want to "gain" an extra level in the folder structure on the target pool. On the source zpool I used zfs snapshot -r tank@moveTank on the root fs and I got a new snapshot in all sub fs, as expected. Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/... which would place all zfs fs
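One way to avoid gaining the extra level is zfs recv -d, which drops the leading pool name from the received dataset path; a sketch assuming the pool names from the post:

# zfs snapshot -r tank@moveTank
# zfs send -R tank@moveTank | zfs recv -d targetTank    (tank/a lands as targetTank/a, not targetTank/tank/a)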
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach@jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, as I have learned from the discussion about which SSD to use as ZIL drives, I stumbled across this article, which discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now, I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
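Whichever devices win out, a mirrored log is attached the same way; a sketch with hypothetical device names:

# zpool add tank log mirror c2t0d0 c2t1d0
# zpool status tank    (the log appears as its own "logs" section in the pool layout)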
2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands in the terminal at it. Now, after the system has rebooted, ZFS will no longer import my pool and instead the kernel panics again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
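Two things commonly tried from here, sketched with the pool name from the post; exact flags and read-only import support vary by build:

# zdb -e tank                            (examine the exported/unimportable pool without importing it)
# zpool import -o readonly=on -f tank    (on builds that support it, avoids replaying the damaged state)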
2010 Nov 21
10
Running on Dell hardware?
> From: Edward Ned Harvey [mailto:shill@nedharvey.com] > > I have a Dell R710 which has been flaky for some time. It crashes about once > per week. I have literally replaced every piece of hardware in it, and > reinstalled Sol 10u9 fresh and clean. It has been over 3 weeks now, with no crashes, and me doing everything I can to get it to crash again. So I'm going to
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there. I just got a new Adaptec RAID 51645 controller in because the old (other type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used with hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64GB of memory and will be running
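For reference, the tunable goes in /etc/system and takes effect at the next boot; the value below (16 GB for a 64 GB box) is an example, not a recommendation:

* cap the ZFS ARC at 16 GB (0x400000000 bytes); example value only
set zfs:zfs_arc_max = 0x400000000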
2011 Apr 25
3
arcstat updates
Hi ZFSers, I've been working on merging the Joyent arcstat enhancements with some of my own and am now to the point where it is time to broaden the requirements gathering. The result is to be merged into the illumos tree. arcstat is a perl script to show the value of ARC kstats as they change over time. This is similar to the ideas behind mpstat, iostat, vmstat, and friends. The current
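For context, the intended usage follows the other *stat tools; a sketch of plausible invocations (exact flags and field names vary between arcstat versions):

# arcstat.pl 1                                         (one-second samples until interrupted)
# arcstat.pl -f time,read,hits,miss,hit%,arcsz 1 10    (selected fields, ten samples)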
2009 May 06
12
Monitoring ZFS host memory use
Hi, Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The problem is not that the ZFS ARC is using up the memory, but that the script Nagios is using to check memory usage simply sees, say, 96% RAM used and alerts. The
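A common workaround is to subtract the ARC (which ZFS releases under memory pressure) from "used" before alerting; a rough sketch, with the actual thresholds left to the check script:

#!/bin/sh
# report the current ARC size so it can be discounted from used RAM (sketch)
arc_bytes=`kstat -p zfs:0:arcstats:size | awk '{print $2}'`
echo "ARC holds $arc_bytes bytes that are reclaimable under memory pressure"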
2010 Oct 13
40
Running on Dell hardware?
I have a Dell R710 which has been flaky for some time. It crashes about once per week. I have literally replaced every piece of hardware in it, and reinstalled Sol 10u9 fresh and clean. I am wondering if other people out there are using Dell hardware, with what degree of success, and in what configuration? The failure seems to be related to the perc 6i. For some period around the time
2010 Apr 05
0
Why does ARC grow above hard limit?
I would appreciate it if somebody could clarify a few points. I am doing some random WRITE testing (100% writes, 100% random) and observe that the ARC grows way beyond the "hard" limit during the test. The hard limit is set to 512 MB via /etc/system and I see the size going up to 1 GB - how is that happening? mdb's ::memstat reports 1.5 GB used - does this include the ARC as well or is
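For reference, the breakdown being quoted comes from the kernel debugger; a sketch (the output categories vary by build, and ZFS file data may or may not be broken out separately):

# echo ::memstat | mdb -k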
2010 Mar 21
1
arc_summary.pl results
Was wondering if anyone can see any issues with the ARC in the following output?

bash-3.00# ./arc_summary.pl
System Memory:
        Physical RAM:  6023 MB
        Free Memory:   784 MB
        LotsFree:      90 MB
ZFS Tunables (/etc/system):
ARC Size:
        Current Size:            1159 MB (arcsize)
        Target Size (Adaptive):  2106 MB (c)
        Min Size (Hard Limit):   624 MB (zfs_arc_min)
        Max Size (Hard Limit):   4999 MB (zfs_arc_max)
ARC Size
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello, I have a situation where a host, which is booted off its 'rpool', needs to temporarily import the 'rpool' of another host, edit some files in it, and export the pool back, retaining its original name 'rpool'. Can this be done? Here is what I am trying to do:

# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
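A sketch of the full round trip; note that the final rename back to 'rpool' cannot happen while this host's own 'rpool' is imported, so that step has to run from an environment without an active 'rpool' (e.g. boot media):

# zpool import -R /a rpool temp-rpool
  (edit files under /a, then:)
# zpool export temp-rpool
  (from an environment with no active 'rpool':)
# zpool import temp-rpool rpool
# zpool export rpool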
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim@palousetech.com> wrote: > Well, after a very stressful weekend, I think I have things largely > working. Turns out that most of the above issues were caused by the linux > permissions of the exports for all three volumes (they had been reset to > 600; setting them to 774 or 770 fixed many of the issues). Of course, I >
2008 May 26
2
SNV82: Not enough memory is available, and dom0 cannot be shrunk any further
Hi All, I am running Nevada 79 BFU'ed to 82. The machine is an Ultra 20 with 4 GB memory. I have several Windows XP domUs configured and registered. Whenever I try to start the fourth domain I get an out-of-memory exception: "Not enough memory is available, and dom0 cannot be shrunk any further". Each of my domains only uses 256 MB, so I thought there would be sufficient memory
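One approach from the xVM builds of that era is to pin dom0's memory at boot so the hypervisor never has to shrink it further; a sketch of the relevant GRUB kernel$ line in menu.lst, with the 1024M figure an example only:

kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M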
2007 Mar 23
1
Consolidating LVM volumes..
Hi, Something I haven't done before is reduce the number of volumes on my server. Here is my current disk setup:

[root@server1 /]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-RootVol00   15G  1.5G   13G  11% /
/dev/md0                          190M   42M  139M  24% /boot
/dev/mapper/VolGroup00-DataVol00   39G   16G
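A minimal sketch of folding DataVol00 into RootVol00, assuming ext3, a resize2fs new enough for online growth, and that the data has already been copied off DataVol00 (lvremove is destructive):

# umount /data                                      (hypothetical mount point of DataVol00)
# lvremove /dev/VolGroup00/DataVol00                (return its extents to the volume group)
# lvextend -l +100%FREE /dev/VolGroup00/RootVol00
# resize2fs /dev/mapper/VolGroup00-RootVol00        (grow the root filesystem into the new space)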
2010 Oct 28
0
Good write, but slow read speeds over the network
Hi all, I am running Netatalk on OSol snv134 on a Dell R610 server with 32 GB RAM. I am experiencing different speeds when writing to and reading from the pool. The pool itself consists of two FC LUNs that each build a vdev (no comments on that please, we discussed that already! ;) ). Now, I have a couple of AFP clients that access this pool via either Fast Ethernet or even Gigabit Ethernet.
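A useful first step is ruling the pool itself out before blaming the network; a sketch with a hypothetical path (the read test is only meaningful if the file is larger than what the ARC can cache):

# dd if=/dev/zero of=/tank/afp/testfile bs=128k count=81920    (local write speed, ~10 GB)
# dd if=/tank/afp/testfile of=/dev/null bs=128k                (local read speed)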
2008 Jan 28
5
XEN - ZFS
Hello, I have read, and know, that ZFS and Xen do not work well together (I have only 512 MB for Dom0). How can I disable or turn off all ZFS functionality to leave more usable memory for Dom0? Regards, Maciej
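ZFS cannot simply be switched off while anything lives on it, but its memory appetite can be capped with the same /etc/system tunable mentioned above, here with a deliberately tiny example value for a 512 MB Dom0:

* cap the ZFS ARC at 64 MB; example value only
set zfs:zfs_arc_max = 0x4000000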
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:

   r/s    w/s   kr/s          kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
   0.0    0.5    0.0          10.0   0.0   0.0     0.0     0.5   0   0  c0t0d0
   0.0    0.5    0.0          10.0   0.0   0.0     0.0     0.6   0   0  c0t1d0
   0.0   65.1    0.0   119640001.5   0.0   0.0     0.0     0.3   0   2  c0t2d0
   0.0   65.1    0.0   119640090.2   0.0