Displaying 20 results from an estimated 1000 matches similar to: "sub-optimal ZFS performance"
2010 Mar 02
9
Filebench Performance is weird
Greetings All,
I am using the Filebench benchmark in interactive mode to test ZFS
performance with the randomread workload.
My Filebench settings & run results are as follows
------------------------------------------------------------------------------------------
filebench> set $filesize=5g
filebench> set $dir=/hdd/fs32k
filebench> set $iosize=32k
filebench> set
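For reference, a complete interactive run of this kind might look like the sketch below; the thread count and run length are illustrative assumptions, not values taken from the post:
filebench> load randomread
filebench> set $dir=/hdd/fs32k
filebench> set $filesize=5g
filebench> set $iosize=32k
filebench> set $nthreads=8
filebench> run 60
Here load pulls in the randomread personality, the set lines override its defaults, and run 60 executes the workload for 60 seconds before printing per-operation throughput and latency.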
2009 Jan 24
3
zfs read performance degrades over a short time
I appear to be seeing the performance of a local ZFS file system degrading over a short period of time.
My system configuration:
32 bit Athlon 1800+ CPU
1 Gbyte of RAM
Solaris 10 U6
SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc
2x250 GByte Western Digital WD2500JB IDE hard drives
1 zfs pool (striped with the two drives, 449 GBytes total)
1 hard drive has
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
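A common way to bound this behavior (a sketch, not something proposed in the post; the 4 GB value is an arbitrary example) is to cap the ARC in /etc/system and reboot:
* Limit the ZFS ARC to 4 GB (0x100000000 bytes)
set zfs:zfs_arc_max = 0x100000000
Note that with dedup the DDT competes for that same ARC space, so an overly small cap can hurt dedup performance badly.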
2011 Apr 25
3
arcstat updates
Hi ZFSers,
I've been working on merging the Joyent arcstat enhancements with some of my own
and am now to the point where it is time to broaden the requirements gathering. The result
is to be merged into the illumos tree.
arcstat is a Perl script that shows the values of ARC kstats as they change over time. This is
similar to the ideas behind mpstat, iostat, vmstat, and friends.
The current
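The kstats such a script reads can also be sampled directly with kstat(1M), e.g. the current and target ARC sizes in bytes:
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c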
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
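To attribute a storm like this, a standard DTrace one-liner (a generic sketch, not taken from the thread) aggregates the kernel stacks issuing cross-calls:
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'
Let it run during the storm, then Ctrl-C to print the hottest stacks.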
2008 Dec 26
19
separate home "partition"?
(I use the term loosely because I know that ZFS likes whole volumes better)
When installing Ubuntu, I got in the habit of using a separate partition for my home directory so that my data and GNOME settings would all remain intact when I reinstalled or upgraded.
I'm running OSOL 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
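For what it's worth, the ZFS analogue of that habit is a dedicated dataset rather than a partition; a minimal sketch (the user name is a placeholder):
# zfs create rpool/export/home
# zfs create rpool/export/home/username
Because these datasets live outside the boot environment, they survive a reinstall that only replaces the root datasets.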
2008 Mar 27
3
kernel memory and zfs
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point.
root@servername:~/zonecfg # mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm
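The same summary can also be captured non-interactively, which is handy for logging it over time:
echo ::memstat | mdb -k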
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all,
I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has
32 GB RAM installed. I am running Sol11Expr on this host and I use it to
primarily serve Netatalk AFP shares. From day one, I have noticed that
the amount of free RAM decreased, and along with that decrease the
overall performance of ZFS decreased as well.
Now, since I am still quite a Solaris newbie, I seem to
2007 Nov 08
5
mdb ::memstat including zfs buffer details?
Hey all -
Just a quick one...
Is there any plan to update the mdb ::memstat dcmd to present ZFS
buffers as part of the summary?
At present, we get something like:
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      28859               112   13%
Anon                        34230
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated.
My current setup is :
OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing)
iSCSI Storage Array that is capable of
20 MB/s random writes @ 4k and 70 MB/s random reads
2007 Jan 23
0
Understanding ::memstat in terms of the ARC
Hello all,
I have a question. Below are two ::memstat outputs about 5 days apart.
The interesting thing is the "anonymous" memory shows 2GB, though the
two major hogs of that memory (two MySQL instances) claim to be
consuming about 6.2GB (checked via pmap).
Also, it seems like the ARC keeps pushing kernel memory over the
4 GB limit I set for the ARC (zfs_arc_max). What I was also,
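A per-process resident-set check of the kind described might look like this (the process name is illustrative; the last line of pmap -x output totals the mappings):
pmap -x `pgrep -o mysqld` | tail -1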
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello.
I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago).
For no obvious reason (at least none I can see), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC shrinks to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system.
Memory
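For anyone hitting the same thing, both the flag and the resulting ARC size can be checked directly (standard mdb and kstat usage on OpenSolaris-derived kernels; the variable name is as in arc.c):
echo "arc_no_grow/D" | mdb -k
kstat -p zfs:0:arcstats:size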
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of
having to do it at each level.
(Maybe it was in a dream...)
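It wasn't a dream: zfs snapshot takes a -r flag that snapshots a dataset and all of its descendants in one atomic operation, e.g. (the snapshot name is illustrative):
# zfs snapshot -r rpool/export@today
This creates rpool/export@today, rpool/export/home@today, and so on down the tree.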
2009 Dec 11
7
Doing ZFS rollback with preserving later created clones/snapshot?
Hi.
Is it possible on Solaris 10 5/09, to rollback to a ZFS snapshot,
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
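One way to reach the old state without destroying the later snapshots and clones (a sketch, not taken from the post) is to clone the earlier snapshot instead of rolling back:
--($ ~)-- sudo zfs clone rpool/ROOT@01 rpool/ROOT-01
The clone is writable and shares its blocks with the snapshot, so nothing created after @01 has to be destroyed.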
2009 Mar 03
8
zfs list extentions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2010 May 26
1
Factor to Numeric
Dear All,
I have a data frame with State and 12 months as columns. I want to convert
all 12 of the month columns from factor to numeric.
Any help will be greatly appreciated.
> str(data)
'data.frame': 33 obs. of 9 variables:
$ State : Factor w/ 33 levels "Andaman and Nicobar Islands",..: 1 2 3 4 5 6 7 8 9 10 ...
$ April : Factor w/ 21 levels
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - using dedicated ZFS filesystems for my virtual machines.
Commands (for completeness):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
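Renaming back is another export/import cycle; a sketch, with the caveat that the final import must happen where no pool named 'rpool' is currently active, since two imported pools cannot share a name:
# zpool export temp-rpool
# zpool import -R /a temp-rpool rpool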
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool ZFS pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
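The closest ZFS analogue (a sketch; the snapshot and file names are illustrative) is a recursive snapshot sent as a replication stream:
# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup > /backup/rpool.zfs
zfs send -R preserves descendant datasets, properties, and snapshots; restoring means recreating the pool and feeding the stream back through zfs receive -Fd.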