similar to: ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)

Displaying 20 results from an estimated 300 matches similar to: "ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)"

2008 Jun 24
1
zfs primarycache and secondarycache properties
Moved from PSARC to zfs-code...this discussion is separate from the case. Eric kustarz wrote: > > On Jun 23, 2008, at 1:20 PM, Darren Reed wrote: > >> eric kustarz wrote: >>> >>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote: >>> >>>> Tim Haley wrote: >>>>> .... >>>>> primarycache=all | none | metadata
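
For reference, these are per-dataset properties; a minimal sketch of setting and checking them (the pool/dataset name is a placeholder):

    zfs set primarycache=metadata tank/data       # ARC keeps only metadata for this dataset
    zfs set secondarycache=all tank/data          # L2ARC may cache both data and metadata
    zfs get primarycache,secondarycache tank/data
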
2017 Apr 14
2
ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Hi, I’m new here so apologies if this has been answered before. I have a box that uses ZFS for everything (ubuntu 17.04) and I want to create a libvirt pool on that. My ZFS pool is named „big". So I do: > zfs create big/zpool > virsh pool-define-as --name zpool --source-name big/zpool --type zfs > virsh pool-start zpool > virsh pool-autostart zpool > virsh pool-list >
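
A sketch of the sequence being attempted, using the dataset name from the post (the volume name and size below are placeholders added for illustration):

    zfs create big/zpool                                       # dataset backing the libvirt pool
    virsh pool-define-as --name zpool --source-name big/zpool --type zfs
    virsh pool-start zpool
    virsh pool-autostart zpool
    virsh vol-create-as zpool guest1 10G                       # zvol managed through libvirt
    virsh vol-list zpool
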
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Pre-fetching on the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent ios from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
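
For reference, the knobs usually meant by "prefetch on the file and device level" and "concurrent ios" on Solaris of that era are /etc/system tunables; a hedged sketch (names assume the classic tunables, a reboot is required):

    set zfs:zfs_prefetch_disable = 1      # disable file-level prefetch
    set zfs:zfs_vdev_cache_size = 0       # disable device-level prefetch cache
    set zfs:zfs_vdev_max_pending = 1      # limit concurrent I/Os per vdev (default was 35)
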
2017 Apr 23
0
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thies C. Arntzen wrote: > Hi, > > I’m new here so apologies if this has been answered before. > > I have a box that uses ZFS for everything (ubuntu 17.04) and I want to > create a libvirt pool on that. My ZFS pool is named „big" > > So I do: > > > zfs create big/zpool > > virsh pool-define-as --name zpool --source-name big/zpool --type zfs > >
2017 Apr 24
1
Re: ZFS: creating a pool in a created zfs does not work, only when using the whole zfs-pool.
Thank you for your reply. I have managed to create a virtual machine on my ZFS filesystem using virt-install :-) It seems to me that my version of libvirt (Ubuntu 17.04) has problems enumerating the devices when "virsh vol-list" is used. The volumes are available for virt-install but not through virsh or virt-manager. As to when the volumes disappear in virsh vol-list - I have no idea. I’m not
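
A rough sketch of the virt-install route described above, with hypothetical values (the guest name, sizes, ISO path and OS variant are placeholders, not taken from the post):

    virt-install --name testvm --memory 2048 --vcpus 2 \
      --disk pool=zpool,size=10,bus=virtio \
      --cdrom /var/tmp/ubuntu-17.04-server-amd64.iso \
      --os-variant ubuntu17.04
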
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: freebsd with a zpool v28 and a nexenta (opensolaris b134) running zpool v26. Replication (with zfs send/receive) from the nexenta box to the freebsd works fine, but I have a problem accessing my replicated volume. When I'm typing the command cd /remotepool/us (for /remotepool/users) and autocomplete with the tab key, I get a panic. check the panic @
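
For context, a minimal sketch of this style of cross-platform replication (pool, dataset, snapshot and host names are placeholders):

    zfs snapshot tank/users@repl1
    zfs send tank/users@repl1 | ssh freebsd-host zfs receive remotepool/users
    # later, an incremental update:
    zfs snapshot tank/users@repl2
    zfs send -i tank/users@repl1 tank/users@repl2 | ssh freebsd-host zfs receive -F remotepool/users
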
2013 Mar 06
0
where is the free space?
hi All, Ubuntu 12.04 and glusterfs 3.3.1.

root at tipper:/data# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
tipper:/data    2.0T  407G  1.6T  20% /data
root at tipper:/data# du -sh .
10G     .
root at tipper:/data# du -sh /data
13G     /data

It's quite confusing. I also tried to free up the space by stopping the machine (actually an LXC VM) with no luck. After umounting the space
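
A common first check for a df/du mismatch is space held by files that were deleted but are still open; a generic sketch (not from the original thread):

    du -shx /data             # -x stays on one filesystem
    lsof +L1 | grep /data     # files unlinked but still held open by a process
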
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated. My current setup is: OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing) iSCSI Storage Array that is capable of 20 MB/s random writes @ 4k and 70 MB random reads
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings, my Opensolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems that there might be something with the ahci driver. No problem with the Opensolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot. Now when investigating
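
For reference, a typical recovery attempt from a live environment looks roughly like this (a sketch; the pool name is a placeholder, and the -F rewind option can be added to the import if the zpool version supports it):

    zpool import                       # list importable pools
    zpool import -f -R /mnt rpool      # import under an alternate root
    zpool scrub rpool                  # verify checksums
    zpool status -v rpool              # lists files with permanent errors, if any
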
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
Hello, I'm currently using dovecot 1.2.11 on FreeBSD 8.0 with ZFS filesystems. So far, so good, it works quite nicely, but I have a couple of glitches. Each user has his own zfs partition, mounted on /home/<user> (easier to set per-user quotas) and mail is stored in their home. From day one, when people check their mail via imap, a lot of index corruption occurred: dovecot:
2011 Aug 11
6
unable to mount zfs file system.. please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root at
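
For reference, the usual checks when a dataset will not mount (dataset names taken from the listing above, the mountpoint value is illustrative):

    zfs get mountpoint,mounted,canmount pool1/fs1   # where it should mount and whether it is
    zfs set mountpoint=/vik pool1/fs1               # adjust the mountpoint if needed
    zfs mount pool1/fs1                             # mount one dataset
    zfs mount -a                                    # or mount everything that can be mounted
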
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
All, Running Samba 3.5.4 on Solaris 10 with ZFS file system. I have issues where we have shared group folders. In these folders, userA in GroupA creates files just fine with the correct inherited permissions (660). The problem is that when userB in GroupA reads and modifies that file with M$ office apps, the permissions get whacked to 060+ and the file becomes read-only for everyone. I did
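
One common mitigation, since Office applications delete and recreate files on save, is to force group-friendly modes on the share; a hedged smb.conf sketch (share name, path and masks are illustrative, not from the post):

    [groupshare]
        path = /export/groupshare
        create mask = 0660
        force create mode = 0660
        directory mask = 0770
        force directory mode = 0770
        inherit permissions = yes
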
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello. I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago). For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now) Solaris sets arc_no_grow to 1 and then never sets it back to 0. ARC is being shrunk to less than 1 GB -- needless to say that performance is terrible. There is not much load on this system. Memory
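
For what it's worth, the flag and the ARC state can be inspected on a live kernel; a sketch, assuming the symbol is still named arc_no_grow on Solaris 11/11:

    echo "arc_no_grow/D" | mdb -k                    # print the current value of the flag
    echo "::memstat" | mdb -k                        # summary of where physical memory went
    kstat -m zfs -n arcstats | egrep 'size|c_max'    # current ARC size vs. its target
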
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote: > On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote: >> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote: >> > Btrfs cannot handle having logically non-contiguous requests submitted. For >> > example if you have >> > >> > Logical: [0-4095][HOLE][8192-12287]
2013 Nov 26
0
Dedup on read-only snapshots
According to https://github.com/g2p/bedup/tree/wip/dedup-syscall "The clone call is considered a write operation and won't work on read-only snapshots." Is this fixed on newer kernels? -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at
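
The restriction is easy to reproduce with any clone-based write, e.g. a reflink copy into a read-only snapshot (paths are placeholders):

    btrfs subvolume snapshot -r /mnt/vol /mnt/snap-ro      # create a read-only snapshot
    cp --reflink=always /mnt/vol/file /mnt/snap-ro/file    # clone into it fails: Read-only file system
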
2009 Dec 30
3
what happens to the dedup table (DDT) when you set dedup=off ???
I tried the deduplication feature but the performance of my fileserver dived from writing 50MB/s via CIFS to 4MB/s. What happens to the deduped blocks when you set dedup=off? Are they written back to disk? Is the dedup table deleted or is it still there? Thanks -- This message posted from opensolaris.org
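
For reference, dedup=off only affects new writes; blocks that were already deduplicated, and the DDT entries describing them, remain until those blocks are freed or rewritten. A quick way to look at the table (the pool name is a placeholder):

    zfs set dedup=off tank      # new writes are no longer deduplicated
    zpool status -D tank        # prints DDT entry counts and in-core/on-disk sizes
    zdb -DD tank                # more detailed DDT histograms
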
2010 Mar 02
2
dedup source code
Hello ZFS experts: I would like to study the ZFS de-duplication feature. Can someone please let me know which directory/files I should be looking at? Thanks in advance. -- This message posted from opensolaris.org
2018 Feb 17
0
Proper way to load pre-generated LLVM IR into JIT module and dedup type definitions?
What is the proper way to load a set of pre-generated LLVM IR and make it available to runtime JIT modules such that the same types aren't given new names and inlining and const propagation can still take place?   My attempt so far: I compile a set of C functions to LLVM IR offline via 'clang -c -emit-llvm -S -ffast-math -msse2 -O3 -o MyLib.ll MyLib.c' For each runtime JIT module, I
2011 Jun 29
0
SandForce SSD internal dedup
This article raises the concern that SSD controllers (in particular SandForce) do internal dedup, and in particular that this could defeat ditto-block style replication of critical metadata as done by filesystems including ZFS. http://storagemojo.com/2011/06/27/de-dup-too-much-of-good-thing/ Along with discussion of risk evaluation, it also suggests that filesystems could vary each copy in some
2011 Sep 16
2
Dedup (again)
Hi all, Back in March someone asked about deduplication in CentOS and I replied that I'm using LessFS. I want to report that my overall experience is that I have performance issues, to the point that I would like to abandon it. The OP was asking about http://www.opendedup.org/ How is it? Thanks Fajar