
Displaying 20 results from an estimated 800 matches similar to: "ZFS with SSD ZIL vs XFS"

2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > Anyone made some performance comparison between XFS and ZFS with ZIL > on SSD, in gluster environment ? > > I've tried to compare both on another SDS (LizardFS) and I haven't > seen any tangible performance improvement. > > Is gluster different ? Probably not. If there is, it would probably favor
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results with using SSD as LVM cache for gluster bricks ( http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks. On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote: > On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > > Anyone made some performance comparison between XFS and ZFS with ZIL > > on
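For reference, a minimal sketch of what such an lvmcache setup might look like; the device names (/dev/sdb for the brick HDD, /dev/sdc for the SSD), volume group name and sizes are hypothetical:

# Hypothetical devices: /dev/sdb = spinning brick disk, /dev/sdc = SSD
pvcreate /dev/sdb /dev/sdc
vgcreate vg_bricks /dev/sdb /dev/sdc
# Brick LV on the HDD, cache data + metadata LVs on the SSD
lvcreate -n brick1 -L 500G vg_bricks /dev/sdb
lvcreate -n brick1_cache -L 90G vg_bricks /dev/sdc
lvcreate -n brick1_cachemeta -L 1G vg_bricks /dev/sdc
# Combine the SSD LVs into a cache pool and attach it to the brick LV
lvconvert --type cache-pool --poolmetadata vg_bricks/brick1_cachemeta vg_bricks/brick1_cache
lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1
# XFS on the cached LV, as is usual for gluster bricks
mkfs.xfs -i size=512 /dev/vg_bricks/brick1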
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
2017-10-10 18:27 GMT+02:00 Jeff Darcy <jeff at pl.atyp.us>: > Probably not. If there is, it would probably favor XFS. The developers > at Red Hat use XFS almost exclusively. We at Facebook have a mix, but > XFS is (I think) the most common. Whatever the developers use tends to > become "the way local filesystems work" and code is written based on > that profile,
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
Last time I read about tiering in gluster, there wasn't any performance gain with VM workloads, and moreover it doesn't speed up writes... On 10 Oct 2017 9:27 PM, "Bartosz Zięba" <kontakt at avatat.pl> wrote: > Hi, > > Have you thought about using an SSD as a GlusterFS hot tier? > > Regards, > Bartosz > > > On 10.10.2017 19:59, Gandalf
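For context, attaching a hot tier looked roughly like this in the GlusterFS releases that shipped tiering (3.7 onwards; the feature has since been deprecated). Volume, host and brick names are hypothetical:

# Attach two SSD bricks as a replicated hot tier
gluster volume tier myvol attach replica 2 server1:/ssd/brick server2:/ssd/brick
gluster volume tier myvol status
# Detach again, migrating hot data back to the cold tier
gluster volume tier myvol detach start
gluster volume tier myvol detach commit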
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using one partition for L2ARC and the other for the ZIL? Any thoughts?
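A minimal sketch of that layout, assuming a pool named tank and an SSD sliced into a small s0 for the log and a larger s1 for the cache (device names hypothetical):

# s0 = small slice for the ZIL (slog), s1 = remainder for L2ARC
zpool add tank log c2t0d0s0
zpool add tank cache c2t0d0s1
# Verify both show up under "logs" and "cache"
zpool status tank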
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all, I received my SSD, and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported it again, I got an error. Here is what I did: I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest
Created a new pool:
zpool create ziltest2 /data01/test2/1gtest
Added the
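For comparison, a sketch of a complete file-backed test of this kind (the first two commands are from the post; the remaining steps are an assumption about what such a test would look like, with a hypothetical second file as the log device):

mkfile 1g /data01/test2/1gtest
zpool create ziltest2 /data01/test2/1gtest
# Hypothetical file-backed log device
mkfile 1g /data01/test2/1gzil
zpool add ziltest2 log /data01/test2/1gzil
# File-backed vdevs are only found again if -d points at their directory
zpool export ziltest2
zpool import -d /data01/test2 ziltest2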
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA problems)
Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the intent log needs quite a small amount of the disks (10GB?), I was wondering
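A minimal sketch of that layout, assuming a pool named tank and a ~10GB slice s0 on each SSD for the mirrored log, with the remainder (s1) used as unmirrored L2ARC; device and slice names are hypothetical:

# Mirrored slog from the two small slices, remaining space as cache
zpool add tank log mirror c3t0d0s0 c3t1d0s0
zpool add tank cache c3t0d0s1 c3t1d0s1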
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone could help me understand some weird results I'm seeing while trying to do performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
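For what it's worth, an IOzone run only exercises the ZIL if the writes are synchronous; a sketch, with a hypothetical test file on the pool under test:

# -i 0 = write/rewrite test, -o = open the file O_SYNC so every write goes through the ZIL
# -s = file size, -r = record size, -f = test file location
iozone -i 0 -o -s 2g -r 8k -f /tank/iozone.tmp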
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8port ARC-1220 controller
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello, as usual I'm a few years behind the trends, so I have only recently learned about LVM thin volumes, and I especially like that your volumes can be "sparse" - that you can have a 1TB thin volume on a 250GB VG/thin pool. Is it somehow possible to use that with libvirt? I have found this post from 2014: https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html which says
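The post referenced above suggests libvirt's "logical" storage pool driver does not manage thin pools itself; one workaround sometimes used is to create the thin LVs by hand and give them to guests as plain block devices. A sketch, with hypothetical VG, pool, LV and guest names:

# 250GB thin pool in an existing VG, holding a 1TB "sparse" thin volume
lvcreate -L 250G -T vg0/thinpool
lvcreate -V 1T -T vg0/thinpool -n guest1-disk
# Attach the thin LV to an existing guest as a raw block device
virsh attach-disk guest1 /dev/vg0/guest1-disk vdb --persistent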
2017 Aug 01
3
Corrupt index files
On Mon, Jul 24, 2017 at 07:56:23PM +0300, Aki Tuomi wrote: > Well, dovecot does not really guarantee access concurrency safety if you access indexes using more than one instance of dovecot at the same time. Pardon my ignorance, but how does Dovecot handle when an IMAP client connects multiple times concurrently? Does it not launch multiple instances? > Nevertheless, did you try w/o
2015 Sep 21
3
New software based on libvirt
Hello, I'd like to introduce the decentralized cloud Cherrypop. By combining libvirt and LizardFS (as of now), it becomes a cloud completely without masters: any node is sufficient for the cloud to be up, so there are no wasted resources and no single point of failure. It's still pretty crude software but will work with some tinkering. Hope you try it and like it! For more
2017 Jul 24
2
Corrupt index files
On Mon, Jul 24, 2017 at 08:39:36AM +0300, Aki Tuomi wrote: > Do you have users accessing the files concurrently from more than one > dovecot instance at a time? Yes. Apparently it is fairly common behavior for some IMAP clients to open up multiple connections to the same mailbox. Sometimes the multiple accesses came from different servers (a standalone IMAP client and a webmail system), but
2017 Jul 21
4
Corrupt index files
I am running Dovecot IMAP on Linux, on a LizardFS storage cluster with Maildir storage. This has worked well for most of the accounts for several months. However, in the last couple of weeks we have been seeing increasing errors regarding corrupted index files. Some of the accounts affected are unable to retrieve messages due to timeouts. It appeared the problems were due to the accounts being accessed
2009 Jan 23
1
ZIL FOID
I need some clarification on the FOID handed to zil_commit. I wrote a dscript to watch entry and return of zil_commit_writer. Here is an example output:
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211310 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211324 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211386
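A related DTrace sketch that times zil_commit() via the fbt provider; probe names assume an OpenSolaris-era kernel, and since the argument holding the FOID differs between ZIL versions, only latency is measured here:

# Quantize zil_commit() latency; run as root, stop with Ctrl-C
dtrace -n '
fbt::zil_commit:entry  { self->ts = timestamp; }
fbt::zil_commit:return /self->ts/ {
    @lat["zil_commit latency (ns)"] = quantize(timestamp - self->ts);
    self->ts = 0;
}'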
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
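One low-tech way to see what the log device is actually doing while a sync-heavy workload runs is to watch the per-vdev statistics and look at the ops and bandwidth reported under the "logs" section (pool name hypothetical):

# Per-vdev statistics, refreshed every second
zpool iostat -v tank 1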
2008 Feb 19
0
ZIL encryption preparation work
Author: Darren Moffat <darrenm at opensolaris.org> Repository: /hg/zfs-crypto/gate Latest revision: a75f21839b8aba305660e5746fe66c1171d8b2d3 Total changesets: 1 Log message: ZIL encryption preparation work Files: update: usr/src/uts/common/fs/zfs/sys/zio.h update: usr/src/uts/common/fs/zfs/sys/zio_impl.h update: usr/src/uts/common/fs/zfs/zfs_log.c update: usr/src/uts/common/fs/zfs/zil.c
2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some additional information) are available at http://www.meangrape.com/2007/08/oscon-zfs/ Jay Edwards jay at meangrape.com http://www.meangrape.com
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
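A log vdev belongs to exactly one pool, so sharing the same pair of SSDs across three pools means partitioning them and giving each pool its own mirrored slice; roughly, with hypothetical device and slice names:

# One mirrored log slice pair per pool
zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
zpool add pool3 log mirror c4t0d0s3 c4t1d0s3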