similar to: ZFS with SSD ZIL vs XFS

Displaying 20 results from an estimated 2000 matches similar to: "ZFS with SSD ZIL vs XFS"

2017 Oct 10
1
ZFS with SSD ZIL vs XFS
2017-10-10 18:27 GMT+02:00 Jeff Darcy <jeff at pl.atyp.us>:
> Probably not. If there is, it would probably favor XFS. The developers
> at Red Hat use XFS almost exclusively. We at Facebook have a mix, but
> XFS is (I think) the most common. Whatever the developers use tends to
> become "the way local filesystems work" and code is written based on
> that profile,
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results with using SSD as LVM cache for gluster bricks (http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks.
On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote:
> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> > Anyone made some performance comparison between XFS and ZFS with ZIL
> > on
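For reference, a minimal sketch of the lvmcache setup described above, assuming a hypothetical volume group vg_bricks with an existing brick LV (brick1) on HDDs and an SSD at /dev/sdb; names and sizes are placeholders, and the steps follow lvmcache(7) rather than any Gluster-specific tooling:

    # Add the SSD to the brick volume group
    vgextend vg_bricks /dev/sdb

    # Create the cache data and cache metadata LVs on the SSD
    lvcreate -L 90G -n brick1_cache     vg_bricks /dev/sdb
    lvcreate -L 1G  -n brick1_cachemeta vg_bricks /dev/sdb

    # Combine them into a cache pool, then attach it to the brick LV
    lvconvert --type cache-pool --poolmetadata vg_bricks/brick1_cachemeta vg_bricks/brick1_cache
    lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1

The brick keeps its XFS filesystem; only the block layer underneath changes.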
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> Anyone made some performance comparison between XFS and ZFS with ZIL
> on SSD, in gluster environment ?
>
> I've tried to compare both on another SDS (LizardFS) and I haven't
> seen any tangible performance improvement.
>
> Is gluster different ?
Probably not. If there is, it would probably favor
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Anyone made some performance comparison between XFS and ZFS with ZIL on SSD, in gluster environment ? I've tried to compare both on another SDS (LizardFS) and I haven't seen any tangible performance improvement. Is gluster different ?
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and ZIL on the other? Any thoughts? -- This message posted from opensolaris.org
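A minimal sketch of the split described above, assuming a hypothetical pool named tank and an SSD already carved into two slices (device names are placeholders):

    # Use one slice as a separate intent log (slog) and the other as L2ARC
    zpool add tank log   c1t2d0s0
    zpool add tank cache c1t2d0s1

    # Verify how the devices were picked up
    zpool status tank

The usual caveat is that both workloads now share one device's write bandwidth, and on older zpool versions losing the slog could make the pool unimportable, so mirroring the log portion is often recommended.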
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all, I received my SSD, and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported, I get an error. Here is what I did:
I created a file to use as a backing store for my new pool: mkfile 1g /data01/test2/1gtest
Created a new pool: zpool create ziltest2 /data01/test2/1gtest
Added the
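For comparison, a sketch of the same kind of file-backed experiment; the extra log file name is a placeholder. Note that pools built on plain files are not found by a bare "zpool import", so the directory has to be passed with -d, which is a common source of the import error described above:

    mkfile 1g /data01/test2/1gtest
    mkfile 1g /data01/test2/1gzil
    zpool create ziltest2 /data01/test2/1gtest
    zpool add ziltest2 log /data01/test2/1gzil
    zpool export ziltest2
    # A plain "zpool import ziltest2" only scans /dev; point it at the directory holding the files
    zpool import -d /data01/test2 ziltest2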
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the intent log needs quite a small amount of the disks (10GB?), I was wondering
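A sketch of the partitioned layout hinted at above, assuming a pool named tank and the two SSDs showing up as c2t0d0 and c2t1d0 with a small slice 0 for the log and the rest as slice 1 (pool, device, and slice names are placeholders):

    # Mirrored intent log from the small slices on both SSDs
    zpool add tank log mirror c2t0d0s0 c2t1d0s0

    # Remaining space as L2ARC (cache devices are striped, never mirrored)
    zpool add tank cache c2t0d0s1 c2t1d0s1

The slog only ever needs to hold a few seconds' worth of synchronous writes (roughly what accumulates between transaction group commits), which is why around 10GB is usually more than enough.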
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing with trying to do performance testing with an SSD offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
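One way to exercise the slog specifically is to force synchronous writes; a sketch using IOzone (which the thread title mentions), with thread counts, sizes, and the pool name as placeholders:

    # Throughput mode, 4 writers, 1GB files, 4k records, O_SYNC writes so the ZIL is on the data path
    iozone -t 4 -s 1g -r 4k -i 0 -i 2 -o

    # Watch the log device while the test runs
    zpool iostat -v tank 1

Without -o (or another way of forcing synchronous semantics), most benchmarks write asynchronously and barely touch the ZIL at all, which can make an SSD slog look like it does nothing.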
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8-port ARC-1220 controller
2017 Nov 08
0
Adding a slack for communication?
It's a great idea! :) But think about creating a Slack for all Red Hat-provided open source projects. For example, one Slack workspace with separate Gluster, Ceph, Fedora etc. channels. I can't wait for it! Bartosz
On 08.11.2017 22:22, Amye Scavarda wrote:
> From today's community meeting, we had an item from the issue queue:
> https://github.com/gluster/community/issues/13
2018 Jan 19
0
Backup Solutions for GlusterFS
Probably all file-level backups? ;) Rsync is the simplest option. Regards, Bartosz
> On 19 Jan 2018, at 08:27, Kadir <qkadir at yahoo.com> wrote:
>
> Hi,
>
> What are the backup solutions for GlusterFS? Does GlusterFS support any backup solutions?
>
> Sincerely,
> Kadir
> _______________________________________________
> Gluster-users mailing list
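A minimal sketch of the rsync approach, run from any client against a FUSE mount of the volume; the server, volume, and target paths are placeholders:

    # Mount the volume with the native client, then copy it off with attributes preserved
    mount -t glusterfs server1:/gv0 /mnt/gv0
    rsync -aHAX --delete /mnt/gv0/ /backup/gv0/

Running rsync against the mount point (rather than against the bricks) keeps the copy consistent with what clients actually see.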
2003 Jun 25
1
socks5 support for -D
here's an up-to-date patch, should apply to both openbsd and non-openbsd versions of openssh. i did only test ipv4 addresses.
Index: channels.c
===================================================================
RCS file: /cvs/src/usr.bin/ssh/channels.c,v
retrieving revision 1.191
diff -u -r1.191 channels.c
--- channels.c	24 Jun 2003 08:23:46 -0000	1.191
+++ channels.c	25 Jun 2003 12:14:19
2017 Jun 22
0
Disable write-back in tiering volume
Hi all! I made a GlusterFS volume based on a few HDDs across a few servers and I wanted to add SSD-based tiering. Write-back caching is a problem in this case, because the SSDs are in only one server, so its failure would mean failure of the whole hot tier and data loss. I want to disable write-back caching and write from the client nodes directly to the HDD volume (cold tier). How can I do that? My cluster is based on GlusterFS
2009 Jan 23
1
ZIL FOID
I need some clarification on the FOID handed to zil_commit. I wrote a dscript to watch entry and return of zil_commit_writer. Here is an example output:
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211310 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211324 : FOID 129644 Completed in 0 ms
2009 Jan 23 23:34:36: ZIL Commit : Seq 183211386
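A sketch of the kind of D script that produces output like the above. This is not the poster's actual script: it probes zil_commit (where the foid is visible as an argument) rather than zil_commit_writer, and the argument positions assume the OpenSolaris-era zil_commit(zilog, seq, foid) signature, so they may need adjusting for your build:

    dtrace -qn '
        fbt::zil_commit:entry
        {
            self->ts   = timestamp;
            self->seq  = arg1;   /* assumed: log sequence number argument */
            self->foid = arg2;   /* assumed: file object id argument */
        }
        fbt::zil_commit:return
        /self->ts/
        {
            printf("%Y: ZIL Commit : Seq %d : FOID %d Completed in %d ms\n",
                walltimestamp, self->seq, self->foid,
                (timestamp - self->ts) / 1000000);
            self->ts = 0;
        }'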
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
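Before paying for a DRAM-based device, it can help to see how busy the current slog actually is; a minimal sketch, assuming a pool named tank (Richard Elling's DTrace-based zilstat script gives more detail, but plain zpool iostat already shows whether the log vdev is anywhere near saturated):

    # Per-vdev statistics once a second while a synchronous workload runs;
    # the log device shows up as its own line under the pool
    zpool iostat -v tank 1

This only shows throughput per device, not latency, so it answers "is the ZIL the bottleneck" rather than "how much faster could it be".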
2008 Feb 19
0
ZIL encryption preparation work
Author: Darren Moffat <darrenm at opensolaris.org>
Repository: /hg/zfs-crypto/gate
Latest revision: a75f21839b8aba305660e5746fe66c1171d8b2d3
Total changesets: 1
Log message: ZIL encryption preparation work
Files:
update: usr/src/uts/common/fs/zfs/sys/zio.h
update: usr/src/uts/common/fs/zfs/sys/zio_impl.h
update: usr/src/uts/common/fs/zfs/zfs_log.c
update: usr/src/uts/common/fs/zfs/zil.c
2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some additional information) are available at http://www.meangrape.com/2007/08/oscon-zfs/
Jay Edwards
jay at meangrape.com
http://www.meangrape.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20070802/f2fa7b08/attachment.html>
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool? -- This message posted from opensolaris.org
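A log vdev belongs to exactly one pool, so sharing a pair of SSDs means slicing them and giving each pool its own mirrored pair of slices; a sketch with hypothetical pool names and device names:

    # One small slice per pool on each SSD (s0/s1/s3 here; s2 is conventionally the whole-disk slice)
    zpool add pool1 log mirror c3t0d0s0 c3t1d0s0
    zpool add pool2 log mirror c3t0d0s1 c3t1d0s1
    zpool add pool3 log mirror c3t0d0s3 c3t1d0s3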
2007 Nov 28
0
[storage-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Nicolas Dorfsman wrote:
> On 27 Nov 07, at 16:17, Torrey McMahon wrote:
>
>> According to the array vendor the 99xx arrays no-op the cache flush
>> command. No need to set the /etc/system flag.
>>
>> http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_arrays
>>
>
> Perfect !
> Thanks Torrey.
> Just realize
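For arrays that do not already no-op cache flushes from their battery-backed NVRAM, the /etc/system flag mentioned above is the usual knob; a sketch of the Solaris-era setting (it disables flushes for every pool on the host, so it is only safe when all devices have non-volatile write cache):

    # /etc/system -- requires a reboot to take effect
    set zfs:zfs_nocacheflush = 1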