Displaying 20 results from an estimated 5000 matches similar to: "Multiple SLOG devices per pool"
2011 Mar 01
14
Good SLOG devices?
Hi
I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here...
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks,
This might be a daft idea, but is there any way to shut down Solaris / ZFS without flushing the slog device?
The reason I ask is that we're planning to use mirrored NVRAM slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
--
This message posted from opensolaris.org
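One way people share a pair of SSDs across pools is to partition them and give each zpool its own mirrored pair of slices; a minimal sketch only, with hypothetical pool and slice names:

    # slice each SSD into three partitions first (format/fdisk), then:
    zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
    zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
    zpool add pool3 log mirror c4t0d0s3 c4t1d0s3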
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to
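For what it's worth, the commands are the same whatever the device type; a minimal sketch with hypothetical pool and device names:

    zpool add tank log c2t0d0      # whole spindle as slog
    zpool add tank cache c2t1d0    # another spindle as L2ARC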
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8 port with a Marvell chipset. There have been several
mentions of LSI based controllers on the mailing lists and I'm wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the LSI
controllers are PCI-E.
Supermicro have several LSI controllers. AOC-USASLP-L8i with the
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from you if there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS does not always choose correctly whether writing to the slog is better or not (i.e., whether the disks would give better throughput).
But I did not find anything about 100% slog activity (~115MB/s) blocks
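A per-device view of where the writes are actually going can be had with zpool iostat; a sketch with a hypothetical pool name:

    zpool iostat -v tank 1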
2008 May 27
6
slog devices don't resilver correctly
This past weekend my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself (due to
the fact one can't remove a log device from a pool once defined) caused
ZFS to fully resilver but then attach the log
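The operation described is essentially a replace of the log vdev; a sketch with hypothetical pool and device names:

    zpool replace tank c3t0d0 c3t1d0   # swap the slog for a new device
    zpool status -v tank               # watch the resulting resilver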
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200, but it
involves a lot of development work.
The basic idea: the main problem when using a HDD as a ZIL device
are the cache flushes
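The cache flushes referred to here are the ones the zfs_nocacheflush tunable controls; shown only as an illustration, since disabling them is safe only when every log/pool device has a non-volatile (battery or capacitor backed) write cache:

    * /etc/system (Solaris) - use only with non-volatile write caches
    set zfs:zfs_nocacheflush = 1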
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to the spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any numbers in their spec.
Why do I
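Sharing is usually done by slicing the SSD; a minimal sketch with hypothetical slice names:

    zpool add tank log c5t0d0s0      # small slice for the slog
    zpool add tank cache c5t0d0s1    # remainder as L2ARC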
2008 Oct 26
4
Cannot remove slog device from zpool
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which I believe uses ZFS version
13. I had an existing zpool:
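For reference, log device removal was added later (around pool version 19), so on builds newer than snv_99 the operation is simply (hypothetical names):

    zpool remove tank c4t0d0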
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool        488K  20.0T      0      0      0      0
xpool        488K  20.0T      0      0      0      0
xpool
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list,
someone (actually Neil Perrin (CC)) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss?).
> Has the following error no consequences?
>
> Bug ID 6538021
> Synopsis: Need a way to force pool startup when
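On builds that include that work, the import is forced with the -m flag (hypothetical pool name); anything that only existed on the failed slog is lost:

    zpool import -m tank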
2016 Mar 11
4
NetApp NFS vs. ZFS and NFS for Maildir
Hi,
I'm evaluating switching from NetApp to a ZFS appliance (like Qsan). Our
setup is Dovecot, Maildir for email storage, and NFS to share mailboxes
(more than 30k users) across POP/IMAP and MX servers.
NetApp NFS works fine even under high load, but has some limitations on
inode numbers per volume and is expensive (though their prices have
recently dropped).
ZFS, I read, suggests to create
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
2011 Aug 11
19
Intel 320 as ZIL?
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to 80%).
Anyone have experience with this?
Ray
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
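For what it's worth, ashift is recorded per top-level vdev in the pool configuration, which is what zdb is reading; a quick way to check (hypothetical pool name):

    zdb -C tank | grep ashift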
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all my system's memory, but I
don't know how much I should set. Do you think 24GB will be enough for a 6TB
database? Obviously the more the better, but I can't set it too high.
Has someone successfully implemented something similar?
We ran some tests and the
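For reference, the cap is given in bytes in /etc/system; a sketch for the 24GB figure mentioned:

    * 24GB = 24 * 1024^3 bytes
    set zfs:zfs_arc_max = 25769803776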
2008 May 20
4
awstats, webalizer or...
So what does everyone out there use to generate web statistics these
days? Are the tried and true awstats or webalizer still the best out
there?
Ray
2010 Aug 25
6
(preview) Whitepaper - ZFS Pools Explained - feedback welcome
Hello list,
while following this list for more than a year, I feel that this list has been a great way to get insights into ZFS. Thank you all for contributing.
Over the last months I have been writing a little "whitepaper" trying to consolidate the knowledge collected here. It has now reached a "beta" state and I would like to share the result with you. I call it
-
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog so there isn't that much to check?
Or