Displaying 20 results from an estimated 157 matches for "slogged".
2008 Sep 03
8
SAS or SATA HBA with write cache
Anyone know of a SATA and/or SAS HBA with battery backed write cache?
Seems like using a full-blown RAID controller and exporting each individual drive back to ZFS as a single LUN is a waste of power and $$$. Looking for any thoughts or ideas.
Thanks.
-Matt
--
This message posted from opensolaris.org
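A common alternative to the RAID-LUN approach is to hand ZFS the raw disks from a plain (non-RAID) HBA and let the pool supply the redundancy. A minimal sketch; pool and device names are hypothetical:

    # mirrored pool built directly on raw disks behind a plain HBA
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0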
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y iops.
If I add a second (non-mirrored) SLOG device also rated at Y iops, will
my zpool now theoretically be able to handle 2Y iops, or close to
that?
Thanks,
Ray
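ZFS stripes ZIL writes across multiple top-level log devices, so a second slog should scale close to 2Y iops in practice, subject to bus and CPU limits. A sketch of adding one; pool and device names are hypothetical:

    # add a second standalone log device; ZIL writes are balanced across both
    zpool add tank log c4t0d0
    # both devices should now appear under the "logs" section
    zpool status tank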
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks,
This might be a daft idea, but is there any way to shut down solaris / zfs without flushing the slog device?
The reason I ask is that we're planning to use mirrored nvram slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
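One way to reserve only part of a large device for the slog is to give ZFS a slice rather than the whole disk. A sketch, assuming the ioDrives have been partitioned; slice names are hypothetical:

    # s0 on each ioDrive sized for the slog; the remainder stays free for other uses
    zpool add tank log mirror c3t0d0s0 c4t0d0s0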
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to
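Cache and log devices can be ordinary disks, so this is easy to test. A sketch; device names are hypothetical:

    # try a spare 7200RPM drive as L2ARC and another as slog
    zpool add tank cache c5t0d0
    zpool add tank log c5t1d0
    # watch whether the extra devices help or hurt
    zpool iostat -v tank 5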
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8 port with a Marvell chipset. There have been several
mentions of LSI based controllers on the mailing lists and I'm wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the LSI
controllers are PCI-E.
Supermicro have several LSI controllers. AOC-USASLP-L8i with the
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
--
This message posted from opensolaris.org
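A log vdev belongs to exactly one pool, but you can slice the SSD pair and give each pool its own mirrored pair of slices. A sketch; the slice layout is hypothetical (s2 is skipped because Solaris reserves it for the whole disk):

    zpool add pool1 log mirror c1t0d0s0 c1t1d0s0
    zpool add pool2 log mirror c1t0d0s1 c1t1d0s1
    zpool add pool3 log mirror c1t0d0s3 c1t1d0s3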
2011 Mar 01
14
Good SLOG devices?
Hi
I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here....
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from you whether there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS does not always choose correctly whether writing to the slog is better or not (i.e., whether the disks would give better throughput).
But I did not find anything about 100% slog activity (~115MB/s) blocks
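Later builds added a per-dataset knob for exactly this trade-off. A sketch, assuming a build new enough to have logbias (it arrived well after build 89):

    # route large synchronous writes straight to the pool disks instead of the slog
    zfs set logbias=throughput tank/fs
    # the default, latency, keeps using the slog
    zfs set logbias=latency tank/fs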
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to the spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any number in their spec.
Why do I
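A back-of-the-envelope endurance estimate helps frame the question; all numbers below are hypothetical since the spec omits the cycle count:

    # 100 GB MLC device at an assumed 10,000 P/E cycles = ~1,000,000 GB writable
    # a sustained 5 MB/s slog load writes ~432 GB/day
    echo $(( (100 * 10000) / 432 ))   # ~2314 days, roughly six years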
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in the slog so there isn't that much to check?
Or
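Whatever it compares, a log resilver touches very little data and normally finishes in seconds, which is easy to confirm while it runs; pool name hypothetical:

    # the scan line should show only a tiny amount of data examined
    zpool status tank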
2008 Oct 26
4
Cannot remove slog device from zpool
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which I believe uses zfs version
13. I had an existing zpool:
(The pool listing was in an HTML attachment: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20081026/a5e2f25b/attachment.html>)
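Log device removal only arrived with zpool version 19; on a version 13 pool the operation is simply unsupported. A sketch of what works on a new enough build; device name hypothetical:

    # check whether this build offers version 19, "Log device removal"
    zpool upgrade -v
    # on version 19+ pools the slog can then be removed by name
    zpool remove tank c2t0d0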
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
xpool        488K  20.0T      0      0      0      0
xpool        488K  20.0T      0      0      0      0
xpool
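Bursts like this are usually the transaction-group flush: asynchronous writes buffer in RAM and are pushed out every few seconds. A hedged way to observe and smooth it; the tunable name is the OpenSolaris-era one:

    # watch per-vdev activity during a burst
    zpool iostat -v xpool 1
    # shorten the txg interval to 5 seconds (in-memory change, use with care)
    echo zfs_txg_timeout/W0t5 | mdb -kw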
2008 May 27
6
slog devices don't resilver correctly
This past weekend my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself, due to
the fact that one can't remove a log device from a pool once defined, caused
ZFS to fully resilver but then attach the log
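Given that failure mode, the safer way to swap slog hardware on a pool without log removal support is to attach a mirror to the existing log device rather than replace it. A sketch; device names are hypothetical:

    # turn the lone slog into a mirror, wait for resilver, then drop the old side
    zpool attach tank c3t0d0 c4t0d0
    zpool status tank
    zpool detach tank c3t0d0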
2008 Feb 06
3
slogging my way thru oracle, not adapter? gem install fails?
<ubuntu_gutsy> me@ubuntu:~/workspace/oracle/ro$ gem install
activerecord-oci-adapter
Bulk updating Gem source index for: http://gems.rubyforge.org
ERROR: While executing gem ... (Gem::GemNotFoundException)
Could not find activerecord-oci-adapter (> 0) in any repository
<ubuntu_gutsy> me@ubuntu:~/workspace/oracle/ro$ rake db:migrate
(in /home/me/workspace/oracle/ro)
rake
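The gem may simply not exist under that name in the rubyforge index; the usual prerequisite is the ruby-oci8 bindings, with the Rails adapter fetched under whatever name the index actually carries. A hedged sketch:

    # see what the index really offers before guessing at names
    gem list --remote activerecord-oci
    # the OCI8 bindings any Oracle adapter needs
    gem install ruby-oci8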
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list,
someone (actually Neil Perrin (CC)) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss?).
> Has the following error no consequences?
>
> Bug ID 6538021
> Synopsis Need a way to force pool startup when
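Builds that gained log device removal also accept importing a pool whose log is missing. A sketch, assuming a build new enough to have the -m flag; pool name hypothetical:

    # import despite the missing log; any uncommitted ZIL records are lost
    zpool import -m tank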
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool
switches over to the other node, ZFS would pick up that node's local disk
drives as L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
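Since cache devices hold no pool state and can be added and removed at will, a failover script could rewire the L2ARC after each import. A sketch; device names are hypothetical:

    # on the node that just imported the pool:
    # drop the other node's (now absent) cache devices, then add the local SSDs
    zpool remove tank c9t0d0 c9t1d0
    zpool add tank cache c2t0d0 c2t1d0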
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
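Per-filesystem properties are the main operational reason for a split like this. A hedged example of settings often applied to mail-store workloads; dataset names are hypothetical:

    # mail readers rarely need access times
    zfs set atime=off pool/imap
    # match the databases' small random I/O on the database filesystem
    zfs set recordsize=8k pool/imap/db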
2008 Mar 20
5
[LLVMdev] testsuite problems after merge
I'm seeing ~100 new failures in the gcc testsuite due to the test file
being doubled or tripled, as below. This appears to affect only
files that were newly imported from gcc-4.2 in the recent merge. Does
anybody have an idea for how to mechanize fixing these (I doubt you
can count on the APPLE LOCAL comment being there)? If there's no
better way than slogging through
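One mechanical check that does not rely on the APPLE LOCAL markers: flag files whose first half is byte-identical to their second half. A bash sketch that catches doubling but not tripling:

    find . -name '*.c' | while read -r f; do
      lines=$(wc -l < "$f")
      half=$((lines / 2))
      # a doubled file has an even line count and two identical halves
      if [ $((lines % 2)) -eq 0 ] && cmp -s <(head -n "$half" "$f") <(tail -n "$half" "$f"); then
        echo "possibly doubled: $f"
      fi
    done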
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, whereby a directory of files
including some large ones 100+MB in size being written can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
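This is worth instrumenting while it reproduces; a sketch of what to watch on the server, pool name hypothetical:

    # per-vdev view: are the data disks pegged with writes during the pause?
    zpool iostat -v tank 1
    # per-drive service times over the same window
    iostat -xn 5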
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I
performed some tests:
Using Solaris 10, fully upgraded. (zpool 15 is the latest, which does not have
the log device removal that was introduced in zpool 19.) If you lose an
unmirrored log device in any way possible, the OS will crash, and the whole
zpool is permanently gone, even after reboots.
Using opensolaris,
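The practical takeaway from those tests is to mirror the slog on any pool that cannot remove a log device. A sketch; device names are hypothetical:

    # a mirrored slog survives a single device failure
    zpool add tank log mirror c4t0d0 c4t1d0
    # and check whether the build offers version 19, which adds log device removal
    zpool upgrade -v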