Displaying 20 results from an estimated 8000 matches similar to: "Balancing reads across mirror sets"
2009 Jun 10
6
Asymmetric mirroring
Hello everyone,
I'm wondering if the following makes sense:
To configure a system for high IOPS, I want to have a zpool of 15K RPM SAS
drives. For high IOPS, I believe it is best to let ZFS stripe them, instead
of doing a raidz1 across them. Therefore, I would like to mirror the drives
for reliability.
Now, I'm wondering if I can get away with using a large capacity 7200
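The layout described above - letting ZFS stripe across mirrored pairs rather than using raidz1 - might be sketched as follows; the pool and device names are hypothetical placeholders, not from the original post:

```shell
# RAID10-style pool: ZFS stripes across three mirror vdevs, so random-read
# IOPS scale with the number of vdevs, and either side of each mirror can
# serve reads. All names below are placeholders.
zpool create fastpool \
  mirror c0t0d0 c0t1d0 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0
```

Adding another `mirror` pair grows both capacity and IOPS, which is the usual argument for mirrors over raidz when random I/O is the priority.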
2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello,
When approaching hosting providers for services, the first question
many of them asked us was about the amount of IOPS the disk system
should support.
While we stress-tested our service, we recorded between 4000 and 6000
"merged io operations per second" as seen in "iostat -x" and collectd
(varies between the different components of the system, we have a few
such
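The per-second figures the poster reads from `iostat -x` can also be totaled across devices in one step. A hedged sketch, assuming the GNU/Linux sysstat `iostat`; since column order differs between sysstat versions, the `r/s` and `w/s` column positions are located from the header line rather than hard-coded:

```shell
# Sum r/s + w/s across all devices in the second iostat report (the first
# report is the since-boot average, so it is skipped). Note that r/s and
# w/s count operations issued to the device after merging; the raw merge
# rates are reported separately as rrqm/s and wrqm/s.
iostat -x 1 2 | awk '
  /^Device/ { hdr++; for (i = 1; i <= NF; i++) { if ($i == "r/s") r = i; if ($i == "w/s") w = i }; next }
  hdr == 2 && NF > 1 { total += $r + $w }
  END { printf "total IOPS: %.0f\n", total }'
```

Running this during the stress test gives a single headline number comparable to what a hosting provider asks for.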
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
and another review here:
http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
Regards,
--
Al Hopper Logical Approach Inc,Plano,TX al at logical-approach.com
Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
2008 Feb 22
6
Damn Small Solaris
Hi,
for what it's worth:
There's now a new Live CD for Solaris called Damn Small Solaris:
http://www.sunhelp.ru/archives/179-Damn_Small_Solaris_0.1.1_English_Page.html
In contrast to Belenix, this Live CD works in Qemu - even without kqemu
loaded the performance is not so bad. One important thing missing is the
network driver for the network adapter emulated by Qemu. But they
2007 Sep 13
26
hardware sizing for a zfs-based system?
Hi all,
I'm putting together an OpenSolaris ZFS-based system and need help
picking hardware.
I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
OS & 4*(4+2) RAIDZ2 for SAN]
http://rackmountpro.com/productpage.php?prodid=2418
Regarding the mobo, CPUs, and memory - I searched Google and the ZFS
site and all I came up with so far is that, for a
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might
have more insights here; and wrap long lines.
On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I am doing some tests on the VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh:
> "virsh migrate --live
2010 Sep 13
6
Hardware performance question : Disk RPM speed & Xen Performance
Hello,
I am a relatively new user of Xen virtualization, so you'll have to forgive
the simplistic nature of my question.
I have a Dell R410 poweredge server (dual quad core CPUs + 32gb ram). I plan
on utilizing this server with Xen.
The 'dilemma' I am having is whether or not to replace the 2x 500GB 7.2K RPM
drives that came with the server with faster 300gb
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi,
as I have learned from the discussion about which SSD to use as ZIL
drives, I stumbled across this article, that discusses short stroking
for increasing IOPS on SAS and SATA drives:
http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
Now, I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a ZIL on a zpool that is mainly used for file
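One rough way to judge whether a 15k mirror can stand in for a dedicated slog is to measure its synchronous-write latency directly. A sketch using GNU dd - the device name is a hypothetical placeholder, and this overwrites the device's contents:

```shell
# Time 1000 synchronous 4 KiB writes to the candidate device.
# DESTRUCTIVE: overwrites /dev/sdX (a placeholder name, not a real device
# from the thread). oflag=sync forces each write to be committed before
# the next is issued, roughly mimicking ZIL commit behavior. Requires
# GNU dd; the Solaris dd lacks oflag.
dd if=/dev/zero of=/dev/sdX bs=4k count=1000 oflag=sync
```

Dividing 1000 by the elapsed time dd reports gives a ballpark sync-write IOPS figure; a purpose-built slog SSD is typically an order of magnitude faster than a 15k spindle on this test.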
2007 Nov 02
7
Force SATA1 on AOC-SAT2-MV8
I have a Supermicro AOC-SAT2-MV8 and am having some issues getting drives to work. From what I can tell, my cables are too long to use with SATA2. I got some drives to work by jumpering them down to SATA1, but other drives I can't jumper without opening the case and voiding the drive warranty. Does anyone know if there is a system setting to drop it back to SATA1? I use zfs on a raid2 if
2017 Nov 03
2
low end file server with h/w RAID - recommendations
John R Pierce wrote:
> On 11/2/2017 9:21 AM, hw wrote:
>> Richard Zimmerman wrote:
>>> hw wrote:
>>>> Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and have smaller disk space. For the price of a 1TB 2.5", I can
2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi
What are the disk IOPS required for Asterisk call recording?
I am trying to find out the number of disks required in a RAID array to
record 500 calls.
Is there any formula to calculate the IOPS required by Asterisk call
recording? That would help me estimate the IOPS needed at different scales.
If I assume that Asterisk will write data on disk every second for each
call, I will need disk array to support minimum
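Under the poster's own assumption (one disk write per call per second), a back-of-envelope estimate can be computed directly. The per-drive IOPS figure below is an assumption for a 7200 RPM drive, not a measured value, and the OS will merge many of these writes in practice:

```shell
# Hypothetical sizing sketch: 500 concurrent recordings, one write per call
# per second (the poster's assumption), ~75 random-write IOPS per 7200 RPM
# drive (an assumed ballpark figure).
calls=500
writes_per_call=1        # disk writes per call per second (assumed)
iops_per_disk=75         # random-write IOPS of one drive (assumed)
needed=$((calls * writes_per_call))
disks=$(( (needed + iops_per_disk - 1) / iops_per_disk ))  # ceiling division
echo "estimated IOPS: $needed; drives needed (striped, no redundancy): $disks"
```

Note that mirroring or parity RAID multiplies the physical write count (the RAID write penalty), so a redundant array needs correspondingly more spindles than this stripe-only estimate.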
2011 Oct 28
2
How can we horizontally scale Dovecot across multiple servers?
Hi,
How can we horizontally scale Dovecot across multiple servers? Do we need
to install independent instances of Dovecot on each server?
We are planning to use a NAS/SAN device using ZFS or EFS for email storage.
Each logical unit will be 10TB, and as the number of users increases we
are planning to add more 10TB units.
In this case, how can we manage the email storage on
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200, but it
involves a lot of development work.
The basic idea: the main problem when using an HDD as a ZIL device
is the cache flushes
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days.
Does anyone have any insight into a next-gen X4500 / X4540? Does some
other Oracle / Sun partner make a comparable system that is fully
supported by Oracle / Sun?
http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html
What do X4500 / X4540 owners use if they'd like more
2008 May 06
11
I need storage server advice
Hi:
I need advice on implementing a storage server. I really do not have
the $ to spend on a Dell iSCSI storage device, and I am thinking of
running CentOS 5.x with FTP or FreeNAS. Here is what I am looking at
and concerned about.
Situation:
My current storage needs are approximately 1.5 TB annually. This will
increase to about 3.5 TB annually over the next 5 years (rough est.).
This box
2007 Jun 07
3
Announcing NexentaCP(b65) with ZFS/Boot integrated installer
Announcing new direction of Open Source NexentaOS development:
NexentaCP (Nexenta Core Platform).
NexentaCP is a Dapper/LTS-based core Operating System Platform distributed
as a single-CD ISO; it integrates Installer/ON/NWS/Debian and provides the
basis for Network-type installations via main or third-party APTs (NEW).
First "unstable" b65-based ISO with ZFS/Boot-capable installer available
as
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
It should be in Solaris 11 Express; has anyone tried it? How is it supposed to work? Is any documentation available?
Yours
Markus Kovero
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton:
http://www.linux-mag.com/id/7839
anyone have views on whether this sort of caching would be useful for
the MDT? My feeling is that MDT reads are probably pretty random but
writes might benefit...?
GREG
--
Greg Matthews 01235 778658
Senior Computer Systems Administrator
Diamond Light Source, Oxfordshire, UK