Displaying 20 results from an estimated 10000 matches similar to: "ZFS+native SATA"
2007 Jan 11 (4 replies): Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
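As a rough point of reference for this kind of migration benchmark, a minimal sequential-throughput check that runs the same way on the old Linux setup and on the new pool might look like the sketch below; the pool name and file size are assumptions, not from the post.
# Sketch only: "tank" and the 16GB size are placeholders. Use a test file at
# least 2x RAM so the ARC / page cache cannot serve the whole read back.
dd if=/dev/zero of=/tank/bench.tmp bs=1024k count=16384 && sync   # sequential write
dd if=/tank/bench.tmp of=/dev/null bs=1024k                       # sequential read
rm /tank/bench.tmp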
2004 Jul 14 (3 replies): ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI
providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume
consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp
clients for about 200 users.
The logical drive was created with the following settings:
RAID = 5
stripe size = 32KB
write policy = wrback
read policy =
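For ext3 on a hardware RAID5 like this, the usual tuning step is to tell mke2fs about the array geometry; a hedged sketch for a 32KB stripe across eight drives (seven data disks) follows, with the device name assumed.
# Sketch: /dev/sdb1 is a placeholder. stride = 32KB stripe / 4KB block = 8;
# stripe-width = 8 * 7 data disks = 56. Very old e2fsprogs only understands -R stride=8.
mkfs.ext3 -b 4096 -E stride=8,stripe-width=56 /dev/sdb1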
2005 Oct 31 (4 replies): Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than
approximately 40MB/s on an ext2 file system. IMO, this is horrible
performance for a 6-drive, hardware RAID 5 array. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 3ware 9500S-12
2 x 6-drive,
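Two low-risk checks that often move numbers like these on 3ware-era controllers are the block-device readahead and testing with a file larger than RAM; the device and mount point below are assumptions.
# Sketch: /dev/sda and /mnt/array are placeholders.
blockdev --getra /dev/sda            # current readahead, in 512-byte sectors
blockdev --setra 16384 /dev/sda      # try 8MB of readahead for streaming reads
dd if=/dev/zero of=/mnt/array/big bs=1024k count=16384 && sync   # write test
dd if=/mnt/array/big of=/dev/null bs=1024k                       # read test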
2012 Jan 04 (9 replies): Stress test zfs
Hi all,
I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of
memory. Right now I've been trying to load test the box with bonnie++. I can
get 80 to 90 K writes, but can't seem to get more than a couple K for reads.
Any suggestions? Or should I take this to a bonnie++ mailing list? Any help
is appreciated. I'm kinda
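With 128GB of RAM the ARC will absorb most of a small bonnie++ run, so the working set has to be much larger than memory; a hedged invocation (directory, size and user are placeholders) is shown below.
# Sketch: -s should be at least 2x RAM (256g here against 128GB) so reads
# are not served from the ARC; -f skips the slow per-character phases.
bonnie++ -d /pool/bench -s 256g -f -u nobody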
2009 Jan 10 (3 replies): Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed
CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
4GB ECC memory
4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm
to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple
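For comparison, a plain mdadm layout on four SATA disks, plus the knob that most often helps md RAID5 write numbers, might look like the sketch below; device names are assumptions, and benchmarks taken before the initial resync finishes will be misleading.
# Sketch: device names are placeholders.
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 /dev/sd[abcd]1
cat /proc/mdstat                                   # wait for the initial resync
echo 8192 > /sys/block/md0/md/stripe_cache_size    # larger stripe cache often helps RAID5 writes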
2007 Dec 03 (2 replies): Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi,
We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe.
Each disk (of 4) is divided up like this
/ 6GB UFS s0
Swap 8GB s1
/var 6GB UFS s3
Metadb 50MB UFS s4
/data 48GB ZFS s5
For SVM we do a 4-way mirror on /, swap, and /var
So we have 3 SVM mirrors
d0=root (sub mirrors d10, d20, d30, d40)
d1=swap (sub mirrors d11, d21, d31, d41)
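A hedged outline of replacing one of these dual-identity disks (say c1t3d0) is sketched below; the mirror and pool names are placeholders standing in for the real ones, not taken from the post.
# Sketch: c1t3d0 is the replaced disk, c1t0d0 a healthy one; d0/d1/d3 and
# "datapool" are placeholders for the real mirror and pool names.
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2   # copy the VTOC
metadb -a -c 2 c1t3d0s4                  # recreate state replicas on s4
metareplace -e d0 c1t3d0s0               # resync the root submirror
metareplace -e d1 c1t3d0s1               # resync the swap submirror
metareplace -e d3 c1t3d0s3               # resync the /var submirror
zpool replace datapool c1t3d0s5          # rebuild the raidz column on s5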
2006 Apr 14 (1 reply): Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems
to be some sort of fundamental disagreement between ext3 and 3ware's
hardware RAID5 mode that trashes write performance. As a representative
example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode
(256KB stripe size) with a software RAID0 stripe on top (also 256KB
chunks). bonnie++ results look
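The usual mitigations for this combination are aligning ext3 to the 256KB chunk size and, if the card has a battery-backed cache, relaxing the journal mode; a sketch under those assumptions follows (device name, mount point and an 11-data-disk RAID5 unit are all placeholders, and the RAID0 layered on top changes the effective geometry).
# Sketch: 256KB chunk / 4KB block = stride 64; with 11 data disks per RAID5
# unit, stripe-width = 64 * 11 = 704. Adjust to the real geometry.
mkfs.ext3 -b 4096 -E stride=64,stripe-width=704 /dev/sda1
mount -o noatime,data=writeback /dev/sda1 /export   # data=writeback only with a BBU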
2008 Nov 06 (45 replies): 'zfs recv' is very slow
Hi,
I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm
using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on
B is running extremely slowly. If I run the zfs send on A and redirect output
to a file, it sends at 2MB/sec. But when I use 'zfs send
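One way to narrow down where a slow zfs send/recv pipeline loses its time is to measure each stage separately; the host, pool and snapshot names below are placeholders.
# Sketch: run the first two commands on A, the last one on B.
zfs send -i tank@mon tank@tue > /var/tmp/incr.stream    # how fast is send alone?
scp /var/tmp/incr.stream B:/var/tmp/                    # how fast is the network alone?
zfs recv -F tank/backup < /var/tmp/incr.stream          # how fast is recv alone?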
2007 Jun 20 (14 replies): Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks:
And a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over other RAID or
large filesystem configurations?
2) What hindrance, if any, is Z-Raid to this configuration, given the
complete randomness and size of these accesses?
Would
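As background for question 2: a raidz vdev delivers roughly the random-read IOPS of a single disk, because each small read touches every data disk in the stripe, so small-random-read workloads usually favour mirrors. The two layouts compared below use placeholder device names; pick one, they cannot coexist under the same pool name.
# Sketch: one 6-disk raidz vdev ~ the random-read IOPS of one disk.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# Three 2-way mirrors ~ up to six disks' worth of random-read IOPS.
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0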
2009 Apr 09 (0 replies): ZFS stripe over EMC write performance.
What is the best write performance improvement anyone has seen (if any)
on a ZFS stripe over EMC SAN?
I'd be interested to hear results for both striped and non-striped EMC
configs.
2006 Oct 12 (18 replies): Write performance with 3ware 9550
I have two identical servers. The only difference is that the first
one has Maxtor 250G drives and the second one has Seagate 320G drives.
OS: CentOS-4.4 (fully patched)
CPU: dual Opteron 280
Memory: 16GB
Raid card: 3ware 9550Sx-8LP
Raid volume: 4-disk Raid 5 with NCQ and Write Cache enabled
On the first server I have decent performance. Nothing spectacular,
but good enough. The second one
2009 Jan 07 (9 replies): 'zfs recv' is very slow
On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulbert at aei.mpg.de sent:
> Brent Jones wrote:
> >
> > Using mbuffer can speed it up dramatically, but this seems like a hack
> > without addressing a real problem with zfs send/recv. Trying to send any
> > meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB
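The mbuffer arrangement being discussed is typically a listener on the receiving side and a sender pushing into it; a hedged sketch follows, with the port, buffer sizes, pool and snapshot names all assumed.
# Sketch, on B (the receiver) first:
mbuffer -I 9090 -s 128k -m 1G | zfs recv -F tank/backup
# then on A (the sender):
zfs send -i tank@snap1 tank@snap2 | mbuffer -O B:9090 -s 128k -m 1G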
2007 Sep 06 (0 replies): Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a zfs based solution using storedge 6130 and
>> I'm chasing my own
2011 Aug 11 (6 replies): unable to mount zfs file system..pl help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 120K 228G 21K /pool1
pool1/fs1 21K 228G 21K /vik
[root at
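Assuming that 0.5.2 build supports mounting filesystems at all, the standard ZFS-side checks for a dataset that will not mount are sketched below; the dataset names come from the listing above, the rest is generic ZFS.
# pool1/fs1 and /vik are from the zfs list output above.
zfs get mountpoint,mounted,canmount pool1/fs1   # should it mount, and where?
zfs mount pool1/fs1                             # explicit mount to surface the actual error
zfs set mountpoint=/vik pool1/fs1               # re-assert the mountpoint if it was lost
zfs mount -a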
2010 Apr 19 (4 replies): upgrade zfs stripe
hi there,
Since I'm really new to ZFS, I have two important questions to start with. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My first question, for future proofing: could I just add another drive to the pool and have ZFS integrate it flawlessly? And second, could that drive also be a size other than 1.5TB? Could I put in a 2TB drive and integrate it?
thanks in advance
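Both questions come down to 'zpool add': a striped pool can be grown one disk at a time and the new disk may be a different size, though the pool stays non-redundant; the pool and device names below are placeholders.
# Sketch: "nas" and c0t2d0 are placeholders for the real pool and the new 2TB disk.
zpool add nas c0t2d0      # extends the stripe; new writes spread across it
zpool list nas            # capacity should grow by roughly the new disk's size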
2007 Mar 23 (2 replies): ZFS ontop of SVM - CKSUM errors
Hi.
bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc
I created a first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost alright (~700MB/s sequential reads, ~430MB/s sequential writes).
Then I destroyed the pool and put an SVM stripe on top of the same disks, utilizing the fact that zfs had already put an EFI label on them and s0 represents almost the entire disk. Then on top of
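For reference, the layering described above (a pool on top of a single SVM stripe built from the s0 slices) would be set up roughly as sketched below, with placeholder names and only three slices shown where the post used 85.
# Sketch: d100, "tank" and the three slices are placeholders.
metainit d100 1 3 c1t0d0s0 c2t0d0s0 c3t0d0s0 -i 128k   # one SVM stripe
zpool create tank /dev/md/dsk/d100                     # pool on the metadevice
zpool status -v tank                                   # the CKSUM column shows the errors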
2008 Jul 09 (2 replies): disk questions: geom and zfs
Hail,
I have a 7-stable box:
[matheus@xxx /usr/home/matheus]$ uname -a
FreeBSD xxx 7.0-STABLE FreeBSD 7.0-STABLE #2: Sun Jul 6 15:03:26 BRT 2008
root@lamneth:/usr/obj/usr/src/sys/xxx_7 i386
and there exist three geom things.
gconcat status
          Name  Status  Components
concat/concat0      UP  ad4
                        ad5
gmirror status
          Name  Status  Components
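For context, putting a pool on top of one of those geom providers looks like the sketch below (pool name assumed), though handing ZFS the raw disks and letting it do the mirroring is usually the simpler arrangement.
# Sketch: "tank" is a placeholder; ad4/ad5 follow the gconcat output above.
gconcat label -v concat0 ad4 ad5         # (re)create the concat provider
zpool create tank /dev/concat/concat0    # pool on top of the geom device
# simpler alternative: let ZFS manage the disks directly
# zpool create tank mirror ad4 ad5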
2018 May 03 (1 reply): Finding performance bottlenecks
Tony's performance sounds significantly sub-par from my experience. I did some testing with gluster 3.12 and Ovirt 3.9 on my running production cluster when I enabled the glfsapi; even my pre-gfapi numbers are significantly better than what Tony is reporting:
-------------------
Before using gfapi:
]# dd if=/dev/urandom of=test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824
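One caveat with that test: /dev/urandom is CPU-bound and often tops out well below what the storage or gfapi path can deliver, so it can mask the difference being measured; a hedged variant of the same 1GiB test is below.
# Same 1GiB write, without the urandom bottleneck and with a real flush to storage.
dd if=/dev/zero of=test.file bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct     # bypass the page cache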
2007 Nov 09 (3 replies): Major problem with a new ZFS setup
We recently installed a 24-disk SATA array with an LSI controller attached
to a box running Solaris 10 x86 Release 4. The drives were set up in one
big pool with raidz, and it worked great for about a month. On the 4th, we
had the system kernel panic and crash, and it's now behaving very badly.
Here's what diagnostic data I've been able to collect so far:
In the
In the