Displaying 20 results from an estimated 6000 matches similar to: "Compression block sizes"
2009 Jan 21
8
cifs performance
Hello!
I am setting up a ZFS / CIFS home storage server, and now I get poor performance when playing movies stored on this ZFS pool from a Windows client. The server hardware is not new, but under Windows its performance was normal.
The CPU is an AMD Athlon "Burton Thunderbird" 2500 running at 1.7 GHz, with 1024 MB RAM, and storage:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci at 0,0/pci1458,5004 at 2,2/cdrom at 1/disk at
2010 May 31
2
mirror writes 10x slower than individual writes
I have an odd setup at present, because I'm testing while still building my machine.
It's an Intel Atom D510 mobo running snv_134, with 2 GB RAM and 2 SATA drives (AHCI):
1: Samsung 250GB old laptop drive
2: WD Green 1.5TB drive (idle3 turned off)
Ultimately, it will be a Time Machine backup for my Mac laptop, so I have installed Netatalk 2.1.1, which is working great.
Read
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering
what drives to put in the bays. My chassis is a Supermicro SC846A, so the
backplane supports SAS or SATA; my controllers are LSI3081E, again
supporting SAS or SATA.
Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM
drive in both SAS and SATA configurations; the SAS model offers
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
Thanks,
Bruin
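One client-side option for ditto blocks is the copies property; a minimal sketch, assuming the iSCSI LUN shows up on the client as c2t0d0 and a pool named tank (both hypothetical):
  # Single-device pool on the iSCSI LUN.
  zpool create tank c2t0d0
  # Keep two copies (ditto blocks) of each data block so ZFS can
  # self-heal isolated bad blocks it detects on the client side.
  zfs set copies=2 tank
Note that copies=2 guards against isolated bad blocks, not loss of the whole LUN; surviving that would need a client-side mirror of two separate LUNs.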
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24TB of disk in (first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
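For reference, the 500GB cap described above is usually enforced with per-filesystem quotas; a sketch, using a hypothetical pool named thor:
  # One filesystem per share, each capped so a full filesystem
  # still fits on a single backup tape.
  zfs create -o quota=500G thor/projects/alpha
  zfs create -o quota=500G thor/projects/beta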
2009 Nov 08
5
Disk I/O in RAID-Z as new disks are added/removed
Hello,
As I understand it, in a traditional RAID 5 setup adding new disks to the pool provides more overall I/O as the load is spread out across multiple disks.
What exactly is this relationship in a RAID-Z setup? What should one expect in terms of overall I/O performance as disks are added and/or removed? I understand that the parity data is distributed across all disks, unlike a traditional
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5 TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
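For reference, ZFS expresses the RAID 10 equivalent as a stripe of mirror vdevs rather than mirrored stripes; a minimal sketch, assuming the eight drives appear as c0t1d0 through c0t8d0 and a pool named tank (hypothetical names):
  # A stripe of four 2-way mirrors; ZFS stripes across the
  # mirror vdevs automatically.
  zpool create tank \
    mirror c0t1d0 c0t5d0 \
    mirror c0t2d0 c0t6d0 \
    mirror c0t3d0 c0t7d0 \
    mirror c0t4d0 c0t8d0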
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1.
I want to add "phase 2" which is another 7x1.5tb raidz1
Can I add the second phase to the first phase and basically have two
RAID 5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
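For reference, a sketch of the commands in question, assuming the pool is named tank and the new drives appear as c1t0d0 through c1t6d0 (hypothetical names):
  # Add a second raidz1 vdev; ZFS then stripes across both vdevs,
  # roughly "two RAID 5s striped" in traditional RAID terms.
  zpool add tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
  # Optionally bring the on-disk format up to date afterwards.
  zpool upgrade tank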
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to its
vulnerability to even a single bit error, its lack of granularity, and
other reasons.
However ... There is an attraction to "zfs send" as an augmentation to the
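One commonly suggested check, assuming a build recent enough to ship zstreamdump, is to replay the saved stream through it; zstreamdump walks the stream's records and their embedded checksums without receiving anything (paths hypothetical):
  # Verify a previously saved stream.
  zstreamdump < /backup/tank-fs.zfs > /dev/null
  # Or save and verify in one pass at capture time.
  zfs send tank/fs@snap | tee /backup/tank-fs.zfs | zstreamdump > /dev/null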
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
and another review here:
http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
Regards,
--
Al Hopper, Logical Approach Inc, Plano, TX, al at logical-approach.com
Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub happening.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
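For reference, the cron entry driving such a scrub is typically a single line; a sketch with a hypothetical pool named tank (root's crontab):
  # Weekly scrub, Sundays at 02:00.
  0 2 * * 0 /usr/sbin/zpool scrub tank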
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump?
Thanks,
~~sa
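For anyone wanting this behaviour today, compression can be enabled at creation time or inherited from a parent dataset; a minimal sketch with a hypothetical pool named tank:
  # Enable compression for a new filesystem at creation.
  zfs create -o compression=on tank/docs
  # Or set it once at the top so new children inherit it; only
  # blocks written after the change are compressed.
  zfs set compression=on tank
  # Swap is the usual exception people cite.
  zfs set compression=off rpool/swap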
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list,
as this matter pops up every now and then in posts on this list I just
want to clarify that the real performance of RaidZ (in its current
implementation) is NOT something that follows from RaidZ-style
space-efficient redundancy or from the copy-on-write design used in ZFS.
In a M-Way mirrored setup of N disks you get the write performance of
the worst disk and a read performance that is
2010 Apr 10
41
Secure delete?
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest
recommended version and am seeing radically different performance
when testing with iozone than I did in February of 2008. I am using
Solaris 10 U5 with all the latest patches.
This is the performance achieved (on a 32GB file) in February last
year:
KB reclen write rewrite read reread
33554432
2008 May 21
11
Per-user home filesystems and OS-X Leopard anomaly
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encounted most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to create exported per-user filesystems and the OS-X
automounter to perform the necessary mount magic.
OS-X
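For context, the per-user pattern the post refers to looks roughly like this (hypothetical pool and user names):
  # One ZFS filesystem per user, exported individually over NFS;
  # the OS-X automounter then mounts each one on demand.
  zfs create -o sharenfs=on tank/home/alice
  zfs create -o sharenfs=on tank/home/bob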
2010 Apr 19
4
upgrade zfs stripe
hi there,
Since I am really new to ZFS, I have two important questions to get started. I have a NAS up and running ZFS in stripe mode with 2x 1.5 TB HDDs. My question, for future-proofing: could I just add another drive to the pool, and would ZFS integrate it flawlessly? And second, could this HDD also be a different size than 1.5 TB? So could I also put in a 2 TB drive and integrate it?
thanks in advance
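Both answers are yes for a simple stripe; a sketch, assuming the pool is named tank and the new 2 TB drive appears as c0t2d0 (hypothetical names):
  # Add a third top-level device; sizes may differ, since each
  # top-level vdev contributes its own capacity to the stripe.
  zpool add tank c0t2d0
  # Caveat: with no redundancy, losing any one drive loses the pool.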
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the algorithm that this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks.
I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
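That understanding matches how ZFS properties generally behave: they affect only blocks written after the change. A minimal sketch with a hypothetical dataset tank/data:
  # Switch the checksum algorithm for future writes only.
  zfs set checksum=sha256 tank/data
  # Existing blocks keep the checksum they were written with; to
  # re-checksum old data it has to be rewritten, e.g. via send/receive.
  zfs send tank/data@snap | zfs receive tank/data2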