Displaying 20 results from an estimated 20000 matches similar to: "I/O freeze after a disk failure"
2006 May 19
3
Oracle on ZFS vs. UFS
Hi,
I'm preparing a personal TPC-H benchmark. The goal is not to measure or
optimize the database performance, but to compare ZFS to UFS in similar
configurations.
At the moment I'm preparing the tests at home. The test setup is as
follows:
. Solaris snv_37
. 2 x AMD Opteron 252
. 4 GB RAM
. 2 x 80 GB ST380817AS
. Oracle 10gR2 (small SGA (320m))
The disks also contain the OS
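For a comparison like this, a common tuning step is to match the ZFS recordsize to the Oracle block size. A minimal sketch, assuming an 8K db_block_size and hypothetical pool/dataset names:
# create a dataset for datafiles with a recordsize matching the 8K Oracle block size
zfs create -o recordsize=8k tank/oradata
# verify the property
zfs get recordsize tank/oradata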
2006 Dec 08
22
ZFS Usage in Warehousing (lengthy intro)
Dear all,
we're currently planning to restructure our hardware environment for
our data warehousing product/suite/solution/whatever.
We're currently running the database side on various SF V440s attached via
dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is
(as is usual in a SAN) shared between many systems. Performance is mediocre
in terms
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
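Adding dedicated log and cache devices to an existing pool is one zpool add per device class. A minimal sketch, assuming a pool named tank and hypothetical SSD device names:
# mirrored dedicated ZIL (slog); mirroring protects against losing a log device
zpool add tank log mirror c2t0d0 c2t1d0
# L2ARC cache devices; these need no redundancy, a failed cache device only costs performance
zpool add tank cache c2t2d0 c2t3d0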
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss.
I mounted the zfs-based
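For reference, the export and an NFSv2 mount of this kind might look like the following sketch (host and dataset names hypothetical):
# on the Solaris server: share the dataset read/write with root access for the client
zfs set sharenfs='rw=host.foo.com,root=host.foo.com' tank/export
# on the Linux 2.4 client: force an NFSv2 mount with explicit transfer sizes
mount -t nfs -o nfsvers=2,rsize=8192,wsize=8192 server:/tank/export /mnt/export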
2007 May 12
3
zfs and jbod-storage
Hi.
I'm managing an HDS storage system which is slightly larger than 100 TB
and we have used approx. 3/4. We use vxfs. The storage system is
attached to a Solaris 9 host on SPARC via a fibre switch. The storage is
shared via nfs to our webservers.
If I were to replace vxfs with zfs I could use raidz(2) instead of
the built-in hardware RAID controller.
Are there any jbod-only storage
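Building a raidz2 pool straight on JBOD disks is a one-liner; a minimal sketch with hypothetical device names:
# double-parity raidz2 vdev across six JBOD disks, no hardware RAID controller involved
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# share the result over NFS as before
zfs set sharenfs=rw tank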
2008 May 18
2
possible zfs bug? lost all pools
After trying to mount my zfs pools in single-user mode I got the following
message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as
it was last accessed by another system (host: gw.bb1.matik.com.br hostid:
0xbefb4a0f). See: http://www.sun.com/msg/ZFS-8000-EY
every zpool command reported nothing but non-existent pools; it seems the zfs info on the
disks
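The ZFS-8000-EY message means the pool's hostid does not match this system; after confirming that no other host is actually using the pool, a forced import is the documented next step. Sketch, pool name taken from the message above:
# list pools visible to this system
zpool import
# force the import once you are sure no other host has the pool active
zpool import -f cache1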
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
A colleague and I are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting
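The layout described above, two raid-z vdevs over the twelve exported devices, would be created roughly like this (device names hypothetical):
# one pool built from two 6-disk raid-z vdevs; ZFS stripes writes across the two vdevs
zpool create tank \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0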
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS, and an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
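A first diagnostic step for a case like this is to watch pool and device utilisation while the load is applied; a quick sketch (pool name hypothetical):
# per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v tank 5
# per-device service times and %busy to spot a saturated disk
iostat -xn 5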
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi,
my oi151-based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB RAID-Z1, 3 1/2" local disks individually connected to an 8-port LSI 6Gbit controller.
So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks, and be happy. This was my original approach. However, I am totally unclear about the 512-byte vs 4K sector issue.
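The one-by-one replacement path looks like the sketch below; the 4K question mostly comes down to which ashift the pool was created with (pool and device names hypothetical):
# allow the pool to grow once all disks in the vdev have been replaced
zpool set autoexpand=on tank
# replace one disk at a time and wait for each resilver to finish
zpool replace tank c3t0d0 c3t4d0
zpool status tank
# check the sector-size assumption baked into the pool (ashift=9 -> 512B, ashift=12 -> 4K)
zdb -C tank | grep ashift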
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :)
I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast!
I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
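The mdadm two-layer layout translates to ZFS as one pool containing several raidz vdevs, each built from one slice per physical disk; a minimal sketch (slice names hypothetical):
# two raidz vdevs, each using a different slice of the same three disks;
# note that losing one physical disk then degrades both vdevs at once
zpool create tank \
    raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 \
    raidz c1t0d0s1 c1t1d0s1 c1t2d0s1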
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all.
One of our servers had a panic and now can't mount the zpool anymore!
Here is what I get at boot:
Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well, amazingly enough. Reads are a
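A quick way to see what such small files actually cost on disk is to compare logical and allocated sizes alongside the dataset properties; sketch with hypothetical names:
# dataset-level view: recordsize, compression and space used
zfs get recordsize,compression,used tank/spectral
# per-file view: logical length vs. blocks actually allocated
ls -l /tank/spectral/sample.dat
du -h /tank/spectral/sample.dat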
2006 Sep 28
13
jbod questions
Folks,
We are in the process of purchasing new SANs that our mail server
(JES3) runs on. We have moved our mailstores to zfs and continue to
have checksum errors -- they are corrected, but this is an improvement over the
ufs inode errors that required a system shutdown and fsck.
So, I am recommending that we buy small jbods, do raidz2 and let zfs
handle the raiding of these boxes. As we need more
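The checksum errors mentioned above can be tracked, and the raidz2 repair path exercised, with the standard status/scrub commands; sketch assuming a pool named mailpool:
# show which devices are accumulating checksum errors and which files (if any) are affected
zpool status -v mailpool
# walk all data and let raidz2 repair anything with a bad checksum
zpool scrub mailpool
# reset the error counters once the underlying cause is understood
zpool clear mailpool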
2007 May 03
5
ZFS vs UFS2 overhead and may be a bug?
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek
says this problem also exists on Solaris, hence this email.]
Summary: on ZFS, overhead for reading a hole seems far worse
than actual reading from a disk. Small buffers are used to
make this overhead more visible.
I ran the following script on both ZFS and UFS2 filesystems.
[Note that on FreeBSD cat uses a 4k buffer and md5 uses a 1k
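The effect can be reproduced with a file that is entirely a hole and a deliberately small read buffer; a sketch of such a test on Solaris (paths hypothetical):
# create a 1 GB sparse file: no data blocks are allocated, so every read hits a hole
mkfile -n 1g /tank/test/holefile
# read it back 1 KB at a time and time it; compare the same run on a UFS filesystem
time dd if=/tank/test/holefile of=/dev/null bs=1k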
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME       SIZE   ALLOC   FREE   CAP   DEDUP   HEALTH   ALTROOT
TestPool   696G   19.1G   677G    2%   1.13x   ONLINE   -
When I ran a
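Besides zpool list, the dedup ratio and the dedup table itself can be inspected directly; a sketch against the pool above:
# dedup ratio as a pool property
zpool get dedupratio TestPool
# histogram of the dedup table: how many blocks are referenced once, twice, and so on
zdb -DD TestPool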
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
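For the ZFS side of that comparison, the two disks would simply be given to ZFS as a dynamic stripe; a minimal sketch with hypothetical device names:
# stripe across both disks, handled by ZFS instead of the Perc 5/i;
# like RAID0 this has no redundancy, so a single disk failure loses the pool
zpool create tank c1t0d0 c1t1d0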
2013 Oct 24
4
ZFS on Linux in production?
We are a CentOS shop, and have the lucky, fortunate problem of having
ever-increasing amounts of data to manage. EXT3/4 becomes tough to
manage as capacity keeps climbing, especially when you have to upgrade, so
we're contemplating switching to ZFS.
As of last spring, it appears that ZFS On Linux http://zfsonlinux.org/
calls itself production ready despite a version number of 0.6.2, and
2008 Jan 31
3
I.O error: zpool metadata corrupted after powercut
In the last 2 weeks we had 2 zpools corrupted.
The pool was visible via zpool import, but could not be imported anymore. During the import attempt we got an I/O error.
After the first powercut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
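On releases that support pool recovery mode, an import that fails like this can sometimes be rewound to the last consistent transaction group; a hedged sketch (pool name hypothetical):
# dry run: report whether discarding the last few transactions would make the pool importable
zpool import -nF tank
# actually perform the rewind and import
zpool import -F tank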
2010 Aug 13
15
NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem.
The client is Solaris 9 U7.
I can mount the filesystem just fine, but I am unable to write to it.
showmount -e shows the export is open to everyone.
The dfstab file has the rw option set.
So what gives?
Phillip
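When the exported path is a ZFS dataset, it is worth checking whether the dataset's sharenfs property, rather than dfstab, is controlling the share options; sketch with hypothetical names:
# see how the dataset is currently being shared
zfs get sharenfs tank/export
share
# share it read/write (root access from the client still needs an explicit root= option)
zfs set sharenfs=rw tank/export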
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
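On a tight budget a single SSD is sometimes split into a small slog slice and a larger L2ARC slice; a hedged sketch (slices assumed to have been created beforehand with format, names hypothetical):
# small slice as dedicated ZIL; on releases without log-device removal, losing an
# unmirrored log device can strand the pool, so mirror the slog if a second SSD is available
zpool add tank log c4t0d0s0
# remaining space as L2ARC; a failed cache device only costs performance, not data
zpool add tank cache c4t0d0s1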