Displaying 20 results from an estimated 500 matches similar to: "file access algorithm within pools"
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run OpenSolaris 2008.11 snv_98, and my hardware is a Sun Netra x4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks (Seagate ES.2, 500 and 750) for a total of
12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
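A minimal sketch of the commands usually used to inspect an existing pool's layout (the pool name tank is hypothetical):
# zpool status tank   # shows each top-level vdev (raidz1, mirror, ...) and its member disks
# zpool list tank     # overall pool size and allocated/free space
# zfs list -r tank    # datasets in the pool and their usable space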
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows a stale configuration from an old zpool:
bash-3.00# zpool import
pool: data1
id: 7539031628606861598
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
data1 UNAVAIL insufficient replicas
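For reference, the stale configuration that zpool import reports is read from ZFS labels still present on the old devices. A hedged sketch of how such labels are commonly cleared (the device name is hypothetical; zpool labelclear only exists on newer releases, and it destroys anything left on that device):
# zpool import                            # confirm the stale pool id and which devices still carry its label
# zpool labelclear -f /dev/dsk/c1t2d0s0   # wipe the old ZFS label from one such device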
2008 Sep 16
1
Interesting Pool Import Failure
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data was not
terribly important, I think the exercise should at least give some peace of
mind to those who
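As a rough sketch, the usual starting points for a pool that refuses to import (pool name hypothetical, options hedged):
# zpool import                  # scan the default device paths and list importable pools
# zpool import -d /dev/dsk      # scan an explicit device directory instead
# zpool import -f mypool        # force the import if the pool still looks active on another host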
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4 x 18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1) x 18GB worth of drive
space, but df -h shows 4 x 18GB. Is this a bug or do I not understand?
2. Once again thinking in RAID5 terms, if I have 4 x 18GB and 12 x 9GB
drives and I want to make a RAIDZ of all of them, I would expect the
18GB drives to be treated as 9GB so the RAIDZ would be 16 x 9GB. Is
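For question 1, the expected arithmetic is right for usable space; the usual source of confusion is which tool reports raw versus usable capacity. A rough illustration (sizes rounded, assuming a single 4-disk raidz vdev):
raw capacity:     4 x 18 GB = 72 GB        (roughly what zpool list reports, parity included)
usable capacity:  (4 - 1) x 18 GB = 54 GB  (roughly what zfs list / df should report, parity excluded)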
2007 Jul 31
0
controller number mismatch
Hi,
I just noticed something interesting ... don't know whether it's
relevant or not (two commands run in succession during a 'nightly' run):
$ iostat -xnz 6
[...]
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.3 0.0 0.8 0.0 0.0 0.2 0.2 0 0 c2t0d0
2.2
2008 Jun 17
6
mirroring zfs slice
Hi All,
I have a slice with a ZFS file system which I want to mirror. I
followed the procedure mentioned in the admin guide, but I am getting this
error. Can you tell me what I did wrong?
root # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
export 254G 230K 254G 0% ONLINE -
root # echo |format
Searching for disks...done
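For comparison, a hedged sketch of the usual way to turn a single-device pool into a mirror; the slice names below are invented, and the new slice must be at least as large as the existing one:
root # zpool attach export c0t0d0s7 c1t0d0s7   # attach a second slice so the vdev becomes a mirror
root # zpool status export                     # watch the resilver and confirm the mirror layout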
2008 Jan 17
9
ATA UDMA data parity error
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
This message includes:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Level Info About My System
---------------------------------------------
- fresh
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
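For readers landing on this thread from a search: a minimal sketch of the hot-spare commands as they exist in released ZFS (disk and pool names are made up):
# zpool add tank spare c1t7d0        # associate a hot spare with the pool
# zpool replace tank c1t3d0 c1t7d0   # manually swap a failing disk for the spare
# zpool remove tank c1t7d0           # release the spare from the pool again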
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.
I just loaded build 27a on a w1100z with a single AMD 150 CPU (2GB RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).
My initial ZFS test was to
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout. As you can
see, he has mostly raidz vdevs but has one raidz2 vdev in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME          STATE     READ WRITE CKSUM
> chipool1      ONLINE       0     0     0
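One concrete implication worth noting: zpool itself objects to this layout. Adding a vdev whose replication level differs from the existing ones triggers a "mismatched replication level" warning and needs -f, so a mix like the one above only arises deliberately. A sketch with invented disk names:
# zpool create chipool1 raidz c0t1d0 c0t2d0 c0t3d0
# zpool add chipool1 raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0     # refused with a mismatched replication level error
# zpool add -f chipool1 raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0  # -f forces the mismatched vdev into the pool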
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a
drive throwing some read errors, so I did the right thing and replaced it with
a spare.
Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
pool: bigpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was
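A rough sketch of how resilver progress is usually watched, using the pool name from the post (the 10-second interval is arbitrary):
# zpool status bigpool        # the "resilver in progress" line reports percent done and an estimated time to go
# zpool iostat -v bigpool 10  # per-vdev read/write throughput every 10 seconds while the resilver runs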
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
So, I looked up what the new name for the hot spare was, then added
it to the pool with "zpool
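What reportedly works in this situation, sketched under the assumption that the missing spare now shows up as UNAVAIL with a long numeric GUID in zpool status (the pool name and GUID below are made up):
# zpool status mypool                    # note the numeric GUID shown for the missing spare
# zpool remove mypool 4893264968749271   # a missing spare can be addressed by that GUID instead of a device path
# zpool add mypool spare c4t8d0          # then re-add the spare under its new controller name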
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
2007 Mar 07
0
anyone want a Solaris 10u3 core file...
I executed sync just before this happened....
ultra:ultra# mdb -k unix.0 vmcore.0
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy md ip sctp
usba fctl nca crypto zfs random nfs ptm cpc fcip sppp lofs ]
> $c
vpanic(7b653bd8, 7036fca0, 7036fc70, 7b652990, 0, 60002d0b480)
zio_done+0x284(60002d0b480, 0, a8, 7036fca0, 0, 60000b08d80)
zio_vdev_io_assess+0x178(60002d0b480, 8000,
2007 Mar 28
20
Gzip compression for ZFS
Adam,
The blog entry[1] you've made about gzip for ZFS raises
a couple of questions...
1) It would appear that a ZFS filesystem can support files compressed
with varying algorithms. If a file is compressed using
method A but method B is now active, and I truncate the file
and rewrite it, is A or B used?
2) The question of whether or not to use bzip2 was raised in
the
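On question 1: compression is applied per block at write time, so truncating and rewriting the file stores it with whichever algorithm is active at that point, while untouched blocks keep the algorithm they were originally written with. A small sketch (dataset name hypothetical):
# zfs set compression=gzip tank/data     # gzip at the default level (6) for newly written blocks
# zfs set compression=gzip-9 tank/data   # or an explicit level from gzip-1 to gzip-9
# zfs get compression,compressratio tank/data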
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance.
I've had this read problem for the past 2 months and just can't get to the
bottom of it. I have a home snv_111b server with a zfs raid pool (4 x Samsung
750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an
Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2007 Oct 08
6
zfs boot issue, changing device id
Hi,
Given two disks, c1t0d0 (DISK A) and c1t1d0 (DISK B)...
1/ Standard install on DISK A.
2/ zfs boot install on DISK B.
3/ I change the boot order and my zfs boot works fine.
4/ I install grub on the mbr of DISK B
5/ I disconnect and replace DISK A with DISK B
6/ Reboot, get the grub menu, select Solaris ZFS, and it panics that it
cannot mount root path @ device XXX...
This is not a ZFS
2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 5/08 (u5) on a SunFire T5220; this is our
first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as raidz using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0.
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
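For context, a sketch of the layout described above and the checks typically run after pulling a disk; note that zpool status can keep reporting ONLINE until I/O is actually issued to the missing disk or FMA diagnoses a fault (this is a hedged outline, not a definitive procedure):
# zpool create my_pool raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c1t7d0
# zpool status -x my_pool   # "all pools are healthy" versus a DEGRADED/FAULTED report
# zpool scrub my_pool       # forces I/O to every member disk so the pulled one gets noticed
# fmadm faulty              # shows any fault FMA has diagnosed for the disk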