Displaying 20 results from an estimated 200 matches similar to: "LVM and ZFS"
2009 Jul 07
0
[perf-discuss] help diagnosing system hang
Interesting... I wonder what differs between your system and mine. With my dirt-simple stress test:
server1# zpool create X25E c1t15d0
server1# zfs set sharenfs=rw X25E
server1# chmod a+w /X25E
server2# cd /net/server1/X25E
server2# gtar zxf /var/tmp/emacs-22.3.tar.gz
and a fully patched X42420 running Solaris 10 U7, I still see these errors:
Jul 7 22:35:04 merope Error for Command:
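(As a hedged aside: when reproducing a test like this, the standard Solaris tools for checking whether such errors are SCSI-level retries are roughly the following; the pool and device names are taken from the commands above.)
server1# zpool status -v X25E     # any read/write/checksum errors recorded by ZFS?
server1# iostat -En c1t15d0       # per-device soft/hard/transport error counters
server1# fmdump -eV | tail        # recent FMA error telemetry, if any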
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting
with this problem and I wanted to throw it out here again.
All of our hardware is from Silicon Mechanics (SuperMicro chassis and
motherboards).
Up until now, all of the hardware has had a single 24-disk expander /
backplane -- but we recently got one of the new SC847-based models with
24 disks up front and 12 in the
2010 Feb 24
0
disks in zpool gone at the same time
Hi,
Yesterday all my disks in two zpools got disconnected.
They are not real disks - they are LUNs from a StorageTek 2530 array.
What could that be - a failing LSI card or the mpt driver in 2009.06?
After a reboot I got four disks in the FAILED state - zpool clear fixed
things after resilvering.
Here is how it started (/var/adm/messages)
Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info]
/pci@0,0/pci10de,5d@
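(A minimal sketch of the recovery path described above, assuming a pool named tank; the real pool names are not shown in the excerpt:)
# zpool status -x      # list only pools that currently have problems
# zpool clear tank     # clear the error counts and FAULTED state
# zpool status tank    # watch the resilver run to completion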
2008 Dec 04
11
help diagnosing system hang
Hi all,
First, I'll say my intent is not to spam a bunch of lists, but after
posting to opensolaris-discuss I had someone communicate with me offline
that these lists would possibly be a better place to start. So here we
are. For those on all three lists, sorry for the repetition.
Second, this message is meant to solicit help in diagnosing the issue
described below. Any hints on
2011 Feb 05
2
kernel messages question
Hi
I keep getting these messages on this one box. There are issues with at least one of the drives in it, but since there are some 80 drives in it, that's not really an issue. I just want to know, if anyone knows, what this kernel message means. Anyone?
Feb 5 19:35:57 prv-backup scsi: [ID 365881 kern.info] /pci@7a,0/pci8086,340e@7/pci1000,3140@0 (mpt1):
Feb 5 19:35:57
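(One hedged way to find out which drive a kern.info message like this refers to is to map the /devices path back to a cXtYdZ name and then check its error counters; nothing below is specific to this box.)
# ls -l /dev/rdsk/*s2 | grep 'pci1000,3140'   # map the path in the message to a cXtYdZ name
# iostat -En                                  # soft/hard/transport error counters per device
# fmadm faulty                                # any faults the FMA diagnosis engine has logged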
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a Supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and OpenSolaris snv79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1), we started over with new hardware and OpenSolaris 2008.05 upgraded to snv94. We again used a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get
2009 Jan 17
2
Comparison between the S-TEC Zeus and the Intel X25-E ??
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked at the Intel X25-E series, and they look comparable in
performance, at about 20% of the cost.
http://www.intel.com/design/flash/nand/extreme/index.htm
Can anyone enlighten me as to any possible difference between an STEC
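(If the intent, as in the other threads here, is to use the SSD as a separate ZIL/log device, attaching it is the same either way; the pool and device names below are made up for illustration.)
# zpool add tank log c1t15d0   # attach the SSD as a dedicated log (slog) device
# zpool status tank            # it then appears under a separate "logs" section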
2011 Aug 09
7
Disk IDs and DD
Hiya,
Is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero)? For some reason mine are like this (3 controllers - 1 onboard and 2 PCIe):
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63>
/pci@0,0/pci10de,cb84@5/disk@0,0
1. c8t1d0 <ATA-ST9160314AS-SDM1
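(A hedged way to see where the numbering comes from: follow the /dev/dsk symlink back to the physical /devices path shown by format, and look at the instance numbers recorded when each driver attached.)
# ls -l /dev/dsk/c8t0d0s0        # the symlink target is the /devices path printed by format
# grep sd /etc/path_to_inst      # instance numbers assigned at driver attach time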
2008 Jan 17
9
ATA UDMA data parity error
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
The following contents are included:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Level Info About My System
---------------------------------------------
- fresh
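(For anyone gathering the same set of outputs, the usual commands are, as a sketch with a placeholder pool name:)
# echo | format           # disk and controller inventory ("format output")
# zpool status -v tank    # per-device error counters and any permanent errors
# dmesg | tail -100       # recent kernel messages, including the parity error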
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all
I have constrained disk space (only 8 GB) while running the OS inside a VM. Now I
want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS?
I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right?
Doing the following:
o added a new virtual HDD (it becomes
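(A hedged sketch of the usual options, with placeholder pool and device names. A root pool cannot simply have a second vdev striped on, so the common trick is to attach a larger virtual disk as a mirror, let it resilver, detach the small one, and then expand; the -e flag and the autoexpand property only exist on builds newer than snv_111b.)
# zpool attach mypool c7d0 c7d1   # mirror onto the larger new virtual disk
# zpool status mypool             # wait here until the resilver completes
# zpool detach mypool c7d0        # drop the small original disk
# zpool online -e mypool c7d1     # newer builds: expand to the disk's full size
# zpool list mypool               # SIZE should now reflect the extra space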
2010 Mar 09
0
snv_133 mpt_sas driver
Hi all,
Today a new message has appeared on my system, and another freeze has
happened. The message is:
Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf
Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start
do passthru error 16
Mar 9 06:20:01 zfs01 scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci1000,3150@0 (mpt2):
Mar 9
2009 Dec 12
0
Messed up zpool (double device label)
Hi!
I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE 1394 support doesn't seem to be well engineered.
After it failed to recognize the new device, and after exporting and importing the existing zpool, I get this zpool status:
pool: tank
state: DEGRADED
status: One or more devices could not be used because the label is missing or
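(A hedged sketch of the usual next step for a missing or duplicated label, using the pool name from the status output; the export may refuse if the pool is busy.)
# zpool export tank
# zpool import -d /dev/dsk tank   # rescan the device links and re-read all labels
# zpool status -v tank            # see which device still reports a bad or missing label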
2009 Jan 21
8
cifs perfomance
Hello!
I am setting up a ZFS/CIFS home storage server, and now get low performance when playing movies stored on this ZFS from a Windows client. The server hardware is not new, but on Windows its performance was normal.
The CPU is an AMD Athlon Burton Thunderbird 2500 running at 1.7 GHz, with 1024 MB RAM, and storage:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All,
This may not be the correct mailing list, but I'm having a ZFS issue
when a disk is failing.
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1).
It makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each
of them connected to one of 4 different HBAs (2x LSI 3081E-R (1068 chip) + 2x
LSI
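(A common hedged workaround while a dying disk is stalling the whole HBA is to take it out of the pool and unconfigure it so the controller stops retrying; the pool and device names below are placeholders.)
# zpool offline tank c5t3d0              # stop ZFS from issuing I/O to the failing disk
# cfgadm -al | grep c5t3d0               # find its attachment point
# cfgadm -c unconfigure c5::dsk/c5t3d0   # detach it from the HBA until it is replaced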
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a
replacement that, as far as I can tell, should be adequate.
A history: this could well be some bizarro edge case, as the pool doesn't
have the cleanest lineage. Initial creation happened on NexentaCP inside
VMware on Linux. I had given the virtual machine raw device access to 4
500 GB drives and 1 ~200 GB
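(For reference, the replace itself is just the following, with placeholder names; the usual reason a replacement is refused is that it is slightly smaller than the smallest disk already in the raidz vdev.)
# zpool replace tank c2t3d0 c2t4d0   # old device, new device
# zpool status tank                  # shows "replacing" and the resilver progress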
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment call and now my raidz pool is corrupted. I have a
raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7
and tried to add my pool to FreeNAS. After adding the ZFS disks,
vdev, and pool, I decided to back out and went back to OpenSolaris. Now
my raidz pool will not mount and gives the following errors. I hope some
expert can help me recover from this error.
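(One hedged recovery sketch; the -F/-n rewind options only exist on builds much newer than b85, so this assumes booting a newer live image.)
# zpool import            # list importable pools and the state they report
# zpool import -f tank    # force the import despite the foreign (FreeNAS) hostid
# zpool import -Fn tank   # newer builds only: dry-run rewind to the last good txg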
2010 Jun 18
6
WD caviar/mpt issues
I know that this has been well-discussed already, but it's been a few months - WD Caviars with mpt/mpt_sas generating lots of retryable read errors, spitting out lots of the beloved "Log info 31080000 received for target" messages, and just generally not working right.
(SM 836EL1 and 836TQ chassis - though I have several variations on the theme depending on date of purchase: 836EL2s,
2006 May 09
3
Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple SATA disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6):
May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3):
May 9 16:47:43 sol timeout: abort request, target=0
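(After transient timeouts like these, a hedged first step is to let ZFS re-verify everything end to end; the pool name is a placeholder.)
# zpool scrub tank        # re-read and checksum every allocated block
# zpool status -v tank    # after the scrub: error counters and any files with permanent errors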
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives,
divided into three pools with each pool a single eight-disk RAID-Z2. (Boot
is an SSD connected to motherboard SATA.)
This morning I got a cheerful email from my monitoring script: "Zchecker has
discovered a problem on bigdawg." The full output is
2019 Dec 06
0
tinc-pre* between gentoo and raspbian
Dear all,
I have a bit of a complicated tinc setup yielding weird results that I
cannot explain. I would be glad if maybe someone here could help me out.
I have 3 machines (with IP addresses in my tinc network):
machine A (10.0.0.2) runs gentoo, tinc-1.1_pre17, behind router Y
machine B (10.0.0.3) runs gentoo, tinc-1.1pre15, behind router X
machine C (10.0.0.1) runs raspbian, tinc-1.1pre15, behind
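(For context, a minimal per-node tinc 1.1 configuration for machine A might look roughly like this; the network name "mynet", the node names, and the choice of meeting point are assumptions, not taken from the post.)
machineA$ cat /etc/tinc/mynet/tinc.conf
Name = machineA
ConnectTo = machineC          # 10.0.0.1, the node both routers can reach
machineA$ cat /etc/tinc/mynet/hosts/machineA
Subnet = 10.0.0.2/32          # the VPN address this node announces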