similar to: Single-disk rpool with inconsistent checksums, import fails

Displaying 20 results from an estimated 100 matches similar to: "Single-disk rpool with inconsistent checksums, import fails"

2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on a thumper. Space maps take over 3GB and write operations generate massive read operations. Before every spa sync phase zfs reads space maps from disk. I decided to turn on compression for the pool (only for the pool, not filesystems) and it helps. Now space maps, intent log, spa history are compressed. Now I'm thinking about disabling checksums. All
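A minimal sketch of what the poster describes, enabling compression at the pool's root dataset while keeping individual filesystems uncompressed; the pool and filesystem names here are placeholders:

# Compress pool-level metadata such as space maps by enabling compression on the root dataset
zfs set compression=on thumperpool
# Override the inherited value on each filesystem so regular file data stays uncompressed
zfs set compression=off thumperpool/data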
2009 Jul 16
1
An amusing scrub
Today, I ran a scrub on my rootFS pool. I received the following lovely output: # zpool status larger_root  pool: larger_root  state: ONLINE  scrub: scrub completed after 307445734561825856h29m with 0 errors on Wed Jul 15 21:49:02 2009  config:  NAME STATE READ WRITE CKSUM  larger_root ONLINE 0 0 0  c4t1d0s0 ONLINE 0 0 0  errors: No
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands in the terminal at it. Now, after the system had rebooted, zfs will not import my pool any longer and instead the kernel will panic again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
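When an import panics the machine, a commonly tried first step (assuming a build new enough to have recovery mode) is a dry-run rollback of the last few transaction groups before committing to it; the pool name below is a placeholder:

# Dry run: report whether discarding the most recent transactions would make the pool importable
zpool import -Fn tank
# If the dry run looks sane, perform the actual recovery import
zpool import -F tank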
2012 Jan 17
6
Failing WD desktop drive in mirror, how to identify?
I have a desktop system with 2 ZFS mirrors. One drive in one mirror is starting to produce read errors and slowing things down dramatically. I detached it and the system is running fine. I can't tell which drive it is though! The error message and format command let me know which pair the bad drive is in, but I don't know how to get any more info than that, like the serial number
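On Solaris, iostat can usually map a cXtYdZ name to a physical drive's serial number; a short sketch, with a placeholder device name:

# Print extended device information (Vendor, Product, Serial No.) for every disk
iostat -En
# Or restrict it to the suspect device reported by format / zpool status
iostat -En c1t2d0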
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum, I did something stupid the other day: I managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML Raid card, I got errors on all drives resulting from SCSI timeouts. yoda:~ # tail -f /var/adm/messages Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776 Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2010 Feb 18
2
Killing an EFI label
Since this seems to be a ubiquitous problem for people running ZFS, even though it's really a general Solaris admin issue, I'm guessing the expertise is actually here, so I'm asking here. I found lots of online pages telling how to do it. None of them were correct or complete. I think. I seem to have accomplished it in a somewhat hackish fashion, possibly not
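The usual non-hackish route is format's expert mode, which offers SMI as a label choice; a rough sketch with a placeholder disk name:

# Expert mode is required for format to offer the SMI/EFI label choice
format -e c2t0d0
# Inside format: run "label", pick "0" for SMI, then "partition" to rebuild the VTOC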
2011 Nov 19
0
"zfs hold" and "zfs send" on a readonly pool
Hello, all. I'm in the process of repairing a corrupted unmirrored rpool, and my current idea was to evacuate all reachable data by "zfs send" to the redundant data pool, then recreate and repopulate the rpool with copies=2. As I previously wrote, my machine crashes when trying to import the rpool in any sort of read-write mode, however I got it to import with
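A sketch of the evacuation idea, assuming recursive snapshots of the rpool already exist (new ones cannot be taken on a read-only import); the dataset and snapshot names are placeholders:

# Create a holding area on the redundant data pool
zfs create datapool/rpool-evac
# Replicate an existing recursive snapshot of the damaged rpool into it
zfs send -R rpool@rescue | zfs receive -d datapool/rpool-evac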
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06 because it's still running xen). I sent it twice, because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
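To confirm which ashift each pool actually ended up with, zdb can dump the cached pool configuration; a quick check:

# Dump the cached configuration of the imported pools and show the per-vdev ashift
zdb | grep ashift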
2006 Jul 03
8
[raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim: 2 3 root@mir pts/9 ~ 78# uname -a SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP 0 3 root@mir pts/9 ~ 50# zfs list NAME USED AVAIL REFER MOUNTPOINT mirpool1 33.6G 0 137K /mirpool1 mirpool1/home 12.3G 0 12.3G /export/home mirpool1/install 12.9G
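The workaround usually suggested for a pool that is completely full (copy-on-write needs a little free space even to unlink) is to truncate the file first, or destroy a snapshot, and then remove it; the file name is a placeholder:

# Truncating in place releases the file's data blocks
cp /dev/null /export/home/bigfile
# With some space freed, the unlink can proceed
rm /export/home/bigfile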
2011 Jun 24
13
Fixing txg commit frequency
Hi All, I'd like to ask about whether there is a method to enforce a certain txg commit frequency on ZFS. I'm doing a large amount of video streaming from a storage pool while also slowly continuously writing a constant volume of data to it (using a normal file descriptor, *not* in O_SYNC). When reading volume goes over a certain threshold (and average pool load over ~50%), ZFS
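There is no per-pool property for this, but the txg timer is a kernel tunable; a sketch of lowering it, with example values only:

# Persistent: append the tunable to /etc/system (takes effect after reboot)
echo 'set zfs:zfs_txg_timeout = 1' >> /etc/system
# Live, for experimentation: patch the running kernel with mdb
echo 'zfs_txg_timeout/W 0t1' | mdb -kw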
2008 Nov 16
8
Mirror and RaidZ on only 3 disks
Hi, I have a small Linux server PC at home (Intel Core2 Q9300, 4 GB RAM), and I'm seriously considering switching to OpenSolaris (Indiana, 2008.11) in the near future, mainly because of ZFS. The idea is to run the existing CentOS 4.7 system inside a VM and let it NFS mount home directories and other filesystems from OpenSolaris. I might migrate more services from Linux over time, but for
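The common approach is to slice each disk and build the two pools on matching slices, e.g. a small mirror for the root pool on s0 and a raidz across the large s1 slices; device names below are placeholders:

# Small mirrored root pool on the first slice of two of the disks
zpool create rpool mirror c0t0d0s0 c0t1d0s0
# Large raidz data pool across the remaining slice of all three disks
zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1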
2009 Apr 15
3
MySQL on ZFS Performance (fsync) Problem?
Hi, all. I did some tests of MySQL's insert performance on ZFS and ran into a big performance problem; *I'm not sure what the cause is*. Environment: 2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel). A Java client runs 8 threads concurrently inserting into one InnoDB table: *~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1 ~600 qps when sync_binlog=10
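Two ZFS-side adjustments commonly tried for InnoDB-style synchronous workloads: match the dataset recordsize to InnoDB's 16K page and give the intent log a dedicated fast device. Dataset, pool and device names are placeholders:

# Match recordsize to the InnoDB page size before loading data
zfs set recordsize=16k tank/mysql
# Dedicate one of the SSDs as a separate intent-log (slog) device for synchronous writes
zpool add tank log c3t5d0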
2008 May 04
2
Inconsistencies with scrub and zdb
Hi List, First of all: S10u4 120011-14. So I have a weird situation. Earlier this week, I finally mirrored up two iSCSI based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other pool is another story. First lesson (I think) is you should scrub your pools, at least those backed by
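For comparing what the two tools report, a rough sketch with a placeholder pool name:

# Scrub, then list any files or metadata with checksum errors
zpool scrub iscsipool
zpool status -v iscsipool
# Independently walk the pool with zdb; -c verifies metadata checksums, -cc data blocks as well
zdb -cc iscsipool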
2011 Sep 01
1
No buffer space available - loses network connectivity
Hi, I have a CentOS 5.6 Xen VPS which loses network connectivity once in a while with the following error. ========================================= -bash-3.2# ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available
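One frequent cause of this error on Linux guests is an overflowing neighbour (ARP) table rather than a real memory shortage; a sketch for checking and raising the thresholds (values are examples only):

# Count current neighbour entries
ip neigh show | wc -l
# Compare against the garbage-collection ceiling
sysctl net.ipv4.neigh.default.gc_thresh3
# Raise it if the table is full
sysctl -w net.ipv4.neigh.default.gc_thresh3=4096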
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subsequent data migration went fine. However, when I attempted to attach the second side mirrors as a mirror of the ZFS pool, all
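For reference, the attach in question is of the form below (pool and device names are placeholders); the resilver it starts can then be watched with zpool status:

# Attach a second device to an existing single-device vdev, turning it into a mirror
zpool attach rpool c1t0d0s0 c1t1d0s0
# Monitor resilver progress
zpool status rpool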
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general symptoms are that the number of imapd, pop3d, and lmtpd processes increases, the CPU load average increases, but the ZFS I/O bandwidth decreases. At the same time, ZFS
2010 Apr 04
15
Diagnosing Permanent Errors
I would like to get some help diagnosing permanent errors on my files. The machine in question has 12 1TB disks connected to an Areca raid card. I installed OpenSolaris build 134 and, according to zpool history, created a pool with zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c4t0d5 c4t0d6 c4t0d7 c4t1d0 c4t1d1 c4t1d2 c4t1d3. I then backed up 806G of files to the machine, and had
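The list of affected files comes from zpool status in verbose mode; a minimal check using the pool name from the post:

# -v prints the paths (or object numbers) of files with permanent errors
zpool status -v bigraid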
2009 Jul 01
14
can't boot 2009.06 domU on Xen 3.4.1 / CentOS 5.3 dom0
I've got a CentOS 5.3 dom0 with Xen 3.4.1-rc5 (or so). I've tried the same stuff below with 3.4.0, no difference. I'm trying to install a 2009.06 PV domU based on instructions from [1] and [2]. I can run the install fine, I can also get the kernel and boot archive (from [2]) after the install. But for the life of me I can't get the installed domU to boot. If I
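For comparison, a PV domU configuration for 2009.06 on a Linux dom0 roughly follows the sketch below; every path, the disk line and the zfs-bootfs value are assumptions and have to match the actual install:

# Write a guest config; kernel and boot_archive are the copies pulled out of the installed domU
cat > /etc/xen/osol2009.cfg <<'EOF'
name    = "osol2009"
memory  = 1024
kernel  = "/var/lib/xen/osol/unix"
ramdisk = "/var/lib/xen/osol/boot_archive"
extra   = "/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/ROOT/opensolaris"
disk    = ['phy:/dev/vg0/osol,xvda,w']
vif     = ['bridge=xenbr0']
EOF
# Start the guest with an attached console
xm create -c /etc/xen/osol2009.cfg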