
Displaying 20 results from an estimated 1000 matches similar to: "A disk on Thumper giving random CKSUM error counts"

2007 Mar 23
2
ZFS on top of SVM - CKSUM errors
Hi. bash-3.00# uname -a SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc I created a first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost all right (~700MB sequential reads, ~430MB sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, taking advantage of the fact that ZFS had already put an EFI label on them and s0 represents almost the entire disk. The on top on
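A rough sketch of the kind of comparison being described, using placeholder pool and device names rather than the poster's 85 actual disks:

  # ZFS stripe across whole disks, plus a crude sequential write/read test
  zpool create tank c0t0d0 c0t1d0 c0t2d0          # ...add the remaining disks
  dd if=/dev/zero of=/tank/bigfile bs=1024k count=16384
  dd if=/tank/bigfile of=/dev/null bs=1024k
  # rebuild the same disks as an SVM stripe on the s0 slices
  zpool destroy tank
  metainit d10 1 3 c0t0d0s0 c0t1d0s0 c0t2d0s0 -i 128k
  # (SVM also needs state database replicas via metadb, not shown here)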
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our x4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual was done: HDD replaced, resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
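For reference, the usual shape of a disk swap on an X4500; a hedged sketch only, with hypothetical pool, device, and cfgadm attachment-point names (the real Ap_Id for a slot comes from cfgadm -al):

  zpool status -x                    # identify the failing disk
  zpool offline bigpool c1t3d0       # hypothetical pool/device names
  cfgadm -al | grep sata             # find the attachment point for that disk
  cfgadm -c unconfigure sata1/3      # spin the disk down before pulling it
  # ...physically swap the drive, then:
  cfgadm -c configure sata1/3
  zpool replace bigpool c1t3d0       # resilvering starts; watch zpool status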
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck? Also, if
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao, the root filesystem of my Thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
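A single-disk pool like this can normally be turned into a mirror after the fact with zpool attach; a minimal sketch, assuming the second disk (c5t4d0s0 here is a placeholder) has already been labelled to match and that x86 boot blocks still need installing:

  # attach a second device to the existing single-disk top-level vdev
  zpool attach rpool c5t0d0s0 c5t4d0s0
  zpool status rpool                 # wait for the resilver to complete
  # on x86, make the new half of the mirror bootable as well
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0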
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area: 6587133 repeated DMA command timeouts and device resets on x4500; 6538627 x4500 message logs contain multiple
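For anyone checking whether a box is running with or without that workaround, a quick sketch (the patch ID comes from the post; the grep patterns are merely illustrative):

  showrev -p | grep 125205             # which revision of the patch is applied?
  grep sata_func_enable /etc/system    # is the NCQ-disabling workaround still present?
  # to re-apply the workaround, add the line back to /etc/system and reboot:
  #   set sata:sata_func_enable = 0x5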
2008 Jan 30
2
Convert MBOX
To all, I am using dovecot --version 1.0.10 and trying to convert mboxes to Maildirs, with the end goal of having each user's old mboxes converted to Maildir format the first time they log in. I tried this and it did not work; it gave me this output: <snip> default_mail_env = maildir:%h/mail/ #convert_mail =
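In Dovecot 1.0.x the usual route for this is the convert plugin rather than default_mail_env alone; a hedged sketch of the relevant dovecot.conf pieces, where the mbox path is a placeholder to adjust:

  # deliver and store mail as Maildir (the poster's existing setting)
  default_mail_env = maildir:%h/mail/

  protocol imap {
    mail_plugins = convert
  }

  plugin {
    # convert the user's old mbox hierarchy on first login (placeholder path)
    convert_mail = mbox:%h/oldmail/
  }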
2008 Dec 01
4
Is SUNWhd for Thumper only?
I read Ben Rockwood's blog post about Thumpers and SMART (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd package only work on a Thumper? Can I use this on my snv_101 system with an AMD 64-bit processor and nVidia SATA?
2009 Sep 17
0
stat() performance on files on zfs vs. ufs
Hi, Bug ID 6775100, "stat() performance on files on zfs should be improved", was fixed in snv_119. I wanted to do a quick comparison between snv_117 and snv_122 on my workstation to see what kind of improvement there is. I wrote a small C program which does a stat() N times in a loop. This is of course a micro-benchmark. Additionally, it doesn't cover the case of doing stat() on non-cached
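The micro-benchmark described is straightforward to reproduce; a minimal sketch only, where the file name, iteration count, and build/run commands in the header comment are arbitrary choices, not the poster's actual program:

  /*
   * statbench.c - stat() the same path N times in a tight loop.
   * Build and run (Solaris): cc -O statbench.c -o statbench
   *                          ptime ./statbench /tank/somefile 1000000
   * Run the same thing against a ufs path to compare.
   */
  #include <sys/stat.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
          struct stat st;
          long i, n;

          if (argc != 3) {
                  fprintf(stderr, "usage: %s path count\n", argv[0]);
                  return 1;
          }
          n = atol(argv[2]);
          for (i = 0; i < n; i++) {
                  if (stat(argv[1], &st) != 0) {
                          perror("stat");
                          return 1;
                  }
          }
          return 0;
  }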
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity: # zpool status pool: bigpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
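Progress of a run like this is usually just watched from zpool status; a trivial sketch (pool name taken from the post, interval arbitrary):

  # print the resilver progress / time-to-go line every five minutes
  while :; do
          date
          zpool status bigpool | egrep -i 'scrub|resilver'
          sleep 300
  done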
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts: A customer has an X4500 with the boot drives mirrored (c5t0d0s0 and c5t4d0s0) by SVM; ZFS uses two other partitions on these same drives (c5t0d0s3 and c5t4d0s3). If we need to replace the disk drive c5t0d0, do we need to do anything on the ZFS side (c5t0d0s3 and c5t4d0s3) first, or just follow the regular boot drive replacement procedure? Below is a summary of their current ZFS
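Broadly, the SVM boot mirror and the ZFS slices are handled separately; a hedged outline using the disk names from the post but hypothetical metadevice names, pool name, and metadb slice (the real ones come from metastat -p, zpool status, and metadb):

  metastat -p                          # note which submirrors live on c5t0d0
  metadetach d10 d11                   # hypothetical: detach each submirror on c5t0d0
  metadb -d c5t0d0s7                   # remove any state database replicas on that disk
  zpool offline tank c5t0d0s3          # hypothetical pool name
  # ...replace the drive, copy the partition table from the survivor, then:
  prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
  metadb -a c5t0d0s7                   # re-create replicas if they lived there
  metattach d10 d11                    # SVM resyncs the boot mirror
  zpool replace tank c5t0d0s3          # ZFS resilvers its slice
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0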
2018 Sep 06
0
bad udp cksum
I have seen such an issue, but it depends on the structure of the environment. I have seen it mostly on VMs, and it was resolved. The hardware and software details of the setup might help in understanding what's causing it. Eliezer ---- Eliezer Croitoru Linux System Administrator Mobile: +972-5-28704261 Email: eliezer at ngtech.co.il
2018 Aug 10
0
bad udp cksum
> Hi, Recently I'm noticing an interesting issue. My CentOS servers are trying to send logs to a logging server via 514/udp, however I'm not receiving anything. I did the following on CentOS: tcpdump -vvv -nn udp -i esn160 port 514 In another session on the same server: nc syslog-server -u 514 tcpdump started to show me
2003 Jun 28
1
'vinum list' weird output
Hi, I'm not sure this is supposed to happen (my computer rebooted halfway through adding another drive to a volume): [long lines needed]
(02:44:50 <~>) 0 $ sudo vinum ld
D storage    State: up  Device /dev/ad0d  Avail: 38166/38166 MB (100%)
D worthless  State: up  Device /dev/ad2d  Avail: 38038/38038 MB (100%)
D barracuda
2007 Mar 20
1
High Pitched Noise
Question: After having the server running for about an hour, our callers occasionally hear a high-pitched beep that lasts the entire call. In some cases the noise doesn't start until a minute or two into the call, while in others it lasts the entire call. In some of the more serious cases, calls are dropped after the noise has occurred as well. Another symptom has been really bad static on a
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
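On builds of that vintage the scrub/resilver code has a few throttling knobs, but whether they exist depends on the exact build, so treat the names below as an assumption and check them before writing anything (values are illustrative, and changes made this way do not survive a reboot):

  # verify the tunables exist on this build first
  echo 'zfs_resilver_delay/D'       | mdb -k
  echo 'zfs_resilver_min_time_ms/D' | mdb -k
  # example only: make the resilver yield more readily to application I/O
  echo 'zfs_resilver_delay/W0t4'    | mdb -kw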
2006 Jan 28
4
bad udp cksum on DNS request in domU
Hello, I use Xen 3.0 on Debian sarge. I have a domU1 for routing and firewalling. This domU1 uses two network interfaces, one on bridge 'gate' and the other on bridge 'lan'. domU2 uses one interface (eth0) on bridge 'lan'. When I ping www.debian.de from domU2 I get no answer; the name isn't resolved. A ping to the IP of
2008 Jul 02
0
Q: grow zpool built on top of iSCSI devices
Hi all. We are currently rolling out a number of iSCSI servers based on Thumpers (x4500) running both Solaris 10 and OpenSolaris build 90+. The targets on these machines are backed by ZVOLs. Some of the clients use those iSCSI "disks" to build mirrored zpools. As the volume size on the x4500 can easily be grown, I would like to know whether that growth in space can be propagated to the client
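A hedged sketch of the two halves, with placeholder dataset, pool, and device names; how the initiator picks up the new size depends on the build (older ones needed an export/import, later ones gained the autoexpand property and zpool online -e):

  # on the x4500 target: grow the backing zvol
  zfs set volsize=2T tank/iscsi/vol01
  # on the client, once the larger LUN is visible:
  zpool set autoexpand=on datapool      # only on builds that have the property
  zpool online -e datapool c2t0d0       # or: zpool export datapool && zpool import datapool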
2018 Aug 09
4
bad udp cksum
Hi, Recently I'm noticing an interesting issue. My CentOS servers are trying to send logs to a logging server via 514/udp, however I'm not receiving anything. I did the following on CentOS: tcpdump -vvv -nn udp -i esn160 port 514 In another session on the same server: nc syslog-server -u 514 tcpdump started to show me messages like: [bad udp cksum 0x3ce9 -> 0xb0f5!] SYSLOG,
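One thing worth checking for this symptom: on the sending host, tcpdump often reports "bad udp cksum" simply because the NIC (or a VM's virtual NIC) does checksum offload, so the checksum has not been filled in yet at the point of capture. A quick way to check, and to rule it out as a temporary test only (interface name taken from the post):

  ethtool -k esn160 | grep -i checksum   # is tx/UDP checksum offload enabled?
  # capture on the receiving server as well; if the packets arrive there,
  # the "bad cksum" on the sender is just offload noise
  tcpdump -vvv -nn -i any udp port 514
  ethtool -K esn160 tx off               # experiment only: disable tx checksum offload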
2007 Jan 25
4
high density SAS
Well, Solaris SAS isn't there yet, but anyway I just found some interesting high-density SAS/SATA enclosures. <http://xtore.com/product_list.asp?cat=JBOD> The XJ 2000 is like the x4500 in that it holds 48 drives; however, with the XJ 2000, 2 drives are on each carrier and you can get to them from the front. I don't like xtore in general, but the 24-bay (2.5" SAS) and 48
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min... Anyway, I set c_max to 1GB. After a workload run: > arc::print -tad { . . . ffffffffc02e29e8
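For completeness, the two usual ways of capping the ARC in that era; a sketch with an illustrative 1GB value (the /etc/system tunable needs a sufficiently recent Solaris 10 update, and a live mdb poke does not survive a reboot):

  # persistent cap, where zfs_arc_max is supported: add to /etc/system and reboot
  #   set zfs:zfs_arc_max = 0x40000000
  # live: print the addresses and current values of the ARC limits...
  echo 'arc::print -a c_min c_max' | mdb -k
  # ...then write the new c_max at the address printed above (hypothetical address)
  # echo '<c_max-address>/Z 0x40000000' | mdb -kw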