similar to: Diagnosing Permanent Errors

Displaying 20 results from an estimated 4000 matches similar to: "Diagnosing Permanent Errors"

2010 Jul 16
6
Lost zpool after reboot
Hello, I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives: two in a ZFS mirror, one shared with Windows. For the last two days I was working in Windows. I didn't touch the hard drives in any way, except I once opened Disk Management to figure out why an external USB hard drive was not being listed.
2010 Feb 24
0
disks in zpool gone at the same time
Hi, yesterday I got all the disks in two zpools disconnected. They are not real disks - LUNs from a StorageTek 2530 array. What could that be - a failing LSI card or the mpt driver in 2009.06? After a reboot I got four disks in FAILED state - zpool clear fixed things with a resilver. Here is how it started (/var/adm/messages):
Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /pci at 0,0/pci10de,5d at
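(A minimal sketch of the recovery step mentioned above; "tank" and the device names are placeholders, not the poster's.)

  zpool clear tank        # reopen the devices and reset the error counters
  zpool status -v tank    # per-device READ/WRITE/CKSUM counters and resilver progress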
2012 Jan 08
0
Pool faulted in a bad way
Hello, I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other one is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
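(Not the thread's resolution - just a rough first-look sketch, assuming the pool is called "data"; on builds that support it, a read-only import avoids writing to suspect disks.)

  zpool import                       # list pools that are visible but not yet imported
  zpool import -o readonly=on data   # read-only import, where supported
  zpool status -v data               # which vdevs are FAULTED/UNAVAIL and why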
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5 x 500 GB disk RAID-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk-failure situation. Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors. Since I hadn't physically touched the machine, it seems a
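(A sketch of the scrub-and-inspect loop described above; "tank" is a placeholder pool name.)

  zpool scrub tank
  zpool status -v tank    # READ/WRITE/CKSUM counters per device, plus any files with errors
  zpool clear tank        # reset the counters before the next scrub pass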
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
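(Not from the thread - just a common way spare SSD slots get used with ZFS; "tank" and the device names are hypothetical.)

  zpool add tank log c1t6d0     # SSD as a dedicated intent-log (slog) device
  zpool add tank cache c1t7d0   # SSD as an L2ARC read cache
  zpool status tank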
2010 Mar 02
11
Expand zpool capacity
Hello, experts. I've got a problem. I'm trying to expand my main zpool (rpool), but don't know how to do that (I'm a 100% newbie in the non-Windows world). I use OSOL under VMware on Windows. I had a pretty small virtual HDD - only 12 GB. Yesterday I decided to expand my virtual drive to 20 GB. (After several tries to upgrade the OS to the newest dev releases and
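(A rough sketch of the usual steps once the underlying virtual disk has been grown; the device name is hypothetical, the autoexpand property only exists on newer builds, and on a root pool the slice usually has to be relabeled with format first.)

  zpool set autoexpand=on rpool     # grow the pool automatically when the disk grows
  zpool online -e rpool c8t0d0s0    # expand onto the new space explicitly
  zpool list rpool                  # SIZE should now reflect the larger disk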
2010 Apr 13
6
12-15 TB RAID storage recommendations
Hello listmates, I would like to build a 12-15 TB RAID 5 data server to run under CentOS. Any recommendations as far as hardware, configuration, etc.? Thanks. Boris.
2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn't work. GRUB reboots the machine moments after loading stage2, and doesn't recognise the fstype when examining the disk after loading from an alternate source. This is with SX-151. Here's hoping a future version (with grub2?) resolves this, as well as lets us boot from raidz. Just a note for the archives in case it helps someone else get back the afternoon
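(Not from the thread: a quick way to confirm what ashift a pool was created with; "tank" is a placeholder.)

  zdb -C tank | grep ashift     # ashift: 12 = 4 KiB-aligned allocations, ashift: 9 = 512 B
  # On platforms whose zpool accepts it (later illumos, ZFS on Linux):
  # zpool create -o ashift=12 tank c0t0d0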
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server with a ZFS RAID pool (4 x Samsung 750 GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4 GB of RAM. I am using it
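(Not from the thread: a few commands commonly used to see whether stalled reads point at one slow disk or at the pool as a whole; "tank" and the 5-second interval are arbitrary.)

  zpool status -v tank    # any devices with errors, or a hung scrub/resilver?
  zpool iostat -v tank 5  # per-vdev throughput every 5 seconds
  iostat -xn 5            # look for one disk with a huge asvc_t or %b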
1999 Nov 22
3
status of openssh for solaris?
In message <19991122110826.A23851 at wdawson-sun.sbs.siemens.com>, Willard Dawson writes:
> I just tried to compile, this time with openssh-1.2pre14, openssl-0.9.4
> and egd-0.6. I get considerably further along, but still not completely
> compiled. Here are the last bits:
>
> gcc -g -O2 -Wall -I/usr/local/ssl/include -DETCDIR=\"/usr/local/etc\" -DSSH_PR
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions, but I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
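(Illustrative only, not the thread's conclusion: the two raidz layouts most often weighed for a small home pool, with hypothetical device names.)

  zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0          # single parity
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0   # double parity, one more disk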
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey! I have a strange condition on one of the servers that I don't know where to start looking at. I log in to the server via SSH (can't do it any other way) and anything that I type is slow; HTTP sessions time out waiting for screen redraw. So, the server is acting "slow". The server is bare metal, no virtual services, no alarms in the disk RAID. Note: the server was restarted because of a power failure.
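(Not from the thread: a quick first pass when a bare-metal box feels sluggish after a power loss.)

  top                 # check load average, %wa (I/O wait) and %si (softirq time)
  dmesg | tail -50    # NIC link flapping or disk errors logged since boot?
  ip -s link          # per-interface error and drop counters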
2009 Feb 24
44
Motherboard for home zfs/solaris file server
Hello, I am building a home file server and am looking for an ATX motherboard that will be supported well by OpenSolaris (onboard SATA controller, network, graphics if any, audio, etc.). I decided to go for Intel-based boards (socket LGA 775) since it seems like power management is better supported with Intel processors, and power efficiency is an important factor. After reading several
2010 Jun 14
3
Diagnosing some OCFS2 error messages
Hello. I am experimenting with OCFS2 on SUSE Linux Enterprise Server 11 Service Pack 1. I am performing various stress tests. My current exercise involves writing to files using a shared-writable mmap() from two nodes. (Each node mmaps and writes to different files; I am not trying to access the same file from multiple nodes.) Both nodes are logging messages like these: [94355.116255]
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9 and an old revision (2009_06, because it's still running Xen). I sent it twice because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
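(A sketch of the transfer described above; the host, pool, and zvol names are placeholders, not the poster's.)

  zfs snapshot srcpool/vol1@migrate
  zfs send srcpool/vol1@migrate | ssh hostb zfs recv pool9/vol1     # ashift=9 pool
  zfs send srcpool/vol1@migrate | ssh hostb zfs recv pool12/vol1    # ashift=12 pool
  zfs list -o space pool9/vol1 pool12/vol1   # compare how much space each copy consumes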
2003 Dec 15
4
IP 500/600 1.1.0 Firmware
Has anyone on the list been able to locate and try out the 1.1.0 firmware? It was released in November, but I have yet to get my hands on it. The Polycom site has way more docs online, but the link to the firmware only brings up the release notes. -sb
2004 Jan 08
3
Progress on the Polycom front...
Hello, Good news on the Polycom front for those that are interested. It looks like we may get a dedicated Engineer for Polycom/Asterisk!!! Happy Day! Here's the message I got tonight: Matt: I heard back from our VP of Engineering- she is prepared to have an individual dedicated to working on the Digium-Asterisk project. Can we discuss again Friday or mid next week? Scott Willard
2011 Dec 21
8
Any rhyme or reason to disk dev names?
Hello, I am curious to know if there is an easy way to guess or identify the device names of disks. Previously the /dev/dsk/c0t0d0s0 system made sense to me... I had a SATA controller card with 8 ports, and they showed up with the numbers 1-8 in the "t" position of the device name. But I just built a new system with two LSI SAS HBAs in it, and my device names are along the lines of:
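(Not the poster's output: on SAS HBAs the "t" part is typically the drive's WWN, so names look like c0t5000C500A1B2C3D4d0 (hypothetical). Two common ways to map them back to physical drives:)

  echo | format    # lists every disk with its c#t#d# name and vendor/label
  iostat -En       # device names together with vendor, model and serial number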
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick, what was the value of 'si' in top? Best Regards, Strahil Nikolov
On 3 July 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote:
> It was found that the software NIC team created in CentOS was having
> issues due to a failing network cable. The team was going berserk with
> up/down changes.
>
> On Fri, Jul 3,
2007 Mar 23
2
ZFS ontop of SVM - CKSUM errors
Hi. bash-3.00# uname -a SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc I created first zpool (stripe of 85 disks) and did some simple stress testing - everything seems almost alright (~700MB seq reads, ~430 seqential writes). Then I destroyed pool and put SVM stripe on top the same disks utilizing the fact that zfs already put EFI and s0 represents almost entire disk. The on top on