Displaying 20 results from an estimated 200 matches similar to: "Unable to allocate dma memory for extra SGL"
2011 May 10
5
Tuning disk failure detection?
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4 mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110610
And errors for the drive were
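For a disk that is throwing mpt_sas errors like the ones above, the usual first step on Solaris is to see what the fault manager and the per-device error counters already say; the /etc/system line is an assumption about the sd target driver and should be verified for the HBA actually in use (a hedged sketch, not the list's recommendation):
  # what has FMA already diagnosed, and which drive is accumulating errors?
  fmadm faulty
  iostat -En
  # possible /etc/system tuning (assumption: sd target driver), shortening the
  # default 60s per-command timeout so a dying drive is faulted sooner;
  # takes effect only after a reboot
  set sd:sd_io_time = 10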
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All,
This may not be the correct mailing list, but I'm having a ZFS issue
when a disk is failing.
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1).
This makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each
connected to a different HBA (2x LSI 3081E-R (1068 chip) + 2x
LSI
2010 Jun 18
6
WD caviar/mpt issues
I know that this has been well-discussed already, but it's been a few months - WD caviars with mpt/mpt_sas generating lots of retryable read errors, spitting out lots of beloved "Log info 31080000 received for target" messages, and just generally not working right.
(SM 836EL1 and 836TQ chassis - though I have several variations on theme depending on date of purchase: 836EL2s,
2008 Feb 05
2
ZFS+ config for 8 drives, mostly reads
Hi,
I posted in the Solaris install forum as well about the fileserver I'm building for media files but wanted to ask more specific questions about zfs here. The setup is 8x500GB SATAII drives to start and down the road another 4x750 SATAII drives, the machine will mostly be doing reads and streaming data over GigaE.
-I'm under the impression that ZFS+(ZFS2) is similar to
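For an 8-drive, read-mostly pool like the one being planned above, a single raidz2 vdev is one common layout; the device names below are placeholders (a hedged sketch, not advice from the thread):
  # one raidz2 vdev across the eight 500GB SATA disks (example device names)
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
  # the later 4x750GB disks could be added as a second raidz2 vdev
  zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
  zpool status tank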
2012 Jun 12
2
lost ZFS pool
hail,
I write just to make sure it's dead. I've lost the first disk on a ZFS pool (jbod). Now I can't
mount it with only the second disk. The first disk clicks to death :(
[root@optimus ~]# zpool status
pool: pool
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing
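For a non-redundant (striped) pool that has lost a member like the one above, there is usually nothing left to recover, but the standard attempt looks roughly like this; the device name is a placeholder (a hedged sketch):
  # if the clicking disk ever responds again, try to bring it back online
  zpool online pool c6t0d0
  # clear the error counters and re-check
  zpool clear pool
  zpool status -v pool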
2015 Apr 18
2
Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
Edgar, thanks for the help!
-------- Original Message --------
Subject: Re: Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
From: Edgar Pettijohn III <edgar at pettijohn-web.com>
To: David Gessel <gessel at blackrosetech.com>
Date: Sat Apr 18 2015 16:30:07 GMT+0300 (Arabic Standard Time)
>
> On Apr 18, 2015, at 8:00
2010 Oct 19
0
NFS/SATA lockups (svc_cots_kdup no slots free & sata port time out)
I have a Solaris 10 U8 box (142901-14) running as an NFS server with
a 23 disk zpool behind it (three RAIDZ2 vdevs).
We have a single Intel X-25E SSD operating as an slog ZIL device
attached to a SATA port on this machine's motherboard.
The rest of the drives are in a hot-swap enclosure.
Infrequently (maybe once every 4-6 weeks), the zpool on the box stops
responding and although we
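When a hang like the one above recurs, having per-device statistics running makes the stuck disk or slog stand out; these are standard Solaris tools (a hedged sketch):
  # a hung device shows up ~100% busy with outstanding requests and no completions
  iostat -xn 1
  # per-device soft/hard/transport error counters
  iostat -En
  # FMA error log, including sata port timeout ereports
  fmdump -eV | tail -50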
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and
removing "set sata:sata_func_enable = 0x5" from /etc/system to
re-enable NCQ, I am again observing drive disconnect error messages.
This is in spite of the patch description, which claims multiple fixes
in this area:
6587133 repeated DMA command timeouts and device resets on x4500
6538627 x4500 message logs contain multiple
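If the disconnects persist, the workaround quoted above can simply be put back; /etc/system is only read at boot (a sketch restating the setting from the post):
  # re-add the NCQ-disable workaround mentioned above
  echo 'set sata:sata_func_enable = 0x5' >> /etc/system
  # reboot for the change to take effect
  init 6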
2011 Aug 16
2
solaris 10u8 hangs with message Disconnected command timeout for Target 0
Hi,
My Solaris storage server hangs. I went to the console and there are messages [1]
displayed on it.
I can't log in on the console and it seems the I/O is totally blocked.
The system is Solaris 10u8 on a Dell R710 with a Dell MD3000 disk array. 2 HBA
cables connect the server and the MD3000.
The symptom is random.
It would be much appreciated if anyone can help me out.
Regards,
Ding
[1]
Aug 16
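With two HBA cables to the MD3000 it is worth confirming whether MPxIO multipathing is active and whether one path is dropping; the commands below assume MPxIO/mpathadm are in use (a hedged sketch):
  # is MPxIO managing the MD3000 LUNs, and over how many paths?
  stmsboot -L
  mpathadm list lu
  # look for transport errors and resets accumulating against the targets
  iostat -En
  fmdump -eV | tail -50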
2015 Apr 18
0
Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
On Apr 18, 2015, at 9:09 AM, David Gessel wrote:
> Edgar, thanks for the help!
>
> -------- Original Message --------
> Subject: Re: Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
> From: Edgar Pettijohn III <edgar at pettijohn-web.com>
> To: David Gessel <gessel at blackrosetech.com>
> Date: Sat Apr
2015 Apr 18
6
Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
-------- Original Message --------
Subject: Re: Dovecot 2.2.16: disappearing messages, mismatched summaries, duplicated messages, excessive full re-downloads
From: Timo Sirainen <tss at iki.fi>
To: David Gessel <gessel at blackrosetech.com>
Date: Sat Apr 18 2015 15:48:28 GMT+0300 (Arabic Standard Time)
> No. My best guess is that (your) ZFS+FreeBSD is simply not behaving the way
2007 Jan 10
1
Solaris 10 11/06
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: a later patch revision may already be available):
Solaris 10 Update 3 (11/06) Patches
sparc Patches
* 118833-36 SunOS 5.10:
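Checking whether a listed patch (or a later revision) is already installed, and applying one, uses the standard patch tools; the download path is a placeholder (a hedged sketch):
  # already installed?
  showrev -p | grep 118833
  # apply an unpacked patch directory
  patchadd /var/tmp/118833-36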
2010 Feb 07
0
disk devices missing but zfs uses them ?
Hello I have a strange issue,
I'm having a setup with a 24-disk enclosure connected with an LSI3801-R. I created two pools. The pools have 24 healthy disks.
I disabled LUN persistency on the LSI adapter.
When I cold boot the server (power off by pulling all power cables), a warning is shown on the console during boot:
....
WARNING: /pci/path/ ... (mpt0):
wwn for target has changed
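When device nodes go missing after a cold boot like this, the usual step is to rebuild the /dev links and compare the controller's view of the targets with what ZFS is using; these are standard Solaris commands (a hedged sketch):
  # remove stale /dev links and create nodes for newly discovered devices
  devfsadm -Cv
  # list attachment points and occupant state per controller
  cfgadm -al
  # compare with the devices the pools claim to be using
  zpool status -v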
2007 Sep 19
7
ZFS Solaris 10 Update 4 Patches
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug
fixes:
Features:
PSARC 2006/288 zpool history
PSARC 2006/308 zfs list sort option
PSARC 2006/479 zfs receive -F
PSARC 2006/486 ZFS canmount
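After the kernel patches are installed, existing pools stay at their old on-disk version until explicitly upgraded; these are the standard commands (a hedged sketch):
  # versions supported by the running software
  zpool upgrade -v
  # which pools are still below the current version?
  zpool upgrade
  # upgrade all pools (one-way: older kernels can no longer import them)
  zpool upgrade -a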
2010 Mar 09
0
snv_133 mpt_sas driver
Hi all,
Today a new message has been seen on my system and another freeze has
happened to it.
The message is :
Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf
Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start
do passthru error 16
Mar 9 06:20:01 zfs01 scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci1000,3150@0 (mpt2):
Mar 9
2009 Jan 10
3
Problems with dovecot
Hi all,
I have just been compiling dovecot 1.1.6-1.1.8 on a Solaris 10 x86
machine with Sun Studio 12.
It compiles correctly, but when I want to run dovecot I am getting this
error message:
Jan 10 17:16:16 Carolyn dovecot: [ID 762119 mail.info] Dovecot v1.1.8
starting up
Jan 10 17:16:16 Carolyn dovecot: [ID 107833 mail.warning] auth(default):
Growing pool 'mysql driver' with: 1024
Jan
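The quoted lines are info/warning messages rather than the failure itself; to see what actually goes wrong it usually helps to dump the effective configuration and raise the auth logging level (a hedged sketch; the settings are dovecot 1.1 names and the log path is a placeholder):
  # print the non-default configuration dovecot will actually use
  dovecot -n
  # in dovecot.conf, enable verbose auth logging, then restart:
  #   auth_verbose = yes
  #   auth_debug = yes
  # watch the mail log (path depends on syslog.conf)
  tail -f /var/adm/messages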
2008 Aug 19
2
assertion failed
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I just switched over to dovecot 1.1.2 on our live system last night. I'm
seeing the following errors in the logs:
Aug 19 10:01:03 goku dovecot: [ID 107833 mail.crit] Panic: IMAP(elevin):
file index-sync.c: line 39 (index_mailbox_set_recent_uid): assertion failed:
(seq_range_exists(&ibox->recent_flags, uid))
I did apply the assertion fix
2008 May 14
2
Dovecot crash in 1.0.9
I've been having a problem with a crashing Dovecot the last couple of
months where it
crashes after a couple of weeks of uptime. Been a bit difficult to
diagnose. Anyway,
we're now running 1.0.9 on a Sun Fire T1000 running Solaris 10.
Any good ideas on how to pin-point the problem?
- Peter
Here's the output from syslog when the last crash happened:
May 14 11:46:50 ifm.liu.se
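One way to pin-point a crash like this on Solaris is to make sure core dumps are captured globally and then pull a stack from the core; the directory, pattern and core name are placeholders (a hedged sketch):
  # enable global core dumps with an identifiable name pattern
  mkdir -p /var/cores
  coreadm -g /var/cores/core.%f.%p -e global
  # after the next crash, get a user-level stack trace from the core
  pstack /var/cores/core.imap.12345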
2008 Aug 04
2
Help with auto vacation replies
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hello,
I'm testing out dovecot with postfix. I'm running postfix 2.5.2 with
dovecot version 1.1.1 and the dovecot sieve plugin version 1.1.5.
I have everything compiled and working except for the sieve plugin. I'm
now trying to test out the sieve plugin and having no luck in getting a
simple auto vacation reply to work. Here is my
2008 May 12
3
Automounted home dirs not working
I'm testing Dovecot as a possible replacement for UW. In my environment
the home directories are automounted via NFS from a NetApp. In general
this works fine, but Dovecot isn't picking up the automounted
directories. Consider the case of Arthur Dent, test user:
May 12 10:30:24 testbed dovecot: [ID 107833 mail.info] imap-login:
Login: user=<adent>, method=PLAIN,
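For an automount case like the one above, it helps to check what home directory the naming service hands to Dovecot and whether touching that path actually triggers the NFS mount; the path is an assumption based on the user shown (a hedged sketch):
  # what home directory is returned for the test user?
  getent passwd adent
  # accessing the path should force the automount before dovecot needs it
  ls -d /home/adent
  df -h /home/adent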