search for: mailaddr

Displaying 18 results from an estimated 18 matches for "mailaddr".

2008 Jun 11
3
mdmonitor not triggering program on fail events
...rrect way" I'm trying to make mdmonitor execute a program automatically when it detects a fail event. Currently, from what I see, init is calling mdmonitor with these options: mdadm --monitor --scan -f (note that the --program is not there), and this is in my /etc/mdadm.conf: MAILADDR root PROGRAM /root/program_2_run.sh Short of hacking the mdmonitor script to hardcode the program there, is there an alternate, more elegant way? Thanks.
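For reference, a minimal sketch of the two ways the handler can be wired up, assuming a stock mdadm (the script path is the one from the post; the explicit command line is an alternative to editing the init script, not something the post confirms CentOS ships by default):

    # /etc/mdadm.conf -- PROGRAM should be picked up by monitor mode (see man mdadm.conf)
    MAILADDR root
    PROGRAM  /root/program_2_run.sh

    # equivalent explicit invocation of monitor mode
    mdadm --monitor --scan --daemonise --program /root/program_2_run.sh
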
2015 Apr 17
1
userdb username changed
...4/dovecot (...) Apr 17 09:27:34 imap21 dovecot: auth-worker(27661): Debug: sql(ppp at example.net): SELECT at.userid AS user, at.home AS home, at.uid AS uid, at.gid AS gid, concat('*:storage=', at.quotabytes, 'b:messages=', at.quotamessages) AS quota_rule FROM auth at INNER JOIN mailaddr mt ON at.userid = mt.userid WHERE mt.mailaddress = 'ppp at example.net' OR at.userid = 'ppp at example.net' Apr 17 09:27:34 imap21 dovecot: auth-worker(27661): Debug: sql(ppp at example.net): username changed ppp at example.net -> uppp Apr 17 09:27:34 imap21 dovecot: auth: Debug:...
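The query in that debug line, reformatted for readability (content exactly as logged, with the address obfuscated the way the archive shows it):

    SELECT at.userid AS user, at.home AS home, at.uid AS uid, at.gid AS gid,
           concat('*:storage=', at.quotabytes, 'b:messages=', at.quotamessages) AS quota_rule
    FROM auth at
    INNER JOIN mailaddr mt ON at.userid = mt.userid
    WHERE mt.mailaddress = 'ppp at example.net' OR at.userid = 'ppp at example.net'

The "username changed ppp at example.net -> uppp" line that follows appears to be the userid column from that join being returned as the canonical login name.
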
2018 Jan 24
2
Centos's way of handling mdadm
Hi, what's the proposed way of handling mdadm in Centos 7? I did not get any notification when a disk in a RAID1 failed, and now that the configuration has changed after resolving the problem, I might be supposed to somehow update /etc/mdadm.conf. Am I not supposed to be notified by default when something goes wrong with an array? How do I update /etc/mdadm.conf? I'm used to
2005 May 21
1
Software RAID CentOS4
...he new hdc drive... Also when I removed the new drive and added the original hdc, the swap partitions were active on hda and hdc but only hda on the other partitions. I had to add the other hdc partitions with mdadm -a. My mdadm.conf looks like: # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md1 super-minor=1 ARRAY /dev/md0 super-minor=0 ARRAY /dev/md3 super-minor=3 ARRAY /dev/md2 super-minor=2 Shouldn't there be more information for mdadm to work with? How do you replace a failed drive and have it auto-configured? TIA Gerald
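For a single failed member, the usual manage-mode cycle looks roughly like this (a sketch; md1 and hdc2 are placeholders, so check /proc/mdstat and mdadm --detail for the real names first):

    mdadm /dev/md1 --fail /dev/hdc2      # mark the member failed, if the kernel has not already
    mdadm /dev/md1 --remove /dev/hdc2    # detach it from the array
    # physically replace the disk and recreate the partition layout, then:
    mdadm /dev/md1 --add /dev/hdc2       # re-add; the array resyncs on its own
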
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA). mdadm.conf: # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3 /proc/mdstat: Personalities : [raid10] md127 : active raid10 sdf1[2](F) sdg1[3] sde1[1] sdd1[0] 1949480960 blocks super 1.2 512K chunks 2 near-copies [4/3...
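Before pulling anything, it usually helps to confirm which physical disk sdf actually is; a hedged sketch (smartctl comes from smartmontools and may not be installed):

    mdadm --detail /dev/md127            # shows the failed member and overall array state
    smartctl -i /dev/sdf                 # model and serial number, to identify the physical drive
    mdadm /dev/md127 --remove /dev/sdf1  # drop the failed member before swapping the disk
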
2014 Feb 14
1
lda+ldap multiple users
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Fri, 14 Feb 2014, matthias lay wrote: > On 02/14/2014 08:27 AM, Steffen Kaiser wrote: >> On Fri, 7 Feb 2014, matthias lay wrote: >> >>> I experienced that if a Mailaddress matches several users the delivery is >>> aborted. >>> >>> ---------------- >>> dovecot: auth: Error: ldap(christian.test at securepoint.de): LDAP search >>> returned multiple entries >>> dovecot: auth: ldap(christian.test at securepoint.d...
2010 Oct 19
3
more software raid questions
...l: md: running: <sda1> Oct 19 18:29:41 fcshome kernel: raid1: raid set md0 active with 1 out of 2 mirrors Oct 19 18:29:41 fcshome kernel: md: ... autorun DONE. and here's /etc/mdadm.conf: # cat /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR fredex ARRAY /dev/md0 level=raid1 num-devices=2 uuid=4eb13e45:b5228982:f03cd503:f935bd69 ARRAY /dev/md1 level=raid1 num-devices=2 uuid=5c79b138:e36d4286:df9cf6f6:62ae1f12 which doesn't say anything about md125 or md126,... might they be some kind of detritus or fragments left...
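One way to see where md125/md126 come from is to compare the on-disk superblocks with what mdadm.conf and the kernel report; a sketch:

    mdadm --examine --scan     # arrays as advertised by superblocks on the component devices
    cat /proc/mdstat           # arrays the kernel has actually assembled
    mdadm --detail /dev/md125  # which component devices the stray array is built from
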
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=5b017f95:b7e266cc:...
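If the "cached" information is actually coming from old superblocks on the member disks rather than from the conf file, something like the following shows it (a sketch; /dev/sdX1 is a placeholder, and --zero-superblock is destructive, so only use it on a device that is certainly no longer part of any array):

    mdadm --examine /dev/sdX1          # print the superblock a member device carries
    mdadm --detail --scan              # what the currently assembled arrays report
    mdadm --zero-superblock /dev/sdX1  # wipe stale metadata from a retired member
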
2018 Jan 24
0
Centos's way of handling mdadm
.../etc/mdadm.conf. > > Am I not supposed to be notified by default when something goes wrong > with an array? How do I update /etc/mdadm.conf? mdadm --detail --scan >> /etc/mdadm.conf.new > I'm used to all this working automagically. man mdadm (check MAILADDR and MAILFROM) -- LF
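Spelled out, that suggestion amounts to roughly the following (the .new suffix is just so the generated ARRAY lines can be reviewed before merging; the MAILFROM address is a placeholder):

    mdadm --detail --scan >> /etc/mdadm.conf.new   # regenerate ARRAY lines from the running arrays

    # in /etc/mdadm.conf, for mdmonitor notifications:
    MAILADDR root
    MAILFROM mdadm@myhost.example   # optional sender address, see man mdadm.conf
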
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are > my > options? > > Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA). > > mdadm.conf: > > # mdadm.conf written out by anaconda > MAILADDR root > AUTO +imsm +1.x -all > ARRAY /dev/md/root level=raid10 num-devices=4 > UUID=942f512e:2db8dc6c:71667abc:daf408c3 > > /proc/mdstat: > Personalities : [raid10] > md127 : active raid10 sdf1[2](F) sdg1[3] sde1[1] sdd1[0] > 1949480960 blocks super...
2009 Nov 11
2
Lost raid when server reboots
...RAY /dev/md0 level=raid1 num-devices=2 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe and it's ok. My mdadm.conf is: DEVICE partitions ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe devices=/dev/iopsda1,/dev/iopsdb1 MAILADDR root mdmonitor init script is activated. Why is md0 not activated when I reboot this server? How can I make this persistent between reboots? Many thanks. -- CL Martinez carlopmart {at} gmail {d0t} com
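A sketch of assembling by hand and then checking the conf afterwards (the device names are the iops* ones from the post; whether the initramfs also needs regenerating depends on the distro, so that step is left out):

    mdadm --assemble --scan                               # assemble everything listed in mdadm.conf
    mdadm --assemble /dev/md0 /dev/iopsda1 /dev/iopsdb1   # or assemble this one array explicitly
    mdadm --detail --scan                                 # compare the output with /etc/mdadm.conf
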
2011 Mar 08
0
Race condition with mdadm at bootup?
...boot process isn't good enough to know where to look. I tried to issue 'mdadm -A -s /dev/md/md_dXX' after booting, but all it does is complain about "No suitable drives found for /dev....." Here is the mdadm.conf file: ------------------------------------- MAILADDR root PROGRAM /root/bin/record_md_events.sh DEVICE partitions ##DEVICE /dev/sd* <<---- this didn't help. AUTO +imsm +1.x -all ## Host OS root arrays: ARRAY /dev/md0 metadata=1.0 num-devices=2 spares=1 UUID=75941adb:33e8fa6a:095a70fd:6fe72c69 ARRAY &...
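When --scan assembly races with device discovery, assembling one array explicitly by UUID can narrow things down; a sketch using the UUID from the conf above:

    mdadm --examine --scan                                          # are the component devices visible at all?
    mdadm --assemble /dev/md0 --uuid=75941adb:33e8fa6a:095a70fd:6fe72c69
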
2014 Feb 07
3
lda+ldap multiple users
Hi list and Timo, I use dovecot lda with ldap to do an email => user lookup. I experienced that if a Mailaddress matches several users, the delivery is aborted. ---------------- dovecot: auth: Error: ldap(christian.test at securepoint.de): LDAP search returned multiple entries dovecot: auth: ldap(christian.test at securepoint.de): unknown user dovecot: lda: Error: user christian.test at securepoint.de: A...
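If the duplicate hits come from a too-broad LDAP filter, the usual lever is user_filter/pass_filter in dovecot-ldap.conf.ext; a sketch (the objectClass and mail attribute are assumptions about the directory schema, not taken from the post):

    # dovecot-ldap.conf.ext (sketch)
    user_filter = (&(objectClass=posixAccount)(mail=%u))
    pass_filter = (&(objectClass=posixAccount)(mail=%u))
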
2014 Feb 14
1
lda+ldap multiple users
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Fri, 7 Feb 2014, matthias lay wrote: > I experienced that if a Mailaddress matches several users the delivery is > aborted. > > ---------------- > dovecot: auth: Error: ldap(christian.test at securepoint.de): LDAP search > returned multiple entries > dovecot: auth: ldap(christian.test at securepoint.de): unknown user > dovecot: lda: Error: user c...
2011 Apr 12
8
GUI Software Raid Monitor Software
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error only comes out during boot time. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2,5" disks, which are bound in RAID1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). The small md0 is used for booting and the rest - md1
2015 Nov 05
1
[PATCH 1/2] test-data: phony-guests: Don't use *.tmp.* temporary files.
...'/dev/sdb1']); $g->md_create ('root', ['/dev/sda2', '/dev/sdb2']); - open (my $mdadm, '>', "mdadm.tmp.$$") or die; + open (my $mdadm, '>', "fedora.mdadm") or die; print $mdadm <<EOF; MAILADDR root AUTO +imsm +1.x -all @@ -123,9 +123,9 @@ EOF } elsif ($ENV{LAYOUT} eq 'btrfs') { - push (@images, "fedora-btrfs.img.tmp.$$"); + push (@images, "fedora-btrfs.img-t"); - open (my $fstab, '>', "fstab.tmp.$$") or die; + open (my $fstab, &...
2011 Nov 23
8
[PATCH 0/8] Add MD inspection support to libguestfs
This series fixes inspection in the case that fstab contains references to md devices. I've made a few changes since the previous posting, which I've summarised below. [PATCH 1/8] build: Create an MD variant of the dummy Fedora image I've double checked that no timestamp is required in the Makefile. The script will not run a second time to build fedora-md2.img. [PATCH 2/8]