search for: e8ee

Displaying 12 results from an estimated 16 matches for "e8ee".

2007 Oct 19
1
Dovecot v1.0.3 -> Sieve "redirect" command returning Sendmail exit status 75
...address. When this rule matches, deliver reports in the log (attached): "Sendmail process terminated abnormally, exit status: 75". What would cause this, and how can I resolve it? Thanks. -- Elisamuel Resto <samuel at dragonboricua.net> ID: 0x18615F19 / FP: B66D 1C2A E8EE B922 1D9C D98F D2D5 FB61 1861 5F19 -------------- next part -------------- exim[30998]: 2007-10-18 22:26:44 1IihZU-00083y-90 <= me at isp.net H=mx.isp.net [1.2.3.4] P=esmtp S=1658 id=20071018222643.55kjplbxdq80o8kg at webmail.isp.net T="redirect me" exim[31003]: 2007-10-18 22:27:00 1Ii...
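
Exit status 75 is EX_TEMPFAIL from sysexits.h: the sendmail binary that deliver invoked (here Exim's sendmail-compatible interface) signalled a temporary failure, so the redirected message is deferred rather than bounced. A minimal sketch of the kind of Sieve rule described, with the redirect target as a placeholder:

    # Sketch of a redirect rule like the one described above;
    # the target address is a placeholder, not recovered from the post.
    if address :is "to" "me@isp.net" {
        redirect "me@other-isp.example";
    }
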
2016 Jun 02
0
[PATCH v5 0/6] PowerPC/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. Thanks to Boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address. Now fixed; sorry for not even testing on a big-endian machine before! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
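
The layout bug described above is the classic endianness pitfall: the qspinlock locked byte is the least-significant byte of the 32-bit lock word, so it sits at byte offset 0 on little-endian but offset 3 on big-endian. A standalone C sketch of the failure mode (not the kernel's actual code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t lockval = 0x00000001;        /* locked byte set (LSB) */
        uint8_t *bytes = (uint8_t *)&lockval;

        bytes[0] = 0;  /* "unlock" by zeroing the first byte in memory */

        /* Little-endian prints 0x00000000 (correct unlock); big-endian
         * prints 0x00000001, because the locked byte is at bytes[3]. */
        printf("lock word after byte store: 0x%08x\n", (unsigned)lockval);
        return 0;
    }
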
2013 Oct 28
1
Unable to provision VM attaching it directly to an OVS bridge
Reposting from the virt-tools mailing list: Hi! I'm facing a problem that could be caused by missing libvirt support for Open vSwitch (or it could be my mistake). I'm interested in researching virtual networks and SDN. To keep things simple, I've decided to use libvirt/virt-tools to manage VMs, since my focus is on the network, instead of using a full-featured system
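
For reference, libvirt (0.9.11 and later) can attach a guest directly to an Open vSwitch bridge via a virtualport element; a minimal interface definition, where the bridge name ovsbr0 is a placeholder:

    <interface type='bridge'>
      <source bridge='ovsbr0'/>            <!-- placeholder OVS bridge name -->
      <virtualport type='openvswitch'/>    <!-- attach via ovs-vsctl, not brctl -->
      <model type='virtio'/>
    </interface>

Without the virtualport element, libvirt typically tries to add the tap device with the legacy Linux-bridge interfaces, which fails against an OVS bridge.
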
2016 May 25
10
[PATCH v3 0/6] powerpc use pv-qspinlock as the default spinlock implementation
change from v2: __spin_yield_cpu() will yield slices to the LPAR if the target cpu is running. remove unnecessary rmb() in __spin_yield/wake_cpu. __pv_wait() will check that *ptr == val. some commit message changes. change from v1: split into 6 patches from one patch; some minor code changes. I ran several tests on pseries IBM,8408-E8E with 32 cpus, 64GB memory. benchmark test results are below. 2
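
The *ptr == val check called out above is the standard guard against a lost wakeup in park/wait primitives: if the lock word changed between the waiter's last read and the wait call, the waker has already run and sleeping now might never be interrupted. A generic sketch of the pattern in C11 atomics (hypothetical helper, not the actual powerpc kernel code):

    #include <stdatomic.h>

    static void pv_wait_sketch(_Atomic unsigned int *ptr, unsigned int val)
    {
        /* Re-check before sleeping: if *ptr no longer holds the value
         * the caller saw, the waker already ran -- return and retry the
         * lock fast path instead of waiting for a kick that may never
         * come. */
        if (atomic_load_explicit(ptr, memory_order_acquire) != val)
            return;

        /* ... the real implementation yields this vCPU to the
         * hypervisor here until the unlocking CPU issues a kick ... */
    }
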
2012 Dec 25
1
Cannot Join Existing Windows 2003 Domain
Trying to add a new Samba 4 domain controller to an existing Windows 2003 domain. There are two existing domain controllers: dc1.home.aaronson.com and dc2.home.aaronson.com. As you can see below, Samba 4 dies during the join. I am stumped. Dcdiag throws no errors on the existing controllers. Any ideas? ubuntu at sulu:/usr/local/samba# sudo bin/samba-tool domain join home.aaronson.com
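
The join command in the excerpt is cut off; the documented general form takes the DNS domain, the role, and credentials, e.g. (role and credentials below are illustrative, not recovered from the truncated post):

    samba-tool domain join home.aaronson.com DC -Uadministrator
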
2016 Jun 02
9
[PATCH v5 0/6] PowerPC/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. Thanks to Boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address. Now fixed; sorry for not even testing on a big-endian machine before! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
2016 Jun 02
8
[PATCH v5 0/6] PowerPC/pSeries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> change from v4: BUG FIX. Thanks to Boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address. Now fixed. change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
2016 May 17
6
[PATCH v3 0/6] powerpc use pv-qspinlock instead of spinlock
change from v1: split into 6 patches from one patch; some minor code changes. benchmark test results are below. ran 3 tests on pseries IBM,8408-E8E with 32 cpus, 64GB memory: perf bench futex hash; perf bench futex lock-pi; perf record -advRT || perf bench sched messaging -g 1000 || perf report. summary: _____test________________spinlock______________pv-qspinlock_____ |futex hash | 556370 ops |
2003 Apr 13
2
Problem getting TFTP transfer to succeed
...6 c45f ..4.=..r.....&._ 0x00e0 2826 6681 3f21 5058 450f 84a2 00e8 0f0f (&f.?!PXE....... 0x00f0 0f83 9b00 89f3 8ec0 bed8 a2e8 0d1a 2666 ..............&f 0x0100 8b47 0a66 a31a 8fbe b5a3 e8fe 1926 8b47 .G.f.........&.G 0x0110 20e8 171a e8f1 19be cea3 e8ee 1926 8b47 .............&.G 0x0120 22e8 071a e8e1 19be e7a3 e8de 1926 8b47 "............&.G 0x0130 24e8 f719 e8d1 19be 00a4 e8ce 1926 8b47 $............&.G 0x0140 26e8 e719 e8c1 1966 31f6 2666 0fb7 4720 &......f1.&f..G. 0x0150 263b 4724 77...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague; they will be checking this and respond next
2017 Oct 26
2
not healing one file
...heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on c992c952-10ad-458e-a722-b6e470bb4bb9. sources=0 [2] sinks=1 [2017-10-25 10:40:26.709609] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 625649c9-e8ee-46c0-a6b1-4593118ffdb7. sources=0 [2] sinks=1 [2017-10-25 10:40:26.713385] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 932a53c1-754a-4e57-8a7e-258596fd24d0 [2017-10-25 10:40:26.717342] I [MSGID: 108026] [afr-self-h...