similar to: After Filter Runs Even When Chain Is Halted

Displaying 20 results from an estimated 800 matches similar to: "After Filter Runs Even When Chain Is Halted"

2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
The HBA is an HP H220. We haven't really benchmarked individual drives - all 12 drives are utilized in one RAID-10 array, and I'm unsure how we would test individual drives without breaking the array. Trying 'hdparm -tT /dev/sda' now - it's been running for 25 minutes so far... Kelly On 2016-05-25, 2:12 PM, "centos-bounces at centos.org on behalf of Dennis Jacobfeuerborn"
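A minimal sketch of read-testing each member drive in place without modifying the array, assuming twelve members named /dev/sda through /dev/sdl (the device names are an assumption, not taken from the thread):

    # Sequential read test of each member drive; reads never modify the
    # RAID-10 members, so the array itself is left untouched.
    for d in /dev/sd{a..l}; do
        echo "== $d =="
        dd if="$d" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -1
    done

Only writes would disturb the array; a bounded direct read like this just surfaces any member that is dramatically slower than its peers.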
2016 May 25
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
[merging] The HBA the drives are attached to has no configuration that I'm aware of. We would have had to accidentally change 23 of them... Thanks, Kelly On 2016-05-25, 1:25 PM, "Kelly Lesperance" <klesperance at blackberry.com> wrote: >They are: > >[root at r1k1 ~] # hdparm -I /dev/sda > >/dev/sda: > >ATA device, with non-removable media > Model
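Since the question is whether a drive-level setting quietly changed across all 23 hosts, a quick survey of the write-cache flag that hdparm -I reports could look like this (the drive glob is an assumption):

    # Show whether the on-drive write cache is enabled for each disk;
    # hdparm -I prefixes enabled features with '*' in its output.
    for d in /dev/sd?; do
        printf '%s: ' "$d"
        hdparm -I "$d" | grep -i 'write cache'
    done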
2016 May 25
3
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Kelly Lesperance wrote: > LSI/Avago's web pages don't have any downloads for the SAS2308, so I think > I'm out of luck wrt MegaRAID. > > Bounced the node, confirmed MPT Firmware 15.10.09.00-IT. > HP Driver is v 15.10.04.00. > > Both are the latest from HP. > > Unsure why, but the module itself reports version 20.100.00.00: > > [root at r1k1 sys] # cat
2008 Jun 16
6
Restarting BackgrounDRb Means Restarting Mongrel?
I've noticed that I can never just restart BackgrounDRb by itself. As soon as I do that, Mongrel can no longer pass requests off to it. The only way to get it working again is by restarting Mongrel. As annoying as that may be, I suppose I could justify it to myself, since BackgrounDRb is essentially resetting itself and its connections. But what seems odd is that Mongrel (or, rather,
2016 Jun 01
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Kelly Lesperance wrote: > I did some additional testing - I stopped Kafka on the host, and kicked > off a disk check, and it ran at the expected speed overnight. I started > kafka this morning, and the raid check's speed immediately dropped down to > ~2000K/Sec. > > I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). > The raid check is now running
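A sketch of the sequence described in that post, assuming the software RAID device is /dev/md0 (the array name is an assumption):

    hdparm -W1 /dev/sd?                           # enable the on-drive write cache
    echo check > /sys/block/md0/md/sync_action    # start a RAID consistency check
    grep -A2 md0 /proc/mdstat                     # watch progress and speed (K/sec)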
2016 May 25
3
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
John R Pierce wrote: > On 5/25/2016 11:44 AM, Kelly Lesperance wrote: >> The HBA is an HP H220. > > OH. It's a very good idea to verify the driver is at the same revision > level as the firmware. Not 100% sure how you do this under CentOS; my > H220 system is running FreeBSD, and is at revision P20, both firmware > and driver. HP's firmware, at least what I
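Under CentOS 7 that comparison can usually be made from modinfo and sysfs for an mpt2sas-based HBA such as the H220 (the sysfs attribute names below are what the mpt2sas driver exposes; treat them as an assumption if a different driver is in play):

    modinfo mpt2sas | grep -i '^version'          # driver (module) version on disk
    cat /sys/module/mpt2sas/version               # version of the module actually loaded
    cat /sys/class/scsi_host/host*/version_fw     # controller firmware as seen by the driver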
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650, 128 GB RAM, 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA, and a dual-port 10 GB NIC. The drives are configured as one large
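For reference, the resulting md array layout can be confirmed with the standard tools; a sketch assuming the array is /dev/md0:

    cat /proc/mdstat           # array state, members, and any check/resync in progress
    mdadm --detail /dev/md0    # RAID level, chunk size, and per-member status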
2005 Mar 05
1
S-code for piecewise regression
Dear R-helpers, an S-code for piecewise regressions was provided by Toms & Lesperance (2003) Ecology, 84, 2034-2041 (the paper can be found on the web). The code is quite complete, with different types of transitions around breakpoints and model selection functions. It doesn't work directly under R due to some "translation" problems, I guess. However, I reckon that it would be a
2011 Mar 30
3
Test errors in functional test after adding before_filter :login_required to controller
Hello, I added a before_filter to my controllers to require a user login. Here's an example of my Unit Controller with the added before_filter: IN THE ATTACHED FILE When executing the tests with rake test, I get different error messages. To show you my errors, I only executed the unit controller test with the following line: ruby -Itest
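A sketch of running just one functional test from the shell while debugging (the file path is hypothetical; the post's actual command is cut off above):

    # Run a single functional test file rather than the whole suite
    ruby -Itest test/functional/units_controller_test.rb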
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest - we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root at r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2010 Jul 16
0
No more filter chain halted messages?
I've been trying to debug test failures when converting my project to Rails 3, and I noticed that controllers are redirecting inside of before filters. It's very difficult to figure out which of my 8 before_filters caused the unexpected redirect, however, since the "filter chain halted as XXX rendered or redirected" message is MIA. I read through the new
2008 Oct 03
2
Filter chain halted as [:check_authentication] rendered_or_r
I have this page that you log in from. You get authenticated and then bumped over to the appropriate page depending on what your role is: Traveler, Travel Manager, Admin. All pieces work except for the role associated with Travel Managers, who get tossed out, apparently when they hit a before_filter to check authentication. However, it seems that they are properly getting authenticated and moved
2007 May 30
2
Bug? Filter chain halted as [#<ActionController::Filters::..
OK so I've been trying to follow the tutorial here: http://rails.homelinux.org/ When I simply do "scaffold :category" in the controller.rb, everything works fine. BUT after I generate the controllers and other files using scaffold AS A SCRIPT (script/generate scaffold categories), DESTROY (or DELETE) does not work for any record in a table. This is the error I got: Processing
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root at r1k1 ~] # hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > [root at r1k1 ~] # Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10msec. If all the drives except one are taking 6-8msec, but one is very
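A sketch of that check, flagging any device whose await climbs well above its peers (the 100 ms threshold and the awk column number assume the iostat -x layout shipped with CentOS 7's sysstat):

    # Stream extended device stats once a second and print any sd device
    # whose await exceeds 100 ms; healthy peers should sit near 6-10 ms.
    iostat -xdmc 1 | awk '/^sd/ && $10 > 100 {print $1, "await:", $10}'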
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Hdparm didn't get far: [root at r1k1 ~] # hdparm -tT /dev/sda /dev/sda: Timing cached reads: Alarm clock [root at r1k1 ~] # On 2016-05-25, 2:44 PM, "Kelly Lesperance" <klesperance at blackberry.com> wrote: >The HBA is an HP H220. > >We haven't really benchmarked individual drives - all 12 drives are utilized in one RAID-10 array, I'm unsure how we would test
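The "Alarm clock" message means the process was killed by a SIGALRM timer, which suggests the timed read never completed. A bounded, cache-bypassing read with dd is a hedged alternative for confirming that (the device name is assumed):

    # Time a bounded direct read from the raw device; if even this stalls,
    # the slowdown is below the filesystem and md layers.
    timeout 60 dd if=/dev/sda of=/dev/null bs=1M count=256 iflag=direct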
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
What is the HBA the drives are attached to? Have you done a quick benchmark on a single disk to check if this is a RAID problem or something further down the stack? Regards, Dennis On 25.05.2016 19:26, Kelly Lesperance wrote: > [merging] > > The HBA the drives are attached to has no configuration that I'm aware of. We would have had to accidentally change 23 of them... > > Thanks, >
2009 Jan 12
1
RTCP SR transmission error, rtcp halted
Hi, While looking for the cause of a disturbance in a call, I found this error in the console: RTCP SR transmission error, rtcp halted. A Google search only shows some bug reports relating to MOH and Hold. What could cause this message? Could it be a symptom of whatever is causing the call disturbance? Where should I start digging to find the reason for this error? I am using Asterisk 1.4.19 with zaptel 1.4.9.2
2007 Mar 20
0
FreeBSD, BTX halted
Hi, I am trying to fully virtualize FreeBSD, and it does not boot. I made a VNC screenshot (attached), and also a joint screenshot. I know about the bug, and I am going to add a (+1) noting that I also encounter it. Would you know of any workaround?
2017 Nov 08
0
DCs are unavailable when the PDC is halted
On Wed, 8 Nov 2017 17:20:09 +0100 Ervin Hegedüs <airween at gmail.com> wrote: > Hi, > > > On Wed, Nov 08, 2017 at 03:21:28PM +0000, Rowland Penny wrote: > > On Wed, 8 Nov 2017 14:33:28 +0100 > > Ervin Hegedüs <airween at gmail.com> wrote: > > > > > When I turned off the open-ldap2, and open-ldap works, then the > > > wbinfo -a returns
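For reference, the winbind checks being discussed can be repeated by hand (the domain, user, and password below are placeholders):

    wbinfo --ping-dc                   # confirm winbind can still reach a usable DC
    wbinfo -a 'SAMDOM\user%password'   # plaintext and challenge/response auth test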
2017 Nov 10
0
DCs are still unavailable when the PDC is halted
Hi folks, I've completely re-installed my DCs and Linux member. I've followed the docs step-by-step on Samba's wiki page, and everything works well. Here is what I see on my member: # cat /etc/hosts 127.0.0.1 localhost localhost.localdomain 192.168.255.98 open-client.wificloud.local open-client #