similar to: Slow zfs import solved (beware iDRAC/ILO)

Displaying 20 results from an estimated 600 matches similar to: "Slow zfs import solved (beware iDRAC/ILO)"

2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
I don't think it is related to the iDRAC itself; more likely some configuration is wrong or there is a hardware error. Did you check the battery of the RAID controller? Do you use the disks in JBOD mode or RAID mode? On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson <ryanwilk at gmail.com> wrote: > Thanks for the suggestion. I tried both of these with no difference in > performance. I have tried several
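A minimal sketch of how those two things could be checked on a PERC controller, assuming the LSI MegaCLI utility is installed (the thread doesn't say which management tool is available):

# Battery/BBU health for all adapters
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
# Per-virtual-disk cache policy (WriteBack vs WriteThrough)
MegaCli64 -LDInfo -Lall -aALL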
2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
I would be really surprised if the problem was related to the iDRAC. The iDRAC processor is a standalone CPU with its own NIC and runs independently of the main CPU. That being said, it does have visibility into the whole system. Try using dmidecode to compare the systems, and take a close look at the RAID controllers and what size and form of cache they have. On 02/26/2018 11:34 AM, Ryan
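A sketch of that dmidecode comparison; the type keywords and output paths below are illustrative choices, not from the thread:

# Dump the hardware inventory most relevant to a slow-vs-fast comparison
dmidecode -t bios -t system -t processor -t memory > /tmp/dmi-$(hostname).txt
# Collect the dumps in one place, then diff a slow host against a fast one
diff /tmp/dmi-slowhost.txt /tmp/dmi-fasthost.txt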
2018 Feb 27
0
Gluster performance / Dell Idrac enterprise conflict
What is your Gluster setup? Please share volume details for where the VMs are stored. It could be that the slow host is the one holding the arbiter volume. Alex On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanwilk at gmail.com> wrote: > Here is info about the RAID controllers. Doesn't seem to be the culprit. > > Slow host: > Name PERC H710 Mini (Embedded) > Firmware Version
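The usual commands for sharing those details (a generic sketch; substitute the real volume name for <volname>):

# Replica/arbiter layout, brick list, and volume options
gluster volume info <volname>
# Brick, self-heal daemon, and port status
gluster volume status <volname>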
2018 Feb 27
1
Gluster performance / Dell Idrac enterprise conflict
All volumes are configured as replica 3. I have no arbiter volumes. The storage hosts are for storage only, and the virt hosts are dedicated virtualization hosts. I've checked throughput from the virt hosts to all 3 Gluster hosts and am getting ~9Gb/s. On Tue, Feb 27, 2018 at 1:33 AM, Alex K <rightkicktech at gmail.com> wrote: > What is your Gluster setup? Please share volume details for where the VMs are
2018 Feb 22
0
Gluster performance / Dell Idrac enterprise conflict
"Did you check the BIOS/Power settings? They should be set for high performance. Also you can try to boot "intel_idle.max_cstate=0" kernel command line option to be sure CPUs not entering power saving states. On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson <ryanwilk at gmail.com> wrote: > > > I have a 3 host gluster replicated cluster that is providing storage for
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
Thanks for the suggestion. I tried both of these with no difference in performance. I have tried several other Dell hosts with iDRAC Enterprise and am getting the same results. I also tried a new Dell T130 with iDRAC Express and was getting over 700 MB/s. Have any other users had this issue with iDRAC Enterprise? On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
I've tested about 12 different Dell servers. Only a couple of them have iDRAC Express; all the others have iDRAC Enterprise. All the boxes with Enterprise perform poorly, and the couple that have Express perform well. I use the disks in RAID mode on all of them. I've tried a few non-Dell boxes and they all perform well, even though some of them are very old. I've also tried
2018 Feb 22
3
Gluster performance / Dell Idrac enterprise conflict
I have a 3-host replicated Gluster cluster that is providing storage for our RHEV environment. We've been having issues with inconsistent performance from the VMs depending on which hypervisor they are running on. I've confirmed throughput to be ~9Gb/s from the hypervisors to each of the storage hosts. I'm getting ~300MB/s disk read speed when our test VM is on the slow hypervisors
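A sketch of the kind of tests behind those numbers; the tools, device path, and host name are assumptions rather than details from the thread:

# Network throughput from a hypervisor to a storage host
iperf3 -s                    # run on the storage host
iperf3 -c storage1 -t 30     # run on the hypervisor; ~9Gb/s expected on 10GbE
# Sequential read speed inside the test VM, bypassing the guest page cache
dd if=/dev/vda of=/dev/null bs=1M count=4096 iflag=direct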
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
Here is info about the RAID controllers. Doesn't seem to be the culprit.
Slow host: PERC H710 Mini (Embedded), Firmware Version 21.3.4-0001, Cache Memory Size 512 MB
Fast host: PERC H310 Mini (Embedded), Firmware Version 20.12.1-0002, Cache Memory Size 0 MB
Slow host: PERC H310 Mini (Embedded), Firmware Version 20.13.1-0002, Cache Memory Size 0 MB
Slow host: PERC H310 Mini
2006 Oct 31
0
6345872 rmformat core dump
Author: phitran
Repository: /hg/zfs-crypto/gate
Revision: 138a94631b14d3dffa268a110113925f2c83be0a
Log message: 6345872 rmformat core dump
Files: update: usr/src/cmd/rmformat/rmf_misc.c
2016 Aug 22
2
Instruction itineraries and fence/barrier instructions
On Mon, Aug 22, 2016 at 11:40 AM, Matt Arsenault <arsenm2 at gmail.com> wrote: > > > On Aug 22, 2016, at 11:20, Phil Tomson via llvm-dev < > llvm-dev at lists.llvm.org> wrote: > > > > We improved our instruction itineraries and now we're seeing our > testcases for fence instructions break. > > > > For example, we have this testcase: > >
2016 Aug 22
3
Instruction itineraries and fence/barrier instructions
We improved our instruction itineraries and now we're seeing our testcases for fence instructions break. For example, we have this testcase:
@write_me = external global i32
@read_me = external global i32
; Function Attrs: nounwind
define i32 @xstg_intrinsic(i32 %foo) #0 {
entry:
; CHECK: store r0, r1, 0, 32
; CHECK-NEXT: fence 2
  %foo.addr = alloca i32, align 4
  store i32 %foo,
2013 Jan 31
1
CyberPower on OpenIndiana
Hello, I have a CyberPower CP1500PFCLCD and I'm trying to get NUT to see it under OpenIndiana. All online searches suggested that it basically can't happen, but I decided to ask again; maybe things have changed since then. The UPS can be seen by the system:
# /usr/sbin/cfgadm -lv usb9/1.1
Ap_Id Receptacle Occupant Condition Information When Type
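For completeness, the configuration that usually covers CyberPower USB HID units on other platforms; a sketch only, since whether usbhid-ups can claim the device on OpenIndiana is exactly what is in question:

# ups.conf (location depends on the configure prefix)
[cyberpower]
    driver = usbhid-ups
    port = auto
# Run the driver in the foreground with debugging to see whether it claims the UPS
usbhid-ups -a cyberpower -DD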
2011 Aug 25
2
nut 2.6.1 on OpenSolaris/OpenIndiana doesn't find Tripp-Lite ECO550 UPS
Hello, 1) On OpenIndiana 151, I configured nut 2.6.1 as:
configure --prefix=/opt/nut/2.6.1 --with-cgi --with-hal --with-user=ups --with-group=nut CC=cc
The result is:
Configuration summary:
======================
build serial drivers: yes
build USB drivers: yes
build SNMP drivers: yes
build neon based XML driver: yes
build Powerman PDU client driver: no
enable SSL development code: yes
2016 Jan 07
2
TableGen error message: top-level forms in instruction pattern should have void types
On Thu, Jan 7, 2016 at 1:35 PM, Krzysztof Parzyszek <kparzysz at codeaurora.org > wrote: > On 1/7/2016 3:25 PM, Phil Tomson wrote: > >> >> That's better, but now I get: >> >> XSTGInstrInfo.td:902:3: error: In RelAddr: XSTGRELADDR node requires >> exactly 2 operands! >> >> Which makes some sense as XSTGRELADDR is defined as: >> def
2018 Feb 16
0
Fwd: gluster performance
I am forwarding this for Ryan. @Ryan - did you join the gluster-users mailing list yet? That may be why you are having issues sending messages.
----- Forwarded Message -----
From: "Ryan Wilkinson" <ryan at centriserve.net>
To: Bturner at redhat.com
Sent: Wednesday, February 14, 2018 4:46:10 PM
Subject: gluster performance
I have a 3-host replicated Gluster cluster that is
2018 Mar 06
2
Failed connections 7.6 to 5.2
Trying to connect to a Dell iDRAC 6. The iDRAC reports it is running OpenSSH 5.2. From Fedora Linux 20 with OpenSSH 6.4p1, connections succeed. From Fedora Linux 23 with OpenSSH 7.2p2, connections succeed. From Fedora Linux 27 with OpenSSH 7.6p1, connections fail prior to prompting for a password. The message is, "Received disconnect from (IP address) port 22:11: Logged out." Trying
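Two quick checks worth running against an old embedded sshd like this; a hedged sketch of common causes (a legacy algorithm the newer client no longer offers, or the client exhausting the server's auth-attempt limit with public-key offers), not the confirmed diagnosis in this thread, and the host/user names are placeholders:

# Verbose output shows exactly where the server disconnects
ssh -vvv root@idrac.example.com
# Rule out the client burning auth attempts on public-key offers
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password root@idrac.example.com
# Re-enable a legacy key exchange if the verbose log shows a negotiation failure
ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 root@idrac.example.com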
2005 Aug 18
3
Mouse Problems
Hello Folks, I hope that you are well. I have installed CentOS 4.1 and I have an MS mouse. The mouse pointer can be moved around BUT I cannot click on anything, menus and such; even during the install the mouse isn't operational for clickage. Any thoughts? Any help would be most welcome. I am using an Avocent SwitchView MP, but it's made no difference in installing Core 3 or any other OS.
2013 Sep 24
1
dmesg and syslog errors in CentOS 6.4 on Dell R720 server
Hi, I have updated the firmware for the PERC RAID controller card, the network card, the iDRAC, and the BIOS on a Dell R720 server. I have installed CentOS 6.4 and updated it with all the latest packages using yum -y update.
# cat /var/log/messages | grep -i error
Sep 23 14:09:35 x24 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Sep 23 14:09:35 x24 kernel: ACPI
2020 May 28
2
Re: Provide NBD via Browser over Websockets
On Mon, 15 Oct 2018, Nir Soffer wrote: > On Sat, Oct 13, 2018 at 9:45 PM Eric Wheeler <nbd@lists.ewheeler.net> wrote: > Hello all, > > It might be neat to attach ISOs to KVM guests via websockets. Basically > the browser would be the NBD "server" and an NBD client would run on the > hypervisor, then use `virsh change-media vm1 hdc
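A rough sketch of the hypervisor-side plumbing being described; the websocket bridge itself is left out, and the export name and device path are illustrative assumptions, not details from the thread:

# Attach the local NBD client to an export served (directly or via a proxy) on localhost
modprobe nbd
nbd-client -N iso-export localhost /dev/nbd0
# Point the guest's CD-ROM at the NBD-backed block device
virsh change-media vm1 hdc /dev/nbd0 --insert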