2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
Thanks for the suggestion. I tried both of these with no difference in
performance. I have tried several other Dell hosts with Idrac Enterprise and
am getting the same results. I also tried a new Dell T130 with Idrac Express
and was getting over 700 MB/s. Have any other users had this issue with Idrac
Enterprise?
On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <cobanserkan at gmail.com>
wrote:
2018 Feb 22
0
Gluster performance / Dell Idrac enterprise conflict
"Did you check the BIOS/Power settings? They should be set for high performance.
Also you can try to boot "intel_idle.max_cstate=0" kernel command line
option to be sure CPUs not entering power saving states.
On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson <ryanwilk at gmail.com> wrote:
>
>
> I have a 3 host gluster replicated cluster that is providing storage for
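A minimal sketch of applying Serkan's suggestion on an EL7-style system; the
grub paths and tooling here are assumptions, not details from the thread:
    # 1. Append the option to the kernel command line in /etc/default/grub:
    #      GRUB_CMDLINE_LINUX="... intel_idle.max_cstate=0"
    # 2. Regenerate the grub config and reboot:
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot
    # 3. After the reboot, confirm the option is active:
    cat /proc/cmdline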
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
I've tested about 12 different Dell servers. Only a couple of them have
Idrac Express; all the others have Idrac Enterprise. All the boxes with
Enterprise perform poorly and the couple that have Express perform well. I
use the disks in RAID mode on all of them. I've tried a few non-Dell boxes
and they all perform well even though some of them are very old. I've also
tried
2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
I don't think it is related to iDRAC itself; more likely some configuration
is wrong or there is a hardware error.
Did you check the battery of the RAID controller? Do you use the disks in
JBOD mode or RAID mode? (A sketch of both checks follows below.)
On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson <ryanwilk at gmail.com> wrote:
> Thanks for the suggestion. I tried both of these with no difference in
> performance. I have tried several
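A sketch of how those two checks might look on a Dell box, assuming Dell
OpenManage Server Administrator (omreport) is installed; these commands are
not from the thread:
    omreport storage battery      # RAID controller battery state
    omreport storage controller   # controller model, firmware, cache size
    omreport storage vdisk        # virtual disk layout: RAID level vs. JBOD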
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
Here is info about the RAID controllers. Doesn't seem to be the culprit.
Host   Controller                  Firmware       Cache
Slow   PERC H710 Mini (Embedded)   21.3.4-0001    512 MB
Fast   PERC H310 Mini (Embedded)   20.12.1-0002   0 MB
Slow   PERC H310 Mini (Embedded)   20.13.1-0002   0 MB
Slow   PERC H310 Mini
2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
I would be really surprised if the problem was related to iDRAC.
The iDRAC processor is a standalone CPU with its own NIC and runs
independently of the main CPU.
That being said, it does have visibility into the whole system.
Try using dmidecode to compare the systems, and take a close look at the
RAID controllers and what size and form of cache they have.
On 02/26/2018 11:34 AM, Ryan
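One way to act on the dmidecode advice: dump comparable inventory on a slow
and a fast host, then diff the two (the filenames are hypothetical):
    dmidecode -t bios -t system -t processor > slow-host.txt   # on a slow host
    dmidecode -t bios -t system -t processor > fast-host.txt   # on a fast host
    diff slow-host.txt fast-host.txt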
2018 Feb 27
1
Gluster performance / Dell Idrac enterprise conflict
All volumes are configured as replica 3. I have no arbiter volumes.
Storage hosts are for storage only and Virt hosts are dedicated Virt
hosts. I've checked throughput from the Virt hosts to all 3 gluster hosts
and am getting ~9Gb/s.
On Tue, Feb 27, 2018 at 1:33 AM, Alex K <rightkicktech at gmail.com> wrote:
> What is your gluster setup? Please share volume details where VMs are
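The thread does not say which tool measured the ~9Gb/s; iperf3 is one common
way to reproduce such a check (the hostname is hypothetical):
    iperf3 -s                  # on a gluster host
    iperf3 -c gluster1 -t 30   # from a virt host; ~9 Gbit/s matches the figure above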
2018 Feb 27
0
Gluster performance / Dell Idrac enterprise conflict
What is your gluster setup? Please share volume details for where the VMs are
stored. It could be that the slow host is hosting an arbiter volume.
Alex
On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanwilk at gmail.com> wrote:
> Here is info. about the Raid controllers. Doesn't seem to be the culprit.
>
> Slow host:
> Name PERC H710 Mini (Embedded)
> Firmware Version
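The volume details Alex is asking for can be pulled as follows (the volume
name is hypothetical):
    gluster volume info vmstore
    # "Type: Replicate" with "Number of Bricks: 1 x 3 = 3" means plain
    # replica 3; an arbiter setup shows "1 x (2 + 1) = 3" instead.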
2018 Feb 16
0
Fwd: gluster performance
I am forwarding this for Ryan. @Ryan - did you join the gluster-users mailing list yet? That may be why you are having issues sending messages.
----- Forwarded Message -----
From: "Ryan Wilkinson" <ryan at centriserve.net>
To: Bturner at redhat.com
Sent: Wednesday, February 14, 2018 4:46:10 PM
Subject: gluster performance
I have a 3 host gluster replicated cluster that is
2010 Oct 14
0
Slow zfs import solved (beware iDRAC/ILO)
Just a note to pass on in case anyone runs into the same situation.
I have a DELL R510 that is running just fine, up until the day that I needed to import a pool from a USB hard drive. I plug in the disk, check it with rmformat and try to import the zpool. And it sits there for practically forever, not responding. The machine still responds to network connections etc., it's just the
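Roughly the steps the poster describes, with a hypothetical pool name:
    rmformat               # list removable/USB media and their device paths
    zpool import           # scan attached devices for importable pools
    zpool import mypool    # import by name; this is the step that hung here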
2018 Mar 06
2
Failed connections 7.6 to 5.2
Trying to connect to a Dell iDRAC 6. The iDRAC reports it is running
OpenSSH 5.2.
From Fedora Linux 20 with OpenSSH 6.4p1, connections succeed.
From Fedora Linux 23 with OpenSSH 7.2p2, connections succeed.
From Fedora Linux 27 with OpenSSH 7.6p1, connections fail prior to
prompting for a password. The message is, "Received disconnect from (IP
address) port 22:11: Logged out." Trying
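A commonly suggested client-side workaround when a newer OpenSSH refuses to
talk to old embedded firmware; which legacy algorithms are actually required
is an assumption, so start with -v to see where the negotiation fails:
    ssh -v \
        -oKexAlgorithms=+diffie-hellman-group1-sha1 \
        -oHostKeyAlgorithms=+ssh-rsa \
        root@idrac.example.com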
2013 Sep 24
1
dmesg and syslog errors in CentOS 6.4 on Dell R720 server
Hi,
I have updated the firmware for the PERC RAID controller card and the network
card, the iDRAC firmware, and the BIOS on a Dell R720 server. I have installed
CentOS 6.4 and updated it with all the latest packages using yum -y update.
# cat /var/log/messages | grep -i error
Sep 23 14:09:35 x24 kernel: ERST: Error Record Serialization Table (ERST)
support is initialized.
Sep 23 14:09:35 x24 kernel: ACPI
2013 May 14
5
4.2.2 pci-passthrough crashes Dell Poweredge R710
Hello everyone,
I just updated from 4.2.1 to 4.2.2. If I try to fire up my win2k8 domU
with a PCI device attached, the dom0 machine hard-crashes.
My system log (iDRAC) shows the following:
CPU 2 has an internal error (IERR).
A bus fatal error was detected on a component at bus 0 device 0 function 0.
CPU 1 machine check detected.
and plenty of other entries. The machine then hard-resets.
If I
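For reference, the usual xl workflow for handing a PCI device to a domU; the
BDF address is hypothetical and this is not the poster's config:
    xl pci-assignable-add 04:00.0   # detach the device from dom0 (binds to pciback)
    # and in the domU config file:
    #   pci = [ '04:00.0' ]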
2010 Jul 19
3
Accessing console for Xen 4.0 with 2.6.31 pvops kernel on Dell Poweredge R610
I am currently using a Dell PowerEdge R610 server with Xen 4.0 installed and
the 2.6.31.13 pvops kernel.
I am accessing the server console using the iDRAC KVM feature of the Dell
management console.
Does anyone know how to configure the console option in the grub menu so
that all the boot messages can be seen on the
management console?
Currently I can view only the Xen bootup messages if I don't specify
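A commonly suggested grub stanza for this kind of setup; the paths and
options here are illustrative, not the poster's actual config:
    kernel /xen.gz console=vga,com1 com1=115200,8n1
    module /vmlinuz-2.6.31.13 console=hvc0 console=tty0
    # console=tty0 keeps dom0 kernel messages on the VGA console, which the
    # iDRAC virtual KVM mirrors; console=vga does the same for Xen's own output.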
2020 May 28
2
Re: Provide NBD via Browser over Websockets
On Mon, 15 Oct 2018, Nir Soffer wrote:
> On Sat, Oct 13, 2018 at 9:45 PM Eric Wheeler <nbd@lists.ewheeler.net> wrote:
> Hello all,
>
> It might be neat to attach ISOs to KVM guests via websockets. Basically
> the browser would be the NBD "server" and an NBD client would run on the
> hypervisor, then use `virsh change-media vm1 hdc
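For reference, a completed form of the virsh call mentioned above, with a
hypothetical ISO path (the command is truncated in this excerpt):
    virsh change-media vm1 hdc /var/lib/libvirt/images/boot.iso --update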
2019 Jun 12
1
Speculative attack mitigations
Hi folks,
Firstly, apologies in advance for what is a head-wrecker of a topic, keeping on top of the speculative mitigations, and also if this is a duplicate email; my first copy didn't seem to make it into the archive. Also a disclaimer that I may have misunderstood elements of the below, but please bear with me.
I write this hoping to find out a bit more about the state of the relevant kernel
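Not part of the original message, but the quickest way to see a given
kernel's view of these mitigations on any one host:
    grep . /sys/devices/system/cpu/vulnerabilities/*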
2013 Aug 29
7
[PATCH 0/3] x86: mwait_idle improvements ported from Linux
1: x86/mwait_idle: remove assumption of one C-state per MWAIT flag
2: x86/mwait_idle: export both C1 and C1E
3: x86/mwait_idle: initial C8, C9, C10 support
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
2006 Mar 28
12
Rails & PHP
Hi there - I wanted to know if anyone has used Rails and PHP on the same
production server and whether they've experienced any problems.
I'm looking to install Rails on our production server soon; however, I would
like to know if there are any issues I need to be aware of.
Many thanks,
Jared.
2013 Nov 11
1
[PATCH] x86/idle: reduce contention on ACPI register accesses
Other than when they're located in I/O port space, accessing them when
in MMIO space (currently) implies usage of some sort of global lock: in
-unstable this would be due to the use of vmap(); in older trees the
necessary locking was introduced by 2ee9cbf9 ("ACPI: fix
acpi_os_map_memory()"). This contention was observed to result in Dom0
kernel soft lockups during the loading of
2010 Sep 17
2
Constant vs Nonstop vs Invariant TSC question
From /xen-unstable.hg/xen/arch/x86/cpu/intel.c
if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
    (c->x86 == 0x6 && c->x86_model >= 0x0e))
        set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
if (cpuid_edx(0x80000007) & (1u<<8)) {
        set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
        set_bit(X86_FEATURE_NONSTOP_TSC,
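Whether a given machine ended up with these feature bits can be checked from
a Linux dom0 or domU, since the kernel exports analogous CPU flags:
    grep -o 'constant_tsc\|nonstop_tsc' /proc/cpuinfo | sort -u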