Displaying 20 results from an estimated 1200 matches similar to: "centos7.6 nvme support"
2020 Jun 21
1
About support for AMD ROME CPUs
Hi, all
We use AMD Rome CPUs, like EPYC 7452.
RHEL said that RHEL 7.6.6 began to support these CPUs (details:
https://access.redhat.com/support/policy/amd),
but we found that CentOS 7.6 with kernel 3.10.0-957.21.3.el7.x86_64 also
works fine.
So, my questions are:
1) Is there any place where I can find the CPU support info?
2) Does the kernel 3.10.0-957.21.3.el7.x86_64 from CentOS 7.6 already
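
As a rough way to check this on a running box, one could look at what the kernel itself reports; a minimal sketch (the exact dmesg wording for unsupported hardware varies between kernel builds, so that grep is only illustrative):

  # CPU model and kernel version actually in use
  grep -m1 'model name' /proc/cpuinfo
  uname -r
  # RHEL/CentOS kernels normally log a notice when hardware falls outside
  # the supported list; the exact message text is an assumption here
  dmesg | grep -i unsupported
  # non-zero taint flags can also hint at unsupported-hardware status
  cat /proc/sys/kernel/tainted
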
2006 Mar 28
2
Asterisk & SMP: Is irqbalance Redundant on 2.6 Kernels?
Asterisk users,
I posted the following email to the Fedora users list
<https://www.redhat.com/archives/fedora-list/2006-March/msg04154.html>
and it got no responses, so now I'm calling on your expertise. Please
take a look at it and share your knowledge on the subject with me.
Additionally, let me know if you believe what I am trying to do is
critically flawed. It's possible
2019 Mar 08
1
samba-tool domain provision stuck when using python3
Hello, everyone.
I am testing Samba 4.10RC4's compatibility with Python 3. I compiled Samba 4.10rc4 successfully on CentOS 7.6 with Python 3.6.6, but when I run
samba-tool domain provision --use-rfc2307 --interactive
It got stuck and did not give any error messages.
Then I enabled debugging:
samba-tool domain provision --use-rfc2307 --interactive -d7
it threw info like:
INFO: Current debug
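
One way to narrow down where it hangs is to provision non-interactively at a high debug level and watch the process with strace; the realm, domain and password below are placeholders, not values from the original report:

  # non-interactive run, so the stall cannot be a hidden prompt
  samba-tool domain provision --use-rfc2307 \
      --realm=EXAMPLE.COM --domain=EXAMPLE \
      --adminpass='Passw0rd!' --server-role=dc -d10
  # in a second shell, see which system call the process is blocked in
  strace -f -p "$(pgrep -f 'samba-tool domain provision')"
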
2019 Feb 22
2
winbind causing huge timeouts/delays since 4.8
Hello!
I want to share some findings with the community about huge
timeouts/delays since upgrading to Samba 4.8 at the end of last year, and a patch
fixing this in our setup. It would be great if someone from the Samba dev
team could take a look and, if acceptable, apply the patch to the common
code base. It may also affect current stable and release candidates.
The patch expects the patch from BUG 13503
2020 Jun 23
0
About support for AMD ROME CPUs (Alexander Dalloz)
Hi, Alexander
Thanks for your reply.
Because we are using CentOS 7.6 with kernel 3.10.0-957.21.3.el7.x86_64 in
production, we want to know which version of CentOS (or of CentOS 7.6)
started to support AMD Rome CPUs, and whether it is necessary to upgrade.
Thanks.
2006 Feb 09
1
Optimizing Linux to run Asterisk
Could anyone recommend a website or howto on optimizing Linux to
run Asterisk? Examples of what I mean are:
renicing Asterisk PIDs,
forcing IRQ smp_affinity (for interrupt-hogging T1 cards),
... that kind of stuff. I looked on the wiki and nothing directly
mentions server optimization. Or is this something that *should* be
totally irrelevant when dealing with Asterisk?
P.S. I
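
A minimal sketch of the two tweaks mentioned above; the IRQ number 24 is a placeholder and would have to be looked up in /proc/interrupts for the actual T1 card:

  # raise the scheduling priority of every running Asterisk process
  for pid in $(pidof asterisk); do renice -10 -p "$pid"; done
  # find the T1 card's IRQ, then pin it to CPU 0 (bitmask 1 = CPU 0)
  cat /proc/interrupts
  echo 1 > /proc/irq/24/smp_affinity
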
2013 May 25
1
Question regarding dropped packets
Hi,
I've got a machine that's hosting a number of virtual machines (58 + the
dom0) and I'm having some weird issues with packet loss and high latency
that happen when ksoftirqd spikes in CPU usage.
The machine itself is CentOS 5 x86_64 running the stock Xen packages
(3.1.2). It's a dual quad-core w/ hyperthreading, 48 gigs of RAM, and
using Adaptec
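
To see where that softirq load actually lands, something like the following is usually enough (mpstat comes from the sysstat package; the one-second interval is arbitrary):

  # per-CPU utilisation including %soft, refreshed every second
  mpstat -P ALL 1
  # which softirq types are growing, and on which CPUs
  watch -d -n1 cat /proc/softirqs
  # whether one CPU is servicing most of the device interrupts
  cat /proc/interrupts
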
2010 May 16
1
Digium TE121P + DAHDI
Hi,
Just trying to figure out / solve some echo issues with one of our systems.
The server is a Dell R710 running Asterisk 1.6.x and DAHDI 2.2.x.
The card in the server is a TE121P (with a VPMADT032 echo canceller).
We are experiencing some weird echo issues with our Cisco 79xx phones, only
when we are dialing out via the PRI. The echo is like a side tone, where we
can hear ourselves speaking. The
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi All,
We have a xen server and using 8 core processor.
I can see that there is 99% iowait on only core 0.
02:28:49 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle  intr/s
02:28:54 AM  all    0.00    0.00    0.00   12.65    0.00    0.02    2.24   85.08 1359.88
02:28:54 AM    0    0.00    0.00    0.00   96.21    0.00    0.20    3.19    0.40  847.11
02:28:54 AM
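
A quick way to see which disk is saturated and which CPU is servicing the controller's interrupts; the IRQ number 30 below is a placeholder:

  # per-device utilisation and await, to see which disk is busy
  iostat -x 5
  # which CPU is servicing the storage controller's interrupts
  cat /proc/interrupts
  # optionally spread one IRQ across CPUs 0-7 (ff is an 8-CPU bitmask;
  # 30 is a placeholder IRQ number)
  echo ff > /proc/irq/30/smp_affinity
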
2010 Sep 13
1
irq 58 nobody cared.
I built a new server about 10 days ago running CentOS 5.latest,
and it's been presenting a message shortly after booting:
irq 58: nobody cared (try booting with the "irqpoll" option)
Call Trace:
<IRQ> [<ffffffff800bb712>] __report_bad_irq+0x30/0x7d
[<ffffffff800bb945>] note_interrupt+0x1e6/0x227
[<ffffffff800bae41>] __do_IRQ+0xbd/0x103
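
To find out which device actually sits on IRQ 58, and to test the suggested workaround, something along these lines should work (the grub path assumes CentOS 5's grub legacy):

  # which driver, if any, has a handler registered for IRQ 58
  grep ' 58:' /proc/interrupts
  # which PCI device was routed to IRQ 58
  lspci -vv | grep -B 10 'IRQ 58'
  # workaround: add "irqpoll" to the kernel line in /boot/grub/grub.conf
  # and reboot
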
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi Jens and Rusty,
On Thu, Jun 26, 2014 at 8:04 PM, Ming Lei <ming.lei at canonical.com> wrote:
> On Thu, Jun 26, 2014 at 5:41 PM, Ming Lei <ming.lei at canonical.com> wrote:
>> Hi,
>>
>> These patches try to support multiple virtual queues (multi-vq) in one
>> virtio-blk device, and map each virtual queue (vq) to a blk-mq
>> hardware queue.
>>
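
For what it's worth, with these patches applied a guest-side check for multiple queues might look like this (the device name vda and the interrupt naming are assumptions):

  # one numbered directory per blk-mq hardware queue
  ls /sys/block/vda/mq/
  # with several vqs, each queue should show up with its own MSI-X vector
  grep virtio /proc/interrupts
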
2015 Dec 01
1
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On 01/12/2015 00:20, Ming Lin wrote:
> qemu-nvme: 148MB/s
> vhost-nvme + google-ext: 230MB/s
> qemu-nvme + google-ext + eventfd: 294MB/s
> virtio-scsi: 296MB/s
> virtio-blk: 344MB/s
>
> "vhost-nvme + google-ext" didn't get good enough performance.
I'd expect it to be on par with qemu-nvme with ioeventfd, but the question
is: why should it be better? For
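
For context, sequential throughput figures like the ones quoted are typically gathered in the guest with fio; this is only an illustrative invocation (the device path and parameters are placeholders), not the benchmark used in the thread:

  # simple 60-second sequential-read throughput test against the virtual disk
  fio --name=seqread --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=read --bs=128k --iodepth=32 --runtime=60 --time_based
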
2015 Dec 01
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On Tue, 2015-12-01 at 17:02 +0100, Paolo Bonzini wrote:
>
> On 01/12/2015 00:20, Ming Lin wrote:
> > qemu-nvme: 148MB/s
> > vhost-nvme + google-ext: 230MB/s
> > qemu-nvme + google-ext + eventfd: 294MB/s
> > virtio-scsi: 296MB/s
> > virtio-blk: 344MB/s
> >
> > "vhost-nvme + google-ext" didn't get good enough performance.
>
>
2007 Oct 24
7
Compatibility Issues with dell poweredge 1950 and TE110P card
Has anyone had any compatibility issues with a TE110P card installed on a
Dell PowerEdge 1950? I noted the following error on the LCD display of the
Dell PowerEdge 1950:
E1711 PCI PErr Slot 1 E171F PCIE Fatal Error B0 D4 F0.
The Dell hardware owner's manual states that it means the system BIOS has
reported a PCI parity error on a component that resides in PCI configuration
space at bus 0,
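
The "B0 D4 F0" in that message is a PCI bus/device/function address, so the same device can be inspected from the OS; a sketch (the 0000 domain prefix is assumed):

  # bus 0, device 4, function 0 from the LCD error message
  lspci -vv -s 0000:00:04.0
  # check the kernel log for related PCI/PCIe error reports
  dmesg | grep -i -e 'pci.*err' -e parity
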
2015 Dec 02
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On Tue, 2015-12-01 at 11:59 -0500, Paolo Bonzini wrote:
> > What do you think about virtio-nvme+vhost-nvme?
>
> What would be the advantage over virtio-blk? Multiqueue is not supported
> by QEMU but it's already supported by Linux (commit 6a27b656fc).
I expect performance would be better.
It seems Google Cloud VMs use both nvme and virtio-scsi. Not sure if
virtio-blk is also
2015 Dec 01
2
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
> What do you think about virtio-nvme+vhost-nvme?
What would be the advantage over virtio-blk? Multiqueue is not supported
by QEMU but it's already supported by Linux (commit 6a27b656fc).
To me, the advantage of nvme is that it provides more than decent performance on
unmodified Windows guests, and thanks to your vendor extension can be used
on Linux as well with speeds comparable to
2015 Nov 20
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Thanks Ming,
from a first quick view this looks great. I'll look over it in a bit
more detail once I get a bit more time.