Displaying 20 results from an estimated 5000 matches similar to: "estimated number of years to TBW math"
2021 Jul 05
1
Problems with CentOS 8 kickstart
Hi All,
I am having problems with a kickstart install of CentOS 8
When I try to do a completely automated install using PXE/UEFI it gets to the point where it reads the kickstart config file.
Then I see the following message
"kickstart install Started cancel waiting for multipath siblings for nvme0n1"
This is what I have in the kickstart file
# Clear the Master Boot Record
zerombr
#
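A hedged sketch of additional directives that are sometimes used to pin the installer to the NVMe disk and reduce multipath probing; the disk name nvme0n1 is taken from the error message above, and whether this actually clears the "multipath siblings" wait is an assumption to be tested:
  ignoredisk --only-use=nvme0n1                 # only consider the NVMe disk
  clearpart --all --initlabel --drives=nvme0n1  # wipe and relabel that disk only
  autopart --type=lvm                           # or explicit part/logvol lines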
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
On Fri, 2015-11-20 at 09:58 +0100, Paolo Bonzini wrote:
>
> On 20/11/2015 09:11, Ming Lin wrote:
> > On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
> >>
> >> On 18/11/2015 06:47, Ming Lin wrote:
> >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> >>> }
> >>>
>
2019 Oct 12
0
qemu on centos 8 with nvme disk
I have CentOS 8 installed solely on one nvme drive and it works fine and
relatively quickly.
/dev/nvme0n1p4          218G   50G  168G  23% /
/dev/nvme0n1p2          2.0G  235M  1.6G  13% /boot
/dev/nvme0n1p1          200M  6.8M  194M   4% /boot/efi
You might want to partition the device (p3 is swap)
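A rough sketch of how such a layout could be created with parted (GPT label; sizes and partition names are only illustrative, not what was actually used here):
  parted -s /dev/nvme0n1 mklabel gpt
  parted -s /dev/nvme0n1 mkpart ESP fat32 1MiB 201MiB         # /boot/efi
  parted -s /dev/nvme0n1 set 1 boot on
  parted -s /dev/nvme0n1 mkpart boot ext4 201MiB 2249MiB      # /boot
  parted -s /dev/nvme0n1 mkpart swap linux-swap 2249MiB 10GiB
  parted -s /dev/nvme0n1 mkpart root ext4 10GiB 100%          # /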
Alan
On 13/10/2019 10:38, Jerry Geis wrote:
> Hi All - I use qemu on my centOS 7.7 box that
2019 Oct 12
7
qemu on centos 8 with nvme disk
Hi All - I use qemu on my centOS 7.7 box that has software raid of 2- SSD
disks.
I installed an NVMe drive in the computer also. I tried to install CentOS 8
on it
(the physical /dev/nvme0n1, with -hda /dev/nvme0n1 as the disk).
The process started installing but is really "slow" - I was expecting that with
the nvme device it would be much quicker.
Is there something I am missing how to
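For what it's worth, a hedged sketch of a qemu invocation that hands the whole NVMe device to the guest as a virtio disk with host caching disabled, which usually helps install speed (the ISO name and memory/CPU values are illustrative):
  qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
    -drive file=/dev/nvme0n1,if=virtio,format=raw,cache=none,aio=native \
    -cdrom CentOS-8-x86_64-dvd1.iso -boot d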
2019 Dec 12
1
Re: nvme, spdk and host linux version
On Thu, Dec 12, 2019 at 5:40 AM Michal Privoznik <mprivozn@redhat.com> wrote:
>
> On 11/27/19 4:12 PM, Mauricio Tavares wrote:
> > I have been following the patches on nvme support on the list and was
> > wondering: If I wanted to build a vm host to be on the bleeding edge
> > for nvme and spdk fun in libvirt, which linux distro --
> > fedora/ubuntu/centos/etc
2021 Jul 05
3
Problems with CentOS 8 kickstart
On Mon, 5 Jul 2021 at 07:15, Hooton, Gerard <g.hooton at ucc.ie> wrote:
>
> Hi All,
> I am having problems with a kickstart install of CentOS 8
> When I try to do a completely automated install using PXE/UEFI it gets to the point where it reads the kickstart config file.
> Then I see the following message
> "kickstart install Started cancel waiting for multipath
2019 Oct 13
5
qemu on centos 8 with nvme disk
>6 hours are too much. First of all you need to check your nvme
>performance (dd can help? dd if=/dev/zero of=/test bs=1M count=10000 and
>see the results. If you want more benchmark-oriented results you could try
>bonnie++ as suggested by Jerry).
>Other than this, have you got the kvm module loaded and the cpu
>virtualization option enabled in the BIOS?
>If yes, have you created the VM
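A rough sketch of those checks (file paths are illustrative; oflag=direct keeps the page cache from inflating the result):
  dd if=/dev/zero of=/mnt/nvme/testfile bs=1M count=10000 oflag=direct status=progress
  lsmod | grep kvm                    # kvm plus kvm_intel (or kvm_amd) should be listed
  grep -Ec 'vmx|svm' /proc/cpuinfo    # non-zero means the virtualization extensions are exposed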
2015 Sep 27
0
[RFC PATCH 0/2] virtio nvme
On Wed, 2015-09-23 at 15:58 -0700, Ming Lin wrote:
> On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote:
> > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote:
> > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
<SNIP>
> > IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with
> > LBA + length based on SGL
2019 Dec 12
0
Re: nvme, spdk and host linux version
On 11/27/19 4:12 PM, Mauricio Tavares wrote:
> I have been following the patches on nvme support on the list and was
> wondering: If I wanted to build a vm host to be on the bleeding edge
> for nvme and spdk fun in libvirt, which linux distro --
> fedora/ubuntu/centos/etc -- should I pick?
>
For NVMe itself it probably doesn't matter as it doesn't require any
special
2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote:
> On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote:
> > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
> > > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> > > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > > > > Hi Ming & Co,
>
>
2017 Apr 19
0
centos 7 and nvme
On Apr 19, 2017, at 4:25 PM, jsl6uy js16uy <js16uy at gmail.com> wrote:
> Hello all, and hope all is well
> Has anyone installed / on an nvme ssd for Cent 7? Would anyone know if that
> is supported?
> I have installed using Arch Linux, but at the time, mid last year, had to
> patch grub to recognize nvme. Arch is obviously running a much more recent
> kernel.
> Not
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
From: Mihai Rusu <dizzy at google.com>
This implements the device side for an NVMe vendor extension that
reduces the number of MMIO writes which can result in a very large
performance benefit in virtualized environments.
See the following link for a description of the mechanism and the
kernel NVMe driver changes to support this vendor extension:
2019 Feb 24
0
Nvme m.2 disk problem
Hi list,
I'm running CentOS 7.6 on a Corsair Force MP500 120 GB. Root fs is ext4
and this drive is ~1 year old.
The system works very well except on boot.
During the boot process I always get a file system check on the nvme drive.
Running smartctl on this drive I got this:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health
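If the check turns out to be ext4's periodic mount-count/interval fsck rather than recovery from an unclean shutdown, one way to inspect and disable it is sketched below (the partition name is only a guess; substitute the actual root partition):
  tune2fs -l /dev/nvme0n1p2 | grep -iE 'mount count|check'   # show the current fsck counters
  tune2fs -c 0 -i 0 /dev/nvme0n1p2                           # disable count- and interval-based checks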
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses mmio, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is in mmio. Your nvme
vendor extension patches greatly reduce the number of MMIO writes,
so I'd like to push it
2006 Jun 21
0
Some R-Tcl/Tk-BWidget newbie questions.
Dear list,
Could somebody who is more experienced with the Tcl/Tk interface from R
please help me clarify the issues I've put below with ### --> tags?
Several things go wrong, and it's probably because of messy code, but I
have a difficult time finding out what the cause is.
Thanks very much,
JeeBee.
require(tcltk) || stop("Package tcltk is not available.")
# Add path to
2016 Dec 05
0
Huge write amplification with thin provisioned logical volumes
Hi,
I've noticed a huge write amplification problem with thinly provisioned
logical volumes and I wondered if anyone can explain why it happens and if
and how it can be fixed. The behavior is the same on CentOS 6.8 and CentOS
7.2.
I have a NVME card (Intel DC P3600 -2 TB) on which I create a thinly
provisioned logical volume:
pvcreate /dev/nvme0n1
vgcreate vgg /dev/nvme0n1
lvcreate
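The lvcreate lines are truncated above; a minimal sketch of the kind of thin-pool setup being described (pool and volume names and sizes are illustrative, not the poster's actual values):
  lvcreate -L 1T --thinpool tpool vgg            # thin pool inside VG "vgg"
  lvcreate -V 500G --thin -n thinvol vgg/tpool   # thinly provisioned LV backed by the pool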
2017 Apr 19
2
centos 7 and nvme
Hello all, and hope all is well
Has anyone installed / on an nvme ssd for Cent 7? Would anyone know if that
is supported?
I have installed using Arch Linux, but at the time, mid last year, had to
patch grub to recognize nvme. Arch is obviously running a much more recent
kernel.
Not afraid to do some empirical legwork. Just asking if anyone had tried
already
thanks all for any/all help
regards
2019 Nov 27
4
nvme, spdk and host linux version
I have been following the patches on nvme support on the list and was
wondering: If I wanted to build a vm host to be on the bleeding edge
for nvme and spdk fun in libvirt, which linux distro --
fedora/ubuntu/centos/etc -- should I pick?
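If the goal is bleeding-edge NVMe/SPDK experiments, one common setup is SPDK's vhost target with qemu's vhost-user-blk device; a rough sketch of the qemu side, assuming an SPDK vhost socket at /var/tmp/vhost.0 and hugepage-backed guest memory (all names and sizes are illustrative):
  qemu-system-x86_64 -enable-kvm -m 4G \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0 \
    -device vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4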