
Displaying 20 results from an estimated 678 matches for "gibbings".

2012 Dec 06
3
LVM Checksum error when using persistent grants (#linux-next + stable/for-jens-3.8)
Hey Roger, I am seeing this weird behavior when using #linux-next + stable/for-jens-3.8 tree. Basically I can do 'pvscan' on xvd* disks and quite often I get checksum errors: # pvscan /dev/xvdf PV /dev/xvdf2 VG VolGroup00 lvm2 [18.88 GiB / 0 free] PV /dev/dm-14 VG vg_x86_64-pvhvm lvm2 [4.00 GiB / 68.00 MiB free] PV /dev/dm-12 VG vg_i386-pvhvm lvm2
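
For anyone trying to reproduce the check, a minimal sketch, assuming the /dev/xvdf device name from the report: pvscan rescans every block device for PVs, and pvck (an extra verification step not mentioned in the original mail) checks the LVM metadata on a single suspect device:

  # pvscan
  # pvck /dev/xvdf
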
2019 Nov 30
5
[PATCH nbdkit 0/3] filters: stats: More useful, more friendly
- Use more friendly output with GiB and MiB/s. - Measure time per operation, providing finer-grained stats. - Add missing stats for flush. I hope that these changes will help to understand and improve virt-v2v performance. Nir Soffer (3): filters: stats: Show size in GiB, rate in MiB/s filters: stats: Measure time per operation filters: stats: Add flush stats filters/stats/stats.c | 117
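
The example runs quoted in the excerpts below are cut off by the search indexer; a complete sketch of the same pattern follows, where the qemu-img convert arguments are illustrative guesses rather than the ones from the original posts:

  $ nbdkit --foreground \
      --unix /tmp/nbd.sock \
      --exportname '' \
      --filter stats \
      file file=/var/tmp/dst.img \
      statsfile=/dev/stderr \
      --run 'qemu-img convert -n -W /var/tmp/src.img nbd:unix:/tmp/nbd.sock'
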
2018 Jul 23
2
[RFC 0/4] Virtio uses DMA API for all devices
On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote: > On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote: >> This patch series is the follow-up on the discussions we had before about >> the RFC titled [RFC,V2] virtio: Add platform specific DMA API translation >> for virtio devices (https://patchwork.kernel.org/patch/10417371/). There >> were suggestions
2006 Nov 26
1
ext3 4TB fs limit on amd64 (FAQ?)
Hi, I've a question about the max. ext3 FS size. The ext3 FAQ explains that the limit is 4TB. http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html | Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size is | limited by the maximal block device size, which is 2TB. In 2.6 the maximum | (32-bit CPU) limit of block devices is 16TB, but ext3 supports only up | to 4TB.
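
The 16TB figure comes from the 32-bit page-cache index: 2^32 pages of 4 KiB each. A quick shell sanity check of that arithmetic, in TiB:

  $ echo $(( (2**32 * 4096) / 2**40 ))
  16
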
2019 Nov 30
0
[PATCH nbdkit 1/3] filters: stats: Show size in GiB, rate in MiB/s
I find bytes and bits-per-second unhelpful and hard to parse. Using GiB for sizes works for common disk images, and MiB/s works for common storage throughput. Here is an example run with this change: $ ./nbdkit --foreground \ --unix /tmp/nbd.sock \ --exportname '' \ --filter stats \ file file=/var/tmp/dst.img \ statsfile=/dev/stderr \ --run 'qemu-img convert
2019 Nov 30
0
[PATCH nbdkit 2/3] filters: stats: Measure time per operation
Previously we measured the total time and used it to calculate the rate of different operations. This is incorrect and hides the real throughput. A more useful way is to measure the time we spent in each operation. Here is an example run with this change: $ ./nbdkit --foreground \ --unix /tmp/nbd.sock \ --exportname '' \ --filter stats \ file file=/var/tmp/dst.img \
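
To see why the total-time rate misleads, take some illustrative numbers (not from the thread): a 6 GiB copy with 10 s of wall-clock time, of which only 6 s is spent in write calls. The naive figure understates the real write throughput:

  $ echo "scale=2; 6/10" | bc   # naive: total GiB / wall-clock seconds
  .60
  $ echo "scale=2; 6/6" | bc    # per-op: GiB / seconds spent in writes
  1.00
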
2011 Jul 30
1
offline root lvm resize
So here goes... First some back story: - CentOS 5 with latest updates as of yesterday; kernel is 2.6.18-238.19.1.el5 - setup is RAID 1 for /boot and LVM over RAID 6 for everything else - The / partition (LVM "RootVol") had run out of room... (100% full, things were falling apart...) I resized the root volume (from 20 GiB to 50 GiB). This was done from a Fedora 15 livecd,
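
For the record, a sketch of the offline grow from a live environment. The VG name VolGroup00 is hypothetical (the post only names the LV, "RootVol"), and this assumes an ext3 filesystem on the LV:

  # lvextend -L 50G /dev/VolGroup00/RootVol
  # e2fsck -f /dev/VolGroup00/RootVol
  # resize2fs /dev/VolGroup00/RootVol
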
2016 Jun 01
2
Migration problem - takes 5 minutes to start moving the memory
Hi, I'm facing a strange issue while doing a migration from one hypervisor to another. The migration takes forever to start moving the memory. The VM had no workload whatsoever, just a basic Ubuntu image. The versions on the hypervisors are: libvirt 1.2.21, qemu 1.2.3. Command to launch the migration: virsh migrate --verbose --live --abort-on-error --tunnelled --p2p --auto-converge
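
The quoted command is cut off by the indexer; a complete sketch of the same invocation, with a hypothetical domain name and destination URI:

  $ virsh migrate --verbose --live --abort-on-error --tunnelled --p2p \
        --auto-converge myguest qemu+ssh://dst-hypervisor/system
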
2019 Dec 04
1
Re: [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
On Wed, Dec 04, 2019 at 01:45:54AM +0200, Nir Soffer wrote: > On Mon, Dec 2, 2019 at 12:28 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > > > > > I have pushed some parts of these patches in order to reduce the delta > > between your patches and upstream. However, there are still some problems with > > the series: > > > > Patch 1: Same problem with
2018 Jul 23
0
[RFC 0/4] Virtio uses DMA API for all devices
On Mon, Jul 23, 2018 at 11:58:23AM +0530, Anshuman Khandual wrote: > On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote: > > On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote: > >> This patch series is the follow-up on the discussions we had before about > >> the RFC titled [RFC,V2] virtio: Add platform specific DMA API translation > >> for virtio
2013 Aug 16
5
OT: laptop recommendations for CentOS6
Hi all, First of all, sorry for the OT. I need to buy a new laptop for my work. My prerequisites are: - RAM: 6/8 GiB (preferably 8 GiB) - Processor: Core i7 - Disk: up to 500 GiB for SATA, 128 GiB for SSD. - Graphics card: Intel HD (I really hate to use Nvidia or ATI Radeon graphics cards). The most important tasks will be: - Surf the web :) - Read email - And the most important
2019 Nov 30
0
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
On Sat, Nov 30, 2019 at 9:13 AM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Sat, Nov 30, 2019 at 02:17:06AM +0200, Nir Soffer wrote: > > Previously we measured the total time and used it to calculate the rate > > of different operations. This is incorrect and hides the real > > throughput. A more useful way is to measure the time we spent in each > >
2019 Dec 02
2
Re: [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
I have pushed some parts of these patches in order to reduce the delta between your patches and upstream. However, there are still some problems with the series: Patch 1: Same problem with scale as discussed before. Patch 2: At least the documentation needs to be updated since it no longer matches what is printed. The idea of collecting the time taken in each operation is good on its own, so I pushed
2007 Apr 11
2
HD/Partitions/RAID setup
I have a machine that's been configured as follows using its BIOS tools: SATA-0 is a 160 GiB drive used as boot. SATA-1 and SATA-2 are both 500 GiB drives and were configured as a RAID-1 in BIOS. When the system boots up, BIOS reports one 160 GiB SATA drive and one logical volume as RAID-1 ID#0, 500 GiB, which is what I would expect it to report, as the two drives are now raided
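
On a CentOS box of that era, BIOS ("fake") RAID sets are typically handled by dmraid; a sketch of how to check what the OS actually sees (no names here are from the post):

  # dmraid -s              (list BIOS RAID sets detected on the disks)
  # cat /proc/partitions   (compare against the raw SATA devices)
  # ls /dev/mapper/        (activated dmraid sets show up here)
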
2009 Sep 21
0
received packet with own address as source address
Hi, we're running 4 Xen servers on our network with multiple network cards. During high load we experience degradation of inbound network traffic through our loadbalancer. I might have found a reason for it; in Dom0 the dmesg output looks as follows: [1157421.975910] eth0: received packet with own address as source address [1157421.975957] eth0: received packet with own address as source
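
One way to start narrowing this down is to look at the Dom0 bridge the packets arrive on; the bridge name xenbr0 here is hypothetical:

  # brctl show              (bridges and their enslaved interfaces)
  # brctl showmacs xenbr0   (learned MACs; look for eth0's own address)
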
2019 Nov 30
2
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
On Sat, Nov 30, 2019 at 02:17:06AM +0200, Nir Soffer wrote: > Previously we measured the total time and used it to calculate the rate > of different operations. This is incorrect and hides the real > throughput. A more useful way is to measure the time we spent in each > operation. > > Here is an example run with this change: > > $ ./nbdkit --foreground \ > --unix
2019 Nov 30
1
Re: [PATCH nbdkit 1/3] filters: stats: Show size in GiB, rate in MiB/s
On Sat, Nov 30, 2019 at 02:17:05AM +0200, Nir Soffer wrote: > I find bytes and bits-per-second unhelpful and hard to parse. Using GiB > for sizes works for common disk images, and MiB/s works for common > storage throughput. > > Here is an example run with this change: > > $ ./nbdkit --foreground \ > --unix /tmp/nbd.sock \ > --exportname '' \ >
2019 Nov 30
0
[PATCH nbdkit v2 1/3] filters: stats: Add size in GiB, show rate in MiB/s
I find bytes and bits-per-second unhelpful and hard to parse. Also add the size in GiB, and show the rate in MiB per second. This works well for common disk images and storage. Here is an example run with this change: $ ./nbdkit --foreground \ --unix /tmp/nbd.sock \ --exportname '' \ --filter stats \ file file=/var/tmp/dst.img \ statsfile=/dev/stderr \ --run 'qemu-img
2015 Apr 01
1
can't mount an LVM volume in CentOS 5.10
I have a degraded raid array (originally raid-10, now only two drives) that contains an LVM volume. I can see in the appended text that the Xen domains are there but I don't see how to mount them. No doubt this is just ignorance on my part but I wonder if anyone would care to direct me? I want to be able to retrieve dom-0 and one of the dom-Us to do data recovery, the others are of
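
A sketch of the usual recovery sequence once the degraded array is assembled, mounting read-only since the goal is data recovery; the VG and LV names are hypothetical, as the appended text is not part of this excerpt:

  # vgscan                 (find VGs on the surviving devices)
  # vgchange -ay           (activate all logical volumes)
  # lvs                    (list the LVs, including the domU disks)
  # mount -o ro /dev/VolGroup00/dom0-root /mnt/recovery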