2012 Dec 06
3
LVM Checksum error when using persistent grants (#linux-next + stable/for-jens-3.8)
Hey Roger,
I am seeing this weird behavior when using #linux-next + stable/for-jens-3.8 tree.
Basically I can do 'pvscan' on the xvd* disks and quite often I get checksum errors:
# pvscan /dev/xvdf
PV /dev/xvdf2 VG VolGroup00 lvm2 [18.88 GiB / 0 free]
PV /dev/dm-14 VG vg_x86_64-pvhvm lvm2 [4.00 GiB / 68.00 MiB free]
PV /dev/dm-12 VG vg_i386-pvhvm lvm2 [4.00 GiB / 68.00 MiB free]
PV /dev/dm-11 VG vg_i386 lvm2 [4.00 GiB / 68.00 MiB free]
PV /dev/sda VG guests lvm2 [931.51 GiB / 220.51 GiB f...
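As a quick sanity check (assuming the lvm2 userspace tools are available in the guest), pvck can verify the LVM2 label and on-disk metadata on the affected PV directly, e.g.:
# pvck /dev/xvdf2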
2019 Nov 30
5
[PATCH nbdkit 0/3] filters: stats: More useful, more friendly
- Use more friendly output with GiB and MiB/s.
- Measure time per operation, providing finer grain stats
- Add missing stats for flush
I hope that these changes will help to understand and improve virt-v2v
performance.
Nir Soffer (3):
filters: stats: Show size in GiB, rate in MiB/s
filters: stats: Measure time per operation
f...
2018 Jul 23
2
[RFC 0/4] Virtio uses DMA API for all devices
...B size.
dd if=/dev/zero of=/dev/vda bs=8M count=1024 oflag=direct
With and without the patches, the bandwidth (which varies over a fairly wide
range) does not look much different.
Without patches
===============
---------- 1 ---------
1024+0 records in
1024+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.95557 s, 4.4 GB/s
---------- 2 ---------
1024+0 records in
1024+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.05176 s, 4.2 GB/s
---------- 3 ---------
1024+0 records in
1024+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.88314 s, 4.6 GB/s
---------- 4 ---------
1...
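The GB/s figures are just bytes copied divided by elapsed wall-clock time; for the first run above, for example:
$ echo "scale=2; 8589934592 / 1.95557 / 10^9" | bc
4.39
which dd rounds to the 4.4 GB/s it prints.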
2006 Nov 26
1
ext3 4TB fs limit on amd64 (FAQ?)
Hi,
I've a question about the max. ext3 FS size. The ext3 FAQ explains that
the limit is 4TB.
http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html
| Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
| is limited by the maximal block device size, which is 2TB. In 2.6 the
| maximum (32-bit CPU) limit of block devices is 16TB, but ext3 supports
| only up to 4TB.
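Those numbers are consistent with simple 32-bit indexing arithmetic (assuming 512-byte sectors for the 2.4 block layer and 4 KiB pages for the 2.6 page cache on 32-bit CPUs):
$ echo $(( 2**32 * 512 / 2**40 )) $(( 2**32 * 4096 / 2**40 ))
2 16
i.e. 2 TiB and 16 TiB respectively.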
2019 Nov 30
0
[PATCH nbdkit 1/3] filters: stats: Show size in GiB, rate in MiB/s
I find bytes and bits-per-second unhelpful and hard to parse. Using GiB
for sizes works for common disk images, and MiB/s works for common
storage throughput.
Here is an example run with this change:
$ ./nbdkit --foreground \
--unix /tmp/nbd.sock \
--exportname '' \
--filter stats \
file file=/var/tmp/dst.img \
statsfile=/dev/stderr \...
2019 Nov 30
0
[PATCH nbdkit 2/3] filters: stats: Measure time per operation
...tmp/nbd.sock \
--exportname '' \
--filter stats \
file file=/var/tmp/dst.img \
statsfile=/dev/stderr \
--run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
(100.00/100%)
elapsed time: 2.150 s
write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
This shows that the actual time waiting for storage was only 0.4 seconds,
but elapsed time was 2.1 seconds. I think the missing time is in flush()
which we do not measure...
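The unaccounted time follows directly from the numbers above, subtracting the measured per-operation times from the elapsed time:
$ echo "scale=3; 2.150 - (0.398 + 0.012 + 0.000)" | bc
1.740
so roughly 1.7 s is spent somewhere that is not yet measured.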
2011 Jul 30
1
offline root lvm resize
...CentOS 5 with the latest updates as of yesterday. The kernel is
2.6.18-238.19.1.el5
- Setup is RAID 1 for /boot and LVM over RAID 6 for everything else
- The / partition (the LV "RootVol") had run out of room... (100%
full, things were falling apart...)
I resized the root volume (from 20 GiB to 50 GiB). This was done from a
Fedora 15 live CD, which seemed like a better idea than doing it on a live
system at the time.... After the resize, the contents of all the LVs
could be mounted and all data was still there (all this from within
Fedora).
The problem is when I try to reboot into CentOS as the...
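For reference, the usual offline sequence for growing an ext3 root LV is roughly the following (the volume group name below is only an assumption; the post names just the LV "RootVol"):
# lvextend -L 50G /dev/VolGroup00/RootVol
# e2fsck -f /dev/VolGroup00/RootVol
# resize2fs /dev/VolGroup00/RootVol
i.e. grow the LV, force-check the unmounted filesystem, then grow the filesystem to fill the new LV size.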
2016 Jun 01
2
Migration problem - takes 5 minutes to start moving the memory
...--p2p --auto-converge --copy-storage-inc --xml vm-6160.xml 6160 qemu+tls://<destination_hypervisor>/system
Here is the log output; look at the time elapsed:
root at virt-hv009:~# virsh domjobinfo 6160
Job type: Unbounded
Time elapsed: 27518 ms
Data processed: 21.506 GiB
Data remaining: 29.003 GiB
Data total: 50.509 GiB
Memory processed: 0.000 B
Memory remaining: 520.820 MiB
Memory total: 520.820 MiB
File processed: 21.506 GiB
File remaining: 28.494 GiB
File total: 50.000 GiB
Constant pages: 0
Normal pages: 0
Normal...
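To see how those counters move while the job is apparently stalled, domjobinfo can simply be polled, e.g.:
# watch -n 5 'virsh domjobinfo 6160'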
2019 Dec 04
1
Re: [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
...tion
> throughput, and add information that was not available before, like
> total number of ops. Showing two rate values per operation looks
> confusing to me.
>
> But how about this:
>
> ----------------------------------------------
> total: 2299 ops, 2.172 s, 6.00 GiB, 2.76 GiB/s
> 520.73 MiB/s write, 2.23 GiB/s zero
> -----------------------------------------------
> write: 1271 ops, 0.356 s, 1.13 GiB, 3.19 GiB/s
> zero: 1027 ops, 0.012 s, 4.86 GiB, 405.00 GiB/s
> extents: 1 ops, 0.000 s, 2.00 GiB, 485.29 GiB/s
> flush: 2 ops, 1.252 s...
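For what it's worth, the proposed total rate is just the total size divided by the elapsed time:
$ echo "scale=2; 6.00 / 2.172" | bc
2.76
matching the 2.76 GiB/s shown in the total line above.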
2018 Jul 23
0
[RFC 0/4] Virtio uses DMA API for all devices
...flag=direct
>
> With and without the patches, the bandwidth (which varies over a fairly wide
> range) does not look much different.
>
> Without patches
> ===============
>
> ---------- 1 ---------
> 1024+0 records in
> 1024+0 records out
> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.95557 s, 4.4 GB/s
> ---------- 2 ---------
> 1024+0 records in
> 1024+0 records out
> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 2.05176 s, 4.2 GB/s
> ---------- 3 ---------
> 1024+0 records in
> 1024+0 records out
> 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.88...
2013 Aug 16
5
OT: laptop recommendations for CentOS6
Hi all,
First of all, sorry for the OT. I need to buy a new laptop for my
work. My prerequisites are:
- RAM: 6/8 GiB (preferably 8 GiB)
- Processor: Core i7
- Disk: up to 500 GiB for SATA, 128 GiB for SSD.
- Graphics card: Intel HD (I really hate to use Nvidia or ATI Radeon
graphics cards).
The most important tasks will be:
- Surf the web :)
- Read email
- And the most important task: I need to install...
2019 Nov 30
0
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
...ats \
> > file file=/var/tmp/dst.img \
> > statsfile=/dev/stderr \
> > --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
> > (100.00/100%)
> > elapsed time: 2.150 s
> > write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
> > zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
> > extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
> >
> > This shows that the actual time waiting for storage was only 0.4 seconds,
> > but elapsed time was 2.1 seconds. I think the m...
2019 Dec 02
2
Re: [PATCH nbdkit v2 3/3] filters: stats: Add flush stats
I have pushed some parts of these patches in order to reduce the delta
between your patches and upstream. However, there are still some problems with
the series:
Patch 1: Same problem with scale as discussed before.
Patch 2: At least the documentation needs to be updated since it no
longer matches what is printed. The idea of collecting the time taken
in each operation is good on its own, so I pushed
2007 Apr 11
2
HD/Partitions/RAID setup
I have a machine that's been configured as follows using its BIOS tools:
SATA-0 is a 160 GiB drive used as boot
SATA-1 and SATA-2 are both 500 GiB drives and were configured as a
RAID-1 in BIOS.
When the system boots up, the BIOS reports one 160 GiB SATA drive and one
logical volume (RAID-1 ID#0, 500 GiB), which is what I would expect it
to report, as the two drives are now raided tog...
2009 Sep 21
0
received packet with own address as source address
...0 overruns:0 frame:0
          TX packets:539964989 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:90307422234 (84.1 GiB)  TX bytes:158414382227 (147.5 GiB)
          Interrupt:16 Memory:f8000000-f8012100

eth1      Link encap:Ethernet  HWaddr 00:1e:c9:ed:ee:88
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX...
2019 Nov 30
2
Re: [PATCH nbdkit 2/3] filters: stats: Measure time per operation
...'' \
> --filter stats \
> file file=/var/tmp/dst.img \
> statsfile=/dev/stderr \
> --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
> (100.00/100%)
> elapsed time: 2.150 s
> write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
> zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
> extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
>
> This shows that the actual time waiting for storage was only 0.4 seconds,
> but elapsed time was 2.1 seconds. I think the missing time is in flush(...
2019 Nov 30
1
Re: [PATCH nbdkit 1/3] filters: stats: Show size in GiB, rate in MiB/s
On Sat, Nov 30, 2019 at 02:17:05AM +0200, Nir Soffer wrote:
> I find bytes and bits-per-second unhelpful and hard to parse. Using GiB
> for sizes works for common disk images, and MiB/s works for common
> storage throughput.
>
> Here is an example run with this change:
>
> $ ./nbdkit --foreground \
> --unix /tmp/nbd.sock \
> --exportname '' \
> --filter stats \
> file file=/...
2019 Nov 30
0
[PATCH nbdkit v2 1/3] filters: stats: Add size in GiB, show rate in MiB/s
I find bytes and bits-per-second unhelpful and hard to parse.
Also add the size in GiB, and show the rate in MiB per second. This works
well for common disk images and storage.
Here is an example run with this change:
$ ./nbdkit --foreground \
--unix /tmp/nbd.sock \
--exportname '' \
--filter stats \
file file=/var/tmp/dst.img \
statsfile=/dev/stderr \
-...
2015 Apr 01
1
can't mount an LVM volume in CentOS 5.10
...olGroup00
Volume group "VolGroup00" is not exported
root at Microknoppix:/home/knoppix# vgchange -ay VolGroup00
8 logical volume(s) in volume group "VolGroup00" now active
root at Microknoppix:/home/knoppix# lvscan
ACTIVE '/dev/VolGroup00/Dom0' [40.00 GiB] inherit
ACTIVE '/dev/VolGroup00/babine' [100.00 GiB] inherit
ACTIVE '/dev/VolGroup00/centos-template' [100.00 GiB] inherit
ACTIVE '/dev/VolGroup00/bulkley-old' [100.00 GiB] inherit
ACTIVE '/dev/VolGroup00/ubuntu...
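Once the volume group is active, each LV should be mountable directly (the mount point and filesystem type below are assumptions):
# mkdir -p /mnt/dom0
# mount /dev/VolGroup00/Dom0 /mnt/dom0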