Displaying 5 results from an estimated 5 matches for "293m".
2018 May 22 · 1 · Re: Create qcow2 v3 volumes via libvirt
...dev/mapper/RT--vg-root 51G 21G 28G 42% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 472M 155M 293M 35% /boot
192.168.0.16:/volume1/fileLevel 8.1T 2.5T 5.6T 31% /mnt/nfs/fileLevel
tmpfs 789M 0 789M 0% /run/user/1000
I would prefer not to get caught out again with this machine pausing;
how can I determine how much space is being used up by 'deleted'...
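Space like this is typically held by files that were unlinked while some process still has them open. One way to measure it (a sketch; assumes `lsof` is installed and supports `+L1`, which lists open files whose link count is below 1, i.e. deleted):

```shell
# Sum the sizes lsof reports for deleted-but-open files.
# lsof -Fsn emits one field per line; lines starting with "s" carry the size.
sum_deleted() {
    awk '/^s/ { sum += substr($0, 2) }
         END  { printf "%.1f MiB held by deleted files\n", sum / 1048576 }'
}
lsof +L1 -Fsn 2>/dev/null | sum_deleted
```

Restarting (or signalling) the process that holds each file releases the space; `df` only reflects it once the last open handle is closed.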
2013 Jun 19 · 1 · Weird I/O hangs (9.1R, arcsas, interrupt spikes on uhci0)
...ms to continue normally.
Environment: FreeBSD 9.1R GENERIC on amd64, using ZFS, on an ARC1320 PCIe with 24x Seagate ST33000650SS (3rd-party arcsas.ko driver).
It's easy to observe these hangs under write load, e.g. with 'zpool iostat 1':
void 22.4T 42.6T 34 2.73K 1.07M 293M
void 22.4T 42.6T 20 2.74K 623K 289M
void 22.4T 42.6T 144 2.62K 4.83M 279M
void 22.4T 42.6T 13 2.60K 437K 283M
void 22.4T 42.6T 0 0 0 0 <-- hang starts
void 22.4T 42.6T 0 0 0 0
void 22....
2011 May 13 · 27 · Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
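For scale, some rough arithmetic on the reported rate (a sketch; the pool's allocated size is not stated in the snippet, so 10 TiB is an assumed round figure):

```shell
awk 'BEGIN {
  pool_bytes = 10 * 1024^4   # assumed 10 TiB of allocated data
  rate       = 400 * 1024    # reported scrub rate, 400K/s
  printf "at 400K/s, a 10 TiB scrub takes ~%.0f days\n", pool_bytes / rate / 86400
}'
```

At that rate a full scrub would run for the better part of a year, which is why a drop from MB/s to 400K/s matters in practice.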
2018 May 01 · 4 · Re: Create qcow2 v3 volumes via libvirt
I have been using internal snapshots on production qcow2 images for a
couple of years, admittedly as infrequently as possible, with one
exception; that exception has had multiple snapshots taken and
removed using virt-manager's GUI.
I was unaware of this:
> There are some technical downsides to
> internal snapshots IIUC, such as inability to free the space used by the
> internal
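The downside alluded to in the quote can be seen directly with qemu-img (a sketch; `disk.qcow2` is a throwaway image created just for the demo, and the commands assume qemu-img is installed):

```shell
command -v qemu-img >/dev/null || exit 0   # skip the demo if qemu-img is absent
qemu-img create -f qcow2 disk.qcow2 1G     # throwaway image for the demo
qemu-img snapshot -c before disk.qcow2     # create an internal snapshot
qemu-img snapshot -l disk.qcow2            # list snapshots: shows "before"
qemu-img snapshot -d before disk.qcow2     # delete it; the file does not shrink
# Rewriting the image is one way to reclaim the space the snapshot used:
qemu-img convert -O qcow2 disk.qcow2 disk-compact.qcow2
```

Deleting an internal snapshot frees the clusters for reuse inside the image, but the file itself stays at its high-water mark until it is rewritten.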
2003 Mar 30 · 1 · [RFC][patch] dynamic rolling block and sum sizes II
...36M
4239G 4095K 1060K 6 10M 37M
16T 8191K 2091K 6 20M 73M
64T 15M 4126K 6 40M 145M
130T 15M 8359K 7 89M 293M
file length block_len block_count s2length xmit sums array_size
50 16K 1 2 6 36
65M 16K 4202 3 28K 147K
1063M 16K 66K 4 531K 2392K...