search for: newdisk

Displaying 14 results from an estimated 14 matches for "newdisk".

2007 Aug 07
5
Extending RAIDZ.
...extend RAIDZ. Here is the legend:

<< >> - block boundaries
D<x>  - data block
P<x>  - parity block
N<x>  - new parity block
U     - unused
*     - if the offset in an I/O request is less than this marker we use four disks only, if greater - we use five disks

After adding 'NewDisk' to the RAIDZ vdev, we have something like this:

Disk0   Disk1   Disk2   Disk3   NewDisk
<<P00   D00     D01     D02     U
P01     D03     D04     D05     U
P02     D06>>   <<P03   D07>>   U
<<P04   D08>>   <<P05   D09     U
P06     D10     D11     D12>>   U
...
2016 Jan 13
0
virsh attach-device : Bus 'pci.0' does not support hotplugging.
...2' cache='none' io='native'/>
<source file='/var/lib/libvirt/images/guest.qcow2'/>
<target dev='vdc' bus='virtio'/>
</disk>

After executing the command I am getting:

[root@cent7 ~]# virsh attach-device lib-virt-man-001 newDisk.xml
error: Failed to attach device from newDisk.xml
error: internal error: unable to execute QEMU command 'device_add': Bus 'pci.0' does not support hotplugging.
[root@cent7 ~]# virsh version
Compiled against library: libvirt 1.2.17
Using library: libvirt 1.2.17
Using API: QEMU 1....
2012 Jan 29
2
Advise on recovering 2TB RAID1
Hi all, one drive has failed in a software 2TB RAID1. I have removed the failed partition from mdraid and am now ready to replace the failed drive. I want to ask for opinions on whether there is a better way to do this than:

1. Put in the new HDD.
2. Use parted to recreate the same partition scheme.
3. Use mdadm to rebuild the RAID.

Especially #2 is rather tricky. I have to create an exact partition
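A common shortcut for step 2 is the sfdisk dump/restore idiom (a general sketch, not something suggested in the thread; all file and device names below are placeholders). It is demonstrated here on two image files so nothing real is touched:

```shell
# Sketch: clone an MBR partition table with sfdisk instead of
# recreating it by hand with parted. On real disks the one-liner is:
#   sfdisk -d /dev/sda | sfdisk /dev/sdb
# (placeholder device names). Demonstrated on image files below.

dd if=/dev/zero of=old.img bs=1M count=8 status=none
dd if=/dev/zero of=new.img bs=1M count=8 status=none

# Give old.img a partition table: one Linux partition spanning the disk.
printf 'label: dos\n,,L\n' | sfdisk --quiet old.img

# Dump old.img's layout and apply it to the replacement.
sfdisk -d old.img | sfdisk --quiet new.img

# Afterwards (real disks only, not runnable here), re-add the new
# partition so mdadm rebuilds the mirror:
#   mdadm /dev/md0 --add /dev/sdb1
```

For GPT disks the equivalent is `sgdisk -R /dev/sdb /dev/sda` followed by `sgdisk -G /dev/sdb` to randomize the copied GUIDs.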
2013 Jul 18
0
Seeing data corruption with g_multipath utility
...pool: poola
 state: ONLINE
  scan: none requested
config:

	NAME                    STATE  READ WRITE CKSUM
	poola                   ONLINE    0     0     0
	  mirror-0              ONLINE    0     0     0
	    multipath/newdisk4  ONLINE    0     0     0
	    multipath/newdisk2  ONLINE    0     0     0

errors: No known data errors

gmultipath status:

Name                Status   Components
multipath/newdisk2  OPTIMAL  da7 (ACTIVE)...
2010 Nov 23
1
drive replaced from spare
I have a x4540 with a single pool made from a bunch of raidz1's with 2 spares (solaris 10 u7). Been running great for over a year, but I've had my first event. A day ago the system activated one of the spares, c4t7d0, but given the status below, I'm not sure what to do next.

# zpool status
  pool: pool1
 state: ONLINE
 scrub: resilver completed after 2h25m
2002 Oct 28
3
memdisk hard disk image
Can anyone please tell me how to create a bootable 10MB hard disk image for use with memdisk? I'd like to use FreeDOS. And I know little of DOS. Thanks.
2011 Jun 08
1
Resizing ext4 fedora qemu guest
...rack, 23970 cylinders, total 385089536 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Outside of the guest, I see this:

# virt-filesystems --long --parts --lvs -h -a newdisk.img
Name                      Type       Size  Parent
/dev/vg_custserv/lv_home  lv         184G  /dev/vg_custserv
/dev/vg_custserv/lv_root  lv         50G   /dev/vg_custserv
/dev/vg_custserv/lv_swap  lv         5.9G  /dev/vg_custserv
/dev/sda1                 partition  500M  /dev/sda
/dev/sda2...
2008 Dec 17
11
zpool detach on non-mirrored drive
I'm using zfs not to have access to a fail-safe backed-up system, but to easily manage my file system. I would like to be able, as I buy new hard drives, to just replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there to consume power as they've already been replaced by larger ones. However, ZFS
2012 Oct 08
5
[PATCH v4 0/5] Finish hotplugging support.
This rounds off hotplugging support by allowing you to add and remove drives at any stage (before and after launch). Rich.
2012 Oct 08
3
[PATCH v3 0/3] Add support for disk labels and hotplugging.
This is, I guess, version 3 of this patch series which adds disk labels and hotplugging (only hot-add implemented so far). The good news is .. it works! Rich.
2017 Mar 31
6
[PATCH 0/3] Fix some quoting issues.
Fix some quoting issues by introducing Unicode quotes. Note this intentionally only affects end-user messages and documentation. Rich.
2015 Jun 14
2
[PATCH] pod: Use F<> for filenames instead of C<>.
...-This should copy C</home> from the guest into the current directory.
+This should copy F</home> from the guest into the current directory.

 =head2 Run virt-df.

@@ -349,7 +349,7 @@
 Using L<virt-sparsify(1)>, make a disk image more sparse:

   virt-sparsify /path/to/olddisk.img newdisk.img

-Is C<newdisk.img> still bootable after sparsifying? Is the resulting
+Is F<newdisk.img> still bootable after sparsifying? Is the resulting
 disk image smaller (use C<du> to check)?

 =head2 B<*> "sysprep" a B<shut off> Linux guest.

diff --git a/fish...
2012 May 07
53
kernel 3.3.4 damages filesystem (?)
Hello, "never change a running system" ... For some months I have been running btrfs under kernels 3.2.5 and 3.2.9 without problems. Yesterday I compiled kernel 3.3.4, and this morning I started the machine with this kernel. There may be some ugly problems. Copying something into the btrfs "directory" worked well for some files, and then I got error messages (I've not