search for: volume_id

Displaying 15 results from an estimated 15 matches for "volume_id".

2018 Aug 30
3
[PATCH v2 0/2] v2v: Add -o openstack target.
v1 was here: https://www.redhat.com/archives/libguestfs/2018-August/thread.html#00287
v2:
- The -oa option now gives an error; apparently Cinder cannot generally control sparse/preallocated behaviour, although certain Cinder backends can.
- The -os option maps to Cinder volume type; suggested by Matt Booth.
- Add a simple test.
2018 Aug 29
2
[PATCH 0/2] v2v: Add -o openstack target.
This patch implements output to OpenStack Cinder volumes using OpenStack APIs. It has only been lightly tested, but appears to work. There are some important things to understand about how this works: (1) You must run virt-v2v in a conversion appliance running on top of OpenStack. And you must supply the name or UUID of this appliance to virt-v2v using the ‘-oo server-id=NAME|UUID’ parameter.
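A minimal sketch of the invocation described above, run from inside the conversion appliance; the guest name, appliance name and Cinder volume type below are hypothetical:
  $ virt-v2v -i libvirt -ic qemu:///system guest-1 \
      -o openstack -oo server-id=v2v-appliance -os fast-ssd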
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
...pem master.stime_xattr_name: trusted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log socketdir: /var/run/gluster volume_id: a1c74931-568c-4f40-8573-dd344553e557 ignore_deletes: false state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168...
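The key/value pairs above (volume_id, ignore_deletes, log_file, and so on) come from the geo-replication session configuration; a sketch of dumping it, using the master volume and node names from the snippet and assuming the slave volume is also called mvol1 (as the state path suggests):
  $ gluster volume geo-replication mvol1 gl-node5-int::mvol1 config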
2020 Aug 25
0
[PATCH v2v] v2v: -o openstack: Allow guests to be converted to UEFI (RHBZ#1872094).
...ck.ml +++ b/v2v/output_openstack.ml @@ -390,7 +390,7 @@ object | None -> "" | Some op -> " -op " ^ op) - method supported_firmware = [ TargetBIOS ] + method supported_firmware = [ TargetBIOS; TargetUEFI ] (* List of Cinder volume IDs. *) val mutable volume_ids = [] -- 2.28.0.rc2
2006 Jan 30
2
Exporting which partitions to md-configure
I'm putting the final touches on kinit, which is the user-space replacement (based on klibc) for the whole in-kernel root-mount complex. Pretty much the one thing remaining -- other than lots of testing -- is to handle automatically mounted md devices. In order to do that, without adding userspace versions of all the partition code (which may be a future change, but a pretty big one)
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
...usted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime > changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log > socketdir: /var/run/gluster > volume_id: a1c74931-568c-4f40-8573-dd344553e557 > ignore_deletes: false > state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket > log_file: /var/log/glusterfs/geo-replication/mvol1/s...
2011 Feb 02
2
Gluster 3.1.2 and rpc-auth patch
Hi, First of all, thanks for all the work you put into gluster; this product is fantastic. In our setup, we have to have some kind of NFS authentication. Not being able to set the rpc-auth option using the CLI was a big drawback for us. Setting the option auth.allow only set the gluster auth.addr.allow option in the bricks themselves but did not do any good regarding NFS access. Setting the
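For reference, the CLI setting mentioned above takes roughly this form (the volume name and address pattern are hypothetical); as the poster notes, it only populates auth.addr.allow on the bricks and does not restrict NFS clients:
  $ gluster volume set myvol auth.allow 192.168.1.*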
2009 May 29
0
[PATCH server] Add more debugging to storage tasks
...tic end begin - libvirt_pool = LibvirtPool.factory(db_pool) + libvirt_pool = LibvirtPool.factory(db_pool, @logger) begin libvirt_pool.connect(@session, node) @@ -733,9 +733,9 @@ class TaskOmatic volume = @session.object(:object_id => volume_id) raise "Unable to find newly created volume" unless volume - @logger.info " volume:" + @logger.debug " volume:" for (key, val) in volume.properties - @logger.info " property: #{key}, #{val}" +...
2018 Jan 19
2
geo-replication command rsync returned with 3
...pem master.stime_xattr_name: trusted.glusterfs.2f5de6e4-66de-40a7-9f24-4762aad3ca96.256628ab-57c2-44a4-9367-59e1939ade64.stime changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1-changes.log socketdir: /var/run/gluster volume_id: 2f5de6e4-66de-40a7-9f24-4762aad3ca96 ignore_deletes: false state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1.socket log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.1...
2009 Nov 04
4
[PATCH server] Update daemons to use new QMF.
...phys_libvirt_pool.connect(@session, node) + phys_libvirt_pool.connect(@qmfc, node) end begin libvirt_pool = LibvirtPool.factory(db_pool, @logger) begin - libvirt_pool.connect(@session, node) + libvirt_pool.connect(@qmfc, node) volume_id = libvirt_pool.create_vol(*db_volume.volume_create_params) - volume = @session.object(:object_id => volume_id) + volume = @qmfc.object(:object_id => volume_id) raise "Unable to find newly created volume" unless volume @logger.debug " v...
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
...ed.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime > changelog_log_file: > /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log > socketdir: /var/run/gluster > volume_id: a1c74931-568c-4f40-8573-dd344553e557 > ignore_deletes: false > state_socket_unencoded: > /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket > log_file: > /var/log/glusterfs/geo-r...
2018 Jan 19
0
geo-replication command rsync returned with 3
...tr_name: >trusted.glusterfs.2f5de6e4-66de-40a7-9f24-4762aad3ca96.256628ab-57c2-44a4-9367-59e1939ade64.stime >changelog_log_file: >/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1-changes.log >socketdir: /var/run/gluster >volume_id: 2f5de6e4-66de-40a7-9f24-4762aad3ca96 >ignore_deletes: false >state_socket_unencoded: >/var/lib/glusterd/geo-replication/mvol1_gl-slave-01-int_svol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1.socket >log_file: >/var/log/glusterfs/geo-replication/mvol1/ssh...
2018 Oct 28
0
[PATCH nbdkit 4/4] Add floppy plugin.
...6); + floppy->bootsect.physical_drive_number = 0; + floppy->bootsect.extended_boot_signature = 0x29; + /* The volume ID should be generated based on the filesystem + * creation date and time, but the old qemu VVFAT driver just used a + * fixed number here. + */ + floppy->bootsect.volume_id = htole32 (0x01020304); + pad_string (label, 11, floppy->bootsect.volume_label); + memcpy (floppy->bootsect.fstype, "FAT32 ", 8); + + floppy->bootsect.boot_signature[0] = 0x55; + floppy->bootsect.boot_signature[1] = 0xAA; + + return 0; +} + +static int +create_fsinfo (s...
2008 Jan 22
2
forced fsck (again?)
Hello everyone. I guess this has been asked before, but I haven't found it in the FAQ. I have the following issue... It is not uncommon nowadays to have desktops with filesystems on the order of 500 GB/1 TB. Now, my Kubuntu (but other distros do the same) forces an fsck on ext3 every so often, no matter what. In the past it wasn't a big issue, but with sizes increasing so much, users are
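The forced check being described is driven by ext3's maximum mount count and check interval; a sketch of inspecting and disabling them with tune2fs, assuming the filesystem lives on /dev/sda1:
  $ tune2fs -l /dev/sda1 | grep -i 'mount count'
  $ tune2fs -c 0 -i 0 /dev/sda1   # 0 disables the mount-count and time-based checks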
2018 Oct 28
6
[PATCH nbdkit 0/4] Add floppy plugin.
Add nbdkit-floppy-plugin, “inspired” by qemu's VVFAT driver, but without the ability to handle writes. The implementation is pretty complete, supporting FAT32, LFNs, volume labels, timestamps, etc., and it passes both ‘make check’ and ‘make check-valgrind’. Usage is simple; to serve the current directory: $ nbdkit floppy . Then using guestfish (or any NBD client): $ guestfish --ro
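A minimal sketch of the workflow above, exporting a directory and inspecting it over NBD; the directory path is hypothetical, and the guestfish options assume NBD URI support and that the plugin puts the FAT filesystem on the first partition:
  $ nbdkit floppy /srv/export
  $ guestfish --ro --format=raw -a nbd://localhost -m /dev/sda1 ll /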